id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2306.03060 | Accreditation of Analogue Quantum Simulators | We present an accreditation protocol for analogue, i.e., continuous-time,
quantum simulators. For a given simulation task, it provides an upper bound on
the variation distance between the probability distributions at the output of
an erroneous and error-free analogue quantum simulator. As its overheads are
independent of the size and nature of the simulation, the protocol is ready for
immediate usage and practical for the long term. It builds on the recent
theoretical advances of strongly universal Hamiltonians and quantum
accreditation as well as experimental progress towards the realisation of
programmable hybrid analogue-digital quantum simulators. | Andrew Jackson, Theodoros Kapourniotis, Animesh Datta | 2023-06-05T17:31:14Z | http://arxiv.org/abs/2306.03060v1 | # Accreditation of Analogue Quantum Simulators
###### Abstract
We present an accreditation protocol for analogue, i.e., continuous-time, quantum simulators. For a given simulation task, it provides an upper bound on the variation distance between the probability distributions at the output of an erroneous and error-free analogue quantum simulator. As its overheads are independent of the size and nature of the simulation, the protocol is ready for immediate usage and practical for the long term. It builds on the recent theoretical advances of strongly universal Hamiltonians and quantum accreditation as well as experimental progress towards the realisation of programmable hybrid analogue-digital quantum simulators.
## I Introduction
Quantum simulation is rapidly emerging as a leading application of quantum technology [1]. One key approach is analogue simulation, which proceeds by engineering many-body quantum systems in a well-controlled environment and simply allowing their dynamics to occur. As these systems increase in size and improve in performance, their computational capabilities are beginning to surpass those of existing classical computers [2; 3]. Despite improvements, they continue to be afflicted by errors. It is thus accepted that before analogue quantum simulators can tackle problems of practical or fundamental importance, methods to provide quantitative guarantees on the correctness of the outputs of error-prone analogue quantum simulators must be developed [4].
Validation of analogue quantum simulators has typically relied on tractable theoretical models incorporating errors and imperfections [1]. Another method has been to run the dynamics forward and backward for equal amounts of time, which returns the system to its initial state, should there be no errors. Commonly known as the Loschmidt echo, this method can detect some errors and imperfections, but cannot provide quantitative guarantees on the correctness of the outputs. More sophisticated variations have been developed that evolve the simulator from some known initial state through a closed loop in state space, eventually returning it to its initial state [5]. These provide some measure of how faithfully the simulator implements the target Hamiltonian. Methods such as randomised benchmarking have also been developed for analogue quantum simulators to quantify the performance of their components [6]. However, these methods cannot provide quantitative guarantees on the correctness of the simulator's outputs either.
In this paper, we present a scalable and practical accreditation protocol that provides an upper bound on the correctness of the outputs of an analogue quantum simulator. As the outputs of all quantum simulators are classical probability distributions, our protocol places an upper bound on the variation distance between the probability distributions generated by erroneous and error-free analogue quantum simulators. We dub this task quantum accreditation.
Our protocol eliminates the need for classical simulations, thus freeing us to accredit simulations of arbitrarily large systems, where quantum simulators offer the greatest advantage. It is sensitive to a wide class of error processes affecting real-world analogue quantum simulators. It can be implemented on extant programmable hybrid analogue-digital quantum simulators [7]. Our work can thus be construed to solve the open problem of verifiability for analogue quantum simulators by exploiting advances in their programmability [1, Sec. V].
The two obstacles to the quantum accreditation of analogue quantum simulators lie in the analogue and quantum natures of the problem. The former engenders a variety of Hamiltonians that such simulators can and do implement [8; 9; 10; 11; 12; 13], and starkly contrasts with the mathematical formulation of universal digital quantum computation [14]. This has been a barrier to a general recipe to bound the correctness of outputs of analogue simulators. The latter is the well-acknowledged exponential cost of simulating a general interacting many-body quantum system classically.
Our protocol overcomes the lack of universality plaguing analogue quantum simulators using the recently developed notion of universal quantum Hamiltonians [15] and their strong counterpart [16]. To overcome the latter obstacle, our protocol builds upon trap-based quantum interactive proof systems [17; 18; 19]. These have already been used to develop a scalable and practical accreditation protocol for the quantum circuit model [20; 21]. It implements the simulation of interest - the 'target', together with a number of classically easy 'trap' simulations of the same size as the target. As they are implemented on the same hardware, they are subject to the same errors and the outputs of the traps imply an upper bound on the correctness of the target simulation. The number of trap computations is independent of the size and nature of the target simulation, depending rather on the accuracy and confidence with which the upper bound is sought.
## II Definitions
We begin with a formal definition of analogue quantum simulators.
**Definition 1**.: An analogue quantum simulator takes as inputs:
1. The description of an initial product state \(|\psi_{0}\rangle\),
2. A time-independent Hamiltonian, \(\mathcal{H}_{0}\),
3. A simulation duration, \(t\in\mathbb{R}\), and
4. A set of single-qubit measurements, \(\mathcal{M}\).
The simulator prepares \(|\psi_{0}\rangle\), applies the time evolution generated by \(\mathcal{H}_{0}\) to \(|\psi_{0}\rangle\) for the duration \(t\), followed by the measurements in \(\mathcal{M}\) and returns their results. These measurement outcomes will be samples from a distribution with probability measure, \(P:\Omega\rightarrow[0,1]\), where \(\Omega\) is the set of all possible outcomes of the measurements in \(\mathcal{M}\).
The accreditation of analogue quantum simulators relies on two recent advances - one theoretical and one experimental.
The first, theoretical, advance is the notion of strongly universal Hamiltonians [16], which builds on the idea that the physics of any \(\mathcal{O}(1)\)-local quantum many-body system can be 'simulated' by families of 'universal' spin-lattice models [15].
**Definition 2**.: A family of Hamiltonians is strongly universal if the eigenspectrum of any \(\mathcal{O}(1)\)-local Hamiltonian can be encoded in some low-energy subspace of a Hamiltonian in the family; allowing the Hamiltonian from the family to simulate the \(\mathcal{O}(1)\)-local Hamiltonian. Moreover the translation from any \(\mathcal{O}(1)\)-local Hamiltonian to one in the strongly universal family takes time at most polynomial in any relevant parameter such as the number of qubits1 or interaction strength, and outputs a Hamiltonian where any of these parameters are increased at most polynomially (in their original values).
Footnote 1: Following Ref. [16], we use ‘qubits’ instead of ‘sites’.
From all the possible families of strongly universal Hamiltonians, we choose that of XY-interactions on a square lattice, with freely varying coefficients [15]. We focus on this Hamiltonian as its semi-translationally invariant nature [16] enables us to develop our accreditation protocol for a single form of interaction \((X_{i}X_{j}+Y_{i}Y_{j})\), where \(X_{i}\), \(Y_{i}\) denote the Pauli X and Y operators respectively on qubit \(i\). Consequently, we dub the Hamiltonian of the XY-interaction on a square lattice 'accreditable' in the rest of the paper.
**Definition 3**.: The family of accreditable Hamiltonians captures the XY-interactions on a square lattice, and has Hamiltonians of the form:
\[\mathcal{H}=\sum_{\langle i,j\rangle}\left(J_{i,j}\left[X_{i}X_{j}+Y_{i}Y_{j} \right]\right), \tag{1}\]
where \(\langle i,j\rangle\) denotes the summation is over pairs of indices labelling qubits that are neighbours on the appropriately sized square lattice and \(\forall i,j\in\mathbb{Z}\), \(J_{i,j}\in\mathbb{R}\).
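For concreteness, the following is a minimal Python (NumPy) sketch of Eqn. (1) on a small square lattice with open boundaries; it is our illustration rather than code from the paper, and the lattice size and couplings \(J_{i,j}\) are placeholder choices.

```python
# Minimal sketch of Eqn. (1): dense matrix of the accreditable XY Hamiltonian on a
# small Lx-by-Ly square lattice with open boundaries. Couplings default to J = 1.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def site_op(op, site, n_qubits):
    """Tensor product with `op` on `site` and the identity on every other qubit."""
    out = np.array([[1.0 + 0j]])
    for q in range(n_qubits):
        out = np.kron(out, op if q == site else I2)
    return out

def accreditable_hamiltonian(Lx, Ly, J=None):
    """H = sum over nearest-neighbour pairs <i,j> of J_ij (X_i X_j + Y_i Y_j)."""
    J = J or {}
    n = Lx * Ly
    idx = lambda x, y: x * Ly + y
    H = np.zeros((2**n, 2**n), dtype=complex)
    for x in range(Lx):
        for y in range(Ly):
            i = idx(x, y)
            neighbours = ([idx(x + 1, y)] if x + 1 < Lx else []) + \
                         ([idx(x, y + 1)] if y + 1 < Ly else [])
            for j in neighbours:
                Jij = J.get((i, j), 1.0)
                H += Jij * (site_op(X, i, n) @ site_op(X, j, n)
                            + site_op(Y, i, n) @ site_op(Y, j, n))
    return H

H = accreditable_hamiltonian(2, 2)              # 4 qubits on a 2x2 lattice
print(H.shape, np.allclose(H, H.conj().T))      # (16, 16) True
```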
Due to the strong universality of accreditable Hamiltonians, our protocol can be used to efficiently accredit any analogue quantum simulation after translating it to the former using the constructive method in Ref. [16]. This translation is efficient, but approximate, with its precision captured by the spectral norm of the difference of the two Hamiltonians. This lack of precision is independent of the analogue quantum simulator on which the accreditable Hamiltonian is subsequently executed and of any errors afflicting its physical implementation. The translation itself, that is, the encoding and decoding operations, when implemented on an analogue quantum simulator, is subject to its errors. This is accounted for in our protocol in Sec. III.
The second, experimental, advance is the ability to apply single-qubit and two-qubit operations mid-simulation [7], which can be thought of as 'single-qubit and two-qubit gates'. We formalise this in the definition of a hybrid quantum simulator.
**Definition 4**.: A hybrid quantum simulator (HQS) takes the four inputs of the analogue quantum simulator in Definition 1 and
1. an ordered set, \(G\), of single-qubit or two-qubit quantum gates with corresponding time-stamps \(\{t_{g}|g\in G\}\) denoting when they are applied.
The HQS prepares \(|\psi_{0}\rangle\), applies the time evolution generated by \(\mathcal{H}_{0}\) to \(|\psi_{0}\rangle\) for the duration \(t\), with interruptions to apply each gate \(g\) in \(G\) at time \(t_{g}\), followed by the measurements in \(\mathcal{M}\) and returns their results. These measurement outcomes will be samples from a distribution with probability measure, \(P:\Omega\rightarrow[0,1]\), where \(\Omega\) is the set of all possible outcomes of the measurements in \(\mathcal{M}\).
Our accreditation protocol can be implemented on the state-of-the-art HQSs already extant. Instances include quantum simulators in which XY-interaction Hamiltonians are directly implementable in experimental systems, most notably those using Rydberg atoms [12], which are embracing the ability to perform digital gates alongside analogue simulations [7]. Recently, hybrid models of digital-analogue quantum computation have also been studied theoretically to investigate their computational capabilities [22].
For the rest of the paper we depict the operations in a HQS, including both the Hamiltonian evolutions and the gates, via circuits, as illustrated in Fig. 1. It is important to note that our 'circuits' do not necessarily fit within the limits of the quantum circuit model: The set of allowed operators in the HQS cannot be encapsulated by a finite gate-set as it contains time evolutions of any permissible Hamiltonian for arbitrary time.
## III Accredited analogue quantum simulation
All physical implementations of quantum simulators will be afflicted by errors. These errors can be viewed as one (or more) of the inputs listed in Definitions 1 and 4 being implemented incorrectly or affected by noise. For instance, there could be fluctuations in the application of the Hamiltonian \(\mathcal{H}_{0}\) or an error in the value of \(t\) applied. Consequently, the outputs actually obtained will be erroneous. Thus the probability measure over \(\Omega\ni s\), \(\tilde{P}(s)\), in the actual, erroneous case differs from the error-free probability measure, \(P(s)\).
The objective of an accredited analogue quantum simulation is to provide - in addition to the output \(\tilde{P}(s)\) - an experimentally accessible upper-bound on the distance between \(\tilde{P}(s)\) and \(P(s)\), as captured by the total variation distance, defined in Eqn. (2) in the following definition.
**Definition 5**.: An accredited analogue quantum simulation runs on a HQS. It takes the four inputs of the analogue quantum simulator in Definition 1 and two parameters \(\alpha,\theta\in[0,1)\). It returns the outputs of the analogue quantum simulator, and an \(\epsilon\in[0,1]\), such that:
\[\text{VD}(P,\tilde{P})=\frac{1}{2}\sum_{s\in\Omega}\left|P(s)-\tilde{P}(s)\right|\leq\epsilon, \tag{2}\]
where the \(\epsilon\) is obtained from the experimentally estimated \(\tilde{P}(s)\) with accuracy \(\theta\) and confidence \(\alpha\).
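As a concrete illustration of Eqn. (2), not taken from the paper, the variation distance between two output distributions given as outcome-to-probability maps can be computed as follows; the toy probabilities are placeholders.

```python
# Minimal sketch of Eqn. (2): total variation distance between two probability
# measures over the outcome set Omega, represented as dictionaries.
def variation_distance(P, P_tilde):
    outcomes = set(P) | set(P_tilde)
    return 0.5 * sum(abs(P.get(s, 0.0) - P_tilde.get(s, 0.0)) for s in outcomes)

# Toy example with two-bit measurement outcomes.
P       = {"00": 0.5, "01": 0.25, "10": 0.25}
P_tilde = {"00": 0.4, "01": 0.3, "10": 0.2, "11": 0.1}
print(variation_distance(P, P_tilde))  # 0.15
```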
### Error model
We model any erroneous implementation of _any part_ of a HQS as its error-free implementation followed (or preceded) by an error operator such that:
1. The error operator is a completely positive trace preserving (CPTP) map applied on the HQS and its environment.
2. The error operator in distinct uses of a HQS, where each use only differs in the single-qubit gates, is independent and identically distributed (i.i.d).
3. Replacing identity gates with single-qubit gates in a simulation does not reduce the variational distance between \(P(s)\) and \(\tilde{P}(s)\).
Our error model captures a large class of errors that afflict real-world analogue quantum simulators. These include spontaneous emission, crosstalk, and particle loss via E1, and fast noise [5] such as laser fluctuations via E2. More generally, E2 captures all fluctuations in the HQS and its environment that occur on timescales faster than that on which the operations in Fig. 1 are implemented. Other common practical issues, such as miscalibrations [5] in the duration of the time evolutions or in the coefficients of the Hamiltonian being applied, unintended terms in the Hamiltonian, and incorrect state preparation or measurement that occur repeatedly across multiple implementations of Fig. 1, are captured as well, subject to E3.
Our error model does not capture slow noise [5] processes such as those from temperature variations or degradation of device performance over implementations of Fig. 1. This is because they violate E2. We discuss means of mitigating this limitation later.
This error model affecting a HQS admits the following mathematical simplification.
**Lemma 1**.: _Any HQS affected by errors obeying E1-E3 is equivalent to the single-qubit gates being error-free and the remaining error being independent of \(A_{j},B_{j},D_{j}\) (\(\forall j\in\mathbb{N}^{\otimes N}\))._
Notably, Lemma 1 excludes \(C_{j}\) from the single-qubit gates of which the remaining error is independent. This follows from the \(C_{j}\) gates' unfortunate position in the simulation, as can be seen in the proof of Lemma 1 in Appendix A. However, this dependence in no way interferes with our quantum accreditation protocol presented below.
### Quantum Accreditation Protocol
We now sketch our accreditation protocol, presented in Protocol 1. Ours is a trap-based protocol, inspired by quantum interactive proof systems [18] and recently adapted for the accreditation of digital quantum computation in the circuit model [21]. As such, it is based on two different types of simulations - the target and the trap. The target simulation is the one of interest, while the trap simulations are factitious ones that exist to infer the effect of errors on the target simulation when implemented on a HQS obeying E1-E3.
_Target simulations_: This is the analogue quantum simulation we are actually interested in. For it to be accredited, it is implemented on a HQS as illustrated in Fig. 2. In the absence of any errors, it applies \(e^{-\mathcal{H}t}\) to \(\ket{\psi_{0}}\) followed by single-qubit measurements. It requires that the initial state \(\ket{\psi_{0}}=\otimes_{j=1}^{N}(A^{\prime}_{j})\ket{0}^{\otimes n}\) be encoded to enable simulation by the strongly universal Hamiltonian. This is denoted by \(\mathcal{V}\). Similarly, after the time evolution the state must be decoded, before measurement by \(\mathcal{M}=\otimes_{j=1}^{N}(D^{\prime}_{j})Z^{\otimes n}\). This is done via \(\mathcal{V}^{-1}\).
Figure 1: Circuit representation of a HQS: \(N\) is the number of qubits and \(\mathcal{U}_{1}\), \(\mathcal{U}_{2}\) are arbitrary poly(\(N\))-sized circuits of single- and two-qubit gates. Each \(A_{j}\), \(B_{j}\), \(C_{j}\), \(D_{j}\) (for any \(j\in\mathbb{N}^{\otimes N}\)) is an arbitrary single-qubit gate. The input is fixed to \(\ket{\psi_{0}}=\ket{0}^{\otimes N}\) for convenience. As the single-qubit gates \(A_{j}\) are arbitrary, any product state can be prepared as an input to the HQS. Similarly, as the single-qubit gates \(D_{j}\) are arbitrary, any product measurement can be performed on the output of the HQS.
_Trap simulations_: For any target simulation as in Fig. 2, a trap simulation can be obtained by replacing the identities (\(I\)) with single-qubit gates that invert the time evolution of the Hamiltonian and changing some pre-existing single-qubit gates depending on some random parameters, as explained in the caption of Fig. 3. The former is detailed in Sec. III.3. In the absence of any errors, the trap simulation executes the identity evolution, which will result in the all-zero output. This can be checked using resources scaling linearly with the problem size \(N\). Any deviation from an all-zero output indicates the presence of errors.
Both target and trap simulations (Figs. 2 and 3 respectively) can be implemented on a HQS (as in Fig. 1) with \(A_{i}=A^{\prime}_{i},B_{i}=C_{i}=I,D_{i}=D^{\prime}_{i},\mathcal{U}_{1}=\mathcal{V},\mathcal{U}_{2}=\mathcal{V}^{-1}\), and \(A_{i}=P_{i}H^{h}Z^{\prime},B_{i}=C_{i}=C_{i},D_{i}=Z^{\prime}H^{h}P_{i},\mathcal{U}_{1}=\mathcal{V},\mathcal{U}_{2}=\mathcal{V}^{-1}\) respectively.
The goal of the trap simulations is to detect any error on the HQS that obeys E1-E3 and provide a bound of the form in Eqn. 2. Lemma 2 below establishes a relationship between the effects of these errors in the trap simulations and the target simulation. Thus, detecting the errors in the former, via Lemma 3 enables us to bound the variational distance between the error-free and erroneous probability distributions over the measurement outcomes of the latter, as per Theorem 1.
**Lemma 2**.: _If E3 holds, the variational distance between the probability distribution over measurement outcomes of an error-free implementation and that of the erroneous implementation is greater in a trap simulation, for any value of the random parameters, than in the target simulation._
Proof.: The trap simulations are constructed so that traps can be obtained from the target by adding single-qubit gates to the target simulation in place of identity gates. E3 then implies that the variational distance between the probability distribution over measurement outcomes of an error-free implementation and that of the erroneous implementation is greater in a trap simulation than in the target simulation.
**Lemma 3** (Detection of errors).: _Any error, or combination of errors, obeying E1-E3 and occurring within a trap simulation is detected with a probability of at least \(1/2\), unless the errors cancel each other._
The proof is provided in Appendix B.
**Theorem 1**.: _Protocol 1 performs accredited analogue simulation as per Definition 5 subject to the error model, i.e., E1-E3, with \(N_{\mathrm{tr}}\) trap simulations, where_
\[N_{\mathrm{tr}}=\left[\frac{2}{\theta^{2}}\ln\left(\frac{2}{1-\alpha}\right) \right]+1 \tag{3}\]
This is our central result. Crucially, the additional resources required for our quantum accreditation protocol are independent of the size (\(N\)) as well as the specifics (inputs in Definition 1) of the analogue quantum simulation. The proof is provided in Appendix C.
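As a rough worked example of Eqn. (3), ours rather than the paper's, and reading the outer bracket as a floor, a bound sought with accuracy \(\theta=0.05\) and confidence \(\alpha=0.95\) requires \(N_{\mathrm{tr}}=\lfloor 800\ln 40\rfloor+1=2952\) trap simulations, independently of the size of the target simulation:

```python
# Minimal sketch of Eqn. (3), with the outer bracket read as a floor (an assumption).
# theta is the accuracy and alpha the confidence of Definition 5.
import math

def n_traps(theta, alpha):
    return math.floor((2.0 / theta**2) * math.log(2.0 / (1.0 - alpha))) + 1

print(n_traps(theta=0.05, alpha=0.95))  # 2952 traps for a 5% bound at 95% confidence
```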
### Design of Trap Simulation
This section provides fuller details of the trap simulation used in Sec. III.2.
#### iii.3.1 Time Inversion of Accreditable Hamiltonian
Our trap circuits use the notion of time-inversion circuits, which effectively invert the time evolution of the Hamiltonian. We show that such a circuit exists for accreditable Hamiltonians on a large class of lattices, of which the square lattice is an instance.
**Definition 6**.: For a specific accreditable Hamiltonian, \(\mathcal{H}\), a time-inversion circuit, \(\mathcal{C}\), is an operator such that,
\[\mathcal{C}\mathcal{H}\mathcal{C}^{\dagger}=-\mathcal{H}. \tag{4}\]
We refer to this as inverting the Hamiltonian.
A circuit, \(\mathcal{C}\), conforming to Definition 6 suffices to reverse the time evolution of an accreditable Hamiltonian due to Lemmas 4 and 5.
**Lemma 4**.: _For any unitary, \(\mathcal{U}\), Hamiltonian, \(\mathcal{H}\), \(\kappa\in\mathbb{C}\) and \(t\in\mathbb{R}\):_
\[\mathcal{U}e^{-\kappa\mathcal{H}t}\mathcal{U}^{\dagger}=e^{-\kappa\,\mathcal{U}\mathcal{H}\mathcal{U}^{\dagger}t} \tag{5}\]
Lemma 4 is proven in Appendix D.4.
**Lemma 5** (Inverting the Hamiltonian).: _Given \(\mathcal{C}\), \(\mathcal{H}\) as in Definition 6,_
\[\mathcal{C}e^{-\mathcal{H}t}\mathcal{C}^{\dagger}=e^{\mathcal{H}t} \tag{6}\]
Proof.: Via Lemma 4,
\[\mathcal{C}e^{-\mathcal{H}t}\mathcal{C}^{\dagger}=e^{-\mathcal{C}\mathcal{H} \mathcal{C}^{\dagger}t} \tag{7}\]
Definition 6 then implies the lemma.
The existence of a time inversion circuit meeting the requirements of Definition 6 for an accreditable Hamiltonian \(\mathcal{H}\) is established by Theorem 2.
**Theorem 2**.: _For any set of XY-interactions, where the interactions and qubits form a two-colourable graph with the qubits as vertices and interactions as edges, the corresponding accreditable Hamiltonian can be inverted by applying a time inversion circuit consisting of Pauli Z gates on a chromatic subset (as defined in Definition 11) of the qubits._
The proof is provided in Appendix D. Some examples of the use of time-inversion circuits to invert time evolutions are given in Appendix E.
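In the spirit of those examples, a small numerical check of Theorem 2 (and hence of Lemma 5, Eqn. (6)) is sketched below in Python; it is our illustration, not the paper's, and the 2x2 lattice, its edge list, the chosen colour class and the evolution time are placeholder choices. The exponential follows the convention of Eqn. (6).

```python
# Numerical check: Pauli Z on one colour class C of a two-colourable XY lattice
# satisfies C H C^dagger = -H, and therefore C e^{-Ht} C^dagger = e^{+Ht}.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_chain(ops):
    out = ops[0]
    for m in ops[1:]:
        out = np.kron(out, m)
    return out

def one_site(op, site, n):
    return kron_chain([op if q == site else I2 for q in range(n)])

# Qubits 0-3 on a 2x2 lattice; the four edges of the square, 2-coloured as {0,3} / {1,2}.
n, edges, colour_class = 4, [(0, 1), (1, 3), (3, 2), (2, 0)], [0, 3]
H = sum(one_site(X, i, n) @ one_site(X, j, n) + one_site(Y, i, n) @ one_site(Y, j, n)
        for i, j in edges)
C = kron_chain([Z if q in colour_class else I2 for q in range(n)])

t = 0.7
print(np.allclose(C @ H @ C.conj().T, -H))                            # True
print(np.allclose(C @ expm(-H * t) @ C.conj().T, expm(H * t)))        # True
```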
#### iii.2.2 Traps in the error-free case
We now present the measurement statistics of the traps in the error-free case. The erroneous case is more involved and hence is presented in Appendix B. The measurement statistics are in fact quite simple: the traps return a known 'correct' result when implemented without any error occurring. This makes use of the time inversion circuits displayed above and is demonstrated in Lemma 6.
**Lemma 6**.: _The error-free implementation of a trap simulation (on \(N\) qubits) always gives the all-zero output with certainty._
Proof.: If no errors occur, the trap simulation, as in Fig. 3 gives the all-zero output with probability
\[|\langle 0|^{\otimes N}\bar{Z}^{\prime}H^{h}\mathcal{P}\mathcal{V}^{-1}\mathcal{C}e^{-\gamma\mathcal{H}t/2}\mathcal{C}e^{-\gamma\mathcal{H}t/2}\mathcal{V}\mathcal{P}H^{h}\bar{Z}^{\prime}|0\rangle^{\otimes N}|^{2}, \tag{8}\]
where \(\bar{Z}^{\prime}\) denotes a Pauli \(Z\) gate applied on each qubit independently with probability \(1/2\) (so that all possible combinations of Pauli \(Z\) gates and identities, e.g. \(IZI\cdots IZ\), on the \(N\) qubits occur with equal probability when a \(\bar{Z}^{\prime}\) gate is implemented), and \(\mathcal{P}\) is an independent, uniformly random string of Pauli (or identity) gates, one on each qubit.
Then, Lemma 5 allows us to re-write the quantity in Eqn. 8 as
\[|\langle 0|^{\otimes N}\bar{Z}^{\prime}H^{h}\mathcal{P}\mathcal{V}^{-1}e^{\gamma\mathcal{H}t/2}e^{-\gamma\mathcal{H}t/2}\mathcal{V}\mathcal{P}H^{h}\bar{Z}^{\prime}|0\rangle^{\otimes N}|^{2} \tag{9}\] \[=|\langle 0|^{\otimes N}\bar{Z}^{\prime}H^{h}\mathcal{P}\mathcal{V}^{-1}\mathcal{V}\mathcal{P}H^{h}\bar{Z}^{\prime}|0\rangle^{\otimes N}|^{2}=1. \tag{10}\]
## IV Discussion
We have presented a quantum accreditation protocol for quantum analogue simulations that can be applied to extant experiments and devices. It builds on the theoretical advances of strongly universal Hamiltonians and quantum accreditation as well as experimental progress towards the realisation of programmable hybrid analogue-digital quantum evolution.
Our protocol completely eliminates the need for classical simulations, freeing us to accredit simulations of arbitrarily large systems, where quantum simulators offer the greatest advantage. Our error model captures large classes of errors that hybrid quantum simulators experience. Additionally, the resource requirements of our protocol are reasonable: the depth and time overheads are independent of the size of the system being simulated (\(N\)); the number of extra single-qubit gates required is at most linear in the system size; the total duration for which the time evolution is applied remains unchanged from an unaccredited simulation; and the number of trap simulations required is quadratic in the reciprocal of the required accuracy of the bound on the variational distance that the protocol outputs.
Figure 3: Trap simulation with encoding of the initial state, \(\mathcal{V}\), and decoding after the time evolution, \(\mathcal{V}^{-1}\). \(H\) denotes the Hadamard gate, \(h\in\{0,1\}\) is a random bit, \(Z^{\prime}\) denotes applying a Pauli \(Z\) gate with probability 0.5 (each instance of the random operator \(Z^{\prime}\) inside the circuit is independent), \(\otimes_{j=1}^{n}C_{j}=C\) is the time inversion circuit, \(\mathcal{H}\) is an arbitrary accreditable Hamiltonian, \(P_{j}\) is a single-qubit Pauli gate chosen uniformly at random and independently for each \(j\), and \(t\) is the duration of the simulation. Note that the \(P_{j}H^{h}\bar{Z}^{\prime}\)'s (where \(Z^{\prime}\) is first and \(H^{h}\) is second in time order, but in the opposite order when written as operators) are in the same box because they can be compiled as a single gate.
Consequently, our protocol can be implemented on extant programmable hybrid analogue-digital quantum simulators [7, 12]. It is particularly amenable if the HQS implements the XY-interaction on a 2-colourable graph as it eliminates the \(\mathcal{V}\) and \(\mathcal{V}^{-1}\) operations in Figs. 2 and 3.
Our work leaves several potential avenues for improvement, centred particularly around relaxing E2 and E3 of our error model. The independence assumed in E2 contributes, via Lemma 1, to the error in the HQS being independent of the single-qubit gates \(A_{j},B_{j},D_{j}\). E2 may be relaxed to allow for error that depends weakly on the single-qubit gates [21, Appendix 2]. This may be combined with a relaxed identicality assumption as well by using probability concentration inequalities more permissive than Hoeffding's inequality.
Relaxing E3 would require understanding it better in terms of its physical implications, as it is the most novel and least explored of our assumptions. Its relaxation may benefit from inverting the Hamiltonian more than once. This would however increase the overheads of quantum accreditation in terms of time and single-qubit gates.
Finally, given the trap-based nature of our accreditation protocol, it may be tempting to suggest an 'error' wherein the accreditable Hamiltonian \(\mathcal{H}\) in Figs. 2 and 3 is replaced by another \(\mathcal{H}^{\prime}\). This error will cancel in the trap simulation, thus effecting an error that our trap simulation seemingly fails to detect. As its effect will not cancel in the target simulation, this 'error' violates E3 and is mathematically disallowed by our error model. Physically, the replacement of \(\mathcal{H}\) by \(\mathcal{H}^{\prime}\)_ceteris paribus_ is unlikely to be due to noise or stochastic miscalibrations.
## V Acknowledgements
We thank Ross Grassie, Sean Thrasher, James Mills, and Raul Garcia-Patron for useful conversations. This work was supported, in part, by the UK Networked Quantum Information Technologies (NQIT) Hub (EP/M013243/1), the UKRI ExCALIBUR project QEVEC (EP/W00772X/2), and a Leverhulme Trust Early Career Fellowship.
|
2302.11193 | Manifestation of pairing modes in nuclear collisions | We discuss the possible manifestation of pairing dynamics in nuclear
collisions beyond the standard quasi-static treatment of pairing correlations.
These involve solitonic excitations induced by pairing phase difference of
colliding nuclei and pairing dynamic enhancement in the di-nuclear system
formed by merging nuclei. | A. Makowski, M. C. Barton, P. Magierski, K. Sekizawa, G. Wlazłowski | 2023-02-22T08:05:07Z | http://arxiv.org/abs/2302.11193v1 | # Manifestation of pairing modes in nuclear collisions +
###### Abstract
We discuss the possible manifestation of pairing dynamics in nuclear collisions beyond the standard quasi-static treatment of pairing correlations. These involve solitonic excitations induced by pairing phase difference of colliding nuclei and pairing dynamic enhancement in the di-nuclear system formed by merging nuclei.
## 1 Introduction
Pairing correlations play a crucial role in our understanding of the properties of nuclear systems, ranging from atomic nuclei to neutron stars [1]. The importance of pairing correlations, however, does not originate from their contribution to the energy of nuclear systems. Indeed, the pairing energy is only a small fraction of the total energy of an atomic nucleus. This is because the value of the pairing gap, which sets the typical energy scale, does not exceed 3% of the Fermi energy. At subnuclear densities, characteristic of the neutron star crust, it may reach at most about 5%. The importance of pairing correlations lies in the modification induced at the Fermi surface, which produces a gap in the single-particle spectrum. Consequently, it facilitates large-amplitude nuclear motion by suppressing dissipative effects due to single-particle excitations. Thus, the main effect originates from the gap size, which is a single number associated with the Cooper pair correlation energy and can be generated within BCS theory [2]. This description is satisfactory if one describes a situation close to an adiabatic limit of nuclear motion. In this case, one effectively describes quantum evolution as going through almost static solutions obtained from the static BCS equations.
In the extreme limit of cranking approximation, the evolution of pairing is just provided by instantaneous static gap values, and the pairing gap is simply a function of collective variables describing large amplitude nuclear motion.
The question which naturally arises is whether this approach is always correct. Recent theoretical investigations of pairing dynamics indicated that even when a nucleus evolves slowly during the fission process, its motion hardly fulfils adiabatic criteria and the pairing field fluctuates rapidly in time and space [3]. Therefore, it is crucial to specify the conditions where the adiabatic approach must be abandoned and to understand possible manifestations of pairing dynamics (see eg. Refs. [3, 4, 5, 6, 7] for the description of pairing beyond the adiabatic approximation).
## 2 Nuclear collisions and pairing dynamics
Nuclear processes expected to elude adiabatic description are nuclear collisions, even at energies close to the Coulomb barrier. The best examples are provided by nuclear collisions of medium mass nuclei or those involving heavy targets. The latter ones are essential in superheavy element synthesis [8].
What can one expect concerning pairing dynamics in the case of a collision? We describe pairing as a pairing field constituting an order parameter emerging from U(1) symmetry breaking. In that case two obvious possibilities arise, defined by two fundamental modes associated with pairing: Goldstone mode and Higgs mode (see Fig. 1). They are associated with variations in the phase and the magnitude of the pairing field, respectively. Goldstone mode, in its most direct realization, leads to harmonic vibration of phase \(\phi(\mathbf{r},t)\propto\mathbf{k}\cdot\mathbf{r}-\omega t\), giving rise to Anderson-Bogoliubov phonons (see eg. [9, 10] and references therein). However, in the atomic nucleus, due to its small size, such modes cannot be unambiguously defined1. The manifestation of the Goldstone mode can also appear due to perturbation of the nuclear pairing phase induced by dynamics of collision. This situation may occur in two regimes. The first one corresponds to the case when two nuclei approach each other at subbarrier energies, and the effective phase difference of their pairing fields induces tunnelling of nucleons [11]. When a collision occurs above the barrier, solitonic excitation is generated between colliding nuclei (see Fig. 1) [4]. These two regimes have been identified and studied in ultracold atomic gases [12]. In nuclear systems, the first one has been investigated as a nuclear manifestation of the Josephson effect. Recently it has been found that an oscillating flow of neutrons occurs (analogue of AC Josephson junction) during a collision of medium-mass nuclei [13]. The
other regime, leading to solitonic excitation, has been identified in Ref.[4]. The difference between these two regimes lies in the expected outcomes. In the first case the main observable is the enhanced nucleon transfer, whereas in the other regime, one expects an additional energy barrier preventing the merging of colliding nuclei. This additional energy barrier scales with the phase difference between colliding nuclei \(\Delta\phi\) like \(\sin^{2}(\Delta\phi/2)\), which was confirmed in TDDFT calculations [4].
Figure 1: **Left**: Schematic figure showing Goldstone and Higgs modes associated with symmetry breaking due to the emergence of pairing. **Right**: snapshot from a TDDFT simulation of a \({}^{96}\)Zr + \({}^{96}\)Zr collision at \(E_{cm}\) = 187 MeV with opposite phases of the pairing fields. The upper panel shows density distributions for protons (upper part) and neutrons (lower part). The lower panel shows the analogous magnitude of the pairing field distributions, with a solitonic excitation visible between the colliding nuclei. Details of calculations are presented in Ref. [5].
Figure 2: Magnitude of the average neutron pairing gap \(\bar{\Delta}_{n}\) in collisions of several neutron-magic nuclei at energies right above the Coulomb barrier (the ratio of \(E_{cm}\) to the static barrier is shown in the legend). The collision occurs at about \(t\approx 400\,{\rm fm}/c\). Details of calculations are presented in Ref. [5].
Spontaneous symmetry breaking also generates another effect, which leads to pairing magnitude vibrations. It can be generated in an ultracold Fermi gas by tuning the coupling constant in real time, which drives the system towards the superfluid phase [14]. The characteristic feature of the Higgs mode is its energy (or frequency of oscillations), which is of the order of the static value of the pairing gap. At first it may seem that this mechanism cannot operate in nuclear systems, as pairing correlations emerge from the nuclear interaction and cannot be tuned at will. However, the effective strength of the interaction depends on the density of states at the Fermi surface. This can undoubtedly change once the nuclear shape evolves. In particular, when two nuclei merge during a nuclear collision, a new system is formed. The single-particle properties of such a usually very elongated object are significantly different from those of the two initial nuclei. It is, therefore, possible that merging two nuclei effectively creates a system which exhibits a pairing instability [5, 15]. This would correspond to an exponential increase of the strength of pairing correlations in time. This is indeed the case, as seen in Fig. 2, which shows the quantity \(\bar{\Delta}_{n}=\frac{1}{N}\int d^{3}r|\Delta_{n}({\bf r})|\rho_{n}({\bf r})\) as a function of time. Here, \(\rho_{n}\) and \(\Delta_{n}\) stand for the neutron density and pairing field distributions, respectively, and \(N=\int\rho_{n}({\bf r})\,d^{3}r\) is the total number of neutrons. The initial pairing of the colliding nuclei is very weak but becomes strongly enhanced after the collision, clearly showing an instability, as indicated by almost perfectly exponential growth. Although it is tempting to associate this effect with the excitation of a Higgs mode, one has to be careful. First, the time scale of the enhancement is an order of magnitude longer than the typical time scale of a Higgs mode, which has to be comparable to \(\hbar/\bar{\Delta}_{n}\approx 200\,{\rm fm}/c\). Second, the excitation energy of the system is rather high. Using the Thomas-Fermi approach [5] one may estimate the excitation energy related to neck formation between the two nuclei during the collision. For the reactions presented in Fig. 2 it reads \(20,\ 27,\ 34\) MeV for \({}^{90}\)Zr+\({}^{90}\)Zr, \({}^{90}\)Zr+\({}^{132}\)Sn and \({}^{40}\)Ca+\({}^{208}\)Pb, respectively. These energies correspond to temperatures which are close to the critical temperature. It therefore seems unlikely that such a mode induces an actual superfluid phase; it is rather related to the increase of pairing correlations in a nonequilibrium system [15].
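To make the averaged quantity concrete, the following short NumPy sketch evaluates \(\bar{\Delta}_{n}\) on a uniform Cartesian grid; it is our illustration, and the grid spacing and the random placeholder arrays stand in for actual TDDFT output.

```python
# Density-weighted average pairing gap: bar{Delta}_n = (1/N) * int |Delta_n(r)| rho_n(r) d^3r,
# with N = int rho_n(r) d^3r, evaluated on a uniform grid (placeholder fields).
import numpy as np

dx = 1.25                                            # grid spacing in fm (illustrative)
shape = (32, 32, 48)
rng = np.random.default_rng(0)
rho_n = rng.uniform(0.0, 0.08, size=shape)           # neutron density [fm^-3], placeholder
delta_n = rng.normal(0.0, 0.5, size=shape) + 0.1j    # complex pairing field [MeV], placeholder

dV = dx**3
N = np.sum(rho_n) * dV                                # total neutron number
delta_bar = np.sum(np.abs(delta_n) * rho_n) * dV / N  # average gap in MeV
print(f"N = {N:.1f}, average gap = {delta_bar:.3f} MeV")
```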
## 3 Conclusion
We have discussed two examples of the manifestation of pairing dynamics which are predicted to occur in nuclear collisions at the energies close to the Coulomb barrier. It is essential to make a systematic assessment of the importance of these effects and to understand their role in nuclear dynamics, particularly in the quasifission process.
**Acknowledgments**
We want to thank Nicolas Schunck and his collaborators for help concerning the usage of HFBTHO (v4.0) [16]. This work was supported by the Polish National Science Center (NCN) under Contracts No. UMO-2017/27/B/ST2/02792. We acknowledge the support of Global Scientific Information and Computing Center, Tokyo Institute of Technology for resources at TSUBAME3.0 (project ID: hp220072).
|
2304.03619 | Direct Exoplanet Detection Using L1 Norm Low-Rank Approximation | We propose to use low-rank matrix approximation using the component-wise
L1-norm for direct imaging of exoplanets. Exoplanet detection by direct imaging
is a challenging task for three main reasons: (1) the host star is several
orders of magnitude brighter than exoplanets, (2) the angular distance between
exoplanets and star is usually very small, and (3) the images are affected by
the noises called speckles that are very similar to the exoplanet signal both
in shape and intensity. We first empirically examine the statistical noise
assumptions of the L1 and L2 models, and then we evaluate the performance of
the proposed L1 low-rank approximation (L1-LRA) algorithm based on visual
comparisons and receiver operating characteristic (ROC) curves. We compare the
results of the L1-LRA with the widely used truncated singular value
decomposition (SVD) based on the L2 norm in two different annuli, one close to
the star and one far away. | Hazan Daglayan, Simon Vary, Valentin Leplat, Nicolas Gillis, P. -A. Absil | 2023-04-07T12:32:43Z | http://arxiv.org/abs/2304.03619v2 | # Direct Exoplanet Detection Using
###### Abstract
We propose to use low-rank matrix approximation using the component-wise L1-norm for direct imaging of exoplanets. Exoplanet detection is a challenging task for three main reasons: (1) the host star is several orders of magnitude brighter than exoplanets, (2) the angular distance between exoplanets and star is usually very small, and (3) the speckles are very similar to the exoplanet signal both in shape and intensity. First, we empirically examine the statistical noise assumptions of the models, second, we evaluate the performance of the proposed L1 low-rank approximation (L1-LRA) algorithm based on visual comparisons and receiver operating characteristic (ROC) curves. We compare the results of the L1-LRA with the widely used truncated singular value decomposition (SVD) based on the L2 norm in two different annuli, one close to the star and one far away.
\(\ell_{1}\) norm, low-rank approximation, Laplace distribution, direct imaging, exoplanet detection
## I Introduction
In the field of exoplanet detection, the vast majority of planets (about 99%) have been detected by indirect methods. Over the past decade, we have observed the rapid development of high-contrast imaging as a promising technique for the detection of exoplanets. Although very challenging, direct imaging provides two main advantages compared to indirect methods; first, we have access to the photons of the planets, so we can obtain information about the atmospheric composition of the planets [1]. Second, it allows for the detection of planets in a shorter period of time compared to other methods, thus enabling the detection of planets on wider orbits.
There are three main reasons why very few exoplanets have been observed: (1) the small angular distance to the host star due to the distance between the planet and the earth, (2) the high contrast difference between the planet and its host star, and (3) the similarity between noises called _speckles_ and planets. Due to the small angular separation between the host star and potential exoplanets, direct imaging requires high-resolution images and, therefore, large and high resolution ground-based telescopes. This causes the light to be diffracted by atmospheric turbulence as it passes through the atmosphere. Despite the use of coronographs, such as the well-known Lyot coronagraph or more recently vortex coronagraphs [2], to reduce the high contrast caused by the brightness of the star and adaptive optics techniques to avoid the aberrations caused by the refraction of the light, it may still not be possible to detect the planet in the images. In addition, residual aberrations cause quasi-static speckles in the images that are often brighter than the planet and resemble the planet in shape.
_Angular differential imaging_ (ADI) is a widely used technique in astronomy to reduce the effects of speckle noise in the images [3]. This technique is based on observations made in pupil tracking mode, in which the star is fixed in the center of the images as the Earth rotates in a night, which causes the exoplanet to rotate around the host with time. ADI aims at building a reference point spread function (PSF) that reproduces a model of the speckle field to be subtracted from the images and aligned (in some methods, also combined) with the signal of potential exoplanets.
Several algorithms are used to build the reference PSF in ADI. The most popular ones are based on low-rank approximations. The principal component analysis (PCA) [4, 5], its annular version (AnnPCA) [6, 7], non-negative matrix factorization (NMF) [8], the local low-rank plus sparse plus Gaussian decomposition (LLSG) [9], and more recently low-rank plus sparse trajectory [10] have been proposed using low-rank approximations to build the reference PSF. Regime-switching model [11] also combines the advantages of numerous PSF subtraction techniques using low-rank approximations.
Methods based on the low-rank assumption are obtained by transforming the data cube into a matrix such that each frame corresponds to a row of the matrix. In the cases when we
fit the low-rank approximation by minimizing the Frobenius norm, e.g. by truncated SVD in PCA, this corresponds to the maximum likelihood estimator (MLE) under the i.i.d. white Gaussian noise assumption.
However, several recent lines of work observed that the residual datacube, which is obtained by subtracting the low-rank part from the original data, is more compatible with the Laplacian distribution, which has heavier tails, than with the Gaussian [12, 13]. Motivated by these observations, we propose to perform the low-rank background approximation using the component-wise \(\ell_{1}\) norm and to apply a data-dependent approximation.
The rest of this paper is structured as follows. In Section II, we propose an alternative to PCA in the context of exoplanet detection, namely a component-wise \(\ell_{1}\) norm low-rank matrix approximation. We investigate different statistical assumptions on the data and analyse the performances, then apply the appropriate low-rank approximation to the data and present the experimental results in Section III. Finally, we conclude in Section IV and discuss potential future works.
## II Model Assumptions
Let \(M\in\mathbb{R}^{T\times N^{2}}\) be a matrix of observations that consists of \(T\) unfolded frames with size \(N\times N\), i.e., each row of the matrix represents a single vectorized frame. The model for \(M\) proposed in [13] is expressed as
\[M=L+aP_{g}+E,\quad\mathrm{rank}(L)\leq r,\quad P_{g}\in\Lambda, \tag{1}\]
where \(L\) is the low-rank background, \(E\) is the noise, \(a\) is the intensity of the planet referred to as the _flux_, and \(P_{g}\) is the planet signature along the trajectory \(g\), from the set of all feasible trajectories \(\Lambda\).
For such models based on low-rank approximations, the choice for the rank value is crucial. Indeed, if the rank is too small, the signals of the speckles will remain in the residual cube, making it difficult to separate the signal of the planet from the speckles. Conversely, if it is too large, the signals of the planet will be captured by the low-rank matrix, and it will be challenging to find the signal of the planets in the residual cube.
When the error \(E\) is Gaussian distributed, the maximum likelihood estimator (MLE) for \(L\) is given by the minimization of the Frobenius norm. Classical methods, such as AnnPCA and LLSG fit the low-rank component using the truncated SVD:
\[\hat{L}=\underset{L}{\arg\min}\|M-L\|_{F}\quad\text{such that}\quad\mathrm{ rank}(L)\leq r, \tag{2}\]
where \(\|A\|_{F}\) denotes the entry-wise \(\ell_{2}\)-norm of \(A\) (the Frobenius norm) and then follows by subtracting the low-rank component, and identifying the planet \(aP_{g}\) in the residual matrix by solving the minimization problem
\[\hat{a}_{g}=\arg\min_{a_{g}>0}\left\|M-\hat{L}-a_{g}P_{g}\right\|_{2}, \tag{3}\]
for all possible trajectories \(P_{g}\in\Lambda\).
Recently, it has been observed that the error term has heavy tails and more closely follows the Laplacian distribution [12]. Consequently, some works [13] proposed to identify the planet companion using \(\ell_{1}\) minimization
\[\hat{a}_{g}= \arg\min_{a_{g}>0}\left\|M-\hat{L}-a_{g}P_{g}\right\|_{1}. \tag{4}\]
However, this makes the noise assumption in the low-rank speckle subtraction and in the planet identification inconsistent. Instead, we propose to fit the low-rank component using the component-wise \(\ell_{1}\) norm
\[\hat{L}=\underset{L}{\arg\min}\|M-L\|_{1}\quad\mathrm{s.t.}\quad\mathrm{rank }(L)\leq r, \tag{5}\]
followed by the planet estimation in \(\ell_{1}\) norm as stated in (4).
The \(\ell_{1}\) low-rank approximation in (5) is an NP-hard problem, even in the rank-one case [14]. Hence most algorithms to tackle (5), such as alternating convex optimization [15], the Wiberg algorithm [16], and augmented Lagrangian approaches [17], do not guarantee to find a global optimal solution, unlike in the case of PCA. Moreover, the computed solutions are sensitive to the initialization of the algorithms.
We use Algorithm 1 suggested by [14] to solve (5). It solves the problem using an exact block cyclic coordinate descent method, where the blocks of variables are the columns of \(U\) and the rows of \(V\) of the low-rank approximation \(UV\). We initialized the algorithm with the truncated SVD solution. The annular version, similar to annular PCA (AnnPCA) [6, 7], selects only the pixels of \(M\) in a given annulus. As the intensity decreases away from the star, it is usually better to calculate the low-rank approximation of each annulus separately.
```
Input: Image sequence \(M\in\mathbb{R}^{t\times n}\), rank \(r\), maximum number of iterations maxiter
1: \(\hat{U},\hat{V}=\underset{U\in\mathbb{R}^{t\times r},\,V\in\mathbb{R}^{r\times n}}{\arg\min}\|M-UV\|_{F}\)
2: for \(i=1:\text{maxiter}\) do
3:    \(R=M-\hat{U}\hat{V}\)
4:    for \(k=1:r\) do
5:       \(R=R+\hat{U}[:,k]\hat{V}[k,:]\)
6:       \(\hat{U}[:,k]=\underset{u}{\arg\min}\|R-u\hat{V}[k,:]\|_{1}\)
7:       \(\hat{V}[k,:]=\big(\underset{v}{\arg\min}\|R^{T}-v\hat{U}[:,k]\|_{1}\big)^{T}\)
8:       \(R=R-\hat{U}[:,k]\hat{V}[k,:]\)
9:    end for
10: end for
```
**Algorithm 1** L1-LRA [14]
To solve the two \(\arg\min\) subproblems in the inner loop of Algorithm 1 (the updates of \(\hat{U}[:,k]\) and \(\hat{V}[k,:]\) in steps 6 and 7), we use the exact method from [18]; these subproblems are weighted median problems which can be solved in closed form.
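A compact Python sketch of these updates is given below; it is an illustrative re-implementation rather than the authors' released code, and the test matrix, rank and iteration count are placeholders. Each block update separates into independent scalar problems \(\min_{a}\sum_{j}|v_{j}|\,|r_{j}/v_{j}-a|\), solved by a weighted median.

```python
# Illustrative re-implementation of Algorithm 1 (L1-LRA) with weighted-median updates.
import numpy as np

def weighted_median(values, weights):
    """Minimiser of sum_j weights_j * |values_j - a| over the scalar a."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * cum[-1])]

def best_scale(R, v):
    """Row-wise solution of min_u ||R - u v||_1 for a fixed row vector v."""
    mask = np.abs(v) > 1e-12
    if not mask.any():
        return np.zeros(R.shape[0])
    ratios, weights = R[:, mask] / v[mask], np.abs(v[mask])
    return np.array([weighted_median(row, weights) for row in ratios])

def l1_lra(M, r, maxiter=20):
    """Block cyclic coordinate descent for min ||M - UV||_1, initialised by the SVD."""
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    U, V = U[:, :r] * S[:r], Vt[:r, :].copy()
    for _ in range(maxiter):
        R = M - U @ V
        for k in range(r):
            R += np.outer(U[:, k], V[k, :])
            U[:, k] = best_scale(R, V[k, :])      # update column k of U
            V[k, :] = best_scale(R.T, U[:, k])    # update row k of V
            R -= np.outer(U[:, k], V[k, :])
    return U, V

M = np.random.default_rng(1).standard_normal((60, 400))
U, V = l1_lra(M, r=5)
print(np.abs(M - U @ V).sum())   # component-wise l1 fitting error
```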
In order to obtain the planet signature, we use the likelihood ratio \(\Lambda\) map [13], which consists of \(\ell_{1}\) norm likelihood ratios \(\Lambda_{g}(R)\) based on maximizing the log-likelihood of the Laplace
distribution using the solution of (3) or (4), because it has been shown to provide better results in practice [13]:
\[\log\Lambda_{g}(R)=-\!\!\!\sum_{(t,r)\in\Omega_{g}}\!\!\!\frac{|R(t,r)-\hat{a}_{g }P_{g}(t,r)|-|R(t,r)|}{\sigma_{R(r)}}, \tag{6}\]
where \(R=M-\hat{L}\), \(\sigma_{R}\) is the standard deviation of \(R\) computed along the time dimension and \(\Omega_{g}\) is the set of indices \((t,r)\) of pixels whose distance from the trajectory \(g\) is smaller than half the diffraction limit, for more details see [13]. We will also use the \(\ell_{2}\) norm \(\Lambda\) map which consists of \(\ell_{2}\) norm likelihood ratios \(\Lambda_{g}(R)\) using the solution of (3) or (4)
\[\log\Lambda_{g}(R)=-\frac{1}{2}\!\!\!\sum_{(t,r)\in\Omega_{g}}\!\!\!\frac{|R(t,r)-\hat{a}_{g}P_{g}(t,r)|^{2}-|R(t,r)|^{2}}{\sigma_{R(r)}^{2}}. \tag{7}\]
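For concreteness, a small Python sketch of the flux estimates in (3)-(4) and of the \(\ell_{1}\) likelihood ratio in (6) is given below; it is our illustration rather than the released code, it operates on the pixels touched by a single candidate trajectory, and the toy signature \(P\), residual \(R\) and per-pixel \(\sigma\) are placeholders.

```python
# Flux estimation along one trajectory and the l1 log-likelihood ratio of Eqn. (6).
# R, P and sigma are 1-D arrays over the pixels in Omega_g.
import numpy as np

def flux_l2(R, P):
    """a_hat = argmin_{a>0} ||R - a P||_2 (closed form, clipped at zero)."""
    return max(0.0, float(R @ P) / float(P @ P))

def flux_l1(R, P):
    """a_hat = argmin_{a>0} ||R - a P||_1 via a weighted median over P != 0."""
    m = np.abs(P) > 1e-12
    ratios, weights = R[m] / P[m], np.abs(P[m])
    order = np.argsort(ratios)
    cum = np.cumsum(weights[order])
    a = ratios[order][np.searchsorted(cum, 0.5 * cum[-1])]
    return max(0.0, float(a))

def log_lr_l1(R, P, sigma):
    """log Lambda_g of Eqn. (6) using the l1 flux estimate."""
    a = flux_l1(R, P)
    return -np.sum((np.abs(R - a * P) - np.abs(R)) / sigma)

rng = np.random.default_rng(0)
P = np.exp(-np.linspace(-2, 2, 50) ** 2)          # toy planet signature along the trajectory
R = 0.3 * P + rng.laplace(scale=0.1, size=50)     # toy residual with an injected planet
sigma = np.full(50, 0.1)
print(flux_l2(R, P), flux_l1(R, P), log_lr_l1(R, P, sigma))
```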
## III Numerical Experiments
In order to analyse the performance of the L1-LRA algorithm, we compare the algorithms visually by plotting their log-likelihood detection maps \(\Lambda\) and also using the receiver operating characteristic (ROC) curves. Additionally, we empirically investigate the methods in terms of fitting the data to Gaussian and Laplacian distributions. The Python codes of the implementations are publicly available.1
Footnote 1: [https://github.com/hazandaglayan/11m_for_exoplanets](https://github.com/hazandaglayan/11m_for_exoplanets)
We tested the algorithms using the publicly available dataset _sph3_ for the exoplanet data challenge [19]. The ADI cube obtained with the VLT/SPHERE-IRDIS instrument has 252 frames with size \(160\times 160\). It has 5 synthetically injected planets at different distances from the star.
### _Empirical estimation of the noise distributions_
In order to analyse the suitability of different noise assumptions, we fit the Gaussian and the Laplacian distribution to the data after subtracting the low-rank component using PCA or L1-LRA. We look at two different annuli separately, one close to the star at \(4\lambda/D\) separation and one more distant from the star at \(10\lambda/D\), and measure the goodness of fit visually and by the Pearson correlation coefficients \(R^{2}\).
In Fig. 1, we observe that the residual data after applying either low-rank approximation follows a Laplace distribution in general, but the distribution after L1-LRA fits better in the tails for large separations in Fig. 1(c)-(d). Moreover, the distributions in the tails are quite similar after both PCA and L1-LRA for small separations in Fig. 1(a)-(b).
Since it is not always easy to see which distribution fits better, we look at the Pearson correlation coefficients (\(R^{2}\)) as a metric to measure the performance of the algorithms. An \(R^{2}\) value close to 1 indicates that the data are more consistent with the fitted distribution. In Table I, the highest \(R^{2}\) is obtained with PCA for small separations, and with L1-LRA for large separations.
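One plausible way to compute such goodness-of-fit values, an assumption on our part rather than necessarily the authors' exact procedure, is the squared correlation coefficient of a probability plot against each candidate distribution:

```python
# Probability-plot R^2 of the residual annulus pixels against Gaussian and Laplace fits.
import numpy as np
from scipy import stats

def r2_fit(samples, dist):
    (osm, osr), (slope, intercept, r) = stats.probplot(samples, dist=dist)
    return r ** 2

residual_pixels = np.random.default_rng(0).laplace(scale=1.0, size=20000)  # placeholder data
print("Gaussian R^2 :", r2_fit(residual_pixels, stats.norm))
print("Laplace  R^2 :", r2_fit(residual_pixels, stats.laplace))
```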
### _Performance comparison of L1-LRA and PCA_
We test the performance advantage of using L1-LRA instead of the PCA approximation in terms of visual quality and ROC curves on the sph3 test data [19]. We also provide ablation studies using mixed models, where the low-rank \(L\) is first subtracted from the original data \(M\) using L1-LRA or PCA and the residual is then used in the planet detection. As such, we have four possible algorithms to investigate; see Table II.
We define the L1L1 algorithm as first solving (5) for the background subtraction and then, for each trajectory, solving (4) and using \(\hat{a}_{g}\) in (6) to detect the planet; the other algorithms are defined analogously.
We first compared the algorithms L1L1, L2L1, and L2L2, which fit the data better, visually in the \(\Lambda\) map using the dataset with five synthetic planets. We selected the best performing rank according to the average LR over the locations of the injected planets. Based on this, we chose rank 6 in all three algorithms.
Fig. 2 shows the \(\Lambda\) map and the intensity of pixels in the \(\Lambda\) map for the three algorithms L1L1, L2L2, and L2L1. We observe that all three algorithms managed to identify the correct locations of the planets, with detections counted above a threshold of 115 for L2L2 and 50 for L2L1 and L1L1, thresholds chosen as the smallest intensity at a planet location for each algorithm. In general, we see that L2L2 is much more prone to false positives, while L1L1 has no false detections because no pixels not belonging to an exoplanet (in blue on Figure 2) lie above the threshold.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \multirow{2}{*}{**Background subtraction**} & \multicolumn{2}{c|}{**Planet detection**} \\ \cline{2-3} & \(\ell_{1}\) (4) and (6) & \(\ell_{2}\) (3) and (7) \\ \hline L1-LRA (5) & L1L1 & L1L2 \\ \hline PCA (2) & L2L1 & L2L2 \\ \hline \end{tabular}
\end{table} TABLE II: Descriptions of algorithms
Fig. 1: Residual cube after low-rank approximation applied for small and large separations
To evaluate the performance of the four methods, we put them to the test using synthetically generated data and examine their results using ROC curves. The detection map used as input for the ROC curve procedure of [13] is the \(\Lambda\) map obtained using (6) or (7). We deleted the five injected planets from the dataset using the VIP-HCI package [6, 7]. Then, we created 25 different datasets by injecting four planets into each, 90 degrees apart, placed at 4\(\lambda/D\). We set the intensity of each as \(0.5\sigma\), where \(\sigma\) is the standard deviation of the annulus. We applied the same procedure to a larger separation of \(10\lambda/D\).
We compared the results of L1L1, L1L2, L2L1, and L2L2 by using ROC curves. The results are tested for three different rank values \(r=\{9,13,17\}\).
In the ROC curve results, we focus on the number of true positives obtained before the first false positive is found, as done in [13]. Fig. 3 shows the ROC curves obtained by injecting at the small separation in the first row. In 2 out of 3 ROC curves (\(r=\{9,17\}\)), L2L1 gives the best results. In all the ROC curves obtained by injecting at the large separation, L1L1 outperforms all other methods in terms of the ROC curves.
Depending on the distance of the planet from the star, L1-LRA can be used instead of PCA for exoplanet detection. In doing so, the residual cube obtained after applying PCA or L1-LRA should be examined to see which one fits better, and the better low-rank matrix approximation can be used. There are several directions for future studies, such as using different initializations for the \(\ell_{1}\) norm approximation or designing more efficient algorithms (e.g., based on smoothing techniques). Moreover, we could also upgrade other methods based on PCA by replacing it with L1-LRA, and see the evolution in performance.
|
2305.03530 | Exploring Softly Masked Language Modelling for Controllable Symbolic
Music Generation | This document presents some early explorations of applying Softly Masked
Language Modelling (SMLM) to symbolic music generation. SMLM can be seen as a
generalisation of masked language modelling (MLM), where instead of each
element of the input set being either known or unknown, each element can be
known, unknown or partly known. We demonstrate some results of applying SMLM to
constrained symbolic music generation using a transformer encoder architecture.
Several audio examples are available at
https://erl-j.github.io/smlm-web-supplement/ | Nicolas Jonason, Bob L. T. Sturm | 2023-05-05T13:37:04Z | http://arxiv.org/abs/2305.03530v2 | # Exploring Softly Masked Language Modelling for Controllable Symbolic Music Generation
###### Abstract
This document presents some early explorations of applying Softly Masked Language Modelling (SMLM) to symbolic music generation. SMLM can be seen as a generalisation of masked language modelling (MLM), where instead of each element of the input set being either known or unknown, each element can be known, unknown or partly known. We demonstrate some results of applying SMLM to constrained symbolic music generation using a transformer encoder architecture. Several audio examples are available at [https://erl-j.github.io/smlm-web-supplement/](https://erl-j.github.io/smlm-web-supplement/)
## 1 Background
Symbolic music generation systems aim to assist in the composition of music. One challenge is to make systems that are highly controllable so that humans can interactively develop musical ideas [9, 7, 8, 5, 1, 4, 6, 11]. One way to exert control is to assign values to note attributes such as pitch, onset time or duration. Rather than directly assigning values to note attributes, SMLM allows us to constrain the note attributes to a set of values that the model is then able to choose from. Importantly, SMLM takes into account constraints across all notes in the composition during generation of each note. This means that the model is able to account for the constraints imposed on other notes before generating a particular note. We believe that constraining note attributes to a set of values, rather than directly assigning values, improves the controllability of symbolic music generation systems. For example, given a simple pitch/onset/duration representation, we can make the composition use a particular scale by constraining pitch, make it follow a particular rhythm by constraining the onsets, or perform imputation of areas of the piano roll.
Figure 1: Illustrating example of SMLM training on natural language. At the bottom, an input is ”softly masked” by adding confounding information about the identity of each token. The prediction model processes the softly masked input and outputs distributions for each token. Finally, the loss is computed by comparing the output probabilities to the ground truth.
## 2 Softly Masked Language Modelling
Given a size-\(T\) input set \(X=\{x_{1},x_{2},\ldots,x_{T}\}\) with element-wise prior information, \(p_{i}\) for each element \(x_{i}\), the softly masked language modeling (SMLM) task aims to minimize the negative log-likelihood of the elements given the prior information:
\[\mathcal{L}_{\text{SMLM}}(\theta)=-\sum_{i=1}^{T}\log P(x_{i}|p_{1},p_{2}, \ldots,p_{T};\theta), \tag{1}\]
In order to train an SMLM, we need to define a process which takes an input set \(X=\{x_{1},x_{2},\ldots,x_{T}\}\) and extracts the element-wise prior information \(P=\{p_{1},p_{2},\ldots,p_{T}\}\). We refer to this process as _softly masking_ the input \(X\).
Let us illustrate this with an example in natural language generation. Our input consists of one-hot vectors \(X=\{x_{1},x_{2},\ldots,x_{T}\}\) representing each token in the input. We choose the following soft masking process: starting from a one-hot vector \(x_{i}\), we add new, "false" activations so that it becomes a multi-hot vector. This is illustrated in the bottom of Figure 1.
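A minimal numpy sketch of this soft masking step (our illustration on a toy vocabulary, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, seq_len = 8, 4
tokens = rng.integers(vocab_size, size=seq_len)      # ground-truth token ids
one_hot = np.eye(vocab_size)[tokens]                 # (seq_len, vocab_size)

def soft_mask(one_hot, max_extra=3):
    """Turn each one-hot row into a multi-hot row by adding a random number
    of 'false' activations, so only a set of candidate identities is known."""
    multi_hot = one_hot.copy()
    for row in multi_hot:
        n_extra = rng.integers(0, max_extra + 1)
        extra = rng.choice(len(row), size=n_extra, replace=False)
        row[extra] = 1.0                              # add confounding activations
    return multi_hot

priors = soft_mask(one_hot)   # the prior information p_i fed to the model
```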
## 3 Training a Softly Masked Language Model On Music Loops
### Dataset
We use a dataset of short excerpts extracted from the MetaMIDI dataset [3]. We check that the MIDI file contains at most one time signature message and that the time signature is 4/4. Then, we quantize the note onset times to 16th notes. We discard all events from MIDI channel 10 (which corresponds to drums). Finally, we segment the tracks into 64-step excerpts without overlap, starting from the first note event of the MIDI file. During training, we randomly crop each segment so that it spans only 36 pitches.
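The sketch below (ours) illustrates the quantisation and segmentation just described; it operates on already-parsed (pitch, onset, duration) tuples in beats rather than on raw MIDI, and folds the random 36-pitch crop into the same function for brevity, whereas in the paper the crop is applied during training.

```python
import numpy as np

def preprocess(notes, steps_per_beat=4, segment_len=64, crop_pitches=36, rng=None):
    """notes: iterable of (midi_pitch, onset_in_beats, duration_in_beats)."""
    rng = rng or np.random.default_rng()
    arr = np.array([(p, round(on * steps_per_beat), max(1, round(du * steps_per_beat)))
                    for p, on, du in notes], dtype=int)      # quantise to 16th notes
    arr[:, 1] -= arr[:, 1].min()                             # start from the first note event
    segments = []
    for start in range(0, arr[:, 1].max() + 1, segment_len):
        seg = arr[(arr[:, 1] >= start) & (arr[:, 1] < start + segment_len)].copy()
        if len(seg) == 0:
            continue
        seg[:, 1] -= start
        lo = rng.integers(0, 128 - crop_pitches)             # random 36-pitch crop
        seg = seg[(seg[:, 0] >= lo) & (seg[:, 0] < lo + crop_pitches)]
        seg[:, 0] -= lo
        segments.append(seg)
    return segments
```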
### Representation
For simplicity, we consider music as a set of note events with three attributes.
* pitch \(\in\) 0-35, "undefined"
* onset time \(\in\) 0-63, "undefined"
* duration \(\in\) 1-63, "undefined"
This representation is similar to the OctupleMIDI representation [11].
One implication of our representation is that there exist invalid combinations of element attributes. This is the case when one attribute is known to be undefined while the other attributes are known not to be undefined. To address this issue, we use a normalisation function during every forward pass of the model.
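For concreteness, the attribute vocabularies implied above are listed in the sketch below, together with one *hypothetical* way invalid combinations could be suppressed at the level of the constraint masks; the paper does not specify its normalisation function, so the rule shown is only an illustration.

```python
import numpy as np

PITCH_V, ONSET_V, DUR_V = 36 + 1, 64 + 1, 63 + 1   # last index of each = "undefined"

def normalise(masks):
    """masks: list of boolean arrays (one per attribute) of allowed values.
    Hypothetical rule: if any attribute can only be 'undefined', force the
    others to 'undefined' as well, so no note is half-defined."""
    if any(m[:-1].sum() == 0 for m in masks):
        for m in masks:
            m[:-1] = False
            m[-1] = True
    return masks

# example: pitch forced to "undefined", onset/duration initially unconstrained
masks = [np.zeros(PITCH_V, bool), np.ones(ONSET_V, bool), np.ones(DUR_V, bool)]
masks[0][-1] = True
masks = normalise(masks)
```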
### Masking scheme
During training, we generate masks in two stages. In the first stage, we mask particular parts of the vocabulary uniformly across all masked elements. In the second stage, we randomly mask parts of the vocabulary for each masked element independently. We randomly sample the number of elements to be masked in each stage.
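A compact sketch of this two-stage mask generation (our reconstruction from the description above; the particular sampling distributions are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_masks(n_elements, vocab_size):
    allowed = np.ones((n_elements, vocab_size), dtype=bool)
    # stage 1: hide the same random part of the vocabulary for a random
    # subset of the elements
    shared_hidden = rng.random(vocab_size) < rng.random()
    stage1 = rng.choice(n_elements, rng.integers(0, n_elements + 1), replace=False)
    allowed[stage1] &= ~shared_hidden
    # stage 2: hide an independent random part of the vocabulary for each
    # element in another random subset
    stage2 = rng.choice(n_elements, rng.integers(0, n_elements + 1), replace=False)
    for i in stage2:
        allowed[i] &= rng.random(vocab_size) >= rng.random()
    # in practice the ground-truth value would be re-enabled if a row ended
    # up with no allowed value at all
    return allowed
```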
### Architecture
We encode each element attribute with attribute-specific encoders. These encoders are implemented as fully connected layers with bias. The element attributes are pooled by summing the embedding of each attribute to form an element embedding. For our main block, we use a neural network with the transformer encoder architecture [10, 2]. We use a hidden size of 768, 8 layers, and 8 attention heads. We decode the main block output using attribute-specific decoders. These decoders are implemented as fully connected layers with bias. The decoders output logits for each attribute. As a final step, we process the logits so that they are guaranteed to be consistent with the constraint. We achieve this by subtracting a large number from the logits of the events that are not permitted by the constraint.

Figure 2: Random samples from the dataset
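A condensed PyTorch sketch of this architecture (ours; the reported hidden size, depth and head count are used, but details such as `batch_first`, the feed-forward width and the exact constant used to suppress disallowed logits are assumptions):

```python
import torch
import torch.nn as nn

class SMLMModel(nn.Module):
    def __init__(self, vocab_sizes, d_model=768, n_layers=8, n_heads=8):
        super().__init__()
        # attribute-specific encoders: fully connected layers with bias
        self.encoders = nn.ModuleList(nn.Linear(v, d_model) for v in vocab_sizes)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        # attribute-specific decoders producing per-attribute logits
        self.decoders = nn.ModuleList(nn.Linear(d_model, v) for v in vocab_sizes)

    def forward(self, masks):
        # masks: list of (batch, n_notes, vocab_v) multi-hot constraint tensors,
        # one per attribute (pitch, onset, duration)
        h = sum(enc(m.float()) for enc, m in zip(self.encoders, masks))
        h = self.backbone(h)
        logits = [dec(h) for dec in self.decoders]
        # enforce consistency with the constraint by pushing disallowed logits
        # towards minus infinity
        return [lg - 1e9 * (1.0 - m.float()) for lg, m in zip(logits, masks)]
```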
### Training
We use a learning rate of 1e-4 with a learning rate decay of 0.95. We use a batch size of 384. Training lasted 48 hours, or 77 epochs, using an NVIDIA GeForce RTX 3090. Training and validation losses were approximately equal (\(\sim\)1.80 / 1.75) at stopping time.
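A matching optimiser configuration (the learning rate, decay and batch size follow the text; the choice of Adam and of a per-epoch exponential schedule are our assumptions):

```python
import torch

model = SMLMModel(vocab_sizes=[37, 65, 64])   # the sketch defined above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
BATCH_SIZE = 384   # scheduler.step() would be called once per epoch
```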
### Generation
In order to generate notes that satisfy a constraint, we express the constraint in the form of a mask. For example, if we want to allow only notes of duration 4 and 8, or no note at all, we fully mask the pitch and onset attributes and mask only the values 4, 8 and "undefined" for the duration attribute. We then generate the composition by sampling attributes from notes in random order.
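The following sketch (ours) makes the procedure concrete: one (note, attribute) value is resolved per step, in random order, by re-running the model with previously sampled values turned into hard one-hot constraints.

```python
import torch

@torch.no_grad()
def generate(model, masks, temperature=1.0):
    # masks: list of (1, n_notes, vocab_v) 0/1 constraint tensors
    n_notes, n_attrs = masks[0].shape[1], len(masks)
    pairs = [(i, a) for i in range(n_notes) for a in range(n_attrs)]
    for j in torch.randperm(len(pairs)):
        i, a = pairs[j]
        logits = model(masks)                                 # constraint-consistent logits
        probs = torch.softmax(logits[a][0, i] / temperature, dim=-1)
        value = torch.multinomial(probs, 1)
        masks[a][0, i] = 0                                    # collapse the soft constraint...
        masks[a][0, i, value] = 1                             # ...to the sampled value
    return masks
```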
## 4 Generation examples
All samples are generated with temperature of 1.0 and top-p with \(p=0.9\) unless indicated otherwise. Generated areas are shown in green. Audio is available at [https://erl-j.github.io/smlm-web-supplement/](https://erl-j.github.io/smlm-web-supplement/).
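For reference, a standard nucleus (top-p) filtering helper of the kind used to produce these samples (our own formulation, not code released with the paper):

```python
import torch

def top_p_filter(logits, p=0.9):
    """Keep the smallest set of tokens whose cumulative probability reaches p;
    everything else is set to -inf before sampling."""
    sorted_logits, idx = torch.sort(logits, descending=True)
    probs = torch.softmax(sorted_logits, dim=-1)
    cum_before = torch.cumsum(probs, dim=-1) - probs   # mass of strictly better tokens
    keep = cum_before < p                              # the top token is always kept
    filtered = torch.where(keep, sorted_logits,
                           torch.full_like(sorted_logits, float('-inf')))
    out = torch.full_like(logits, float('-inf'))
    out.scatter_(0, idx, filtered)
    return out

# sampling: torch.multinomial(torch.softmax(top_p_filter(logits) / 1.0, dim=-1), 1)
```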
### Unconditional generation
Figure 5: Imputation of a rectangular area in pitch-time space. Generated with temperature=0.75 and p=0.9.
Figure 6: Generation of high pitches conditioned on low pitches
### Pitch control
Figure 8: Generation with pitch constrained to major scale with root at pitch 0
Figure 7: Generation of low pitches conditioned on high pitches.
### Duration control
Figure 10: Generation with note duration constrained to be less than 8 steps.
Figure 9: Generation with pitch constrained to major pentatonic scale with root at pitch 0
### Rhythm control
Figure 11: Generation with note duration constrained to be more than 8 steps.
Figure 12: Generation with onset time constrained to every 4 steps
Figure 14: Generation with onset time constrained to every 16 steps
Figure 13: Generation with onset time constrained to every 8 steps
### Combining constraints
## 5 Future work
In future work we aim to: 1) Expand the representation in order to apply SMLM to a wider range of musical tasks; 2) Explore ways to improve generation quality and generation speed; 3) Perform qualitative and quantitative evaluation of the system.
## 6 Acknowledgments
We thank Gustav Eje Henter and Luca Casini for valuable discussions during this work.
This work is an outcome of MUSAiC, a project that has received funding from the European Research Council under the European Union's Horizon 2020 research and innovation program (Grant agreement No. 864189).
|
2308.13907 | On the non-commutative Neveu decomposition and stochastic ergodic
theorems | In this article, we prove Neveu decomposition for the action of the locally
compact amenable semigroup of positive contractions on semifinite von Neumann
algebras and thus, it entirely resolves the problem for the actions of
arbitrary amenable semigroup on semifinite von Neumann algebras. We also prove
it for amenable group actions by Markov automorphisms on any $\sigma$-finite
von Neumann algebras. As an application, we obtain stochastic ergodic theorem
for actions of $ \mathbb{Z}_+^d$ and $\mathbb{R}_+^d$ for $ d \in \mathbb{N}$
by positive contractions on $L^1$-spaces associated with a finite von Neumann
algebra. It yields the first ergodic theorem for positive contraction on
non-commutative $L^1$-spaces beyond the Danford-Schwartz category. | Panchugopal Bikram, Diptesh Saha | 2023-08-26T15:47:36Z | http://arxiv.org/abs/2308.13907v1 | # On the non-commutative Neveu decomposition and stochastic ergodic theorems
###### Abstract.
In this article, we prove the Neveu decomposition for the action of a locally compact amenable semigroup of positive contractions on semifinite von Neumann algebras and thus entirely resolve the problem for the actions of arbitrary amenable semigroups on semifinite von Neumann algebras. We also prove it for amenable group actions by Markov automorphisms on any \(\sigma\)-finite von Neumann algebra. As an application, we obtain a stochastic ergodic theorem for actions of \(\mathbb{Z}_{+}^{d}\) and \(\mathbb{R}_{+}^{d}\) for \(d\in\mathbb{N}\) by positive contractions on \(L^{1}\)-spaces associated with a finite von Neumann algebra. It yields the first ergodic theorem for positive contractions on non-commutative \(L^{1}\)-spaces beyond the Dunford-Schwartz category.
Key words and phrases: von Neumann algebras, Neveu decomposition, stochastic ergodic theorem 2010 Mathematics Subject Classification: Primary 46L10; Secondary 46L65, 46L55
## 1. Introduction
The connection between von Neumann algebras and ergodic theory is well known in the literature. This article lies at the intersection of these two well-studied areas of research. In particular, we study the Neveu decomposition and stochastic ergodic theorems for actions by positive contractions on non-commutative \(L^{1}\)-spaces.
Although the study of ergodic theorems originated in classical mechanics, it has wide applications in modern day mathematics and physics. Historically, the subject began with the mean ergodic theorem of von Neumann [18] (the convergence of ergodic averages associated to a contraction in the Hilbert space norm) and the pointwise ergodic theorem of Birkhoff [19] (the almost everywhere convergence of ergodic averages associated to measure preserving transformations in \(L^{p}\) spaces). It has since become an extensively studied field of research and remains active.
The main focus of this article is to prove a non-commutative Neveu decomposition and, as an application, to obtain a stochastic ergodic theorem (that is, convergence in measure). To motivate this, we begin with a brief history of pointwise ergodic theorems. After its inception in 1939, the pointwise ergodic theorem has seen many generalisations in both the classical and the non-commutative settings.
Classically, given a measure preserving system \((X,T,\mu)\), Birkhoff's ergodic theorem states that the ergodic averages associated to the Koopman operator converge almost everywhere to the conditional expectation onto the fixed point space. It is natural to ask whether Birkhoff's ergodic theorem holds beyond the Koopman operator, for instance for general positive contractions on \(L^{1}\)-spaces.
A partial answer was obtained by Hopf, Dunford and Schwartz. An operator \(S:L^{1}+L^{\infty}\to L^{1}+L^{\infty}\) is called a Dunford-Schwartz operator if it is an \(L^{1}\)-\(L^{\infty}\) contraction. In [10] and [11], the authors considered an arbitrary Dunford-Schwartz operator \(S\) associated to a general measure space \((X,\mu)\) and proved that the ergodic averages \(\frac{1}{n}\sum_{0}^{n-1}S^{k}(f)\) converge almost everywhere for all \(f\in L^{1}(X,\mu)\).
Although this trend seems promising, a further extension of the pointwise ergodic theorem to more general positive contractions on \(L^{1}(X,\mu)\) need not hold. For example, in [1], the authors constructed a class of isometries such that for each such isometry \(T\), there exists an \(f\in L^{1}(X,\mu)\) for which the limit of the sequence \(\frac{1}{n}\sum_{0}^{n-1}T^{k}(f)\) fails to exist almost everywhere. Furthermore, in [13], Tulcea showed that there are plenty of positive isometric isomorphisms on \(L^{1}[0,1]\) for which the pointwise ergodic theorem does not hold. On the contrary, the pointwise ergodic theorem holds for positive contractions on \(L^{p}\)-spaces for \(1<p<\infty\). Indeed, in [1], Akcoglu proved the celebrated pointwise convergence result for a positive contraction on classical \(L^{p}\)-spaces for \(1<p<\infty\). Thus, it is natural to look for a satisfactory convergence result for positive contractions on \(L^{1}\)-spaces.
In 1985, Krengel proved the following: given a \(\sigma\)-finite measure space \((X,\mu)\) and a positive contraction \(T\) defined on \(L^{1}(X,\mu)\), the sequence \(\frac{1}{n}\sum_{k=0}^{n-1}T^{k}(f)\) converges in measure for all \(f\in L^{1}(X,\mu)\). In fact, Krengel proved it for \(d\)-many commuting positive contractions on \(L^{1}(X,\mu)\) [10, see Theorem 3.4.9].
Indeed, Krengel used elegant machinery to prove it; the main technique behind Krengel's theorem is the Neveu decomposition. Given a positive contraction \(T\) on the space \(L^{1}(X,\mu)\), the Neveu decomposition essentially breaks the space \(X\) into two disjoint sets, determined uniquely up to measure zero sets, such that one of them is the support of a positive \(T\)-invariant function \(f\in L^{1}(X,\mu)\) and the other is the support of a weakly wandering function \(h\in L^{\infty}(X,\mu)_{+}\). For more details and relevant definitions we refer to [10, Theorem 3.4.6].
The Neveu decomposition is also an independent subject of research. Typically, given a \(\sigma\)-finite measure space \((X,\mu)\) and a transformation \(T\) defined on it, the problem of finding a \(T\)-invariant finite measure is studied extensively in the literature, and many necessary and sufficient conditions have been obtained in the process. In [14], the authors characterised the existence of a finite invariant measure by the non-existence of weakly wandering sets of strictly positive measure. The result was then extended to arbitrary groups of non-singular transformations by Hajian and Ito in [14]. On the other hand, given a positive contraction \(T\) on \(L^{1}(X,\mu)\), a similar question can be asked. Moreover, observe that the existence of a finite measure \(\nu\) which is absolutely continuous with respect to \(\mu\) is equivalent to the existence of a positive \(T\)-invariant \(f\) in \(L^{1}(X,\mu)\). The condition for the existence of a finite invariant measure in this situation was first studied by Ito [11]. Later on, Neveu and Krengel obtained the proper decomposition of the measure space that we mentioned in the previous paragraph.
In the non-commutative setting, a measure space is usually replaced by a von Neumann algebra. Although many necessary and sufficient conditions regarding the existence of a finite measure are present in the literature, in the non-commutative setup very little was known until Grabarnik and Katz [1]. They established the Neveu decomposition for finitely many commuting tuples of \(*\)-automorphisms acting
on a finite von Neumann algebra. Recently, in [1] the authors obtained Neveu decomposition for the actions of amenable group by \(*\)-automorphisms on a finite von Neumann algebra.
It seems that the known techniques have limitations in generalising the Neveu decomposition to positive contractions and to semifinite von Neumann algebras. In this article, we prove the Neveu decomposition for the actions of any amenable semigroup of positive contractions on semifinite von Neumann algebras. Further, we will also prove the Neveu decomposition for the actions of an amenable group which are compatible with the modular automorphism group associated with a weight on a \(\sigma\)-finite von Neumann algebra.
Let \((M,G,\alpha)\) be a non-commutative dynamical system, where \(M\) is a von Neumann algebra with a faithful, normal tracial state \(\tau\) and \(\alpha\) is an action of an amenable group \(G\) on \(M\) by \(*\)-automorphisms. In [1], we showed that the existence of a maximal invariant state can be characterised by an auxiliary infimum condition. In particular, if \(\rho\) is the maximal invariant state in this case, then it was shown that the support of \(\rho\) is the maximal projection such that for any non-zero subprojection \(q\) of the support of \(\rho\), \(\inf_{n\in\mathbb{N}}\tau(A_{n}(q))>0\). On the other hand, it was also proved that if for some non-zero projection \(p\in M\), \(\inf_{n\in\mathbb{N}}\tau(A_{n}(p))=0\), then there exists a non-zero subprojection \(q\) of \(p\) such that \(\lim_{n\to\infty}\|A_{n}(q)\|=0\), that is, \(q\) is a weakly wandering projection. In the latter part, we in effect found a non-zero projection \(q\) and a sequence \(\{g_{1},\cdots,g_{n},\cdots\}\subseteq G\) such that \(\alpha_{g_{i}}(q)\perp\alpha_{g_{j}}(q)\) whenever \(i\neq j\). Note that since the \(\alpha_{g_{i}}\)'s are \(*\)-automorphisms, the \(\alpha_{g_{i}}(q)\)'s are also projections. Furthermore, the existence of a tracial state is also heavily used to find the projection and the sequence mentioned above. For more details and rigorous proofs of these facts we refer to [1, Section 3]. To prove a similar result when \((M,G,\alpha)\) is a non-commutative dynamical system (see Definition 2.17), one has to first overcome the aforementioned technical difficulties. In this article, we essentially take a different approach. With the help of Lemma 3.5 and Lemma 3.7, we in effect prove that there exists a non-zero projection \(q\in M\) such that
\[\lim_{n\to\infty}\mu(A_{n}(q))=0,\text{ for all }\mu\in M^{*}.\]
Then a version of the mean ergodic theorem for Banach spaces (Theorem 2.18) is used to show that \(q\) is weakly wandering.
The second half of the article is dedicated to obtaining Krengel's stochastic ergodic theorem for actions of certain semigroups of positive contractions on non-commutative \(L^{1}\)-spaces. The main tools of this proof are the Neveu decomposition together with a version of the pointwise ergodic theorem on the corner where, according to the Neveu decomposition, there is an invariant state. Equivalently, we need to prove a non-commutative pointwise ergodic theorem for a non-commutative dynamical system associated with an amenable semigroup preserving a faithful normal state on a von Neumann algebra.
We have already discussed the pointwise ergodic theorem for classical \(L^{1}\)-spaces for a single positive contraction. Further generalisations of this result to various group actions and other \(L^{p}\)-spaces (\(1<p<\infty\)) have also been studied extensively in the literature. For actions of amenable groups, Lindenstrauss [12] proved that the ergodic averages associated to a tempered Folner sequence converge almost everywhere
for all \(f\) in classical \(L^{1}\)-spaces. For more information and other generalizations and state-of-the-art results we refer to [10], [12], [13], [14] and the references therein.
In the non-commutative setup these results have also been extensively studied. The study of non-commutative ergodic theorems was initiated by the pioneering work of Lance [13]. In that article, a pointwise ergodic theorem is studied on a von Neumann algebra. This result was then improved substantially in the works of Yeadon, Kummerer, Conze, Dang-Ngoc and many others [see, 11, 12, 13, 14], [15], [16] and the references therein. In particular, Yeadon in [11] and [12] extended the results of Lance [13] to non-commutative \(L^{1}\)-spaces. After that, Junge and Xu [15] extended Yeadon's result to prove a pointwise ergodic theorem in non-commutative \(L^{p}\)-spaces (both tracial and non-tracial) for \(1<p<\infty\) for some specific \(\mathbb{Z}_{+}^{d}\) or \(\mathbb{R}_{+}^{d}\) (\(d\geq 1\)) actions. In recent times, these results have been further generalised for the actions of locally compact groups with polynomial growth (in this case, as earlier, the actions are assumed to be sub-tracial) in [12] on tracial non-commutative \(L^{p}\)-spaces (\(1\leq p<\infty\)). Furthermore, using the Calderon-Zygmund decomposition and various norm estimates of the elements of non-commutative \(L^{1}\)-spaces, this result is further extended for actions of amenable groups and the averages associated to some filtered Folner sequences in [10]. Recently, in [17], the authors established the first ergodic theorem for a large class of contractions beyond the Dunford-Schwartz category on non-commutative \(L^{p}\)-spaces for \(1<p<\infty\).
Thus, the available results regarding pointwise ergodic theorems on non-commutative \(L^{1}\)-spaces fall in the Dunford-Schwartz category. So far there is no ergodic theorem for positive contractions on non-commutative \(L^{1}\)-spaces. Indeed, it is known to be challenging to obtain an ergodic theorem for a semigroup of positive contractions on non-commutative \(L^{1}\)-spaces, even for a single positive contraction, and it is anticipated that it will require non-trivial new approaches other than those of [15], [16], etc.
In this article, we obtain a satisfactory ergodic theorem for positive \(L^{1}\)-contractions associated with a tracial state. In fact, to prove it, the first non-trivial difficulty was to prove the Neveu decomposition for positive contractions, and the second was to prove a pointwise ergodic theorem for state-preserving positive contractions. In our proof, we extensively use the Neveu decomposition and a version of the pointwise ergodic theorem which is mainly proved in our previous articles [11] and [12].
Now we highlight some of our main results. Let \(M\) be a \(\sigma\)-finite von Neumann algebra and \(G\) be a locally compact, second countable, Hausdorff semigroup with a both left and right invariant \(\sigma\)-finite measure \(m\) and a left-right Folner net \(\{K_{l}\}_{l\in\mathbb{R}_{+}}\). Furthermore, let \(\alpha\) be an action of \(G\) on \(M\) by positive contractions; then we have the following Neveu decomposition.
**Theorem 1.1** (Neveu Decomposition).: Let \((M,G,\alpha)\) be as above and suppose it falls into one of the following categories.
1. \(M\) is a semifinite von Neumann algebra and \((M,G,\alpha)\) is a non-commutative dynamical system, i.e., \(G\) is a semigroup and \(\alpha\) is an action by positive contractions, or
2. \((M,G,\alpha)\) is a Markov covariant system, i.e., \(G\) is a group and the action commutes with the modular automorphism group associated to a f.n weight on \(M\).
Then there exist two projections \(e_{1},e_{2}\in M\) such that \(e_{1}+e_{2}=1\) and
1. there exists a \(G\)-invariant normal state \(\rho\) on \(M\) with support \(s(\rho)=e_{1}\) and
2. there exists a weakly wandering operator \(x_{0}\in M\) with support \(s(x_{0})=e_{2}\), i.e, \(\frac{1}{m(K_{n})}\int_{K_{n}}\alpha_{g}(x_{0})dm(g)\) converges to \(0\) in norm as \(n\to\infty\).
Further, in both the cases \(s(\rho)\) and \(s(x_{0})\) are unique.
We note that the proof of the Neveu decomposition for \((M,G,\alpha)\), when it falls in the second category, uses the first category by lifting to the crossed product of \(M\) with its modular automorphism group, as this crossed product is a semifinite von Neumann algebra.
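To fix ideas, here is a toy numerical illustration of Theorem 1.1 (our own example, not from the paper): take \(M\) to be the diagonal \(2\times 2\) matrices, \(G=\mathbb{Z}_{+}\), and the positive contraction \(\alpha(x_{1},x_{2})=(x_{1},x_{2}/2)\). The state \(\rho(x)=x_{1}\) is invariant with support \(e_{1}=\mathrm{diag}(1,0)\), while \(e_{2}=\mathrm{diag}(0,1)\) is weakly wandering, as the computation below confirms.

```python
import numpy as np

def alpha(x):
    # positive contraction on the diagonal part of M_2: (x1, x2) -> (x1, x2/2)
    return np.array([x[0], 0.5 * x[1]])

def ergodic_average(x, n):
    # A_n(x) = (1/n) * sum_{k=0}^{n-1} alpha^k(x)
    acc, y = np.zeros(2), np.asarray(x, dtype=float)
    for _ in range(n):
        acc += y
        y = alpha(y)
    return acc / n

e2 = np.array([0.0, 1.0])
for n in (10, 100, 1000):
    print(n, np.abs(ergodic_average(e2, n)).max())   # sup-norm of A_n(e2), tends to 0
```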
Now let \(M\) be a finite von Neumann algebra and note that \((M,\mathbb{Z}_{+}^{d},\alpha)\) is determined by \(d\)-commuting positive contractions \(\alpha_{1},\alpha_{2},\cdots,\alpha_{d}\) on \(M\) such that
\[\alpha_{(i_{1},\cdots,i_{d})}(\cdot)=\alpha_{1}^{i_{1}}\alpha_{2}^{i_{2}} \cdots\alpha_{d}^{i_{d}}(\cdot)\text{ for }(i_{1},\cdots,i_{d})\in\mathbb{Z}_{+}^{d}.\]
For the dynamical system \((M,\mathbb{R}_{+}^{d},\alpha)\), we consider the ergodic averages with respect to the set \(Q_{a}:=\{(t_{1},\dots,t_{d})\in\mathbb{R}_{+}^{d}:t_{1}<a,\dots,t_{d}<a\}\) for \(a\in\mathbb{R}_{+}\). Thus, with the preceding notations, consider the following ergodic averages:
\[A_{a}(\cdot):=\begin{cases}\frac{1}{a^{d}}\sum_{0\leq i_{1}<a}\cdots\sum_{0 \leq i_{d}<a}\alpha_{1}^{i_{1}}\cdots\alpha_{d}^{i_{d}}(\cdot)&\text{ when }G=\mathbb{Z}_{+}^{d},\ a\in\mathbb{N},\\ \frac{1}{a^{d}}\int_{Q_{a}}\alpha_{t}(\cdot)dt&\text{ when }G=\mathbb{R}_{+}^{d},\ a \in\mathbb{R}_{+}.\end{cases}\]
Then we have the following stochastic ergodic theorem.
**Theorem 1.2** (Stochastic Ergodic Theorem).: With the preceding notations, let \(e_{1},e_{2}\in M\) be projections obtained in Neveu decomposition 1.1. Then we have the following results.
1. For all \(B\in L^{1}(M_{e_{1}},\tau_{e_{1}})\), there exists \(\bar{B}\in L^{1}(M_{e_{1}},\tau_{e_{1}})\) such that \(A_{n}^{*}(B)\) converges b.a.u to \(\bar{B}\). Moreover, \(A_{n}(B)\) converges in measure to \(\bar{B}\).
2. For all \(B\in L^{1}(M_{e_{2}},\tau_{e_{2}})\), \(A_{n}^{*}(B)\) converges to \(0\) in measure.
Here we refer to §2 for the definitions of b.a.u and stochastic convergence and for basic facts about non-commutative \(L^{p}\)-spaces. Further, we point out that if for all \(g\in G\), \(\alpha_{g}^{*}:L^{1}(M,\tau)\to L^{1}(M,\tau)\) is a Lamperti positive contraction (i.e., \(\alpha_{g}^{*}(A)\alpha_{g}^{*}(B)=0\) whenever \(A,B\in L^{1}(M,\tau)_{+}\) with \(AB=0\)), then we have a stronger version of the stochastic ergodic theorem. Indeed, we have the following.
**Theorem 1.3** (Strong Stochastic Ergodic Theorem).: Let \(M\), \(G\) and \(\alpha\) be as in Theorem 1.2 and assume that for each \(g\in G\), \(\alpha_{g}^{*}\) is a Lamperti operator. Let \(X\in L^{1}(M,\tau)\); then there exists \(\overline{X}\in L^{1}(M,\tau)\) such that \(A_{n}(X)\) converges to \(\overline{X}\) in measure. Further, suppose \(e_{1},e_{2}\in M\) are as in Theorem 1.1; then \(e_{1}\overline{X}e_{1}=\overline{X}\) and \(e_{2}\overline{X}e_{2}=0\).
Now we discuss the arrangement of this article. In §2, we gather all of the material required for this article. We recall, in particular, some fundamental facts concerning states defined on a von Neumann algebra and non-commutative \(L^{1}\)-spaces associated with a faithful normal semifinite trace. In the same section we also discuss stochastic and bilateral almost uniform convergence. §3 and §4 are devoted to the study of the Neveu decomposition. In §5, we discuss some examples. In §6 we discuss and recall pointwise ergodic theorems. In the final section, we combine the conclusions of the previous sections to prove the stochastic ergodic theorem.
## 2. Preliminaries
### von Neumann algebra and linear functional
In this article, \(\mathcal{H}\) will denote a separable Hilbert space and \(M(\subseteq\mathcal{B}(\mathcal{H}))\) will represent a von Neumann algebra. \(M\) possesses several locally convex topologies along with the norm topology induced from \(\mathcal{B}(\mathcal{H})\). In this paper, \(\left\|\cdot\right\|\) will denote the norm on \(M\) as well as on \(\mathcal{B}(\mathcal{H})\). \(M_{+}\) will denote the set of all positive elements of \(M\). For the definition of these topologies and various other facts regarding von Neumann algebras, we refer to [10, chapter 1].
Write \(M_{*}\) for the set of all \(w\)-continuous linear functionals on \(M\). It is a norm closed subspace of \(M^{*}\), the Banach space dual of \(M\). The dual space norm on \(M^{*}\) and its restriction to \(M_{*}\) will be denoted by \(\left\|\cdot\right\|_{1}\) in the sequel. Then \(M\) is isomorphic to \((M_{*})^{*}\) via a canonical bi-linear form defined on \(M\times M_{*}\). For the proof of this fact, we again refer to [10, Lemma 1.9]. Under this identification, \(M_{*}\) is called the predual of \(M\).
A linear functional \(\varphi\) on \(M\) is called positive if \(\varphi(x^{*}x)\geq 0\) for all \(x\in M\) and we refer it by \(\varphi\geq 0\). A positive linear functional \(\varphi\) is called state if \(\varphi(1)=1\). The elements of \(M_{*}\) will be called normal linear functionals. A self-adjoint linear functional is a finite linear combination of positive linear functionals. The set of all normal positive linear functionals will be denoted by \(M_{*+}\) and self-adjoint elements of \(M_{*}\) will be denoted by \(M_{*s}\).
The support of a self-adjoint operator \(x\) in \(M\), henceforth denoted by \(s(x)\), is the smallest projection in \(M\) such that \(s(x)x=x\) or, equivalently, \(xs(x)=x\). On the other hand, for a positive normal linear functional \(\varphi\), the set \(\{e\in M:e\text{ is a projection and }\varphi(e)=0\}\) is increasingly directed and, if the projection \(p\in M\) is its least upper bound, then one can show that \(\varphi(p)=0\). The projection \(1-p\) is called the support of \(\varphi\) and will be denoted by \(s(\varphi)\) in the sequel. A positive linear functional \(\varphi\) is said to be faithful if \(x\in M_{+}\) and \(\varphi(x)=0\) imply \(x=0\). It is also well-known that a positive normal linear functional \(\varphi\) is faithful on \(s(\varphi)Ms(\varphi)\). For more details we refer to [10]. We abbreviate a faithful normal linear functional on \(M\) as an f.n linear functional. The collection of all (resp. non-zero) projections in a von Neumann algebra \(M\) will be denoted by \(\mathcal{P}(M)\) (resp. \(\mathcal{P}_{0}(M)\)). Let \(p,q\in\mathcal{P}(M)\); if \(pq=0\), i.e., \(p\) and \(q\) are orthogonal, then we write \(p\perp q\).
We now briefly recall singular linear functionals on \(M\) and describe the decomposition of a positive linear functional into normal and singular ones. The following materials regarding singular linear functionals are taken from [11, 12].
**Definition 2.1**.: [11] A positive linear functional \(\varphi\) on a von Neumann algebra \(M\) is said to be singular if the only positive normal linear functional \(\psi\) on \(M\) satisfying \((\varphi-\psi)\geq 0\) is \(\psi=0\).
We will frequently use the following characterisation of a singular linear functional on \(M\).
**Theorem 2.2**.: _[_14_, Theorem 3.8, pp. 134]_ _A positive linear functional \(\varphi\) on \(M\) is singular if and only if for any non-zero projection \(e\in M\), there exists a non-zero projection \(f\leq e\) in \(M\) such that \(\varphi(f)=0\)._
**Theorem 2.3**.: _[_14_, Theorem 3]_ _Let \(\varphi\) be a positive linear functional on \(M\). Then there exist unique normal linear functional \(\varphi_{n}\) and a singular linear functional \(\varphi_{s}\) on \(M\) such that_
\[\varphi=\varphi_{n}+\varphi_{s}.\]
### Non-commutative \(L^{p}\)-spaces
Let \(M\subseteq\mathcal{B}(\mathcal{H})\) be a von Neumann algebra. Recall that a closed densely defined operator \(X:\mathcal{D}(X)\subseteq\mathcal{H}\to\mathcal{H}\) is called affiliated to \(M\) if \(X\) commutes with every unitary operator \(u^{\prime}\) in \(M^{\prime}\), where \(M^{\prime}\) is the commutant of \(M\) in \(\mathcal{B}(\mathcal{H})\). From now on if \(X\) is affiliated to \(M\), we will denote it by \(X\eta M\).
Now we give a brief account of non-commutative \(L^{p}\)-spaces for a semifinite von Neumann algebra \(N\) with a faithful, normal, semifinite trace \(\tau\). For more details, we refer to [10]. We abbreviate faithful, normal, and semifinite as f.n.s for future reference. Let \(L^{2}(N,\tau)\) be the GNS Hilbert space. Write \(\mathcal{H}=L^{2}(N,\tau)\) and identify \(N\) as a von Neumann subalgebra of \(\mathcal{B}(\mathcal{H})\).
**Definition 2.4**.: An operator \(X\) (possibly unbounded), defined on \(\mathcal{H}\), is said to be \(\tau\)-measurable if for every \(\epsilon>0\), there is a projection \(e\) in \(N\) such that \(e\mathcal{H}\subseteq\mathcal{D}(X)\) and \(\tau(1-e)<\epsilon\). The set of all closed, densely defined operators affiliated to \(N\) which are \(\tau\)-measurable is denoted by \(L^{0}(N,\tau)\).
**Remark 2.5**.: Let \(X,Y\in L^{0}(N,\tau)\), and then it is easy to see that \(X+Y\) and \(XY\) are densely defined and closable. Moreover, \(L^{0}(N,\tau)\) is a \(*\)-algebra with respect to adjoint operation \(*\), the strong sum \(\overline{X+Y}\), and the strong product \(\overline{XY}\), where \(\overline{(\cdot)}\) denotes the closure of an operator.
For all \(\epsilon,\delta>0\), let us consider the following set,
\[\mathcal{N}(\epsilon,\delta):=\{X\in L^{0}(N,\tau):\exists\text{ projection }e\in N\text{ such that }\ \ \|Xe\|\leq\epsilon\text{ and }\tau(1-e)\leq\delta\}.\]
Note that the collection of sets \(\{X+\mathcal{N}(\epsilon,\delta):X\in L^{0}(N,\tau),\epsilon>0,\delta>0\}\) forms a neighborhood basis on \(L^{0}(N,\tau)\).
**Definition 2.6**.: The topology generated by the neighborhood basis \(\{X+\mathcal{N}(\epsilon,\delta):X\in L^{0}(N,\tau),\epsilon>0,\delta>0\}\) is called the measure topology on \(L^{0}(N,\tau)\).
**Definition 2.7**.: A sequence \(\{X_{n}\}_{n\in\mathbb{N}}\) in \(L^{0}(N,\tau)\) is said to converge in measure (or converge stochastically ) to \(X\in L^{0}(N,\tau)\) if for all \(\epsilon,\delta>0\) there exists a sequence of projections \(\{e_{n}\}_{n\in\mathbb{N}}\) in \(N\) and \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\)
\[\tau(1-e_{n})<\delta\text{ and }\|(X_{n}-X)e_{n}\|<\epsilon.\]
**Remark 2.8**.: We note that with respect to measure topology, \(L^{0}(N,\tau)\) becomes a complete, metrizable, Hausdorff space. Moreover, \(N\) is dense in \(L^{0}(N,\tau)\) in this topology [cf. [10], Theorem 4.12].
The following theorem from [11] provides an equivalent condition for convergence of a sequence in measure. Henceforth, we use the criteria in the following theorem (Theorem 2.9) as the definition of convergence in measure.
**Theorem 2.9** (Theorem 2.2, [11]).: A sequence of operators \(\{X_{n}\}_{n\in\mathbb{N}}\subseteq L^{0}(N,\tau)\) converges in measure to \(X\in L^{0}(N,\tau)\) iff for all \(\epsilon>0\) and \(\delta>0\) there exists \(n_{0}\in\mathbb{N}\) and a sequence of projections \(\{e_{n}\}_{n\in\mathbb{N}}\) in \(N\) such that for all \(n\geq n_{0}\),
\[\tau(1-e_{n})<\delta\text{ and }\left\|e_{n}(X_{n}-X)e_{n}\right\|<\epsilon.\]
We now define another notion of convergence of sequences in \(L^{0}(N,\tau)\), which will be helpful in our context.
**Definition 2.10**.: A sequence of operators \(\{X_{n}\}_{n\in\mathbb{N}}\subseteq L^{0}(N,\tau)\) converges bilaterally almost uniformly (b.a.u) to \(X\in L^{0}(N,\tau)\) if for all \(\epsilon>0\) there exists a projection \(e\in N\) with \(\tau(1-e)<\epsilon\) such that for all \(\delta>0\) there exists \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\),
\[\left\|e(X_{n}-X)e\right\|<\delta.\]
**Remark 2.11**.: We immediately notice that the bilateral almost uniform convergence of a sequence implies the convergence of the sequence in measure.
We put down the following proposition for our future reference. The proof is simple and hence we omit it.
**Proposition 2.12**.: Suppose \(\{X_{n}\}_{n\in\mathbb{N}}\) and \(\{Y_{n}\}_{n\in\mathbb{N}}\) are two sequences in \(L^{0}(N,\tau)\) such that \(\{X_{n}\}_{n\in\mathbb{N}}\) converges in measure (resp. b.a.u) to \(X\) and \(\{Y_{n}\}_{n\in\mathbb{N}}\) converges in measure (resp. b.a.u) to \(Y\). Then, for all \(c\in\mathbb{C}\), \(\{cX_{n}+Y_{n}\}_{n\in\mathbb{N}}\) converges in measure (resp. b.a.u) to \(cX+Y\).
Let \(L^{0}(N,\tau)_{+}\) denote the positive cone of \(L^{0}(N,\tau)\). Extend the trace \(\tau\) on \(N_{+}\) to \(L^{0}(N,\tau)_{+}\) and denote it again by \(\tau\) by slight abuse of notation. For any \(X\in L^{0}(N,\tau)_{+}\), it is defined by
\[\tau(X):=\int_{0}^{\infty}\lambda d\tau(e_{\lambda}),\]
where \(X=\int_{0}^{\infty}\lambda d(e_{\lambda})\) is the spectral decomposition.
Although for this article we need only the description of \(L^{p}(N,\tau)\) for \(p=1\) or \(\infty\), we define it for all \(0<p\leq\infty\) since the definitions are similar for all \(p\).
**Definition 2.13**.: For \(0<p\leq\infty\), the non-commutative \(L^{p}\)-space on \((N,\tau)\) is defined by
\[L^{p}(N,\tau):=\begin{cases}\{X\in L^{0}(N,\tau):\left\|X\right\|_{p}:=\tau( \left|X\right|^{p})^{1/p}<\infty\}&\text{ for }p\neq\infty,\\ (N,\left\|\cdot\right\|)&\text{ for }p=\infty\end{cases}\]
where, \(\left|X\right|=(X^{*}X)^{1/2}\).
The positive cone of \(L^{p}(N,\tau)\) will be denoted by \(L^{p}(N,\tau)_{+}\). Now we list a few essential properties of \(L^{p}(N,\tau)\) without proofs. We will use these properties repeatedly in the sequel. For the proofs, we refer to [10].
**Theorem 2.14**.:
1. For all \(1\leq p\leq\infty\), \(L^{p}(N,\tau)\) is a Banach space with respect to the norm \(\left\|\cdot\right\|_{p}\). Moreover, for all \(1<p<\infty\), \(L^{p}(N,\tau)\) is reflexive.
2. Let \(1<p<\infty\) and \(1/p+1/q=1\). Then the map \(\Psi:L^{q}(N,\tau)\to(L^{p}(N,\tau))^{*}\) defined by \(\Psi(b)(a)=\tau(ab)\) for \(b\in L^{q}(N,\tau),a\in L^{p}(N,\tau)\) is a surjective linear isometry.
3. The map \(\Psi:L^{1}(N,\tau)\to N_{*}\) defined by \(\Psi(X)(a)=\tau(Xa)\) for \(X\in L^{1}(N,\tau),a\in N\) is a surjective linear isometry which preserves the positive cones.
The following discussion will also play an important role in the sequel. Let \(N\) be a von Neumann algebra acting on a Hilbert space \(\mathcal{H}\) and equipped with a faithful, normal, semifinite trace \(\tau\), and let \(e\) be a projection in \(N\). Further, let \(N_{e}:=\{x_{e}:=ex_{|e\mathcal{H}}:x\in N\}\) denote the reduced von Neumann algebra. Define the reduced trace on \(N_{e}\) as
\[\tau_{e}(x_{e})=\tau(exe),\text{ for all }x\in N.\]
**Remark 2.15**.:
1. Since \(\tau\) is a faithful, normal, semifinite trace, \(\tau_{e}\) also has similar properties.
2. Let \(1\leq p\leq\infty\) and \(X\in L^{p}(N_{e},\tau_{e})\). Define \(\tilde{X}\) on \(\mathcal{H}\) by \[\tilde{X}\xi=Xe\xi,\text{ for all }\xi\in\mathcal{D}(\tilde{X}):=\mathcal{D}(X) \oplus(1-e)\mathcal{H}.\] Then the mapping \(L^{p}(N_{e},\tau_{e})\ni X\mapsto\tilde{X}\in eL^{p}(N,\tau)e\) defines an isomorphism as Banach space for \(1\leq p\leq\infty\). From now onwards we identify \(L^{p}(N_{e},\tau_{e})\) with \(eL^{p}(N,\tau)e\).
3. It follows from [10, Theorem 4.12 and Lemma 5.3] that \(eL^{p}(N,\tau)e\subseteq L^{p}(N,\tau)\).
### Actions by amenable semigroups
Throughout this article, unless otherwise mentioned, \(G\) will denote a locally compact second countable Hausdorff (LCSH for short) semigroup and \(m\) a \(\sigma\)-finite measure on \(G\) which is both left and right invariant (i.e. \(m(uB)=m(B)\) and \(m(Bu)=m(B)\) for all \(u\in G\)). In the sequel \(\mathbb{K}\) will always denote either \(\mathbb{Z}_{+}\) or \(\mathbb{R}_{+}\). We consider a collection \(\{K_{l}\}_{l\in\mathbb{K}}\) of measurable subsets of \(G\) having the following properties.
1. \(0<m(K_{l})<\infty\) for all \(l\in\mathbb{K}\).
2. \(\lim_{l\to\infty}\frac{m(K_{l}\Delta K_{l}g)}{m(K_{l})}=0\) and \(\lim_{l\to\infty}\frac{m(K_{l}\Delta gK_{l})}{m(K_{l})}=0\) for all \(g\in G\).
Note that such a net is called a Folner net and in that case \(G\) is referred to as an amenable semigroup. Thus, when we say \(G\) is an amenable semigroup, we mean that \(G\) is an LCSH semigroup having a Folner net with respect to a fixed invariant measure. In this article, \(G\) will always mean an LCSH amenable semigroup unless otherwise mentioned.
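For a concrete instance (writing the semigroup operation additively), take \(G=\mathbb{R}_{+}\) with Lebesgue measure and \(K_{l}=[0,l]\). Then for a fixed \(g\in\mathbb{R}_{+}\) and \(l>g\),

\[\frac{m(K_{l}\Delta K_{l}g)}{m(K_{l})}=\frac{m([0,l]\,\Delta\,[g,l+g])}{l}=\frac{2g}{l}\xrightarrow{l\to\infty}0,\]

and the same computation applies to \(gK_{l}\); hence \(\{[0,l]\}_{l\in\mathbb{R}_{+}}\) is a Folner net and \(\mathbb{R}_{+}\) is amenable in the above sense.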
**Remark 2.16**.: We would like to point out that when \(G\) is a group, we do not need to assume the Haar measure to be left-right invariant. Only when \(G\) is a proper semigroup (i.e., not closed under inverses) do we require, for certain results in the sequel, that the Haar measure on \(G\) is left-right invariant.
Now we quickly recall ordered Banach space. A real Banach space \(E\) paired with a closed convex subset \(K\) satisfying \(K\cap-K=\{0\}\) and \(\lambda K\subseteq K\) for \(\lambda\geq 0\) induces a partial order on \(E\) given by \(x\leq y\) if and only if \(y-x\in K\) where \(x,y\in E\). With such a partial order, \(E\) will be referred to as an ordered Banach space. In our context, it is easy to verify that \(M,M^{*},M_{*}\), \(L^{p}(M,\tau)\) for \(1\leq p\leq\infty\) become ordered Banach spaces with respect to the natural order. Let us now define an action of \(G\) on ordered Banach spaces.
**Definition 2.17**.: Let \(E\) be an Banach space. A map \(\Lambda\) defined by
\[G\ni g\xrightarrow{\Lambda}\Lambda_{g}\in\mathcal{B}(E)\]
is called an action if \(\Lambda_{g}\circ\Lambda_{h}=\Lambda_{gh}\) for all \(g,h\in G\). It is called anti-action if \(\Lambda_{g}\circ\Lambda_{h}=\Lambda_{hg}\) for all \(g,h\in G\). In this article, we consider both actions and anti-actions \(\Lambda=\{\Lambda_{g}\}_{g\in G}\) which satisfy the following conditions.
1. For all \(x\in E\), the map \(g\to\Lambda_{g}(x)\) from \(G\) to \(E\) is continuous. Here we take \(w^{*}\)-topology when \(E=M\) and norm topology otherwise.
2. \(\sup_{g\in G}\|\Lambda_{g}\|\leq 1\).
3. Suppose \(x\in E\) with \(x\geq 0\), then \(\Lambda_{g}(x)\geq 0\) for all \(g\in G\).
4. When \(E=M\), we assume that \(\Lambda_{g}(1)\leq 1\) for all \(g\in G\).
We refer to the triple \((E,G,\Lambda)\) as a non-commutative dynamical system.
Let \((M,G,\Lambda)\) be a non-commutative dynamical system and \(\varphi\in M^{*}\). Then \(\varphi\) is called \(G\)-invariant if \(\varphi(\Lambda_{g}(x))=\varphi(x)\) for all \(x\in M\) and \(g\in G\).
Now let \((E,G,\Lambda)\) be a non-commutative dynamical system and \(\{K_{n}\}_{n\in\mathbb{N}}\) be a _Folner sequence_ in \(G\). For all \(x\in E\), we consider the following average
\[A_{n}(x):=\frac{1}{m(K_{n})}\int_{K_{n}}\Lambda_{g}(x)dm(g). \tag{2.1}\]
We note that for \(x\in E\), the map \(G\ni g\to\Lambda_{g}(x)\in E\) is continuous in norm of \(E\). Further, when \(E=M\), then it is \(w^{*}\)-continuous. Therefore, in both cases the integration in eq. 2.1 is well defined. In addition by [13, Proposition 1.2, pp-238], for all \(n\in\mathbb{N}\), \(\varphi\in M_{*}\) and \(x\in M\) the following holds
\[\varphi(A_{n}(x))=\frac{1}{m(K_{n})}\int_{K_{n}}\varphi(\Lambda_{g}(x))dm(g).\]
Now for all \(g\in G\), consider
\[\Lambda_{g}^{*}:M^{*}\to M^{*}\text{ by }\Lambda_{g}^{*}(\varphi)(x)=\varphi( \Lambda_{g}(x))\text{ for all }\varphi\in M^{*},x\in M. \tag{2.2}\]
For all \(n\in\mathbb{N}\), we also consider the following average defined by
\[A_{n}^{*}:M_{*}\to M_{*};\ \varphi\mapsto A_{n}^{*}(\varphi)(\cdot):= \varphi(A_{n}(\cdot))=\frac{1}{m(K_{n})}\int_{K_{n}}\Lambda_{g}^{*}(\varphi)( \cdot)dm(g). \tag{2.3}\]
We note that for all \(n\in\mathbb{N}\), \(\|A_{n}\|\leq\sup_{g\in G}\|\Lambda_{g}\|\) and \(\|A_{n}^{*}\|\leq\sup_{g\in G}\|\Lambda_{g}\|\). These will be called averaging operators for future reference. For notational convenience, we make no distinction between the two averaging operators \(A_{n}\) and \(A_{n}^{*}\) unless it is not clear from the context.
The following discussion regarding dual and predual maps will be used later. Let \(M\) be a semifinite von Neumann algebra with a f.n semifinite trace \(\tau\); then we identify \(M_{*}\) with \(L^{1}(M,\tau)\). Suppose \(T:M\to M\) is a contractive positive normal map and let \(T^{*}\) and \(T_{*}\) be the dual and predual operators of \(T\) on \(M^{*}\) and \(M_{*}\), respectively. Then note that \(T^{*}|_{M_{*}}=T_{*}\).
We state the following version of the mean ergodic theorem; the proof may be folklore in the literature, but for completeness we include one. This version of the mean ergodic theorem will be used in the sequel; indeed, it will be used to prove the existence of weakly wandering operators.
**Theorem 2.18** (Mean Ergodic Theorem).: Let \((E,\|\cdot\|)\) be a Banach space and \((E,G,\alpha)\) be a non-commutative dynamical system. Consider the associated ergodic averages
\[A_{n}(\xi):=\frac{1}{m(K_{n})}\int_{K_{n}}\alpha_{g}(\xi)dm(g),\ \xi\in E.\]
Suppose \(E^{G}=\{x\in E:\alpha_{g}(x)=x\) for all \(g\in G\}\). Then the following are equivalent.
1. For all \(\xi\in E\) there exists \(\overline{\xi}\in E^{G}\) such that \(\lim_{n\to\infty}A_{n}\xi=\overline{\xi}\).
2. For all \(\xi\in E\) there exists \(\overline{\xi}\in E^{G}\) and a subsequence \((n_{k})\) such that weak-\(\lim_{k\to\infty}A_{n_{k}}\xi=\overline{\xi}\).
3. For all \(\xi\in E\) there exists \(\overline{\xi}\in E^{G}\cap\overline{co}^{weak}\{\alpha_{g}\xi:g\in G\}\).
4. For all \(\xi\in E\) there exists \(\overline{\xi}\in E^{G}\cap\overline{co}^{\|\cdot\|}\{\alpha_{g}\xi:g\in G\}\).
Proof.: The proof of \((1)\Rightarrow(2)\) is clear. The equivalence \((3)\Leftrightarrow(4)\) follows from Mazur's theorem, which states that any convex subset of \(E\) has the same closure with respect to the norm and weak topologies.
\((2)\Rightarrow(3):\) Let \(\xi\in E\). By hypothesis, there is \(\overline{\xi}\in E^{G}\) and a subsequence \((n_{k})\) such that weak-\(\lim_{k\to\infty}A_{n_{k}}\xi=\overline{\xi}\). We claim that \(\overline{\xi}\in\overline{co}^{weak}\{\alpha_{g}\xi:g\in G\}\). For this it is enough to show that \(\overline{\xi}\in\overline{co}^{\|\cdot\|}\{\alpha_{g}\xi:g\in G\}\). Suppose it is not true, then by Hahn- Banach separation theorem there exists \(\Lambda\in E^{*}\) and \(a>0\) such that
\[Re\Lambda(\overline{\xi})\geq a+Re\Lambda(f)\ \text{for all}\ f\in\overline{co}^{\| \cdot\|}\{\alpha_{g}\xi:g\in G\}.\]
In particular,
\[Re\Lambda(\overline{\xi})\geq a+Re\Lambda(\alpha_{g}(\xi))\ \text{for all}\ g\in G.\]
Therefore,
\[a+Re\Lambda(A_{n_{k}}(\xi)) =a+\frac{1}{m(K_{n_{k}})}\int_{K_{n_{k}}}Re\Lambda(\alpha_{g}( \xi))dm(g)\] \[\leq\frac{1}{m(K_{n_{k}})}\int_{K_{n_{k}}}Re\Lambda(\overline{ \xi})dm(g)\] \[=Re\Lambda(\overline{\xi})\ \text{for all}\ k\in\mathbb{N}.\]
Now passing limit as \(k\to\infty\), we obtain \(a+Re\Lambda(\overline{\xi})\leq Re\Lambda(\overline{\xi})\), which is a contradiction.
\((4)\Rightarrow(1):\) Let \(\xi\in E\) and \(\epsilon>0\). We first find a convex combination \(\xi^{\prime}:=\sum_{1}^{m}\lambda_{i}\alpha_{g_{i}}(\xi)\), where \(\sum_{1}^{m}\lambda_{i}=1\) such that \(\left\|\overline{\xi}-\xi^{\prime}\right\|<\epsilon\). Also for all \(n\in\mathbb{N}\), note that
\[A_{n}(\xi)-A_{n}(\xi^{\prime}) =\sum_{1}^{m}\lambda_{i}(A_{n}(\xi)-A_{n}(\alpha_{g_{i}}(\xi)))\] \[=\sum_{1}^{m}\lambda_{i}\frac{1}{m(K_{n})}\Big{[}\int_{K_{n}} \alpha_{h}(\xi)dm(h)-\int_{K_{n}}\alpha_{hg_{i}}(\xi)dm(h)\Big{]}\] \[=\sum_{1}^{m}\lambda_{i}\frac{1}{m(K_{n})}\Big{[}\int_{K_{n}} \alpha_{h}(\xi)dm(h)-\int_{K_{n}g_{i}}\alpha_{h}(\xi)dm(h)\Big{]}.\]
Therefore, for all \(n\in\mathbb{N}\) we have
\[\left\|A_{n}(\xi)-A_{n}(\xi^{\prime})\right\|\leq C\left\|\xi\right\|\sum_{1}^{m }\lambda_{i}\frac{m(K_{n}g_{i}\Delta K_{n})}{m(K_{n})}.\]
Now by the Folner condition, we choose \(n_{0}\in\mathbb{N}\) such that \(\left\|A_{n}(\xi)-A_{n}(\xi^{\prime})\right\|\leq C\left\|\xi\right\|\epsilon\) for all \(n\geq n_{0}\). Now since \(A_{n}(\overline{\xi})=\overline{\xi}\) for all \(n\in\mathbb{N}\), we have
\[\left\|A_{n}(\xi)-\overline{\xi}\right\| \leq\left\|A_{n}(\xi)-A_{n}(\xi^{\prime})\right\|+\left\|A_{n}( \xi^{\prime}-\overline{\xi})\right\|\] \[\leq C\left\|\xi\right\|\epsilon+\left\|A_{n}\right\|\left\|\xi^ {\prime}-\overline{\xi}\right\|\] \[\leq C\epsilon(\left\|\xi\right\|+1)\text{ for all }n\geq n_{0}.\]
This completes the proof.
## 3. Neveu Decomposition
Let \(G\) be an amenable semigroup and \(M\) be a von Neumann algebra. Suppose \((M,G,\alpha)\) is a non-commutative dynamical system. In the beginning of this section we discuss the existence of invariant states for \((M,G,\alpha)\). In the later part of this section we assume that \(M\) is a semifinite von Neumann algebra with a f.n.s trace \(\tau\), study the existence of weakly wandering operators for \((M,G,\alpha)\), and obtain the Neveu decomposition for \((M,G,\alpha)\); this completely settles the problem of the Neveu decomposition for the action of an amenable semigroup on semifinite von Neumann algebras. We begin with the following proposition.
**Proposition 3.1**.: Let \(M\) be a von Neumann algebra equipped with a f.n state \(\varphi\) and \((M,G,\alpha)\) be a non-commutative dynamical system. Further, for \(e\in\mathcal{P}_{0}(M)\), assume that the following holds.
\[\inf_{n\in\mathbb{N}}A_{n}(\varphi)(p)>0,\ \text{ for all }p\in\mathcal{P}_{0}(M)\text{ with }p\leq e. \tag{3.1}\]
Then, there exists an invariant normal state \(\nu_{\varphi}\) with \(s(\nu_{\varphi})\geq e\).
Proof.: First note that since \(\left\|A_{n}(\varphi)\right\|\leq\left\|\varphi\right\|\) for all \(n\in\mathbb{N}\), by the Banach-Alaoglu theorem applied to \(M^{*}\), we obtain a subsequence \(\{A_{n_{k}}(\varphi)\}_{k\in\mathbb{N}}\subseteq M_{1}^{*}\) which converges pointwise. We define
\[\overline{\varphi}(x):=\lim_{k\to\infty}A_{n_{k}}(\varphi)(x),x\in M.\]
We claim that \(\overline{\varphi}\circ\alpha_{h}=\overline{\varphi}\) for all \(h\in G\). Indeed, for all \(h\in G\), \(x\in M\) and \(k\in\mathbb{N}\) we have
\[\left|\overline{\varphi}(\alpha_{h}(x))-\overline{\varphi}(x)\right|\leq \left|\overline{\varphi}(\alpha_{h}(x))-A_{n_{k}}(\varphi)(\alpha _{h}(x))\right|+\] \[\left|A_{n_{k}}(\varphi)(\alpha_{h}(x))-A_{n_{k}}(\varphi)(x) \right|+\left|A_{n_{k}}(\varphi)(x)-\overline{\varphi}(x)\right|\] \[\leq \left|\overline{\varphi}(\alpha_{h}(x))-A_{n_{k}}(\varphi)( \alpha_{h}(x))\right|+\] \[\frac{m(K_{n_{k}}h\Delta K_{n_{k}})}{m(K_{n_{k}})}+\left|A_{n_{k} }(\varphi)(x)-\overline{\varphi}(x)\right|.\]
Hence by Folner condition and the definition of \(\overline{\varphi}\), the right hand side of the above equation converges to \(0\). Therefore, we conclude that \(\alpha_{h}^{*}(\overline{\varphi})=\overline{\varphi}\) for all \(h\in G\). It is clear that since \(\varphi\) is a state, \(\overline{\varphi}\) is also a positive linear functional and \(\overline{\varphi}(1)=1\). Let
\[\overline{\varphi}=\overline{\varphi}_{n}+\overline{\varphi}_{s}\]
be the decomposition of \(\overline{\varphi}\) in accordance with [14, Theorem 3], where \(\overline{\varphi}_{n}\) is a normal linear functional and \(\overline{\varphi}_{s}\) is a positive linear functional which is singular. Therefore, we have, \(\overline{\varphi}\circ\alpha_{g}=\overline{\varphi}_{n}\circ\alpha_{g}+ \overline{\varphi}_{s}\circ\alpha_{g}\) for all \(g\in G\).
Now fix a \(g\in G\) and further decomposing \(\overline{\varphi}_{s}\circ\alpha_{g}\) in normal and singular component and we obtain
\[\overline{\varphi}_{n}+\overline{\varphi}_{s}=\overline{\varphi}=\overline{ \varphi}\circ\alpha_{g}=\overline{\varphi}_{n}\circ\alpha_{g}+(\overline{ \varphi}_{s}\circ\alpha_{g})_{n}+(\overline{\varphi}_{s}\circ\alpha_{g})_{s}, \tag{3.2}\]
which implies
\[\overline{\varphi}_{n}-\overline{\varphi}_{n}\circ\alpha_{g}-(\overline{ \varphi}_{s}\circ\alpha_{g})_{n}=(\overline{\varphi}_{s}\circ\alpha_{g})_{s}- \overline{\varphi}_{s}=0.\]
Indeed, the left-hand side is a normal linear functional while \((\overline{\varphi}_{s}\circ\alpha_{g})_{s}-\overline{\varphi}_{s}\) is a difference of singular ones, so both expressions must vanish. Hence, we first observe that \((\overline{\varphi}_{s}\circ\alpha_{g})_{s}=\overline{\varphi}_{s}\) and \(\overline{\varphi}_{n}-\overline{\varphi}_{n}\circ\alpha_{g}-(\overline{\varphi}_{s}\circ\alpha_{g})_{n}=0\). Now we wish to show that
\[(\overline{\varphi}_{s}\circ\alpha_{g})_{n}=0.\]
Indeed, observe the following
\[\overline{\varphi}_{s}=(\overline{\varphi}_{s}\circ\alpha_{g})_{s}\] \[\Longrightarrow \overline{\varphi}_{s}+(\overline{\varphi}_{s}\circ\alpha_{g})_{n }=(\overline{\varphi}_{s}\circ\alpha_{g})_{s}+(\overline{\varphi}_{s}\circ \alpha_{g})_{n}=\overline{\varphi}_{s}\circ\alpha_{g}\] \[\Longrightarrow \overline{\varphi}_{s}(1)+(\overline{\varphi}_{s}\circ\alpha_{g} )_{n}(1)=\overline{\varphi}_{s}\circ\alpha_{g}(1)\leq\overline{\varphi}_{s}(1),\text{ as }\alpha_{g}(1)\leq 1\] \[\Longrightarrow (\overline{\varphi}_{s}\circ\alpha_{g})_{n}(1)\leq 0.\]
As \((\overline{\varphi}_{s}\circ\alpha_{g})_{n}\) is positive and \((\overline{\varphi}_{s}\circ\alpha_{g})_{n}(1)\leq 0\), so, \((\overline{\varphi}_{s}\circ\alpha_{g})_{n}=0\). Thus we have
\[\overline{\varphi}_{n}=\overline{\varphi}_{n}\circ\alpha_{g}\text{ for all }g\in G.\]
Therefore, \(\overline{\varphi}_{n}\) is a normal linear functional which is \(G\)-invariant. We define, \(\nu_{\varphi}:=\frac{1}{\overline{\varphi}_{n}(1)}\overline{\varphi}_{n}\).
To show \(s(\nu_{\varphi})\geq e\), we first let \(p\) be any non-zero subprojection of \(e\) in \(M\). Then by [14, Theorem 3.8, pp. 134], observe that there exists a non-zero subprojection \(p^{\prime}\) of \(p\) in \(M\) such that \(\overline{\varphi}_{s}(p^{\prime})=0\). Therefore, we have
\[\nu_{\varphi}(p)\geq\nu_{\varphi}(p^{\prime})=\frac{1}{\overline{\varphi}_{n} (1)}\overline{\varphi}(p^{\prime})=\frac{1}{\overline{\varphi}_{n}(1)}\lim_{k \rightarrow\infty}A_{n_{k}}(\varphi)(p^{\prime})\geq\frac{1}{\overline{\varphi }_{n}(1)}\inf_{n\in\mathbb{N}}A_{n}(\varphi)(p^{\prime})>0.\]
This completes the proof.
The following theorem characterises the maximal invariant state in terms of its support projection satisfying the condition in eq. 3.1.
**Theorem 3.2**.: Let \((M,G,\alpha)\) be a non-commutative dynamical system with a f.n state \(\varphi\in M_{*}\). Then for \(e\in\mathcal{P}_{0}(M)\), the following statements are equivalent.
1. There exists an invariant normal state \(\rho\) on \(M\) with \(s(\rho)=e\) such that, if \(\nu\) is any invariant normal state on \(M\), then \(s(\nu)\leq e\).
2. \(e\) is the maximal projection satisfying the following condition: \[\inf_{n\in\mathbb{N}}A_{n}(\varphi)(p)>0,\text{\ \ for all }p\in\mathcal{P}_{0}(M)\text{ with }p\leq e.\] (3.3)
Before proving this theorem we recall the following proposition without proof. The proof of the proposition is straightforward, but the reader may also look at [1, Proposition 3.4].
**Proposition 3.3**.: Let \((M,G,\alpha)\) be a non-commutative dynamical system with a f.n state \(\varphi\in M_{*}\). If there exists a \(\rho\in M_{*+}\) such that \(\alpha_{g}^{*}(\rho)=\rho\), then for any \(x\in M_{+}\) with \(\rho(x)\neq 0\), we have \(\inf_{g\in G}\alpha_{g}^{*}(\varphi)(x)>0\).
Proof of Theorem 3.2.: _(1) \(\Rightarrow\) (2):_ Since \(e\) is the support of the invariant state \(\rho\), we have \(\rho(p)>0\) for every projection \(0\neq p\leq e\) in \(M\). Therefore, by virtue of Proposition 3.3 we conclude that
\[\inf_{n\in\mathbb{N}}A_{n}(\varphi)(p)>0,\text{ for all }p\in\mathcal{P}_{0}(M) \text{ with }p\leq e.\]
Now suppose that \(e\) is not the maximal projection satisfying the condition in eq. 3.3. Then there exists a non-zero \(f\in\mathcal{P}(M)\) which is not a sub-projection of \(e\) but satisfies the condition in eq. 3.3. Therefore, by Proposition 3.1 there exists an invariant normal state \(\nu_{\varphi}\) on \(M\) such that \(s(\nu_{\varphi})\geq f\). This contradicts the hypothesis.
_(2) \(\Rightarrow\) (1):_ Since the non-zero projection \(e\) satisfies the condition in eq. 3.3, by virtue of Proposition 3.1, there exists an invariant normal state \(\nu_{\varphi}\) on \(M\) such that \(s(\nu_{\varphi})\geq e\). Again by applying Proposition 3.3, one can show that \(s(\nu_{\varphi})\) satisfies the condition in eq. 3.3. Since \(e\) is the maximal projection satisfying the condition in eq. 3.3, we obtain \(s(\nu_{\varphi})\leq e\). Similarly it will also follow that \(s(\nu)\leq e\) for any invariant normal state \(\nu\) on \(M\).
**Definition 3.4**.: Let \((M,G,\alpha)\) be a covariant system and \(x\) be a positive operator in \(M\). Then \(x\) is said to be a weakly wandering operator if
\[\lim_{n\to\infty}\|A_{n}x\|=0.\]
Suppose \((M,G,\alpha)\) is a non-commutative dynamical system and \(e\in\mathcal{P}_{0}(M)\) is such that if \(\varphi\) is any \(G\)-invariant state then \(s(\varphi)\leq e\). Then we wish to show the existence of a weakly wandering projection \(q\in\mathcal{P}_{0}(M)\) with \(q\leq 1-e\), i.e., \(A_{n}(q)\xrightarrow{n\to\infty}0\) in \(\|\cdot\|\). For this purpose we assume that \(M\) is a semifinite von Neumann algebra with a f.n semifinite trace \(\tau\). Further, for \(p\in\mathcal{P}_{0}(M)\), we denote the reduced von Neumann algebra \(pMp\) by \(M_{p}\), i.e., \(M_{p}=pMp\). We would like to emphasise that the following result is the key to finding weakly wandering operators.
**Lemma 3.5**.: Let \((M,\tau)\) be a semifinite von Neumann algebra and \(r\in\mathcal{P}_{0}(M)\). If \(K\) is a weak*-compact subset of \((M_{1}^{*})_{+}\) such that \(\mu_{|M_{r}}\) is singular for all \(\mu\in K\), then for all \(f\in\mathcal{P}_{0}(M_{r})\) there exists a \(p\in\mathcal{P}_{0}(M)\) with \(p\leq f\) such that \(\mu(p)=0\) for all \(\mu\in K\).
Proof.: Let \(f\in\mathcal{P}_{0}(M_{r})\). Then consider a \(p\in\mathcal{P}_{0}(M)\) with \(p\leq f\) such that \(\tau(p)<\infty\). Define a faithful, normal state \(\tau_{p}\) on \(M\) by \(\tau_{p}(x)=\frac{\tau(pxp)}{\tau(p)}\). Note that \(\tau_{p}(p)=\tau_{p}(1)=1\). Let \(0<\epsilon<\frac{1}{2}\) and define \(\Phi_{\epsilon}:=\{0\leq x\leq p:\tau_{p}(x)\geq 1-\epsilon\}\). If \(\mu(p)=0\) for all \(\mu\in K\), then we are done.
Let \(\mu\in K\). Since \(\mu\) is singular there exists a sequence of projections \(\{p_{n}\}\) in \(M\) such that \(p_{n}\uparrow p\) and \(\mu(p_{n})=0\). Choose \(n\in\mathbb{N}\) such that \(\tau_{p}(p_{n})>1-\epsilon\). Hence, we conclude that
\[\text{ for all }\mu\in K\text{ there exists }x_{\mu}\in\Phi_{\epsilon}\text{ such that }\mu(x_{\mu})<\epsilon/2. \tag{3.4}\]
Let \(C(K)\) be the space of all scalar-valued continuous functions on \(K\), where \(K\) is equipped with the weak* topology induced from \(M^{*}\), and define a linear map \(h:\Phi_{\epsilon}\to C(K)\) by \(h(x):=h_{x}\), where \(h_{x}(\mu)=\mu(x)\) for all \(x\in\Phi_{\epsilon}\) and \(\mu\in K\). Furthermore, consider the following set
\[\Psi:=\{f\in C(K):f<\epsilon\}.\]
We first claim that there exists \(x_{\epsilon}\in\Phi_{\epsilon}\) such that \(\mu(x_{\epsilon})<\epsilon\) for all \(\mu\in K\). For that it is enough to show that \(\Psi\cap h(\Phi_{\epsilon})\neq\emptyset\).
If this intersection is empty, then we can invoke the Hahn-Banach separation theorem to obtain a bounded real linear functional \(\Lambda\) on \(C(K)\) and \(a\in\mathbb{R}\) such that
\[\Lambda(h_{x})\geq a>\Lambda(f)\text{ for all }f\in\Psi\text{ and }x\in\Phi_{\epsilon}.\]
Now by the Riesz representation theorem we obtain a unique regular signed Borel measure \(\lambda\) on \(K\) such that
\[\Lambda(f)=\int_{K}fd\lambda\text{ for all }f\in C(K).\]
We claim that \(\Lambda\) is a positive linear functional. Suppose \(f\in C(K)\) is such that \(0<f\leq\epsilon\) but \(\Lambda(f)<0\). Then \((-n)f\in\Psi\) for all \(n\in\mathbb{N}\). Consequently, \(-n\Lambda(f)<a\) for all \(n\in\mathbb{N}\), which is a contradiction. Hence \(\lambda\) is a positive measure and may be normalised to a probability measure. Also, since the constant function \(\frac{\epsilon}{2}\in\Psi\), we get
\[\Lambda(h_{x})=\int_{K}\mu(x)d\lambda(\mu)>\epsilon/2\text{ for all }x\in\Phi_{ \epsilon}. \tag{3.5}\]
Now observe that \(K\subset M^{*}\) and consider the barycenter \(\nu\) of \(\lambda\) in \(K\), which is defined by the integral \(\nu:=\int_{K}\mu d\lambda(\mu)\) in the sense:
\[\Gamma(\nu)=\int_{K}\Gamma(\mu)d\lambda(\mu),\ \Gamma\in(M^{*})^{*}.\]
Since \(K\) is a compact, convex set, it follows from [12, Theorem 3.27] that the integral \(\nu:=\int_{K}\mu d\lambda(\mu)\) exists and moreover, \(\nu\in K\). Therefore, by eq. 3.5 we obtain
\[\nu(x)=\Lambda(h_{x})>\epsilon/2\text{ for all }x\in\Phi_{\epsilon},\]
which is a contradiction to eq. 3.4. Hence, \(\Psi\cap h(\Phi_{\epsilon})\neq\emptyset\). Thus, there exists an \(x_{\epsilon}\in\Phi_{\epsilon}\) such that
1. \(\tau_{p}(x_{\epsilon})\geq 1-\epsilon\) and
2. \(\mu(x_{\epsilon})<\epsilon\) for all \(\mu\in K\).
Now consider \(q_{\epsilon}=\chi_{[\frac{1}{2},1]}(x_{\epsilon})\); since \(x_{\epsilon}\leq p\), we have \(q_{\epsilon}\in\mathcal{P}(pMp)\). We further note that
1. \(q_{\epsilon}\leq 2x_{\epsilon}\), which implies \(\mu(q_{\epsilon})\leq 2\mu(x_{\epsilon})<2\epsilon\) and
2. \(p-q_{\epsilon}\leq 2(p-x_{\epsilon})\), which implies \(\tau_{p}(p-q_{\epsilon})\leq 2\tau_{p}(p-x_{\epsilon})\leq 2\epsilon\).
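Both inequalities above are routine consequences of the functional calculus for \(x_{\epsilon}\) in \(pMp\); we record the elementary scalar estimates behind them. Since \(0\leq x_{\epsilon}\leq p\), the spectrum of \(x_{\epsilon}\) (computed in \(pMp\), with unit \(p\)) lies in \([0,1]\), and

\[\chi_{[\frac{1}{2},1]}(t)\leq 2t\quad\text{and}\quad\chi_{[0,\frac{1}{2})}(t)\leq 2(1-t)\qquad\text{for all }t\in[0,1];\]

applying these to \(x_{\epsilon}\) gives \(q_{\epsilon}\leq 2x_{\epsilon}\) and \(p-q_{\epsilon}\leq 2(p-x_{\epsilon})\).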
Thus, applying the same construction with \(\frac{\epsilon}{2^{n}}\) in place of \(\epsilon\), we find \(q_{\frac{\epsilon}{2^{n}}}=:q_{n}\in\mathcal{P}_{0}(pMp)\) such that
1. \(\mu(q_{n})<\frac{2\epsilon}{2^{n}}=\frac{\epsilon}{2^{n-1}}\) for all \(\mu\in K\) and
2. \(\tau_{p}(p-q_{n})\leq\frac{\epsilon}{2^{n-1}}\).
Now consider the projection \(q:=\wedge_{n\geq 1}q_{n}\). Observe that,
\[\tau_{p}(p-q) \leq\sum_{n\geq 1}\tau_{p}(p-q_{n})=\sum_{n=1}^{\infty}\frac{\epsilon}{2^{n-1}}=2\epsilon<1,\text{ as } \epsilon<\frac{1}{2}.\]
This shows that \(q\neq 0\). Further, note that

\[\mu(q)\leq\mu(q_{n})<\frac{\epsilon}{2^{n-1}}\text{ for all }n\in\mathbb{N}\text{ and all }\mu\in K.\]
Hence, \(\mu(q)=0\) for all \(\mu\in K\).
**Proposition 3.6**.: Let \(K\) be a subset of \((M_{1}^{*})_{+}\) consisting of singular positive linear functionals on a von Neumann algebra \(M\). Then the following are equivalent.
1. \(K\) is weak* closed.
2. For every \(p\in\mathcal{P}_{0}(M)\), there exists \(q\in\mathcal{P}_{0}(M)\) with \(q\leq p\) such that \(\mu(q)=0\) for all \(\mu\in K\).
Proof.: \((1)\Rightarrow(2):\) Since \(K\) is weak* closed, it is weak*-compact. Hence, by Lemma 3.5 (applied with \(r=1\)), for every \(p\in\mathcal{P}_{0}(M)\) one can find a projection \(q\leq p\) in \(M\) such that \(\mu(q)=0\) for all \(\mu\in K\).
\((2)\Rightarrow(1):\) Let \(\{\mu_{n}\}\) be a sequence in \(K\) such that \(\mu_{n}\xrightarrow{w*}\mu\) for some \(\mu\in(M_{1}^{*})_{+}\). Now consider \(p\in\mathcal{P}_{0}(M)\). Then by hypothesis there exists \(p_{m}\in\mathcal{P}_{0}(M)\) such that \(p_{m}\leq p\) and \(\nu(p_{m})=0\) for all \(\nu\in K\). In particular, \(\mu_{n}(p_{m})=0\) for all \(n\in\mathbb{N}\). Therefore, \(\mu(p_{m})=0\), which implies that \(\mu\) is singular.
**Lemma 3.7**.: Let \((M,G,\alpha)\) be a non-commutative dynamical system with a f.n state \(\varphi\in M_{*}\). Suppose \(e\in\mathcal{P}_{0}(M)\) such that if \(\nu\) is any \(G\)-invariant normal state on \(M\), then \(s(\nu)\leq e\). Then the set
\[K_{e}:=\{\mu\in(M_{1}^{*})_{+}:\mu\text{ is $G$-invariant and $\mu_{|M_{1-e}}$ is singular}\}\]
is weak*-closed.
Proof.: Clearly the set \(K_{e}\) is non-empty. Let \(\{\mu_{m}\}\) be a sequence in \(K_{e}\) such that \(\mu_{m}\xrightarrow{w^{*}}\mu\) for some \(\mu\in(M_{1}^{*})_{+}\). Since each \(\mu_{m}\) is \(G\)-invariant, \(\mu\) is \(G\)-invariant. Write \(\mu=\mu_{n}+\mu_{s}\) according to [13, Theorem 3] and observe that \(\mu_{n}\) is \(G\)-invariant (see the proof of Proposition 3.1).

Now by hypothesis, \(s(\mu_{n})\leq e\). Therefore, \((\mu_{n})_{|M_{1-e}}=0\). Hence, we obtain \(\mu_{|M_{1-e}}=(\mu_{s})_{|M_{1-e}}\), which is singular, and therefore \(\mu\in K_{e}\). This proves the result.
The following lemma establishes that the support of a weakly wandering operator and the support of a \(G\)-invariant state are orthogonal.
**Lemma 3.8**.: Suppose \(x_{0}\in M_{+}\) is a weakly wandering operator and \(\nu\) is a \(G\)-invariant normal state on \(M\). Then \(s(\nu)\perp s(x_{0})\).
Proof.: Since \(x_{0}\in M_{+}\) is a weakly wandering operator, we have \(\lim_{n\to\infty}\|A_{n}(x_{0})\|=0\). Since \(\nu\) is \(G\)-invariant, for all \(n\in\mathbb{N}\) we have
\[\nu(x_{0})=\frac{1}{m(K_{n})}\int_{K_{n}}\nu(\alpha_{g}(x_{0}))dm(g)=\nu(A_{n }(x_{0}))\leq\|A_{n}(x_{0})\|\xrightarrow{n\to\infty}0. \tag{3.6}\]
So, \(\nu(x_{0})=0\), which implies \(\nu(s(x_{0}))=0\). Therefore, \(s(\nu)\perp s(x_{0})\).
**Theorem 3.9**.: Let \((M,G,\alpha)\) be a non-commutative dynamical system with a f.n state \(\varphi\in M_{*}\) and \(e\in M\) be a non-zero projection. Then the following statements are equivalent.
1. There exists a \(G\)-invariant normal state \(\rho\) on \(M\) with \(s(\rho)=e\) such that, if \(\nu\) is any \(G\)-invariant normal state on \(M\), then \(s(\nu)\leq e\).
2. There is a weakly wandering operator \(x_{0}\in M_{+}\) with support \(s(x_{0})=1-e\) such that, if \(x\in M_{+}\) is any weakly wandering operator, then \(s(x)\leq 1-e\).
Proof.: \((1)\Rightarrow(2):\) Let \(p\in\mathcal{P}_{0}(M)\) be such that \(p\leq 1-e\). We first claim that there exists a \(q\in\mathcal{P}_{0}(M)\) such that \(q\leq p\) and \(\inf_{n\in\mathbb{N}}\varphi(A_{n}(q))=0\). Indeed, if this is not true, then there exists \(0\neq p\leq 1-e\) such that \(\inf_{n\in\mathbb{N}}\varphi(A_{n}(q))>0\) for all \(0\neq q\leq p\), which implies that there exists a \(G\)-invariant normal state \(\nu_{\varphi}\) on \(M\) with support projection \(s(\nu_{\varphi})\geq p\) (see Proposition 3.1); this contradicts the hypothesis that \(s(\nu)\leq e\) for every \(G\)-invariant normal state \(\nu\). Thus, \(\inf_{n\in\mathbb{N}}\varphi(A_{n}(q))=0\) for some \(q\in\mathcal{P}_{0}(M)\) with \(q\leq p\). Therefore, we obtain a subsequence \((n_{k})\) such that \(\lim_{k\to\infty}\varphi(A_{n_{k}}(q))=0\); since \(\varphi\) is a f.n state, \(A_{n_{k}}(q)\xrightarrow{k\to\infty}0\) in SOT, equivalently \(\lim_{k\to\infty}\mu(A_{n_{k}}(q))=0\) for all \(\mu\in M_{*}\).
Now consider the set
\[K_{e}:=\{\theta\in(M_{1}^{*})_{+}:\theta\text{ is $G$-invariant and $\theta_{|M_{1-e}}$ is singular}\}.\]
By Lemma 3.7, the set \(K_{e}\) is weak*-closed, hence weak*-compact. Then by Lemma 3.5 there exists \(0\neq q^{\prime}\leq q\) such that \(\theta(q^{\prime})=0\) for all \(\theta\in K_{e}\).
We wish to show that \(\theta(A_{n_{k}}(q^{\prime}))\to 0\) for all positive \(\theta\in M_{1}^{*}\). Since \(q^{\prime}\leq q\), we already have \(\lim_{k\to\infty}\mu(A_{n_{k}}(q^{\prime}))=0\) for all \(\mu\in M_{*}\).
Let \(\theta\in M_{1}^{*}\) be positive and consider a subsequence \((m_{l})\) of \((n_{k})\). Then writing \(\theta=\theta_{n}+\theta_{s}\) in accordance with [13, Theorem 3], we obtain
\[A_{m_{l}}^{*}(\theta)=A_{m_{l}}^{*}(\theta_{n})+A_{m_{l}}^{*}(\theta_{s}) \text{ for all $l\in\mathbb{N}$.}\]
Hence, by the Banach-Alaoglu theorem we can find a subsequence \((t_{r})\) of \((m_{l})\) such that \(A_{t_{r}}^{*}(\theta_{s})\xrightarrow{w^{*}}\psi\), for some positive \(\psi\in M_{1}^{*}\). Note that, by the Folner condition, \(\psi\) is \(G\)-invariant. Then, writing \(\psi=\psi_{n}+\psi_{s}\) according to [13, Theorem 3], we observe that \(\psi_{n}\) is \(G\)-invariant (see the proof of Proposition 3.1). But by hypothesis, \(s(\psi_{n})\leq e\). Hence, \(\psi_{n}(q^{\prime})=0\). Consequently, we derive that \(\theta_{n}(A_{t_{r}}(q^{\prime}))\to 0\) and \(\theta_{s}(A_{t_{r}}(q^{\prime}))\to\psi_{s}(q^{\prime})\).
Note that \(\psi_{s}\in K_{e}\). Therefore, \(\psi_{s}(q^{\prime})=0\) and hence \(\theta(A_{t_{r}}(q^{\prime}))\to 0\). Since the subsequence \((m_{l})\) of \((n_{k})\) was arbitrary, it follows that \(\theta(A_{n_{k}}(q^{\prime}))\to 0\) for every positive \(\theta\in M_{1}^{*}\), i.e., \(A_{n_{k}}(q^{\prime})\to 0\) weakly. Now by Theorem 2.18, we have \(\|A_{n}(q^{\prime})\|\to 0\). Hence, \(q^{\prime}\) is a weakly wandering projection.
Let \(\{q_{j}\}_{j\in\Lambda}\) be a maximal family of mutually orthogonal weakly wandering projections in \(M\) such that \(q_{j}\leq 1-e\) for all \(j\in\Lambda\). Since \(M\) is \(\sigma\)-finite, we may take \(\Lambda=\mathbb{N}\).
We claim that \(\bar{q}:=\sum_{j=1}^{\infty}q_{j}=1-e\). Clearly \(\bar{q}\leq 1-e\). If \(\bar{q}\neq 1-e\), then by the same construction we obtain a non-zero weakly wandering subprojection of \(1-e-\bar{q}\), which contradicts the maximality of the family of projections \(\{q_{j}\}_{j\in\mathbb{N}}\).
Now define, \(x_{0}:=\sum_{j=1}^{\infty}\frac{1}{2^{j}}q_{j}\in M_{+}\). Since \(\bar{q}=1-e\), we have \(s(x_{0})=1-e\). We claim that \(x_{0}\) is a weakly wandering operator. Indeed, for all \(n,m\in\mathbb{N}\), we have
\[\|A_{n}(x_{0})\| \leq\sum_{j=1}^{m}\frac{1}{2^{j}}\left\|A_{n}q_{j}\right\|+\frac{1}{2^{m}}\left\|A_{n}\Big{(}\sum_{j=1}^{\infty}\frac{1}{2^{j}}q_{m+j}\Big{)}\right\|\] \[\leq\sum_{j=1}^{m}\|A_{n}q_{j}\|+\frac{1}{2^{m}}\left\|\sum_{j=1}^{\infty}\frac{1}{2^{j}}q_{m+j}\right\|\] \[\leq\sum_{j=1}^{m}\|A_{n}q_{j}\|+\frac{1}{2^{m}}\left\|\sum_{j=1}^{\infty}q_{m+j}\right\|\] \[\leq\sum_{j=1}^{m}\|A_{n}q_{j}\|+\frac{1}{2^{m}}.\]
Let \(\epsilon>0\). We choose \(m\in\mathbb{N}\) such that \(\frac{1}{2^{m}}<\frac{\epsilon}{2}\). Since \(q_{j}\) is weakly wandering for each \(j\in\{1,\ldots,m\}\), there exists \(N_{j}\in\mathbb{N}\) such that \(\|A_{n}q_{j}\|<\frac{\epsilon}{2^{j+1}}\) for all \(n\geq N_{j}\).
We choose \(N:=\max\{N_{1},\ldots,N_{m},m\}\in\mathbb{N}\). Hence, \(\|A_{n}(x_{0})\|\leq\epsilon\) for all \(n\geq N\). Therefore, \(x_{0}\in M_{+}\) is a weakly wandering operator with \(s(x_{0})=1-e\).
\((2)\Rightarrow(1):\) Suppose there is no \(G\)-invariant normal state on \(M\). Let \(\theta\in(M_{1}^{*})_{+}\) and consider the sequence \(\{A_{n}^{*}(\theta)\}\) in \(M_{1}^{*}\). By the Banach-Alaoglu theorem, there is a subsequence \((m_{l})\) such that \(A_{m_{l}}^{*}(\theta)\xrightarrow{w^{*}}\overline{\theta}\) for some \(\overline{\theta}\in(M_{1}^{*})_{+}\). Since \(\overline{\theta}\) is \(G\)-invariant, so is its normal component (see the proof of Proposition 3.1); as there is no \(G\)-invariant normal state on \(M\), the normal component is zero. Hence \(\overline{\theta}\) is singular.
Consider the set
\[K_{0}:=\{\theta\in(M_{1}^{*})_{+}:\text{$\theta$ is $G$-invariant and singular}\}\]
By Lemma 3.7, \(K_{0}\) is weak*-closed. Hence, by Lemma 3.5, there is \(p\in\mathcal{P}_{0}(M)\) with \(p\leq e\) such that \(\mu(p)=0\) for all \(\mu\in K_{0}\). Since \(\overline{\theta}\in K_{0}\) for all \(\theta\in(M_{1}^{*})_{+}\), we have \(\overline{\theta}(p)=0\). Hence, \(\theta(A_{m_{l}}(p))\to 0\); repeating the argument along an arbitrary subsequence, we conclude that \(A_{n}(p)\to 0\) weakly, and by Theorem 2.18, we have \(\|A_{n}(p)\|\to 0\). Hence, \(p\) is a weakly wandering projection, which is a contradiction.
We therefore conclude that there is a non-zero \(G\)-invariant normal state on \(M\). By Lemma 3.8, it follows that if \(\nu\) is any \(G\)-invariant normal state on \(M\), then its support satisfies \(s(\nu)\leq e\).
Let \(\mu\) be a \(G\)-invariant normal state with maximal possible support. Clearly, \(s(\mu)\leq e\). If \(s(\mu)\neq e\), then consider the projection \(1-s(\mu)\) and observe that \((1)\Rightarrow(2)\), applied with \(s(\mu)\) in place of \(e\), yields a weakly wandering operator \(x\in M_{+}\) such that \(s(x)=1-s(\mu)\geq 1-e\) with \(s(x)\neq 1-e\); this contradicts the maximality in (2).
**Remark 3.10**.: The present approach is fundamentally different from the earlier methods for finding weakly wandering operators. The conventional approaches (see [1]) were limited to group actions on finite von Neumann algebras. In fact, the earlier techniques implicitly find a wandering projection, i.e., a \(q\in\mathcal{P}_{0}(M)\) and a sequence \(\{g_{1},g_{2},\cdots\}\subseteq G\) such that \(\alpha_{g_{i}}(q)\perp\alpha_{g_{j}}(q)\) for all \(i\neq j\). Using this it was shown that
\[\lim_{n\to\infty}\|A_{n}(q)\|=0.\]
In contrast, the present technique first shows that there exists a \(q\in\mathcal{P}_{0}(M)\) such that
\[\lim_{n\to\infty}\mu(A_{n}(q))=0,\text{ for all }\mu\in M^{*}.\]
Then Theorem 2.18 is employed to conclude that \(\lim_{n\to\infty}\|A_{n}(q)\|=0\). Furthermore, the present idea applies to actions of any amenable semigroup on any semifinite von Neumann algebra. Additionally, as we will see later, it can be used for amenable group actions on any von Neumann algebra.
**Theorem 3.11** (Neveu Decomposition).: Let \(M\) be a semifinite von Neumann algebra and \((M,G,\alpha)\) be a non-commutative dynamical system. Then there exist two projections \(e_{1},e_{2}\in M\) such that \(e_{1}+e_{2}=1\) and
1. there exists a \(G\)-invariant normal state \(\rho\) on \(M\) with support \(s(\rho)=e_{1}\) and
2. there exists a weakly wandering operator \(x_{0}\in M\) with support \(s(x_{0})=e_{2}\).
Further, \(s(\rho)\) and \(s(x_{0})\) are unique.
Proof.: Let \(e_{1}=e\) be the maximal support projection of a \(G\)-invariant normal state on \(M\) as in Theorem 3.9 and set \(e_{2}=1-e_{1}\). The rest of the proof follows from Theorem 3.9.
**Remark 3.12**.: Let \(M\) be a semifinite von Neumann algebra with a f.n semifinite trace \(\tau\). Then the Neveu decomposition can be stated in terms of an action of an amenable semigroup \(G\) on \(L^{1}(M,\tau)\). Let \((L^{1}(M,\tau),G,\gamma)\) be a non-commutative dynamical system. Then there exist two projections \(e_{1},e_{2}\in M\) such that \(e_{1}+e_{2}=1\) and
1. there exists a positive \(Y\in L^{1}(M,\tau)\) such that \(\gamma_{g}(Y)=Y\) for all \(g\in G\) and \(s(Y)=e_{1}\).
2. there exists a weakly wandering operator \(x_{0}\in M\) with \(A_{n}(\gamma^{*})(x_{0})\to 0\) in norm and support \(s(x_{0})=e_{2}\).
Further, \(s(Y)\) and \(s(x_{0})\) are unique.
Now we discuss the invariance of the projections obtained in the Neveu decomposition. To be more precise, let \(M\) be a semifinite von Neumann algebra with a f.n semifinite trace \(\tau\) and \((L^{1}(M,\tau),G,\gamma)\) be a non-commutative dynamical system, with \(e_{1}\) and \(e_{2}\) the projections obtained in the Neveu decomposition (see Remark 3.12). We would like to discuss whether \(s(\gamma_{g}(e_{i}))\leq e_{i}\) for \(i=1,2\) and for all \(g\in G\). We begin with the following definition.
**Definition 3.13**.: Let \(\gamma:L^{1}(M,\tau)\to L^{1}(M,\tau)\) be a positive operator. Then \(\gamma\) is called a Lamperti operator if, whenever \(e_{1},e_{2}\in L^{1}(M,\tau)\cap\mathcal{P}_{0}(M)\) satisfy \(e_{1}e_{2}=0\), we have \(\gamma(e_{1})\gamma(e_{2})=0\).
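The standard commutative example, recorded here only for illustration, is a weighted composition operator on \(M=L^{\infty}(X,\mu)\), so that \(L^{1}(M,\tau)=L^{1}(X,\mu)\): for a non-singular transformation \(T\) and a measurable \(h\geq 0\) (chosen so that the map below is a positive contraction on \(L^{1}(X,\mu)\)),

\[\gamma(f)=h\cdot(f\circ T),\qquad f\in L^{1}(X,\mu),\]

is Lamperti, since \(f_{1}f_{2}=0\) implies \((f_{1}\circ T)(f_{2}\circ T)=0\) and hence \(\gamma(f_{1})\gamma(f_{2})=0\).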
We consider the following set
\[\mathcal{L}=\{\gamma:L^{1}(M,\tau)\to L^{1}(M,\tau)\text{ positive Lamperti contraction}\}.\]
The following result is straightforward and well known in the literature. For the sake of completeness, we include a proof.
**Proposition 3.14**.: Let \(\gamma:L^{1}(M,\tau)\to L^{1}(M,\tau)\) be a positive contraction. Then \(\gamma\) is a Lamperti operator if and only if \(\gamma(a)\gamma(b)=0\) for all \(a,b\in L^{1}(M,\tau)_{+}\) with \(ab=0\).
Proof.: Suppose \(a,b\in L^{1}(M,\tau)_{+}\) with \(ab=0\). Immediately, note that \(s(a)s(b)=0\). Then the proof follows from the fact that there exist two sequences \((a_{n})\) and \((b_{n})\) in \(M\) such that
1. \(0\leq a_{n}\leq a\) and \(0\leq b_{n}\leq b\) such that \(\tau(s(a_{n}))<\infty\) and \(\tau(s(b_{n}))<\infty\) with \(s(a_{n})\nearrow s(a)\) and \(s(b_{n})\nearrow s(b)\) and
2. \(\lim_{n\to\infty}\left\|a_{n}-a\right\|_{1}=0\) and \(\lim_{n\to\infty}\left\|b_{n}-b\right\|_{1}=0\).
Indeed, note that \(a_{n}b_{n}=0\), so \(s(a_{n})s(b_{n})=0\). This implies that \(\gamma(s(a_{n}))\gamma(s(b_{n}))=0\). Hence, \(\gamma(a_{n})\gamma(b_{n})=0\), as \(a_{n}\leq\left\|a_{n}\right\|s(a_{n})\) and \(b_{n}\leq\left\|b_{n}\right\|s(b_{n})\). Passing to the limit, we obtain \(\gamma(a)\gamma(b)=0\).
**Proposition 3.15**.: Let \(M\) be a semifinite von Neumann algebra with a semifinite f.n trace \(\tau\) and \((M,G,\alpha)\) be a non-commutative dynamical system. Suppose \(e_{1},e_{2}\) are the corresponding projections obtained in the Neveu decomposition. Then we have the following:
\[\alpha_{g}(e_{2})\leq e_{2}\text{ and }s(\alpha_{g}^{*}(e_{1}Xe_{1}))\leq e_{1 }\text{ for all }X\in L^{1}(M,\tau),g\in G.\]
Furthermore, if \(\alpha_{g}^{*}\in\mathcal{L}\) for all \(g\in G\), then \(\alpha_{g}(e_{1})\leq e_{1}\) for all \(g\in G\).
Proof.: To show \(\alpha_{g}(e_{2})\leq e_{2}\), we recall that there exists a positive operator \(x_{0}\in M\) such that \(s(x_{0})=e_{2}\) and it is weakly wandering, i.e, \(\left\|A_{n}(x_{0})\right\|\to 0\) as \(n\to\infty\). We choose a sequence of projections \(\{f_{m}\}\subseteq M\) such that
\[f_{m}\nearrow s(x_{0})\text{ and }\frac{1}{m}f_{m}\leq x_{0}\text{ for all }m\in\mathbb{N}.\]
Observe that \(f_{m}\) is weakly wandering for all \(m\in\mathbb{N}\). Hence, for all \(g\in G\), \(\alpha_{g}(f_{m})\) is also weakly wandering for all \(m\in\mathbb{N}\). Since \(e_{2}\) is the maximal support of a weakly wandering operator, we have \(s(\alpha_{g}(f_{m}))\leq e_{2}\) for all \(m\in\mathbb{N}\). Therefore, \(\alpha_{g}(f_{m})\leq e_{2}\) for all \(m\in\mathbb{N}\), which implies \(\alpha_{g}(e_{2})\leq e_{2}\) for all \(g\in G\).
Then we note that
\[\tau(\alpha_{g}^{*}(e_{1}Xe_{1})e_{2}) =\tau(e_{1}Xe_{1}\alpha_{g}(e_{2}))\] \[\leq\tau(e_{1}Xe_{1}e_{2})=0,\text{ as }\alpha_{g}(e_{2})\leq e_{2}.\]
Hence, \(e_{2}\alpha_{g}^{*}(e_{1}Xe_{1})e_{2}=0\), consequently, \(s(\alpha_{g}^{*}(e_{1}Xe_{1}))\leq e_{1}\).
Now assume that \(\alpha_{g}^{*}\in\mathcal{L}\). Then we want to show that \(\alpha_{g}(e_{1})\leq e_{1}\), equivalently, \(s(\alpha_{g}^{*}(e_{2}Xe_{2}))\leq e_{2}\). Now suppose \(\rho\) is a normal state with \(s(\rho)=e_{1}\) and \(\rho\circ\alpha_{g}=\rho\) for all \(g\in G\). So, there exists \(Y\in L^{1}(M,\tau)\) such that \(\rho(x)=\tau(Yx)\) for all \(x\in M\) with \(s(Y)=e_{1}\). Further, we have the following
\[\tau({\alpha_{g}}^{*}(X)y)=\tau(X\alpha_{g}(y))\text{ for all }X\in L^{1}(M,\tau) \text{ and }y\in M.\]
Thus, for all \(x\in M\), we observe that
\[\tau(Yx)=\rho(x)=\rho(\alpha_{g}(x))=\tau(Y\alpha_{g}(x))=\tau(\alpha_{g}^{*}( Y)x).\]
Hence, it follows that \(\alpha_{g}^{*}(Y)=Y\). As \(s(Y)=e_{1}\), we have \(Ye_{2}Xe_{2}=0\). Hence, by Proposition 3.14, \(\alpha_{g}^{*}(Y)\alpha_{g}^{*}(e_{2}Xe_{2})=0\) for all \(g\in G\). But we have \(\alpha_{g}^{*}(Y)=Y\), therefore \(Y\alpha_{g}^{*}(e_{2}Xe_{2})=0\). This implies \(e_{1}\alpha_{g}^{*}(e_{2}Xe_{2})e_{1}=0\).
**Corollary 3.16**.: Suppose \((M,G,\alpha),e_{1},e_{2}\) are as in Proposition 3.15 and assume that \(\alpha_{g}^{*}\in\mathcal{L}\) for every \(g\in G\). Then for all \(X\in L^{1}(M,\tau)\) and \(g\in G\), we have \(e_{i}\alpha_{g}^{*}(X)e_{i}=\alpha_{g}^{*}(e_{i}Xe_{i})\) for \(i=1,2\).
Proof.: First note that we have the following equation
\[\tau(\alpha_{g}^{*}(X)y)=\tau(X\alpha_{g}(y))\text{ for all }X\in L^{1}(M,\tau) \text{ and }y\in M.\]
Suppose \(X\in L^{1}(M,\tau)\) and \(g\in G\). Then, using Proposition 3.15, we conclude the following:
1. \(s(\alpha_{g}^{*}(e_{1}Xe_{1}))\leq e_{1}\) and \(s(\alpha_{g}^{*}(e_{2}Xe_{2}))\leq e_{2}\),
2. \(e_{1}\alpha_{g}^{*}(e_{1}Xe_{2})e_{1}=e_{1}\alpha_{g}^{*}(e_{2}Xe_{1})e_{1}=e_{ 1}\alpha_{g}^{*}(e_{2}Xe_{2})e_{1}=0\),
3. \(e_{2}\alpha_{g}^{*}(e_{1}Xe_{2})e_{2}=e_{2}\alpha_{g}^{*}(e_{2}Xe_{1})e_{2}=e_{2} \alpha_{g}^{*}(e_{1}Xe_{1})e_{2}=0.\)
Thus, we have \(e_{1}\alpha_{g}^{*}(X)e_{1}=e_{1}\alpha_{g}^{*}(e_{1}Xe_{1})e_{1}=\alpha_{g}^{*} (e_{1}Xe_{1}).\) Similarly, we will have \(e_{2}\alpha_{g}^{*}(X)e_{2}=\alpha_{g}^{*}(e_{2}Xe_{2}).\)
## 4. **Neveu decomposition for actions on any von Neumann algebra**
In this section we study the Neveu decomposition for a covariant system on an arbitrary von Neumann algebra \(M\), under the assumption that the action commutes with the modular automorphism group associated to a f.n semifinite weight \(\psi\) on \(M\). We refer to \((M,\psi)\) as a non-commutative measure space.
Let \(L^{2}(M,\psi)\) be the GNS Hilbert space associated to the f.n semifinite weight \(\psi\) on \(M\) and let \(\pi_{\psi},\Delta_{\psi},J_{\psi}\) and \(\sigma^{\psi}=(\sigma_{t}^{\psi})_{t\in\mathbb{R}}\) be the corresponding GNS representation, modular operator, modular conjugation operator and modular automorphism group of \(M\) respectively. To simplify the notation, we sometimes write \(\mathcal{H}=L^{2}(M,\psi)\), identify \(\pi_{\psi}(x)\) with \(x\) for all \(x\in M\) and write the modular automorphism group as \(\sigma=(\sigma_{t})_{t\in\mathbb{R}}\). Let \(M\rtimes_{\sigma^{\psi}}\mathbb{R}\) be the crossed product construction of \(M\) with respect to its modular automorphism group \(\sigma^{\psi}\). We denote it by \(N\), i.e., \(N=M\rtimes_{\sigma^{\psi}}\mathbb{R}\). It is defined as follows. Consider the Hilbert space \(L^{2}(\mathbb{R},\mathcal{H})=\mathcal{H}\otimes L^{2}(\mathbb{R},m)\). For every \(x\in M\) and \(t\in\mathbb{R}\), define
\[(\pi_{\sigma^{\psi}}(x)\xi)(s):=\sigma_{-s}(x)\xi(s)\text{ and }\] \[(\lambda(t)\xi)(s):=\xi(s-t),\text{ for all }s\in\mathbb{R} \text{ and }\xi\in L^{2}(\mathbb{R},\mathcal{H}).\]
Naturally, \(\lambda(t)\) is expressed as \(\lambda(t)=1\otimes\lambda_{t}\), where \(\lambda_{t}\in\mathcal{B}(L^{2}(\mathbb{R},m))\) is defined by \((\lambda_{t}(g))(s)=g(s-t)\) for all \(s\in\mathbb{R}\) and \(g\in L^{2}(\mathbb{R},m)\). Then the crossed product \(M\rtimes_{\sigma^{\psi}}\mathbb{R}\) of \(M\) with the action of the modular automorphism group \(\sigma^{\psi}\) is the von Neumann algebra generated by \(\pi_{\sigma^{\psi}}(M)\) and \(\lambda(\mathbb{R})\), i.e.,
\[M\rtimes_{\sigma^{\psi}}\mathbb{R}=\{\{\pi_{\sigma^{\psi}}(x):x\in M\}\cup\{ \lambda(t):t\in\mathbb{R}\}\}^{\prime\prime}.\]
**Definition 4.1**.: Let \((M,G,\alpha)\) be a non-commutative dynamical system. It is called Markov if there exists a f.n semifinite weight \(\psi\) on \(M\) such that \(\alpha_{g}\circ\sigma_{t}^{\psi}=\sigma_{t}^{\psi}\circ\alpha_{g}\) for all \(g\in G\) and \(t\in\mathbb{R}\).
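An immediate special case, recorded for illustration: if \(M\) is semifinite and \(\psi=\tau\) is a f.n semifinite trace, then \(\sigma_{t}^{\tau}=\mathrm{id}_{M}\) for all \(t\in\mathbb{R}\), so the commutation condition

\[\alpha_{g}\circ\sigma_{t}^{\tau}=\alpha_{g}=\sigma_{t}^{\tau}\circ\alpha_{g},\qquad g\in G,\ t\in\mathbb{R},\]

holds automatically; thus every non-commutative dynamical system on a semifinite von Neumann algebra is Markov with respect to its trace. More generally, as noted in Remark 4.8 below, any system preserving a f.n semifinite weight is Markov.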
**Lemma 4.2**.: Let \((M,\psi)\) be a non-commutative measure space with a f.n semifinite weight \(\psi\) and \(\sigma=(\sigma_{t})_{t\in\mathbb{R}}\) be its modular automorphism group with respect to \(\psi\). Suppose \(T:M\to M\) is a positive contraction satisfying \(T\circ\sigma_{t}=\sigma_{t}\circ T\) for all \(t\in\mathbb{R}\). Then there exists a unique positive contraction \(\widetilde{T}:N\to N\) satisfying
\[\widetilde{T}(\pi_{\sigma}(x))=\pi_{\sigma}(T(x))\text{ and }\widetilde{T}(1\otimes\lambda_{t})=1\otimes\lambda_{t},\text{ for all }x\in M,\ t\in\mathbb{R}.\]
Proof.: Note that \(N\subseteq M\otimes\mathcal{B}(L^{2}(\mathbb{R}))\). Consider \(\widetilde{T}=T\otimes 1|_{N}\). Thus it is enough to prove that \(\widetilde{T}(N)\subseteq N\), as the map \(T\otimes 1:M\otimes\mathcal{B}(L^{2}(\mathbb{R}))\to M\otimes\mathcal{B}(L^{2}(\mathbb{R}))\) is a positive map. Let \(I\) be the directed set of finite Borel partitions of \(\mathbb{R}\). For \(\omega_{1},\omega_{2}\in I\), we say \(\omega_{1}\leq\omega_{2}\) if \(\omega_{2}\) is a refinement of \(\omega_{1}\). Let \(x\in M\) and \(\omega\in I\) with \(\omega=\{P_{1},P_{2},\cdots,P_{n}\}\), take points \(g_{i}\in P_{i}\) for \(i=1,2,\cdots,n\), and consider the following element
\[x_{\omega}=\sum_{i=1}^{n}\sigma_{g_{i}^{-1}}(x)\otimes m_{i},\]
where \(m_{i}\) is the multiplication operator with respect to the function \(1_{P_{i}}\) on \(L^{2}(\mathbb{R})\). Thus \(\{x_{\omega}:\omega\in I\}\) is a net in \(M\otimes\mathcal{B}(L^{2}(\mathbb{R}))\). Let \(\eta=\xi\otimes f\), where \(\xi\in\mathcal{H}_{\psi}\) and \(f\in L^{2}(\mathbb{R})\). Then we have the following
\[\left\|\pi_{\sigma}(x)\eta-x_{\omega}\eta\right\|^{2} =\left\|\pi_{\sigma}(x)(\xi\otimes f)-x_{\omega}(\xi\otimes f) \right\|^{2}\] \[=\int\left\|\pi_{\sigma}(x)(\xi\otimes f)(s)-x_{\omega}(\xi \otimes f)(s)\right\|^{2}d\lambda(s)\] \[=\int\left\|f(s)\sigma_{s^{-1}}(x)\xi-\sum_{i=1}^{n}\sigma_{g_{i} ^{-1}}(x)\otimes m_{i}(\xi\otimes f)(s)\right\|^{2}d\lambda(s)\] \[=\sum_{i=1}^{n}\int_{P_{i}}\left\|\sigma_{s^{-1}}(x)\xi-\sigma_{g _{i}^{-1}}(x)\xi\right\|^{2}\left|f(s)\right|^{2}d\lambda(s).\]
Suppose \(\epsilon>0\), find a compact set \(K\subseteq\mathbb{R}\) such that \(\int_{\mathbb{R}\setminus K}\left|f(s)\right|^{2}d\lambda(s)<\epsilon^{2}\). Furthermore, choose a finite Borel partition \(\{K_{1},K_{2},\cdots,K_{m}\}\) of \(K\) such that
\[\left\|\sigma_{s^{-1}}(x)\xi-\sigma_{t^{-1}}(x)\xi\right\|<\epsilon,\text{ for all }s,t\in K_{i}\text{ and }1\leq i\leq m.\]
Now consider the partition \(\omega=\{K_{1},K_{2},\cdots,K_{m},K_{m+1}\}\), where \(K_{m+1}=\mathbb{R}\setminus K\); then note that
\[\left\|\pi_{\sigma}(x)\eta-x_{\omega}\eta\right\|^{2} =\sum_{i=1}^{m+1}\int_{K_{i}}\left\|\sigma_{s^{-1}}(x)\xi-\sigma_ {g_{i}^{-1}}(x)\xi\right\|^{2}\left|f(s)\right|^{2}d\lambda(s)\] \[<(\left\|f\right\|^{2}+(2\left\|x\right\|\left\|\xi\right\|)^{2} )\epsilon^{2}.\]
This shows that the net \(\{x_{\omega}:\omega\in I\}\) converges to \(\pi_{\sigma}(x)\) in the strong operator topology. Further, we note that
\[T\otimes 1(x_{\omega}) =T\otimes 1(\sum_{i=1}^{n}\sigma_{g_{i}^{-1}}(x)\otimes m_{i})\] \[=\sum_{i=1}^{n}T(\sigma_{g_{i}^{-1}}(x))\otimes m_{i}\] \[=\sum_{i=1}^{n}\sigma_{g_{i}^{-1}}(T(x))\otimes m_{i},\text{ as }T \circ\sigma_{t}=\sigma_{t}\circ T\]
Therefore, \(\{T\otimes 1(x_{\omega}):\omega\in I\}\) converges to \(\pi_{\sigma}(T(x))\) strongly. Hence, \((T\otimes 1)(\pi_{\sigma}(x))=\pi_{\sigma}(T(x))\) and \((T\otimes 1)(1\otimes\lambda_{t})=1\otimes\lambda_{t}\) for all \(t\in\mathbb{R}\). Thus, \(\tilde{T}(N)\subseteq N\). \(\Box\)
Let \(M\) be any von Neumann algebra with a f.n semifinite weight \(\psi\). We recall the dual weight construction: let \(\widetilde{\psi}\) be the dual weight on \(N\). It is defined as follows:
\[\widetilde{\psi}(x)=\psi\circ\pi_{\sigma^{\psi}}^{-1}(\gamma(x)),\text{ }x\in N_{+},\]
where, for every \(x\in N_{+}\), \(\gamma(x)\) is defined by
\[\gamma(x)(\omega):=\int_{\mathbb{R}}\omega(\hat{\alpha}_{p}(x))dp\text{ for all }\omega\in N_{*}^{+}.\]
For the detailed definition we refer to [10]. We further recall that \(N\) is a semifinite von Neumann algebra with a f.n.s trace \(\tau\) defined by
\[\tau(x)=\lim_{\epsilon\to 0}\widetilde{\psi}(B_{\epsilon}xB_{\epsilon})\]
Here \(B\) is the non-singular generator of \((\lambda(-t))\) and \(B_{\epsilon}=\frac{B}{1+\epsilon B}\).
**Lemma 4.3**.: Let \(M\) be any von Neumann algebra and \((M,G,\alpha)\) be a dynamical system. Then the following are equivalent.
1. There exists a \(p\in\mathcal{P}_{0}(M)\) such that \(A_{n_{k}}(p)\to 0\) in SOT for some subsequence \((n_{k})\).
2. The support of the maximal \(G\)-invariant normal state on \(M\) is not \(1\).
3. There exists a non-zero weakly wandering operator for \((M,G,\alpha)\).
Proof.: The proof follows from Theorem 3.2.
**Lemma 4.4**.: Let \((M,\psi)\) be a non-commutative measure space with a f.n.s weight \(\psi\) and \((M,G,\alpha)\) be a non-commutative dynamical system such that
\[\sigma_{t}^{\psi}\circ\alpha_{g}=\alpha_{g}\circ\sigma_{t}^{\psi},\text{ for all }t\in\mathbb{R}\text{ and },g\in G.\]
Suppose that the support of the maximal \(G\)-invariant normal state is not full. Then there exists a \(p\in\mathcal{P}_{0}(M)\) such that \(A_{n}(\alpha)(p)\xrightarrow{\|\cdot\|}0\).
Proof.: Let \(N=M\rtimes_{\sigma^{\psi}}\mathbb{R}\). Note that for each \(g\in G\), \(\alpha_{g}:M\to M\) is a unital positive map and \(\sigma_{t}^{\psi}\circ\alpha_{g}=\alpha_{g}\circ\sigma_{t}^{\psi}\), so, by Lemma 4.2, there exists a unital positive map \(\widetilde{\alpha}_{g}:N\to N\) such that
\[\widetilde{\alpha}_{g}(\pi_{\sigma}(x))=\pi_{\sigma}(\alpha_{g}(x))\text{ and }\widetilde{\alpha}_{g}(1\otimes\lambda_{t})=1\otimes\lambda_{t},\text{ for all }x\in M,\ t\in\mathbb{R}.\]
Write \(\widetilde{\alpha}=(\widetilde{\alpha}_{g})_{g\in G}\), then it is straightforward to check that \((N,G,\widetilde{\alpha})\) is a non-commutative dynamical system and further note that \(N\) is a semifinite von Neumann algebra.
Then by Theorem 3.11 there exist projections \(e_{1},e_{2}\in N\) such that
1. there exists a \(G\)-invariant normal state \(\rho\) on \(N\) with support \(s(\rho)=e_{1}\) and
2. there exists a weakly wandering operator \(x_{0}\in N\) with support \(s(x_{0})=e_{2}\).
First suppose that \(e_{2}=0\). Then \(\rho\) is a f.n \(G\)-invariant state on \(N\). Now consider \(\rho_{0}(x)=\rho(\pi_{\sigma}(x))\) for all \(x\in M\). Note that \(\rho_{0}\circ\alpha_{g}=\rho_{0}\) for all \(g\in G\) and \(\rho_{0}\) is a f.n state on \(M\); in other words, \(s(\rho_{0})=1\). This contradicts our assumption. Consequently, we have \(e_{2}\neq 0\).
Observe that \(x_{0}\in N\subseteq M\otimes\mathcal{B}(L^{2}(\mathbb{R}))\). Choose a unit vector \(\xi\in L^{2}(\mathbb{R})\) such that \((1\otimes p_{\xi})x_{0}(1\otimes p_{\xi})\neq 0\). Since \(x_{0}\in N\subseteq M\otimes\mathcal{B}(L^{2}(\mathbb{R}))\), we have \((1\otimes p_{\xi})x_{0}(1\otimes p_{\xi})=x_{0}^{\prime}\otimes p_{\xi}\) for some non-zero positive operator \(x_{0}^{\prime}\in M\). Then, observe the following
\[\|A_{n}(\alpha)(x_{0}^{\prime})\| =\|A_{n}(\alpha\otimes 1)(x_{0}^{\prime}\otimes p_{\xi})\|\] \[=\|A_{n}(\alpha\otimes 1)((1\otimes p_{\xi})x_{0}(1\otimes p_{\xi} ))\|\] \[=\|(1\otimes p_{\xi})A_{n}(\alpha\otimes 1)x_{0}(1\otimes p_{\xi} )\|\] \[=\|(1\otimes p_{\xi})A_{n}(\tilde{\alpha})x_{0}(1\otimes p_{\xi} )\|\] \[\leq\|A_{n}(\tilde{\alpha})x_{0}\|\xrightarrow{n\to\infty}0.\]
Thus, \(x_{0}^{\prime}\) is a weakly wandering operator for \((M,G,\alpha)\). Now choose a non-zero projection \(p\in M\) such that \(\frac{1}{m}p\leq x_{0}^{\prime}\) for some \(m\in\mathbb{N}\); this can be done using the spectral theorem. Then note that
\[\|A_{n}(\alpha)(p)\|\leq m\,\|A_{n}(\alpha)(x_{0}^{\prime})\|\xrightarrow{n\to\infty}0.\]
Hence, \(p\) is a non-zero weakly wandering projection for \((M,G,\alpha)\).
For each \(n\in\mathbb{N}\), consider the compact set \(F_{n}=[-n,n]\subseteq\mathbb{R}\), then note that \((F_{n})_{n\in\mathbb{N}}\) is a Folner sequence of \(\mathbb{R}\). Suppose \(\lambda\) is the Lebesgue measure on \(\mathbb{R}\).
**Lemma 4.5**.: Let \((M,G,\alpha)\) be a Markov non-commutative dynamical system and let \(p\in\mathcal{P}_{0}(M)\) be weakly wandering. For fixed \(n,k\in\mathbb{N}\), consider
\[x_{k,n}:=\frac{1}{m(F_{k})}\frac{1}{m(K_{n})}\int_{K_{n}}\int_{F_{k}}\alpha_{g }\circ\sigma_{t}(p)dm(g)d\lambda(t),\]
then \(x_{k,n}\) is also weakly wandering and \(s(x_{k,n})=\bigvee_{g\in K_{n}}\bigvee_{t\in F_{k}}\alpha_{g}\circ\sigma_{t}(p)\).
Proof.: Consider \(x_{n}=\frac{1}{m(K_{n})}\int_{K_{n}}\alpha_{g}(p)dm(g)\). Then note that
\[x_{k,n} =\frac{1}{m(F_{k})}\frac{1}{m(K_{n})}\int_{K_{n}}\int_{F_{k}} \alpha_{g}\circ\sigma_{t}(p)dm(g)d\lambda(t)\] \[=\frac{1}{m(F_{k})}\int_{F_{k}}\sigma_{t}\Big{(}\frac{1}{m(K_{n} )}\int_{K_{n}}\alpha_{g}(p)dm(g)\Big{)}d\lambda(t),\text{ as }\sigma_{t}\circ\alpha_{g}=\alpha_{g}\circ\sigma_{t}\] \[=\frac{1}{m(F_{k})}\int_{F_{k}}\sigma_{t}(x_{n})d\lambda(t).\]
Thus, whenever \(x_{n}\) is weakly wandering, \(x_{k,n}\) is also weakly wandering: indeed, since \(A_{l}\) commutes with each \(\sigma_{t}\) and \(\sigma_{t}\) is isometric, \(\|A_{l}(x_{k,n})\|=\big{\|}\frac{1}{m(F_{k})}\int_{F_{k}}\sigma_{t}(A_{l}(x_{n}))d\lambda(t)\big{\|}\leq\|A_{l}(x_{n})\|\). So it is enough to prove that \(x_{n}\) is weakly wandering. Indeed, for all \(l\in\mathbb{N}\), note that
\[A_{l}(x_{n}) =\frac{1}{m(K_{l})}\int_{K_{l}}\alpha_{g}(x_{n})dm(g)\] \[=\frac{1}{m(K_{l})}\frac{1}{m(K_{n})}\int_{K_{l}}\int_{K_{n}} \alpha_{gh}(p)dhdg\] \[=\frac{1}{m(K_{n})}\frac{1}{m(K_{l})}\int_{K_{n}}\int_{K_{l}} \alpha_{gh}(p)dgdh\] \[=\frac{1}{m(K_{n})}\frac{1}{m(K_{l})}\int_{K_{n}}\Big{[}\int_{K_{ l}}\alpha_{gh}(p)dg-\int_{K_{l}}\alpha_{g}(p)dg\Big{]}dh+\frac{1}{m(K_{l})}\int_{K_{ l}}\alpha_{g}(p)dg.\]
Hence, for all \(n,l\in\mathbb{N}\)
\[\|A_{l}(x_{n})\| \leq\sup_{h\in K_{n}}\frac{m(K_{l}h\Delta K_{l})}{m(K_{l})}\sup_{g\in K_{l}}\|\alpha_{g}(p)\|+\|A_{l}(p)\|\] \[\leq\sup_{h\in K_{n}}\frac{m(K_{l}h\Delta K_{l})}{m(K_{l})}+\|A_{l}(p)\|.\]
Therefore, \(\lim_{l\to\infty}\|A_{l}(x_{n})\|=0\).
For the other part let us denote \(q=\bigvee_{g\in K_{n}}\alpha_{g}(p)\). Note that
\[qx_{n}=\frac{1}{m(K_{n})}\int_{K_{n}}q\alpha_{g}(p)dm(g)=\frac{1}{m(K_{n})}\int_{ K_{n}}\alpha_{g}(p)dm(g)=x_{n}.\]
Therefore, \(q\geq s(x_{n})\). Conversely if \(\xi\in q^{\perp}\), then
\[\langle q\xi,\xi\rangle=0 \Rightarrow\langle\alpha_{g}(p)\xi,\xi\rangle=0\ \ \text{for all}\ g\in K_{n}\] \[\Rightarrow\langle x_{n}\xi,\xi\rangle=0\] \[\Rightarrow x_{n}^{1/2}\xi=0\] \[\Rightarrow\xi\in s(x_{n})^{\perp}.\]
Therefore, \(q=s(x_{n})\). By the same argument, we obtain that
\[s(x_{k,n})=\bigvee_{g\in K_{n}}\bigvee_{t\in F_{k}}\alpha_{g}\circ\sigma_{t}(p)\]
Let \(\varphi\) be a f.n state on \(M\) and \(e\) be the maximal projection in \(M\) satisfying the following condition
\[\inf_{n\in\mathbb{N}}A_{n}(\varphi)(p)>0\ \forall\ p\in\mathcal{P}(M)\ \text{such that}\ 0\neq p\leq e.\]
**Lemma 4.6**.: Let \((M,G,\alpha)\) be a Markov covariant system. Then there exists a weakly wandering operator \(x_{0}\in M\) such that \(s(x_{0})=1-e\).
Proof.: Since \(\alpha_{g}(e)=e\) for all \(g\in G\), without any loss of generality we may assume that \(e=0\). Then we show that there exists a weakly wandering \(x_{0}\in M\) such that \(s(x_{0})=1\).
First note that there exists a \(q\in\mathcal{P}_{0}(M)\) such that \(\inf_{n\in\mathbb{N}}A_{n}(\varphi)(q)=0\). Hence, by Lemma 4.3 and Lemma 4.4, there exists a \(p\in\mathcal{P}_{0}(M)\) such that \(p\) is weakly wandering.
Now for all \(n,k\in\mathbb{N}\) consider the operator
\[x_{k,n}:=\frac{1}{m(F_{k})}\frac{1}{m(K_{n})}\int_{K_{n}}\int_{F_{k}}\alpha_{g }\circ\sigma_{t}(p)dm(g)d\lambda(t),\]
By Lemma 4.5, it follows that \(x_{k,n}\) is a weakly wandering operator for all \(n,k\in\mathbb{N}\), with support \(s(x_{k,n})=\bigvee_{g\in K_{n}}\bigvee_{t\in F_{k}}\alpha_{g}\circ\sigma_{t}(p).\) Then take the following operator
\[y=\sum_{n,k=1}^{\infty}\frac{x_{k,n}}{2^{n+k}}.\]
It follows that \(y\) is a weakly wandering operator with support \(s(y)=\bigvee_{n,k\in\mathbb{N}}s(x_{k,n})\). In fact, we have the following:
\[s(y) =\bigvee_{n,k\in\mathbb{N}}s(x_{k,n})\] \[=\bigvee_{n,k\in\mathbb{N}}\bigvee_{g\in K_{n}}\bigvee_{t\in F_{ k}}\alpha_{g}\circ\sigma_{t}(p)\] \[=\bigvee_{g\in G}\bigvee_{t\in\mathbb{R}}\alpha_{g}\circ\sigma_{t }(p),\ \text{since}\ \cup_{n\in\mathbb{N}}K_{n}=G\ \text{and}\ \cup_{k\in\mathbb{N}}F_{k}=\mathbb{R}.\]
Now write \(z_{1}=s(y)\); thus, we note that \(\alpha_{g}(z_{1})=z_{1}\) for all \(g\in G\) and \(\sigma_{t}(z_{1})=z_{1}\) for all \(t\in\mathbb{R}\). Consider the von Neumann algebra \(M_{1-z_{1}}=(1-z_{1})M(1-z_{1})\) and let \(\alpha_{1,g}\) be the restriction of \(\alpha_{g}\) to \(M_{1-z_{1}}\) for all \(g\in G\). Further, observe that \((M_{1-z_{1}},G,\alpha_{1})\), where \(\alpha_{1}=(\alpha_{1,g})_{g\in G}\), is a Markov covariant system.
Then, by repeating the same argument as before, we obtain a \(G\)-invariant projection \(z_{2}\in\mathcal{P}_{0}((1-z_{1})M(1-z_{1}))\) and a weakly wandering operator \(y_{2}\) with \(s(y_{2})=z_{2}\). Repeating this process, we obtain a sequence of mutually orthogonal projections \(\{z_{1},z_{2},\cdots\}\) and a sequence of weakly wandering operators \(\{y_{1},y_{2},\cdots\}\) (with \(y_{1}=y\)) such that \(s(y_{k})=z_{k}\) and \(\|y_{k}\|\leq 1\) for all \(k\in\mathbb{N}\). Therefore, we have \(\bigvee_{k\in\mathbb{N}}z_{k}=1\), and we consider
\[y_{0}=\sum_{k=1}^{\infty}\frac{y_{k}}{2^{k}}.\]
Note that \(y_{0}\) is a weakly wandering operator with \(s(y_{0})=1\).
Thus, we summarise the Neveu decomposition for arbitrary von Neumann algebras as follows.
**Theorem 4.7** (Neveu Decomposition).: Let \(M\) be any von Neumann algebra and \((M,G,\alpha)\) be a Markov covariant system. Then there exist two projections \(e_{1},e_{2}\in M\) such that \(e_{1}+e_{2}=1\) and
1. there exists a \(G\)-invariant normal state \(\rho\) on \(M\) with support \(s(\rho)=e_{1}\) and
2. there exists a weakly wandering operator \(x_{0}\in M\) with support \(s(x_{0})=e_{2}\).
Further, \(s(\rho)\) and \(s(x_{0})\) are unique.
**Remark 4.8**.: In this section, we have obtained the Neveu decomposition for a Markov covariant system \((M,G,\alpha)\) associated to an action of an amenable group \(G\) on a von Neumann algebra \(M\). We note that if there exists a f.n semifinite weight \(\psi\) on \(M\) such that \(\psi\circ\alpha_{g}=\psi\), then it is straightforward to check that \(\sigma_{t}^{\psi}\circ\alpha_{g}=\alpha_{g}\circ\sigma_{t}^{\psi}\) for all \(g\in G\) and \(t\in\mathbb{R}\); hence it is a Markov covariant system. We would like to highlight that it is natural to assume that \((M,G,\alpha)\) preserves a f.n semifinite weight \(\psi\) on \(M\). Possibly, most covariant systems \((M,G,\alpha)\) preserve a f.n semifinite weight on \(M\), but at present we have neither a proof nor a reference. Further, we note that automorphisms of \(\mathcal{B}(\mathcal{H})\) always preserve the f.n semifinite trace on \(\mathcal{B}(\mathcal{H})\). Furthermore, note that the shift automorphism \(\alpha:\mathcal{B}(L^{2}(\mathbb{R},\lambda))\to\mathcal{B}(L^{2}(\mathbb{R},\lambda))\), where \(\lambda\) is the Lebesgue measure on \(\mathbb{R}\), does not preserve any state but preserves the trace on \(\mathcal{B}(L^{2}(\mathbb{R},\lambda))\).
Even in the classical setting, given a \(\sigma\)-finite measure space \((X,\mu)\) and a transformation \(T\) defined on it, the problem of finding a finite measure \(\nu\) invariant under \(T\) has been studied extensively by many authors. We refer to [10], [11], [12], [13] and the references therein. Thus, the Markov condition on \((M,G,\alpha)\) is natural and very mild compared to the state-preserving condition.
## 5. Examples
In this section we discuss some examples. Here we assume \(G\) to be an amenable semigroup or group and \((K_{n})\) to be a Folner sequence. Let \((\mathcal{H},U,G)\) be a non-commutative dynamical system, i.e., \((U_{g})\) is a strongly continuous unitary representation of \(G\) on the Hilbert space \(\mathcal{H}\). We begin with the following definition.
**Definition 5.1**.: Let \(G\) be an amenable semigroup and \((\mathcal{H},U,G)\) be a non-commutative dynamical system. It is called weakly mixing if for all \(\xi,\eta\in\mathcal{H}\) one has
\[\frac{1}{m(K_{n})}\int_{K_{n}}\left|\langle U_{g}\xi,\eta\rangle\right|^{2}dm(g )\to 0\text{ as }n\to\infty.\]
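A standard example, included only for illustration: let \(G=\mathbb{Z}_{+}\) act on \(\mathcal{H}=\ell^{2}(\mathbb{Z})\) by powers of the bilateral shift \(S\) (where \(S\delta_{j}=\delta_{j+1}\)). Since \(\langle S^{n}\xi,\eta\rangle\to 0\) as \(n\to\infty\) for all \(\xi,\eta\in\ell^{2}(\mathbb{Z})\) (first for finitely supported vectors, then by density), we have

\[\frac{1}{N}\sum_{n=0}^{N-1}\left|\langle S^{n}\xi,\eta\rangle\right|^{2}\xrightarrow{N\to\infty}0,\]

so \((\ell^{2}(\mathbb{Z}),S,\mathbb{Z}_{+})\) is weakly mixing.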
Let \(\alpha\) be an automorphism of \(\mathcal{B}(\mathcal{H})\). Then there exists a unitary \(U\in\mathcal{B}(\mathcal{H})\) such that \(\alpha(x)=UxU^{*}\) for all \(x\in\mathcal{B}(\mathcal{H})\). Decompose \(\mathcal{H}=\mathcal{H}_{c}\oplus\mathcal{H}_{wm}\), where \(\mathcal{H}_{c}\) and \(\mathcal{H}_{wm}\) are the compact part (equivalently, the closed span of the eigenvectors of \(U\)) and the weakly mixing part of \(U\) respectively. Let \(P_{c}\) and \(P_{wm}\) be the projections onto the subspaces \(\mathcal{H}_{c}\) and \(\mathcal{H}_{wm}\) respectively.
**Theorem 5.2**.: Let \(\alpha\) be an automorphism on \(\mathcal{B}(\mathcal{H})\). Then there exists an \(\alpha\)-invariant normal state \(\varphi_{c}\) on \(\mathcal{B}(\mathcal{H})\) such that \(s(\varphi_{c})=P_{c}\) and a weakly wandering operator \(x_{wm}\in\mathcal{B}(\mathcal{H})\) such that \(s(x_{wm})=P_{wm}\).
Proof.: Let \(\{e_{i}\}_{i\in I}\) be an orthonormal basis of \(\mathcal{H}_{c}\) consisting of eigenvectors of \(U\) and let \(\{\epsilon_{i}\}_{i\in I}\) be a set of positive numbers such that \(\sum_{i\in I}\epsilon_{i}=1\). Then consider
\[\varphi_{c}(x)=\sum_{i\in I}\epsilon_{i}\langle xe_{i},e_{i}\rangle,\text{ for all }x\in\mathcal{B}(\mathcal{H})\]
Clearly, we have \(\varphi_{c}\circ\alpha=\varphi_{c}\).
Now we note that \(U|_{\mathcal{H}_{wm}}\) is weakly mixing. Then by [13, Theorem 1.1] the weakly wandering vectors of \(U|_{\mathcal{H}_{wm}}\) are dense in \(\mathcal{H}_{wm}\). Let \(\{\eta_{k}:k\in\mathbb{N}\}\subseteq\mathcal{H}_{wm}\) be a dense collection of weakly wandering vectors of \(U|_{\mathcal{H}_{wm}}\). Let \(\eta\in\{\eta_{k}:k\in\mathbb{N}\}\); then there exists a sequence \(n_{1}<n_{2}<\cdots\) such that \(U^{n_{i}}\eta\perp U^{n_{j}}\eta\) for \(i\neq j\). Now we show that \(\alpha^{n_{i}}(P_{\eta})\perp\alpha^{n_{j}}(P_{\eta})\), where \(P_{\eta}\) denotes the projection onto \(\mathbb{C}\eta\). Indeed, observe that \(\alpha^{n_{i}}(P_{\eta})=P_{U^{n_{i}}\eta}\) and \(\alpha^{n_{j}}(P_{\eta})=P_{U^{n_{j}}\eta}\). Hence, \(\alpha^{n_{i}}(P_{\eta})\perp\alpha^{n_{j}}(P_{\eta})\) for all \(i\neq j\). Now it is standard to show that \(P_{\eta}\) is a weakly wandering projection. Indeed, let \(A_{n}(P_{\eta})=\frac{1}{n}\sum_{k\in K_{n}}\alpha^{k}(P_{\eta})\), where \(K_{n}=\{0,1,2,\cdots,n-1\}\); then for \(n,k\in\mathbb{N}\), we have
\[\left\|A_{n}(P_{\eta})\right\| =\frac{1}{k}\left\|k\cdot A_{n}(P_{\eta})\right\|\] \[\leq\frac{1}{k}\sum_{j=1}^{k}\left\|A_{n}(P_{\eta})-A_{n}(\alpha^ {n_{j}}(P_{\eta}))\right\|+\frac{1}{k}\left\|\sum_{j=1}^{k}A_{n}(\alpha^{n_{ j}}(P_{\eta}))\right\|\] \[\leq\frac{1}{k}\sum_{j=1}^{k}\frac{\left|(K_{n}\Delta K_{n}n_{j} )\right|}{n}+\frac{1}{k}\left\|A_{n}\Big{(}\sum_{j=1}^{k}\alpha^{n_{j}}(P_{\eta})\Big{)} \right\|\] \[\leq\frac{1}{k}\sum_{j=1}^{k}\frac{\left|(K_{n}\Delta K_{n}n_{j} )\right|}{n}+\frac{1}{k}\left\|\sum_{j=1}^{k}P_{U^{n_{j}}\eta}\right\|\] \[\leq\frac{1}{k}\sum_{j=1}^{k}\frac{\left|(K_{n}\Delta K_{n}n_{j} )\right|}{n}+\frac{1}{k},\]
where the last inequality holds because the vectors \(U^{n_{j}}\eta\) are mutually orthogonal, so \(\sum_{j=1}^{k}P_{U^{n_{j}}\eta}\) is the projection onto their linear span and has norm one.
Now let \(\epsilon>0\). We choose \(k\in\mathbb{N}\) such that \(\frac{1}{k}<\frac{\epsilon}{2}\). Since \((K_{n})\) is a Folner sequence, for each \(j\in\{1,\dots,k\}\) there exists an \(N_{j}\in\mathbb{N}\) such that
\[\frac{|(K_{n}\Delta K_{n}n_{j})|}{n}<\frac{\epsilon}{2},\text{ for all }n\geq N_{j}.\]
Choose \(N:=\max\{N_{1},\dots,N_{k},k\}\in\mathbb{N}\) and for all \(n\geq N\), note that
\[\|A_{n}(P_{\eta})\|\leq\epsilon.\]
Now we set \(x_{wm}=\sum_{k=1}^{\infty}\frac{1}{2^{k+1}}P_{\eta_{k}}.\) It is straightforward to note that \(x_{wm}\) is a weakly wandering operator with \(s(x_{wm})=P_{wm}\).
**Remark 5.3**.: We note that if \(\eta\in\mathcal{H}_{wm}\), then it is straightforward to check that \(\inf_{n}\varphi(A_{n}(P_{\eta}))=0\) for any faithful normal state \(\varphi\in\mathcal{B}(\mathcal{H})_{*}\), and since \(P_{\eta}\) is a finite-rank projection, \(\mu(A_{n}(P_{\eta}))=0\) for every singular linear functional \(\mu\in\mathcal{B}(\mathcal{H})^{*}\). It then follows that \(P_{\eta}\) is a weakly wandering operator.
For the next result assume that \(G\) is an amenable group or a semigroup with a Folner sequence \(\{K_{n}:n\in\mathbb{N}\}\).
**Definition 5.4**.: Let \((\mathcal{H},U,G)\) be a non-commutative dynamical system. A vector \(\xi\in\mathcal{H}\) is called wandering if the following holds:
\[\frac{1}{m(K_{n})}\int_{K_{n}}U_{g}\xi\ dm(g)\to 0\text{ as }n\to\infty\text{ in }\left\|\cdot\right\|.\]
The following is a weak version of Krengel's theorem.
**Theorem 5.5**.: Let \((\mathcal{H},U,G)\) be a weakly mixing non-commutative dynamical system. Then every element of \(\mathcal{H}\) is wandering for \((\mathcal{H},U,G)\) in the sense of Definition 5.4.
Proof.: Consider \(\alpha_{g}(x)=U_{g}xU_{g}^{*}\) for all \(g\in G\). Let \(x\in\mathcal{B}(\mathcal{H})\), then we note the following
\[\alpha_{g}\circ\alpha_{h}(x) =\alpha_{g}(\alpha_{h}(x))=\alpha_{g}(U_{h}xU_{h}^{*})\] \[=U_{g}U_{h}xU_{h}^{*}U_{g}^{*}=U_{gh}xU_{gh}^{*}\]
Thus, it follows that \((\mathcal{B}(\mathcal{H}),\alpha,G)\) is a non-commutative dynamical system. Suppose \(\varphi\) is a f.n state on \(\mathcal{B}(\mathcal{H})\). Then we wish to show that for every non-zero projection \(p\in\mathcal{B}(\mathcal{H})\) with \(\tau(p)<\infty\), we have \(\inf_{n\in\mathbb{N}}\varphi(A_{n}(p))=0\), where \(\tau\) is the canonical semifinite trace on \(\mathcal{B}(\mathcal{H})\). Indeed, we note that \(\varphi\) can be written as \(\varphi(\cdot)=\sum_{i=1}^{\infty}\delta_{i}\langle(\cdot)\eta_{i},\eta_{i}\rangle\), where \(\sum_{i}\delta_{i}=1\) and \(\eta_{i}\in\mathcal{H}\) with \(\|\eta_{i}\|=1\). Now assume that \(p\) is the projection onto \(\mathbb{C}\xi\) for some unit vector \(\xi\in\mathcal{H}\). Then for all \(n\in\mathbb{N}\), observe that
\[\varphi(A_{n}(p)) =\sum_{i=1}^{\infty}\delta_{i}\langle(A_{n}(p))\eta_{i},\eta_{i}\rangle\] \[=\sum_{i=1}^{\infty}\delta_{i}\frac{1}{m(K_{n})}\int_{K_{n}} \langle\alpha_{g}(p)\eta_{i},\eta_{i}\rangle dm(g)\] \[=\sum_{i=1}^{\infty}\delta_{i}\frac{1}{m(K_{n})}\int_{K_{n}} \langle(U_{g}pU_{g}^{*})\eta_{i},\eta_{i}\rangle dm(g)\] \[=\sum_{i=1}^{\infty}\delta_{i}\frac{1}{m(K_{n})}\int_{K_{n}} \langle\langle U_{g}^{*}\eta_{i},\xi\rangle U_{g}\xi,\eta_{i}\rangle dm(g)\] \[=\sum_{i=1}^{\infty}\delta_{i}\frac{1}{m(K_{n})}\int_{K_{n}}\left| \langle U_{g}\xi,\eta_{i}\rangle\right|^{2}dm(g).\]
As \(\sum_{i}\delta_{i}<\infty\) and each integral is bounded by \(1\), it follows from weak mixing and dominated convergence that \(\lim_{n\to\infty}\varphi(A_{n}(p))=0\). Then for any finite-rank projection the result follows immediately. We note that if \(p\) is any finite-rank projection, then \(\mu(p)=0\) for every singular linear functional \(\mu\) on \(\mathcal{B}(\mathcal{H})\). It then follows from the proof of (1) \(\implies\) (2) of Theorem 3.9 that \(p\) is weakly wandering for \((\mathcal{B}(\mathcal{H}),\alpha,G)\). Let \(\xi\in\mathcal{H}\) and let \(p_{\xi}\) be the projection onto \(\mathbb{C}\xi\). Then for all \(\eta\in\mathcal{H}\), note that
\[\langle A_{n}(p_{\xi})\eta,\eta\rangle =\frac{1}{m(K_{n})}\int_{K_{n}}\langle(U_{g}p_{\xi}U_{g}^{*})\eta,\eta\rangle dm(g)\] \[=\frac{1}{m(K_{n})}\int_{K_{n}}\langle\langle U_{g}^{*}\eta,\xi \rangle U_{g}\xi,\eta\rangle dm(g)\] \[=\frac{1}{m(K_{n})}\int_{K_{n}}\left|\langle U_{g}\xi,\eta \rangle\right|^{2}dm(g).\]
Thus, it follows that if \(\lim_{n\to\infty}\left\|A_{n}(p_{\xi})\right\|=0\), then
\[\frac{1}{m(K_{n})}\int_{K_{n}}U_{g}\xi\ dm(g)\to 0\ \text{as}\ n\to\infty\ \text{in}\ \left\|\cdot\right\|.\]
Hence, every vector of \(\mathcal{H}\) is wandering for \((\mathcal{H},U,G)\).
## 6. Pointwise Ergodic theorem
In this section we study the pointwise ergodic theorem for the non-commutative dynamical system \((L^{1}(M,\tau),G,\gamma)\). The main results of this section follow from [1] and [1]. To prove the mean ergodic theorem, it was first required to prove a version of the mean ergodic theorem at the level of the GNS space. For that, in addition to the maps being positive, it was assumed that for each \(g\in G\), the map \(\gamma_{g}^{*}:M\to M\) satisfies the Schwarz condition, i.e., \(\gamma_{g}^{*}(x)^{*}\gamma_{g}^{*}(x)\leq\gamma_{g}^{*}(x^{*}x)\) for all \(x\in M\). In this section, we notice that the Schwarz condition is redundant, hence improving the results to actions by positive maps. We begin by recalling the following known results.
**Proposition 6.1**.: _[_1_, 15_]_ _Let \(M,N\) be two unital \(C^{*}\)-algebras, and \(\phi:M\to N\) be a 2-positive map. Then \(\phi(a)^{*}\phi(a)\leq\left\|\phi(1)\right\|\phi(a^{*}a)\) for all \(a\in M\)._
**Proposition 6.2**.: _[_1_, Theorem 3.11]_ _Let \(M\) be the commutative \(C^{*}\)-algebra \(C(L)\) and \(N\) be another \(C^{*}\)-algebra. If \(\phi:C(L)\to N\) is a positive map, then \(\phi\) is completely positive._
We note that, given a non-commutative dynamical system \((M_{*},G,\gamma)\), we have the dual non-commutative dynamical system \((M,G,\alpha)\), where \(\alpha=(\alpha_{g})_{g\in G}\) and \(\alpha_{g}=\gamma_{g}^{*}\) for all \(g\in G\); \((M,G,\alpha)\) is again a non-commutative dynamical system. In the sequel, we use both, in the interest of presentation.
**Remark 6.3**.: Let \((M,G,\alpha)\) be a non-commutative dynamical system. Then \(\alpha_{g}\) is a positive contraction for all \(g\in G\). Therefore, \(\|\alpha_{g}(1)\|\leq 1\) for all \(g\in G\). Let \(x\in M_{s}\) and consider the abelian von Neumann algebra generated by \(x\), denoted by \(\operatorname{VN}(x)\). Then by Proposition 6.2 it follows that \((\alpha_{g})_{|_{\operatorname{VN}(x)}}\) is completely positive, in particular \(2\)-positive. Hence, by Proposition 6.1, we have \(\alpha_{g}(x)^{*}\alpha_{g}(x)\leq\alpha_{g}(x^{*}x)\).
Suppose \(G\) is an amenable group or semigroup. If \(G\) is an amenable group, then by \(\alpha\) we denote an action of \(G\) on the von Neumann algebra \(M\). On the other hand, if \(G\) is a semigroup, then \(\alpha\) will denote either an action or an anti-action of \(G\) on \(M\). Similarly, if \(G\) is an amenable group we consider a sequence of Folner sets \(\{K_{n}\}_{n\in\mathbb{N}}\) as defined in [1], and if \(G\) is a semigroup we consider a net of Folner sets \(\{K_{l}\}_{l\in\mathbb{R}_{+}}\) as defined in [1]. The triple \((M,G,\alpha)\) will form a non-commutative dynamical system.
Now we have the following mean ergodic convergence theorem.
**Theorem 6.4**.: Let \((M,G,\alpha)\) be a non-commutative dynamical system. Also assume that there exists a \(G\)-invariant f.n state \(\rho\) on \(M\). Then for all \(\mu\in M_{*}\), there exists a \(\bar{\mu}\in M_{*}\) such that
\[\bar{\mu}=\left\|\cdot\right\|_{1}-\lim_{n\to\infty}B_{n}(\mu),\]
where,
\[B_{n}(\cdot):=\begin{cases}\frac{1}{m(K_{n})}\int_{K_{n}}\alpha_{g^{-1}}^{*}( \cdot)dm(g)&\text{ when $G$ is an amenable group},\ n\in\mathbb{N},\\ \\ \frac{1}{m(K_{l})}\int_{K_{l}}\alpha_{g}^{*}(\cdot)dm(g)&\text{ when $G$ is a semigroup },\ l\in\mathbb{R}_{+}.\end{cases}\]
Further, if \(G\) is a unimodular group and \(\{K_{n}\}\) are symmetric, then \(\bar{\mu}\) is \(G\)-invariant.
Proof.: Let \(L^{2}(M_{s},\rho)\) be the closure of \(M_{s}\) with respect to the norm induced from the inner product \(\langle\cdot,\cdot\rangle_{\rho}\). Define the following maps on the Hilbert space \(L^{2}(M_{s},\rho)\).
\[u_{g}(x\Omega_{\rho})=\alpha_{g}(x)\Omega_{\rho},x\in M_{s},g\in G.\]
Now, applying Remark 6.3 together with the \(G\)-invariance of \(\rho\), observe that \(u_{g}\) defines a contraction on \(L^{2}(M_{s},\rho)\). The rest of the proof follows verbatim from [1, Lemma 4.5] for the case of amenable groups and [1, Theorem 3.4] for the case of semigroups.
Before we move on to the main theorem of this section (the pointwise convergence theorem), we need to fix a few notations and recall the following definition.
**Definition 6.5**.: A locally compact group \(G\) is said to be of polynomial growth if there exists a compact generating subset \(V\) of \(G\) (i.e, \(\cup_{n\in\mathbb{N}}V^{n}=G\)) satisfying the following condition.
There exists \(k>0\) and \(r\in\mathbb{N}\) such that \(m(V^{n})\leq kn^{r}\) for all \(n\in\mathbb{N}\).
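For instance, \(G=\mathbb{Z}^{d}\) (with counting measure) is of polynomial growth: taking the generating set \(V=\{-1,0,1\}^{d}\), one has \(V^{n}=\{-n,\ldots,n\}^{d}\) and

\[m(V^{n})=(2n+1)^{d}\leq 3^{d}n^{d},\qquad n\in\mathbb{N},\]

so the condition of Definition 6.5 holds with \(k=3^{d}\) and \(r=d\).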
**Remark 6.6**.: It is known from [10] that if \(G\) is a group as in Definition 6.5, then \(G\) is amenable and, for the compact generating set \(V\), the sequence \(\{V^{n}\}_{n\in\mathbb{N}}\) satisfies the Folner condition. It is also known from [10] that a locally compact group of polynomial growth is unimodular.
We would like to discuss the pointwise ergodic theorem for actions of groups of polynomial growth and for actions of the semigroups \(\mathbb{Z}_{+}^{d}\) and \(\mathbb{R}_{+}^{d}\) for a natural number \(d\geq 1\). For simplicity of notation, let \(\mathbb{P}\) be the collection of all LCSH groups of polynomial growth
and write \(\mathbb{G}=\mathbb{P}\cup\{\mathbb{Z}_{+}^{d}:d\in\mathbb{N}\}\cup\{\mathbb{R}_{+}^{d}:d\in\mathbb{N}\}\). Further, we note that if \(G\in\mathbb{P}\), then we take ergodic averages with respect to the Folner sequence \(\{V^{n}\}_{n\in\mathbb{N}}\). We also point out that a non-commutative dynamical system \((M,\mathbb{Z}_{+}^{d},\alpha)\) is determined by \(d\) commuting positive contractions \(\alpha_{1},\alpha_{2},\cdots,\alpha_{d}\) on \(M\) such that \(\alpha_{(i_{1},\cdots,i_{d})}(\cdot)=\alpha_{1}^{i_{1}}\alpha_{2}^{i_{2}}\cdots \alpha_{d}^{i_{d}}(\cdot)\) for \((i_{1},\cdots,i_{d})\in\mathbb{Z}_{+}^{d}\). For the dynamical system \((M,\mathbb{R}_{+}^{d},\alpha)\), we consider the ergodic averages with respect to the sets \(Q_{a}:=\{(t_{1},\ldots,t_{d})\in\mathbb{R}_{+}^{d}:t_{1}<a,\ldots,t_{d}<a\}\) for \(a\in\mathbb{R}_{+}\). Thus, with the preceding notations, for a dynamical system \((M,G,\alpha)\) where \(G\in\mathbb{G}\), we discuss the pointwise ergodic theorem and the stochastic ergodic theorem for the following ergodic averages:
\[A_{a}(\cdot):=\begin{cases}\frac{1}{m(V^{a})}\int_{V^{a}}\alpha_{g}(\cdot)dm(g)& \text{ when }G\in\mathbb{P},\ a\in\mathbb{N},\\ \\ \frac{1}{a^{d}}\int_{Q_{a}}\alpha_{t}(\cdot)dt&\text{ when }G=\mathbb{R}_{+}^{d}, \ a\in\mathbb{R}_{+}.\end{cases}\]
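To spell out the remaining discrete case, which is not displayed above: for \(G=\mathbb{Z}_{+}^{d}\), assuming (as is customary) that the Folner sets are taken to be the cubes \(\{0,\ldots,a-1\}^{d}\), the corresponding average reads, for instance when \(d=2\),

\[A_{a}(x)=\frac{1}{a^{2}}\sum_{i=0}^{a-1}\sum_{j=0}^{a-1}\alpha_{1}^{i}\alpha_{2}^{j}(x),\qquad x\in M,\ a\in\mathbb{N}.\]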
For \(X\in M_{*}\), by \(A_{a}(X)\) we mean \(A_{a}^{*}(X)\). By abuse of notation, we make no distinction between \(A_{a}(\cdot)\) and \(A_{a}^{*}(\cdot)\) when the meaning is clear from the context. Now we have the main theorem of this section.
**Theorem 6.7**.: Let \(M\) be a finite von Neumann algebra with a f.n trace \(\tau\) and \((M,G,\alpha)\) be a non-commutative dynamical system with \(G\in\mathbb{G}\). Furthermore, assume that there exists a f.n \(G\)-invariant state \(\rho\) on \(M\). Then for all \(Y\in L^{1}(M,\tau)\), there exists \(\overline{Y}\in L^{1}(M,\tau)\) such that \(A_{a}(Y)\) converges to \(\overline{Y}\) bilaterally almost uniformly.
Proof.: Note that, in view of Theorem 6.4, \(\overline{Y}\) exists for any \(Y\in L^{1}(M,\tau)\). The bilaterally almost uniform convergence of \(A_{a}(Y)\) to \(\overline{Y}\) follows from [1, Theorem 4.15] and Theorem 5.5 of [1].
## 7. Stochastic Ergodic Theorem
In this section we combine the results obtained in §3 and §6 to prove a stochastic ergodic theorem. Throughout this section we assume that \(M\subseteq\mathcal{B}(\mathcal{H})\) is a von Neumann algebra with a f.n tracial state \(\tau\). We further assume that \(G\) is a group of polynomial growth with a compact, symmetric generating set \(V\), in which case the averages are considered with respect to the Folner sequence \(\{V^{n}\}_{n\in\mathbb{N}}\), or that \(G\in\{\mathbb{Z}_{+}^{d},\mathbb{R}_{+}^{d}\}\). Then we prove a stochastic ergodic theorem for the non-commutative dynamical system \((L^{1}(M,\tau),G,\gamma)\). Recall that a non-commutative dynamical system \((L^{1}(M,\tau),G,\gamma)\) consists of a map \(G\ni g\to\gamma_{g}\in\mathcal{B}(L^{1}(M,\tau))\) satisfying the following:
1. \(\gamma_{g}\circ\gamma_{h}=\gamma_{gh}\), for all \(g,h\in G\) and for all \(x\in L^{1}(M,\tau)\) the map \(g\to\gamma_{g}(x)\) is continuous.
2. For all \(g\in G\), the map \(\gamma_{g}:L^{1}(M,\tau)\to L^{1}(M,\tau)\) is a positive contraction, i.e, \(\gamma_{g}(x)\geq 0\) and \(\|\gamma_{g}(x)\|\leq\|x\|\) for all \(x\in L^{1}(M,\tau)_{+}\).
Given a non-commutative dynamical system \((L^{1}(M,\tau),G,\gamma)\), consider the dual map \(\gamma_{g}^{*}:M\to M\) and note that \(\gamma_{g}^{*}:M\to M\) is a subunital positive contraction; writing \(\gamma^{*}=(\gamma_{g}^{*})\), the triple \((M,G,\gamma^{*})\) becomes a non-commutative dynamical system. Let \(e_{1}\) and \(e_{2}\) be the two projections obtained in Theorem 3.11, which satisfy the following.
1. There exists a normal state \(\rho\) on \(M\) with \(s(\rho)=e_{1}\) such that \(\rho(\gamma_{g}^{*}(x))=\rho(x)\) for all \(g\in G\) and \(x\in M\); equivalently, there exists a \(Y\in L^{1}(M,\tau)\) with \(s(Y)=e_{1}\) and \(\gamma_{g}(Y)=Y\) for all \(g\in G\).
2. \(\gamma_{g}(e_{1}L^{1}(M,\tau)e_{1})\subseteq e_{1}L^{1}(M,\tau)e_{1}\) for all \(g\in G\).
3. \(\gamma_{g}^{*}(e_{2})\leq e_{2}\) for all \(g\in G\).
We write \(M_{e_{i}}=e_{i}Me_{i}\) and \(\tau_{e_{i}}=\frac{1}{\tau(e_{i})}\tau|_{e_{i}Me_{i}}\) for \(i=1,2\). Then note that \(e_{1}L^{1}(M,\tau)e_{1}=L^{1}(M_{e_{1}},\tau_{e_{1}})\). Further, for all \(g\in G\), consider \(\gamma_{g}|_{L^{1}(M_{e_{1}},\tau_{e_{1}})}:L^{1}(M_{e_{1}},\tau_{e_{1}})\to L ^{1}(M_{e_{1}},\tau_{e_{1}})\). For all \(g\in G\) and \(x\in M\), note that
\[(\gamma_{g}|_{L^{1}(M_{e_{1}},\tau_{e_{1}})})^{*}(e_{1}xe_{1})=e_{1}\gamma_{g} ^{*}(e_{1}xe_{1})e_{1}\]
Write \(\alpha_{g}(y)=e_{1}\gamma_{g}^{*}(y)e_{1}\) for all \(y\in M_{e_{1}}\). For each \(g\in G\), \(\alpha_{g}\) is a positive contraction and for all \(y\in M_{e_{1}}\) we have
\[\rho(\alpha_{g}(y))=\rho(e_{1}\gamma_{g}^{*}(y)e_{1})=\rho(\gamma_{g}^{*}(y))= \rho(y).\]
Finally, we have \(\alpha_{g}^{*}(e_{1}Xe_{1})=\gamma_{g}(e_{1}Xe_{1})\) for all \(g\in G\) and for all \(X\in L^{1}(M,\tau)\). Thus, we conclude that \((M_{e_{1}},G,\alpha)\) is a non-commutative dynamical system with an invariant f.n state \(\rho\). Now we have the following stochastic ergodic theorem.
**Theorem 7.1** (Stochastic Ergodic Theorem).: Let \(M\) be a finite von Neumann algebra with a f.n trace \(\tau\). Suppose \((L^{2}(M,\tau),G,\gamma)\) is a covariant system. Consider the projections \(e_{1},e_{2}\in M\) as mentioned in Remark 3.12. Then we have the following results.
1. For all \(B\in L^{1}(M_{e_{1}},\tau_{e_{1}})\), there exists \(\bar{B}\in L^{1}(M_{e_{1}},\tau_{e_{1}})\) such that \(A_{n}(B)\) converges b.a.u to \(\bar{B}\). Moreover, \(A_{n}(B)\) converges in measure to \(\bar{B}\).
2. For all \(B\in L^{1}(M_{e_{2}},\tau_{e_{2}})\), \(A_{n}(B)\) converges to \(0\) in measure.
Proof.: _(i):_ Following the previous discussion, recall that \((M_{e_{1}},G,\alpha)\) is a non-commutative dynamical system with an invariant f.n state \(\rho\), i.e., \(\rho(\alpha_{g}(x))=\rho(x)\) for all \(g\in G\) and \(x\in M_{e_{1}}\). Let \(B\in L^{1}(M_{e_{1}},\tau_{e_{1}})\); then it follows from Theorem 6.7 that there exists \(\bar{B}\in L^{1}(M_{e_{1}},\tau_{e_{1}})\) such that \(A_{n}(B)\) converges to \(\bar{B}\) b.a.u. Furthermore, the convergence in measure follows from Remark 2.11.
_(ii):_ From Corollary 3.11, it follows that there exists a weakly wandering operator \(x_{0}\in M_{+}\) such that \(s(x_{0})=e_{2}\). Hence \(e_{2}x_{0}e_{2}=x_{0}\), which implies \(x_{0}\in M_{e_{2}}\).
Now let \(B\) be a non-zero element of \(L^{1}(M_{e_{2}},\tau_{e_{2}})_{+}\). Let us choose \(0<\epsilon\leq 1\) and \(\delta>0\). Since \(e_{2}=\chi_{(0,\infty)}(x_{0})\), observe that there exists \(m\in\mathbb{N}\) such that the projection \(p:=\chi_{(\frac{1}{m},\infty)}(x_{0})\in M_{e_{2}}\) satisfies \(\tau(e_{2}-p)<\frac{\delta}{2}\). Now we define the projections
\[r_{n}:=\chi_{[\epsilon,\infty)}(pA_{n}(B)p),n\in\mathbb{N},\]
and claim that \(\tau(r_{n})\leq\delta/2\) for all sufficiently large \(n\). Indeed, since \(\frac{1}{m}p\leq x_{0}\) we have \(A_{n}(p)\leq mA_{n}(x_{0})\) for all \(n\in\mathbb{N}\), which implies \(\|A_{n}(p)\|\leq m\left\|A_{n}(x_{0})\right\|\). Now since \(x_{0}\) is a weakly wandering operator, there exists \(N_{0}\in\mathbb{N}\) such that
\[\|A_{n}(p)\|\leq\frac{\epsilon\delta}{2\tau(B)}\text{ for all }n\geq N_{0}.\]
Therefore, for all \(n\in\mathbb{N}\) we have,
\[\tau(pA_{n}(B)p)=\tau(A_{n}(B)p)=\tau(BA_{n}(p))\leq\tau(B)\left\|A_{n}(p)\right\|.\]
Note that \(\epsilon r_{n}\leq pA_{n}(B)p\) for all \(n\in\mathbb{N}\). Therefore, we have \(\tau(r_{n})\leq\frac{\delta}{2}\) for all \(n\geq N_{0}\).
Define the projections \(q_{n}:=p-r_{n}\), \(n\in\mathbb{N}\), and observe that for all \(n\geq N_{0}\),
\[\tau(e_{2}-q_{n})=\tau(e_{2}-p+r_{n})=\tau(e_{2}-p)+\tau(r_{n})\leq\delta/2+\delta/2=\delta.\]
We also note that, for all \(n\in\mathbb{N}\)
\[q_{n}A_{n}(B)q_{n}=q_{n}pA_{n}(B)pq_{n} \leq\chi_{[0,\epsilon]}(pA_{n}(B)p)\,(pA_{n}(B)p)\,\chi_{[0,\epsilon]}(pA_{n}(B)p)\] \[\leq\epsilon\,\chi_{[0,\epsilon]}(pA_{n}(B)p).\]
Hence, for all \(n\in\mathbb{N}\) we have
\[\left\|q_{n}A_{n}(B)q_{n}\right\|\leq\epsilon.\]
The result for arbitrary \(B\in L^{1}(M_{e_{2}},\tau_{e_{2}})\) then follows from Proposition 2.12.
**Remark 7.2**.: Let \(X\in L^{1}(M,\tau)\). Then as a consequence of Theorem 7.1, we get the following.
* There exists \(\overline{X}\in L^{1}(M,\tau)\) such that for all \(\epsilon,\delta>0\) there exists \(N_{0}\in\mathbb{N}\) and a projection \(p\in M_{e_{1}}\) such that \[\tau(e_{1}-p)<\delta/2,\text{ and, }\left\|p(e_{1}A_{n}(X)e_{1}-\overline{X})p \right\|<\epsilon\text{ for all }n\geq N_{0}.\]
* For all \(\epsilon,\delta>0\), there exists a sequence of projections \(\{q_{n}\}_{n\in\mathbb{N}}\) in \(M_{e_{2}}\) and \(N_{1}\in\mathbb{N}\) such that \[\tau(e_{2}-q_{n})<\delta/2,\text{ and, }\left\|q_{n}e_{2}A_{n}(X)e_{2}q_{n} \right\|<\epsilon\text{ for all }n\geq N_{1}.\]
Consider the following projection
\[r_{n}:=p+q_{n},\ n\in\mathbb{N}.\]
Note that for all \(n\in\mathbb{N}\), \(r_{n}\) is a projection in \(M\) and
\[\tau(1-r_{n})=\tau(e_{1}-p)+\tau(e_{2}-q_{n})<\delta.\]
**Lemma 7.3**.: Let \(X\in L^{1}(M,\tau)_{+}\), and let \(\overline{X}\), \(p\), \(q_{n}\) and \(r_{n}\) be as in Remark 7.2. Then there exists \(N_{2}\in\mathbb{N}\) such that for all \(n\geq N_{2}\), \(\left\|r_{n}e_{1}A_{n}(X)e_{2}r_{n}\right\|\leq\sqrt{\epsilon(\epsilon+\left\|pe_{1}\overline{X}e_{1}p\right\|)}\) and \(\left\|r_{n}e_{2}A_{n}(X)e_{1}r_{n}\right\|\leq\sqrt{\epsilon(\epsilon+\left\|pe_{1}\overline{X}e_{1}p\right\|)}\).
Proof.: Observe that for all \(n\in\mathbb{N}\), \(A_{n}(X)\in L^{1}(M,\tau)_{+}\), and for all \(n\geq N_{0}\), \(pe_{1}\overline{X}e_{1}p\) and \(A_{n}(X)e_{1}p\) are bounded operators. Then we claim that for all \(n\geq N_{0}\), \(pe_{1}A_{n}(X)^{1/2}\) is also a bounded operator. Indeed, let \(n\geq N_{0}\) and \(\xi\in\mathcal{D}(A_{n}(X)e_{1}p)\). Then,
\[\left\langle A_{n}(X)^{1/2}e_{1}p\xi,A_{n}(X)^{1/2}e_{1}p\xi\right\rangle =\left\langle A_{n}(X)e_{1}p\xi,e_{1}p\xi\right\rangle\] \[=\left\langle pe_{1}A_{n}(X)e_{1}p\xi,\xi\right\rangle\] \[\leq\left\|pe_{1}A_{n}(X)e_{1}p\right\|\left\|\xi\right\|^{2}\] \[=\left\|pe_{1}(A_{n}(X)-\overline{X})e_{1}p+pe_{1}\overline{X}e_{1}p\right\|\left\|\xi\right\|^{2}\] \[\leq\left(\epsilon+\left\|pe_{1}\overline{X}e_{1}p\right\|\right)\left\|\xi\right\|^{2}.\]
Since, for all \(n\in\mathbb{N}\), \(\overline{\mathcal{D}(A_{n}(X)e_{1}p)}=\mathcal{H}\), we get \(\big{\|}A_{n}(X)^{1/2}e_{1}p\big{\|}\leq\sqrt{\epsilon+\|pe_{1}\overline{X}e_{1}p\|}\) for all \(n\geq N_{0}\). Also we note that
\[\big{\|}pe_{1}A_{n}(X)^{1/2}\big{\|}=\big{\|}(A_{n}(X)^{1/2}e_{1}p)^{*}\big{\|} \leq\sqrt{\epsilon+\big{\|}pe_{1}\overline{X}e_{1}p\big{\|}}\text{ for all }n\geq N_{0}. \tag{7.1}\]
Again observe that for all \(n\geq N_{1}\), \(A_{n}(X)e_{2}q_{n}\) is a bounded operator. We also claim that for all \(n\geq N_{1}\), \(A_{n}(X)^{1/2}e_{2}q_{n}\) is a bounded operator. Indeed, let \(n\geq N_{1}\) and \(\xi\in\mathcal{D}(A_{n}(X)e_{2}q_{n})\). Then,
\[\langle A_{n}(X)^{1/2}e_{2}q_{n}\xi,A_{n}(X)^{1/2}e_{2}q_{n}\xi\rangle =\langle A_{n}(X)e_{2}q_{n}\xi,e_{2}q_{n}\xi\rangle\] \[=\langle q_{n}e_{2}A_{n}(X)e_{2}q_{n}\xi,\xi\rangle\] \[\leq\|q_{n}e_{2}A_{n}(X)e_{2}q_{n}\|\,\|\xi\|^{2}\] \[\leq\epsilon\,\|\xi\|^{2}\,\text{ for all }n\geq N_{1}.\]
Since, \(\overline{\mathcal{D}(A_{n}(X)e_{2}q_{n})}=\mathcal{H}\) for all \(n\in\mathbb{N}\), we get, \(\big{\|}A_{n}(X)^{1/2}e_{2}q_{n}\big{\|}\leq\sqrt{\epsilon}\) for all \(n\geq N_{1}\). Now define \(N_{2}:=\max\{N_{0},N_{1}\}\) and note that for all \(n\geq N_{2}\)
\[\|r_{n}e_{1}A_{n}(X)e_{2}r_{n}\|= \,\|pe_{1}A_{n}(X)e_{2}q_{n}\|\] \[= \,\big{\|}pe_{1}A_{n}(X)^{1/2}A_{n}(X)^{1/2}e_{2}q_{n}\big{\|}\] \[\leq \,\big{\|}pe_{1}A_{n}(X)^{1/2}\big{\|}\,\big{\|}A_{n}(X)^{1/2}e_{2}q_{n}\big{\|}\] \[\leq \sqrt{\epsilon(\epsilon+\big{\|}pe_{1}\overline{X}e_{1}p\big{\|})}.\]
Now since \(r_{n}e_{2}A_{n}(X)e_{1}r_{n}=(r_{n}e_{1}A_{n}(X)e_{2}r_{n})^{*}\) holds for all \(n\in\mathbb{N}\), we have
\[\|r_{n}e_{2}A_{n}(X)e_{1}r_{n}\|\leq\sqrt{\epsilon(\epsilon+\big{\|}pe_{1} \overline{X}e_{1}p\big{\|})}.\]
**Theorem 7.4** (Strong Stochastic Ergodic Theorem).: Let \(M\) be a finite von Neumann algebra with a f.n trace \(\tau\). Suppose \((L^{1}(M,\tau),G,\gamma)\) is a non-commutative dynamical system such that for each \(g\in G\), \(\gamma_{g}\) is a Lamperti operator. Let \(X\in L^{1}(M,\tau)\); then there exists \(\overline{X}\in L^{1}(M,\tau)\) such that \(A_{n}(X)\) converges to \(\overline{X}\) in measure. Further, suppose \(e_{1},e_{2}\in M\) are as in Remark 3.12; then \(e_{1}\overline{X}e_{1}=\overline{X}\) and \(e_{2}\overline{X}e_{2}=0\).
Proof.: Let \(X\in L^{1}(M,\tau)\). It is enough to prove the result for \(X\geq 0\), so assume \(X\geq 0\). Then for all \(n\in\mathbb{N}\), \(A_{n}(X)\in L^{1}(M,\tau)_{+}\) and
\[A_{n}(X)=e_{1}A_{n}(X)e_{1}+e_{1}A_{n}(X)e_{2}+e_{2}A_{n}(X)e_{1}+e_{2}A_{n}(X )e_{2}.\]
Observe that it follows from Corollary 3.16 that for all \(n\in\mathbb{N}\), \(e_{1}A_{n}(X)e_{1}=A_{n}(e_{1}Xe_{1})\in L^{1}(M_{e_{1}},\tau_{e_{1}})_{+}\) and \(e_{2}A_{n}(X)e_{2}=A_{n}(e_{2}Xe_{2})\in L^{1}(M_{e_{2}},\tau_{e_{2}})_{+}\).
Let \(\epsilon,\delta>0\). Consider the element \(\overline{X}\in L^{1}(M,\tau)\) and projections \(r_{n}\) in \(M\) as in Remark 7.2. Let \(Z:=e_{1}\overline{X}e_{1}\) and note that for all \(n\in\mathbb{N}\)
\[r_{n}(A_{n}(X)-\overline{X})r_{n}=r_{n}\Big{(}e_{1}A_{n}(X)e_{1} -e_{1}\overline{X}e_{1}\Big{)}r_{n}+ r_{n}e_{1}A_{n}(X)e_{2}r_{n}+r_{n}e_{2}A_{n}(X)e_{1}r_{n}\] \[+r_{n}e_{2}A_{n}(X)e_{2}r_{n}.\]
We also note that for all \(n\in\mathbb{N}\), \(r_{n}\Big{(}e_{1}A_{n}(X)e_{1}-e_{1}\overline{X}e_{1}\Big{)}r_{n}=p\Big{(}e_{1}A _{n}(X)e_{1}-e_{1}\overline{X}e_{1}\Big{)}p\) and \(r_{n}e_{2}A_{n}(X)e_{2}r_{n}=q_{n}e_{2}A_{n}(X)e_{2}q_{n}\).
Hence the result follows from Remark 7.2 and Lemma 7.3. |
2306.05411 | R-MAE: Regions Meet Masked Autoencoders | In this work, we explore regions as a potential visual analogue of words for
self-supervised image representation learning. Inspired by Masked Autoencoding
(MAE), a generative pre-training baseline, we propose masked region
autoencoding to learn from groups of pixels or regions. Specifically, we design
an architecture which efficiently addresses the one-to-many mapping between
images and regions, while being highly effective especially with high-quality
regions. When integrated with MAE, our approach (R-MAE) demonstrates consistent
improvements across various pre-training datasets and downstream detection and
segmentation benchmarks, with negligible computational overheads. Beyond the
quantitative evaluation, our analysis indicates the models pre-trained with
masked region autoencoding unlock the potential for interactive segmentation.
The code is provided at https://github.com/facebookresearch/r-mae. | Duy-Kien Nguyen, Vaibhav Aggarwal, Yanghao Li, Martin R. Oswald, Alexander Kirillov, Cees G. M. Snoek, Xinlei Chen | 2023-06-08T17:56:46Z | http://arxiv.org/abs/2306.05411v2 | # R-MAE: Regions Meet Masked Autoencoders
###### Abstract
Vision-specific concepts such as'region' have played a key role in extending general machine learning frameworks to tasks like object detection. Given the success of region-based detectors for supervised learning and the progress of intra-image methods for contrastive learning, we explore the use of regions for reconstructive pre-training. Starting from Masked Autoencoding (MAE) both as a baseline and an inspiration, we propose a parallel pre-text task tailored to address the one-to-many mapping between images and regions. Since such regions can be generated in an unsupervised way, our approach (R-MAE) inherits the wide applicability from MAE, while being more'region-aware'. We conduct thorough analyses during the development of R-MAE, and converge on a variant that is both effective and efficient (1.3% overhead over MAE). Moreover, it shows consistent quantitative improvements when generalized to various pre-training data and downstream detection and segmentation benchmarks. Finally, we provide extensive qualitative visualizations to enhance the understanding of R-MAE's behavior and potential. Code will be made available.1
Footnote 1: [https://github.com/facebookresearch/r-mae](https://github.com/facebookresearch/r-mae).
## 1 Introduction
General machine learning paradigms can often benefit from key concepts when applied to specific domains. For computer vision and especially for localization-geared tasks like object detection, one of these concepts is '_region_': Widely-accepted physiological theories [40] suggest that human perception will group similar elements and parts together to parse complex scenes and objects. This hypothesis is empirically validated by the R-CNN series [29] (note that the 'R' stands for 'region', which can be pre-computed [54] or jointly learned [50]). R-CNN successfully bridged the gap between the general _supervised_ learning framework [41] that pre-trains the backbone, and the specific downstream task of finding objects (Fig. 1, left). Even today, region-refinement still remains an essential component for top-performing detectors [12, 16, 46, 43] trained on human annotations.
Besides supervised classification, un- or self-supervised learning methods [23, 10, 18, 32] have recently emerged as powerful alternatives for pre-training representations. For computer vision, _contrastive_ learning [18] shows solid gains in training _efficiency_ against supervised baselines for object detection [33]. Meanwhile, _reconstructive_ pre-training such as Masked Autoencoding (MAE) [32] has proven even more _effective_, improving the upper-bound of detection accuracy beyond faster convergence [44, 64].
Although both paradigms are general, _more_ efforts have been directed towards adapting contrastive methods to vision. In particular, since the standard formulation [18] represents each image with a single vector, it neglects the rich spatial structure of images and may not transfer as well to tasks that require accurate localization. Again, region as a key concept that allows for 'intra-image contrast' has been extensively researched to close this gap [51, 36, 60, 26, 62, 65, 59, 6, 37] (Fig. 1, middle). Nevertheless, while reconstructive methods are more powerful [44] and underlie many state-of-the-art detectors [43, 64], it is unclear how regions can be introduced to such frameworks, and whether they can further help downstream performance (Fig. 1, right).
Figure 1: **Regions are a key concept in adapting general machine learning paradigms to important vision tasks like object detection. Left: from supervised classification to region-based learning in the R-CNN series [29]. Middle: from inter-image contrast to region-level, intra-image contrast as explored in self-supervised pre-training [36]. Right: while being more effective [44], how to use region information in reconstructive pre-training remains under-explored. We aim to close this gap.**
We aim to fill in this blank. We begin with MAE [32] as a representative baseline, and explore the use of pre-computed regions [25] in an MAE-style fashion. Specifically, we propose a pre-text task called 'masked Region Autoencoding' (RAE). Similar to MAE, RAE is also reconstructive. But different from MAE, RAE focuses on regions, or _'region maps'_ that represent regions as binary-valued maps indicating if a pixel belongs to a region. Moreover, as our goal is to pre-train image backbones like Vision Transformers (ViTs) [24], the corresponding masked image is also fed through these as 'pixel encoders' to compute additional inputs for RAE.
One distinctive challenge we face with RAE is a potential _one-to-many_ mapping, since each image may contain an unknown number of regions. This makes RAE akin to object detection [45], where multiple instances can appear in one scene. The solution in R-CNN [29] essentially stacks regions in the _batch_ axis and processes each of them separately. This ensures permutation equivariance among regions, but can be less efficient. Therefore, we extend our investigation to the remaining two axes within ViT - _channel_ and _length_[24], and show that by treating pooled region embeddings as queries, the length-based variant offers the best trade-off between speed and accuracy for RAE.
RAE as a task is fully compatible with MAE, which can be optimized in parallel by simply restoring the pixel decoder [32]. From the standpoint of MAE, the addition of RAE makes the pre-trained pixel encoder more _region-aware_. Therefore, we name our joint approach R-MAE, short for 'Region-aware Masked Autoencoding'. By default, R-MAE uses unsupervised, image-computable regions [25], giving it the same range of applicability as MAE.
Different from prior practices [7, 32], we develop R-MAE by pre-training on COCO train2017 [45], for its scene-centric images and ground-truth regions as potential oracles. Evaluation is again focused on localization tasks, transferring to COCO object detection and ADE20K semantic segmentation [67]. The development is carefully devised in two stages with extensive analyses: We first show RAE _alone_ works well; then we show RAE fares well _with_ MAE. Our default setup merely adds _1.3_% FLOPs on top of MAE.
Further, we generalize by pre-training with more COCO data and on ImageNet [22], and by evaluating on long-tail object detection (LVIS [31]). Consistent gains are observed.2 To highlight what's learned in R-MAE, we visualize both the output and the attention map of the pre-trained models, and find R-MAE is indeed more region-, or _instance-aware_. Finally, as a side application, we show RAE _itself_ has the potential for interactive segmentation [53], thanks to its ability to generate high-quality region maps from just a few visible patches. All this evidence suggests R-MAE/RAE learns useful and meaningful representations for downstream tasks, especially ones like detection and segmentation.
Footnote 2: We also examined larger backbone and better regions in Appendix B.
## 2 Related Work
We first review two _intrinsic properties_ of regions, which have driven their popularity in computer vision:
**Local.** Images are typically treated as holistic entities in machine learning algorithms [41, 18], but real-world photos have rich spatial structures and local contents can vary across the same scene [3]. This becomes a strong motivation for the well-known R-CNN series [29, 28, 50, 34], especially with Region-of-Interest (RoI) operations on local feature maps [28]. The same goes for contrastive or Siamese learning [18, 33, 49, 30, 20, 14], where 2D signals are generally suppressed into global vectors for inter-image contrast. Realizing its potential downside on localization, many follow-up works [63, 48, 51, 60, 61, 62, 65, 26, 59, 37] have shifted focus on intra-image contrast, which use features from local geometric entities (_e.g_. points [57], regions [36] or both [6]). On the other hand, reconstructive methods [32, 7, 58, 19] as denoising autoencoders [56] preserve the 2D structure. It is therefore unclear how regions can further help in this regard.
**Object-centric.** Perhaps this is a more motivating reason for regions to meet MAE. Reconstructive learning is the dominating paradigm in pre-training natural language representations [23, 10], and while steady progress is made [17, 32], computer vision models are still lagging behind. One crucial difference between the two is that language consists of semantically meaningful words, while images are raw signals recorded in pixels. Meanwhile, in vision, objects can serve as a natural counterpart to words - they are constantly referred to and manipulated as we interact with the visual world [40, 66], and they can often be captured by regions [54, 2]. By enhancing MAE's region awareness, we hope to uncover novel ways to bridge the gap between the two fields.
Then we discuss how regions are _generated_ and _utilized_:
**Source of regions.** Regions can come from various sources (_e.g_. human annotations [45], spatial heuristics [36], clustering/segmentation [9, 25, 1], object proposals [54, 2], motion segmentation [47]). As an initial exploration, we use pre-computed, clustering-based regions [25]. However, regions can also be jointly discovered [37] or updated [6] with representation learning, which is left for future work.
**Use of regions.** There are at least three other ways to leverage regions in MAE. One is to bias the random masking strategy [42], which is less general and can be sensitive to region qualities [42]. Second is to revisit the RoI operation [50] and contrastive learning, which is costly with Siamese encoders [33, 20], and has been extensively studied [36, 60, 62, 59] even with MAE [4]. Third is to view regions as an extra _modality_, and treat the task as a multi-modal learning one (_e.g_. with text [27, 52], depth map [5]). This is closest to our work, yet the lightweight design of R-MAE makes it especially well-suited to handle regions.
## 3 Approach
MAE [32] is the foundation and baseline of our RAE and R-MAE. So we summarize it first as background knowledge.
### Background: MAE
**Task.** As the name suggests, MAE uniformly masks out a portion of the image and learns to reconstruct by directly predicting raw pixel values. To provide a meaningful and challenging task for images, a high mask ratio \(\beta_{\text{I}}\) (_e.g_. 75%) is used by default. The reconstruction is compared against the ground-truth with a simple \(\ell_{2}\) loss, \(\mathcal{L}_{\text{I}}\).
**Architecture.** As an autoencoder [56], MAE instantiates its encoder and decoder with ViTs [24]. ViTs directly 'tokenize' images as sequences of patches, which paves the way for MAE's efficient encoder pre-training that _removes_ (and not replaces) masked tokens. Only the fixed-sized (8-block, 512-dimensional) pixel decoder processes in full sequence length. After pre-training, the pixel encoder is transferred as a visual backbone for downstream tasks [43].
### RAE: Masked Region Autoencoding
**Motivation.** Before introducing RAE, let's first provide our high-level thoughts of using regions, or _any_ extra information \(x\) to pre-train representations. There are three ways:
1. Feeding \(x\) as an input - yet the additional signal can make existing tasks easier, weakening the pressure to learn meaningful representations;
2. Predicting \(x\) as a target - this way the model can learn from \(x\) as a supervisory signal, but the task can be too challenging to accomplish and lead to overfitting [8];
3. And lastly, MAE-style usage of \(x\) - it stands between the two extremes above (100% input or 100% output): with \((1-\beta)\times x\) as the input and \(\beta\times x\) as the essential target, the mask ratio \(\beta\) serves as a flexible control of the difficulty level for the pre-text task.
Thus, an MAE-style approach is more flexible/powerful here.
**Region maps.** To adapt MAE to regions - or sets of location points, we first prepare them to be 'image-like'. Specifically, each region can be represented by a binary-valued region map similar in size to the image. Each element on the map, with a value of either 0 or 1, indicates whether the corresponding location belongs to the region or not. Now, given any partially visible region map (mask ratio \(\beta_{\text{R}}\)), we can ask the model to complete it, the same as MAE does for pixels.
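For concreteness, here is a minimal NumPy sketch of how such region maps can be built and masked (illustrative only and not taken from the paper; the helper names are ours):

```python
import numpy as np

def to_region_maps(seg, region_ids):
    """Turn an integer segmentation map (H, W) into binary region maps (k, H, W)."""
    return np.stack([(seg == r).astype(np.float32) for r in region_ids])

def patchify(maps, p):
    """Split (k, H, W) maps into per-patch vectors of shape (k, N, p*p)."""
    k, H, W = maps.shape
    h, w = H // p, W // p
    x = maps.reshape(k, h, p, w, p).transpose(0, 1, 3, 2, 4)
    return x.reshape(k, h * w, p * p)

rng = np.random.default_rng(0)
seg = rng.integers(0, 5, size=(224, 224))                  # toy segmentation with 5 regions
region_maps = to_region_maps(seg, region_ids=[0, 1, 2])    # k = 3 sampled regions
patches = patchify(region_maps, p=16)                      # (3, 196, 256)

# Random mask with ratio beta_R = 0.75: only 25% of the N patches stay visible.
N = patches.shape[1]
visible = patches[:, rng.permutation(N)[: int(N * 0.25)]]  # input to the region branch
print(visible.shape)                                       # (3, 49, 256)
```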
**Architecture.** Similar to MAE, RAE has an encoder and decoder for region autoencoding. We follow MAE and simply use ViT [24] blocks for both: \(m_{\text{E}}\)-block \(p_{\text{E}}\)-dimensional encoder and \(m_{\text{D}}\)-block \(p_{\text{D}}\)-dimensional decoder. However, just a region encoder-decoder pair is insufficient, as our ultimate goal is to obtain a pre-trained _pixel_ encoder. Therefore, we maintain the encoder from MAE in RAE, and use a _neck_ of \(m_{\text{N}}\) ViT blocks to match dimensions and (optionally) propagate information before feeding into the region decoder. Such a configuration also makes effective use of the abundant contextual information available in the pixels to pre-train the encoder. Please see Fig. 2 for the overview.
**One-to-many mapping.** While regions can be considered as an additional _modality_ to pixel-based MAE, the problem addressed here presents a distinctive challenge that cannot be fully captured by this view alone. Compared to other modalities (_e.g_. depth or semantic maps [5]) for which there is a one-to-one correspondence to pixels, the mapping between images and regions is one-to-many: One pixel can belong to an unknown number of regions.
Fortunately, this happens to be the very problem encountered in object detection. The mainstream solution, as promoted by R-CNN [29], is to sample and stack regions in the _batch_ axis, and processes each of them separately. In RAE, this means each region map will go through the encoder-decoder in isolation: If there are \(b\) images and \(k\) regions per image, the network must be applied \(b\times k\) times. This is expensive - so how to reduce the cost?
One naive alternative is to merge the \(k\) regions in the _channel_ axis. In this way, they can be viewed as a single image for encoding and decoding, and the computations are shared in the intermediate blocks. But unlike natural images which have fixed channel orders (_e.g_., RGB), randomly sampled regions can appear in _any_ order. It would be ideal if the solution still preserves _permutation equivariance_.
Figure 2: **The pre-training pipeline** of R-MAE. RAE as a standalone task takes a region encoder-decoder pair and the pixel encoder to reconstruct masked region maps. The MAE pixel decoder is optional (de-highlighted) but fully compatible with RAE, and we call our joint pipeline R-MAE. We default RAE to the variant that concatenates multiple pooled regions in the _length_ axis, as it effectively balances speed and accuracy. But other variants also offer similar region-awareness to MAE.
**Regions as queries - the length variant.** Our final idea is inspired by DETR [13], which uses 'object queries' as substrates to decode objects. In a nutshell, each region is first encoded and pooled into a 1D _embedding_; then multiple region embeddings are concatenated along the sequence _length_[24] axis to form 'region queries'; and finally, these region queries will decode region maps from the output of the pixel encoder (through neck, see Fig. 2 for details). Since ViT blocks are _set_ operations w.r.t. the input [55], this solution is permutation equivariant by design.
The last decoder block is responsible for expanding the region queries _spatially_. Note that because the decoder has two sets of inputs, its blocks follow the three-layer design [13], with an extra _cross-attention_ layer that uses outputs from the neck to generate keys and values. Different from standard attention layers that compute a weighted sum (with keys) over values to produce the output (Fig. 4, left), we expand the query by directly adding it to all the values (Fig. 4, right). A small MLP head is attached afterwards to predict region maps on these spatially expanded features.
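A small PyTorch-style sketch of this expansion (our reading of Fig. 4, not the authors' code; the module name and layer sizes are illustrative): instead of an attention-weighted sum, each region query is added to every projected value vector, and a small MLP then predicts the per-patch region logits.

```python
import torch
import torch.nn as nn

class SpatialExpansion(nn.Module):
    """Expand k region queries over N patch positions by adding them to the values."""
    def __init__(self, dim, patch_size):
        super().__init__()
        self.to_v = nn.Linear(dim, dim)        # value projection of the cross-attention layer
        self.mlp = nn.Sequential(              # small head predicting p*p logits per position
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, patch_size * patch_size))

    def forward(self, queries, context):
        # queries: (B, k, dim) pooled region embeddings; context: (B, N, dim) from the neck
        v = self.to_v(context)                                 # (B, N, dim)
        expanded = queries[:, :, None, :] + v[:, None, :, :]   # (B, k, N, dim) by broadcasting
        return self.mlp(expanded)                               # (B, k, N, p*p) region-map logits

out = SpatialExpansion(128, 16)(torch.randn(2, 8, 128), torch.randn(2, 196, 128))
print(out.shape)  # torch.Size([2, 8, 196, 256])
```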
Since this variant alleviates the linear complexity w.r.t. number of regions \(k\), and still maintains the desired property w.r.t. permutation, we choose it as the default for RAE.
**Loss.** While \(\ell_{2}\) loss fits real-valued pixel predictions, by default we use cross-entropy loss for binary-valued regions (\(\mathcal{L}_{\text{R}}\)). Modeling it as a classification task allows easy balance of the weights between foreground and background (\(w_{\text{b}}\)).
### R-MAE: Regions Meet MAE
Finally, as RAE is fully compatible with MAE, they can be trained in conjunction by simply restoring the pixel encoder and applying a joint loss: \(\mathcal{L}_{\text{I}}+\lambda\mathcal{L}_{\text{R}}\) (\(\lambda\) defaults to 1).
In Fig. 2 we illustrate the default pre-training pipeline (including de-highlighted). Note that: (i) The pixel branch feeds to the region branch, but _not_ vice versa; (ii) The mask is shared between two branches. We name this pipeline R-MAE, short for Region-aware Masked Autoencoding.
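A rough sketch of one training step under this pipeline (entirely illustrative; `pixel_encoder`, `pixel_decoder` and `region_branch` are placeholders, position embeddings are omitted, and MAE's restriction of the pixel loss to masked patches is skipped for brevity):

```python
import torch
import torch.nn.functional as F

def rmae_step(pixel_encoder, pixel_decoder, region_branch,
              img_patches, region_patches, mask_ratio=0.75, lam=1.0, w_bg=1.0):
    # img_patches: (B, N, D_img); region_patches: (B, k, N, D_reg), binary targets.
    B, N, _ = img_patches.shape
    keep = torch.randperm(N)[: int(N * (1 - mask_ratio))]      # mask shared by both branches

    latent = pixel_encoder(img_patches[:, keep])                # encode visible pixel patches only
    pix_pred = pixel_decoder(latent)                            # (B, N, D_img) reconstruction
    loss_img = F.mse_loss(pix_pred, img_patches)                # MAE's l2 loss (simplified)

    # Pixel branch feeds the region branch (not vice versa); regions see the same visible set.
    region_logits = region_branch(latent, region_patches[:, :, keep])   # (B, k, N, D_reg)
    weight = torch.where(region_patches > 0.5,
                         torch.ones_like(region_patches),
                         torch.full_like(region_patches, w_bg))          # background weight w_b
    loss_reg = F.binary_cross_entropy_with_logits(region_logits, region_patches, weight=weight)

    return loss_img + lam * loss_reg                            # joint objective L_I + lambda * L_R
```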
Fig. 3 shows some qualitative examples from R-MAE.
Figure 4: How the region query is **spatially expanded**. We modify the standard cross-attention layer [13] (left) and given a region query, it is summed with all the value vectors to expand its spatial axes (right). A small MLP head is attached afterwards.
Figure 3: **Qualitative results** on COCO val2017 images, using R-MAE pre-trained with unsupervised region maps [25], and then applied on either COCO ground-truth regions (left column) or regions similar to the ones used during pre-training (right column). From left to right, each group contains: 1) the masked image, 2) the image reconstruction, 3) the original image; 4) the masked region, 5) the region reconstruction, 6) the original region, and 7) all regions in that image. Besides results, the figure also gives a sense of the differences between ground-truth and regions used in R-MAE. It’s interesting the algorithm generalizes well from unsupervised regions to ground-truth ones.
## 4 Experiments
In this section, we first develop RAE and R-MAE in two stages: (i) We verify RAE works well as a standalone task; (ii) We bring back MAE and show RAE also fares well with it. Extensive analyses are provided for both stages. Then, we extend our experiments to more data, more tasks, and compare against state-of-the-art. More results are found in Appendix B. Finally, we provide visualizations to better understand R-MAE's behavior and potential.
### Default Setup
**Source of regions.** We use regions generated from the unsupervised Felzenszwalb-Huttenlocher (FH) algorithm [25]. It is efficient and covers the whole image, and underlies classic object proposal methods (_e.g_. selective search [54]).
**Pre-training data.** Deviating from prior practices [7, 32], we develop RAE and R-MAE by pre-training on COCO train2017 [45]. This default choice is due to the scene-centric nature of the images in COCO and the presence of ground-truth regions which can serve as useful oracles. Following [36], FH is run at three scales: \(\{500,1000,1500\}\), which also set the minimum cluster sizes. Since this dataset (118k images) is significantly smaller than ImageNet (1.4m), we pre-train for 4k epochs instead of 800 [32] - it's about _half_ the number of iterations compared to MAE default.
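The text does not specify which FH implementation is used; one readily available option (an assumption on our part) is scikit-image's `felzenszwalb`, called at the three scales above:

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def fh_regions(image, scales=(500, 1000, 1500)):
    """Run FH segmentation at several scales; each call returns an integer label map."""
    # Following the text, the scale parameter also sets the minimum cluster size.
    return [felzenszwalb(image, scale=s, min_size=s) for s in scales]

img = np.random.rand(224, 224, 3)                 # stand-in for a COCO image
label_maps = fh_regions(img)
print([len(np.unique(m)) for m in label_maps])    # number of regions found per scale
```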
**Other pre-training details.** Unless otherwise specified, we follow MAE [32] for hyper-parameters. Our base learning rate is set to 1e-4, which offers better stability during training and maintains the baseline performance (see Appendix B). The length variant is used. ViT-B [24] is set as the pixel backbone, and a 1-block, 128-dimensional ViT is used for the neck, the region encoder and the region decoder. A 3-layer MLP acts as the region predictor after the decoder block. \(k{=}8\) regions are sampled (with repetition) per image, with a mask ratio of \(\beta_{\text{R}}{=}0.75\). Both \(\lambda\) and the background loss weight are set to 1. When MAE is enabled, the pixel branch feeds the region branch, and the random masks are shared.
**Downstream transfer.** We use the recipe from ViTDet [43] for object evaluation on COCO, and report mean Average Precision (AP) for both box detection (AP\({}^{\text{b}}\)) and instance segmentation (AP\({}^{\text{m}}\)). For semantic segmentation, we evaluate on ADE20K and report mean Intersection-over-Union (mIoU) as the main metric. All details follow MAE [32] (_e.g_., run each setting 3 times and take the mean).
### Main Comparisons
We develop R-MAE in two stages, first we show RAE _itself_ is effective; then we show it fares well with MAE. This leads to two main comparisons:
**RAE** _vs_. **scratch.** Tab. 1 mainly compares our RAE with no pre-training (_i.e_., from scratch). The improvement is significant: 47.2 _vs_. 41.2 in AP\({}^{\text{b}}\), and 42.1 _vs_. 24.4 in mIoU with the MAE recipe. While MAE is still more effective, we show RAE is lightweight, and compatible (next).
**R-MAE** _vs_. **MAE.** In Tab. 2, we jointly optimize RAE and MAE objectives in R-MAE. While the improvement at the outset (2k COCO epochs) is less evident, it becomes more salient when the algorithm converges (4k and 8k epochs). On the other hand, MAE saturates around 4k epochs. And thanks to the lightweight design of our RAE, the improvement comes with a _minimal_ computation cost: the region branch only adds 1.3% FLOPs to the MAE baseline (9.8b _vs_. 9.7b).
### Analyses of RAE and R-MAE
We present a **full-page** analysis for RAE and R-MAE, as shown in Tab. 3 and Fig. 5 (RAE); and Tab. 4 (R-MAE). Due to the space limit, we put our observations in the respective captions, and summarize our main findings below:
* RAE variants matter. The _batch_ variant offers the best accuracy (Tab. 3a, Tab. 4a) but can be expensive in FLOPs (Fig. 5a); the _channel_ variant is efficient, but lags behind especially when operating alone (Tab. 3a); the _length_ variant strikes a trade-off between the two.
* Number of regions matters: as shown in Fig. 5b, more regions help for both tasks.
* Mask ratio matters. 75% with shared mask works best.
* Cross-feeding patterns: the _asymmetric_ design from pixels to regions achieves the best results (Tab. 4c).
| pre-train | params (m) | FLOPs (b) | AP\({}^{\text{b}}\) | AP\({}^{\text{m}}\) | mIoU |
|---|---|---|---|---|---|
| / (from scratch) | - | - | 41.2 | 37.1 | 24.4 |
| RAE | 86.3 | 4.7 | **47.2** | **41.8** | **42.1** |
| MAE | 111.9 | 9.7 | 50.1 | 44.6 | 45.9 |
| / (from scratch)\({}^{\ast}\) [43] | - | - | 48.1 | 42.6 | - |

Table 1: **RAE alone works well.** Using the default fine-tuning recipe from ViTDet [43] and MAE [32], our RAE shows significant improvement (47.2 _vs_. 41.2 in COCO AP\({}^{\text{b}}\), and 42.1 _vs_. 24.4 in ADE20K mIoU) over training from scratch, suggesting RAE itself can serve as a pre-text task. MAE is better but RAE is lightweight and compatible, see Tab. 2. (\({}^{\ast}\) uses the longer, optimal recipe).
| # of epochs | MAE: AP\({}^{\text{b}}\) | AP\({}^{\text{m}}\) | mIoU | R-MAE: AP\({}^{\text{b}}\) | AP\({}^{\text{m}}\) | mIoU |
|---|---|---|---|---|---|---|
| 2k | **49.9** | **44.5** | 45.2 | 49.7 | 44.1 | **46.0** |
| 4k | 50.1 | 44.6 | 45.9 | **50.6** | **45.0** | **46.8** |
| 8k | 50.1 | 44.6 | 46.5 | **50.8** | **45.2** | **47.0** |

Table 2: **MAE _vs_. R-MAE** across COCO pre-training epochs. By introducing RAE to MAE, R-MAE shows improvement when the pre-training is sufficiently converged. At 2k epochs MAE is still slightly higher in detection AP, but at 4k (default) and 8k epochs, the benefit emerges. Thanks to the lightweight design of RAE, it only adds _1.3%_ overhead per iteration in FLOPs over MAE.
Table 4: **Analysis of R-MAE.** We cover: a) variants of the design – all 3 are performing similarly well, different from RAE; b) loss weight of RAE; c) cross-feed between RAE and MAE, where we find our _asymmetric_ design that only feeds pixels to regions works best, potentially because only the pixel encoder is used; d) whether to share masks within R-MAE (separate can incur cheating). Defaults are in gray.
Figure 5: **Analysis of RAE** in figures. Top-left: With our lightweight design, RAE does not add significant FLOPs on top of MAE (9.7b); among the variants, the channel variant is the most efficient, length being second, and batch being the most expensive especially when the # of regions goes up. Top-right: # of regions helps performance, and even with 16 regions it still does not stop growing. Bottom: mask ratio matters – we either change the region mask ratio (\(\beta_{\text{R}}\)) alone (left), or jointly change it with the image mask ratio (\(\beta_{\text{R}}\)=\(\beta_{\text{I}}\), right). In both cases, a high mask ratio (\(\sim\)0.75) is required. ADE20K numbers are averaged over three runs to reduce variance, following prior practice [32].
Table 3: **Analysis of RAE** on detection and segmentation. We cover: a) variants of RAE, including all three axes of ViT activation maps; b) ground-truth instance and panoptic segmentation on COCO as oracles; c,d) loss-type which \(\ell_{2}\) also works and background weight in cross-entropy; e-j) architecture changes, where we find larger encoder/decoder/neck generally helps accuracy but can hurt speed, and a 3-layer normal MLP works best as a predictor compared to the inverted MLP layer (in ViT) and none. Default settings are shaded in gray.
* Source of regions matters: COCO ground-truth regions used as oracles help further, achieving the best mIoU and helping on detection scores (Tab. 3b).
Next, we generalize to more data, more tasks; extensions to larger backbones and better regions are found in Appendix B.
### More Pre-Training Data on COCO
Next we generalize our finding. The first generalization concerns the pre-training data scale - whether adding more data changes our observation. To this end, we add COCO unlabeled2017 to our pre-training set, and again train for 4k epochs following [35].
Results are summarized in Tab. 5. With no change of hyper-parameters, R-MAE continues to outperform MAE.
### Evaluation on LVIS Detection
The second generalization is on the downstream task. We directly evaluate the COCO pre-trained MAE baseline and R-MAE on LVIS object detection [31] as another benchmark. LVIS builds on COCO images, but its key focus is on long-tail recognition. The results are presented in Tab. 6, where we observe a similar gain to COCO detection.
### ImageNet Pre-Training
The third generalization is to further pre-train on ImageNet [22]. We make the following adjustments from our default setting: (i) Setting the epoch number to 800/1600, following MAE [32]; (ii) extracting FH regions with a single scale of \(1000\), following [36]. As ImageNet is a standard pre-training benchmark [18, 32], this allows a fair comparison to state-of-the-art methods.
| pre-train | AP\({}^{\text{b}}\) | AP\({}^{\text{b}}_{\text{f}}\) | AP\({}^{\text{b}}_{\text{c}}\) | AP\({}^{\text{b}}_{\text{r}}\) | AP\({}^{\text{m}}\) | AP\({}^{\text{m}}_{\text{f}}\) | AP\({}^{\text{m}}_{\text{c}}\) | AP\({}^{\text{m}}_{\text{r}}\) |
|---|---|---|---|---|---|---|---|---|
| MAE | 37.7 | 44.5 | **36.4** | 25.4 | 35.8 | 40.9 | **35.3** | 25.1 |
| R-MAE | **38.3** | **45.4** | **36.4** | **26.7** | **36.2** | **41.8** | 34.9 | **26.4** |

Table 6: Comparison on **LVIS** between MAE and R-MAE. We also include LVIS-specific metrics for long-tail recognition (subscripts f, c, r denote frequent, common and rare categories).
Figure 6: **Attention map visualization** on COCO val2017. In each group from left to right we show the original image with the selected query (denoted by a red square); three attention maps corresponding to the query generated from i) MoCo v3 [21]; ii) MAE [32]; and iii) R-MAE. All of these methods are pre-trained on the COCO train2017 split. In every row from top to bottom, we show three types of query: i) rigid objects, ii) non-rigid objects, iii) multiple objects. Regions with darker red colors in the attention map denote larger attention weights. Compared to the baselines, the attention map from R-MAE is more local and focused.
| pre-train | train2017 only: AP\({}^{\text{b}}\) | AP\({}^{\text{m}}\) | mIoU | +unlabeled2017: AP\({}^{\text{b}}\) | AP\({}^{\text{m}}\) | mIoU |
|---|---|---|---|---|---|---|
| MAE | 50.1 | 44.6 | 45.9 | 51.5 | 45.9 | 48.4 |
| R-MAE | **50.6** | **45.0** | **46.8** | **52.1** | **46.1** | **48.7** |

Table 5: COCO pre-training **with more data** (unlabeled2017).
Figure 7: **Attention map (failure cases)** on COCO val2017. We show the attention with the query pointing to background or a very large object in the image. The same visualization technique and ordering are used as in Fig. 6. Our R-MAE tends to focus on a very local region instead of the whole background or object, and this can sometimes cause failures.
Tab. 7 summarizes our comparison among the latest MAE variants on detection and segmentation using the same transferring recipe [43, 32]. For MultiMAE [5], we also implemented our own version by treating \(k\) region maps as a semantic map of \(k\) channels. Across all the methods compared, R-MAE achieves the best results on all three metrics. Note that R-MAE even shows an advantage over Long-Seq MAE [35] and SemMAE [42], both of which increased pre-training cost by \(\sim\)4\(\times\) with longer sequence length.
### Qualitative Results
R-MAE generates reasonable region predictions as shown in Fig. 3. In order to gain a better understanding of the behaviour of our pre-trained pixel encoders, we visualize the attention map of the ViT after R-MAE in Fig. 6.
We randomly pick images from the COCO val2017 split for qualitative examination. Given an input image, we first pick a patch as our query (denoted by a red square), and visualize its averaged attention map of the last ViT block. The regions with more attention weights are shown in darker red color. For better comparisons, we also include the visualizations of MoCo v3 [21] and MAE [32] on the same image. To be fair, all three models are pre-trained on the COCO train2017 split till full convergence.
**Observations of attention map.** Given the query patch, MoCo v3 attends to a very large area. In particular, its attention map is hard to interpret when the scene becomes more complicated (shown in the last row). It may be because MoCo v3 is pre-trained with contrastive learning of holistic image representations, so it largely fails to capture the local information. On the other hand, MAE gives more reasonable attention map per query, which highly covers the object of interest. However, for objects of similar colors or shapes, MAE struggles to differentiate them. Finally, R-MAE demonstrates its localization capabilities through the attention map, with strong focus on objects across different locations. The last row of Fig. 6 shows the extreme case of a crowded scene with similar objects. It is impressive to see that our method could distinguish the corresponding object instance from others.
**Failure cases.** We also show some failure cases from our visualization in Fig. 7. The query pointing to the background is shown in the left group. While MoCo v3 and MAE produce a wide attention map which covers the whole background, the attention map from our R-MAE tends to focus on very local regions around the query. The same is observed for an image of a very large object (e.g., the bear on the right). The R-MAE attention map only covers parts of the object.
**RAE as an interactive segmentor.** Finally, we find a pre-trained RAE can act as an 'interactive segmentor' [53]. Specifically, it takes the image along with some patches-of-interest as its inputs at inference time. In an interactive segmentation setting, these patches can be provided by user clicks or eye gazing. The model then predicts the object corresponding to the given patches. From Fig. 8, we can see that RAE can predict high-quality regions even with 90% of the patches masked, and continues to refine when more hints are supplied (from left to right).
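A minimal sketch of how such hints could be prepared (illustrative only; the paper does not prescribe this interface, and a pre-trained RAE would then be asked to complete the region map from these visible patches):

```python
import torch

def clicks_to_region_hints(clicks, img_size=224, patch=16):
    """Turn user clicks (x, y pixel coords) into visible foreground patches of a region map."""
    grid = img_size // patch
    idx = sorted({(y // patch) * grid + (x // patch) for x, y in clicks})
    visible_index = torch.tensor(idx)                       # which patches are revealed
    visible_patches = torch.ones(len(idx), patch * patch)   # clicked patches are all-foreground
    return visible_index, visible_patches

vis_idx, vis_patches = clicks_to_region_hints([(30, 40), (35, 48), (60, 52)])
print(vis_idx.tolist(), vis_patches.shape)  # [29, 44, 45] torch.Size([3, 256])
```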
## 5 Conclusion
In this work, we present a simple yet effective pre-training approach (R-MAE) to explore the important vision concept - _region_ in MAE [32]. Through extensive quantitative and qualitative experiments, we show R-MAE is indeed more 'region-aware', and can consistently help downstream performance on localization-related tasks (e.g., detection and segmentation). By treating regions as queries, its region branch is designed to be highly efficient (1.3% overhead), yet it serves as a key for R-MAE to achieve state-of-the-art results among ImageNet pre-trained MAE variants. We hope our work will inspire more future efforts along this direction, and truly close the gap to natural language processing by learning _the visual analogue_ of words in computer vision.
| # of epochs | method | AP\({}^{\text{b}}\) | AP\({}^{\text{m}}\) | mIoU |
|---|---|---|---|---|
| 800 | SemMAE\(\dagger\) [42] | - | - | 46.3 |
| 800 | R-MAE | **51.3** | **45.7** | **46.6** |
| 1600 | MultiMAE [5] | - | - | 46.2 |
| 1600 | MultiMAE (our impl.) | 51.8 | 46.1 | 47.0 |
| 1600 | LoMaR [15] | 51.4 | 45.7 | - |
| 1600 | Long-Seq MAE\(\ddagger\) [35] | 52.1 | 46.2 | - |
| 1600 | R-MAE | **52.3** | **46.4** | **47.5** |

Table 7: **State-of-the-art comparison** among MAE variants pre-trained on ImageNet [22]. R-MAE, with a negligible 1.3% overhead, outperforms all the methods compared (\(\dagger\): 8\(\times\)8 patch size; \(\ddagger\): 448\(\times\)448 input, both will increase pre-training cost by \(\sim\)4\(\times\)).
Figure 8: **RAE for interactive segmentation.** Here we show RAE’s region predictions on the COCO val2017 set, given images and only masked region maps serving as a proxy to a potential user’s input. Going from left to right, the user is supplying more ‘annotations’. The model is pre-trained with a fixed region masking ratio (75%) but generates high-quality masks even when the inference ratio is significantly higher (90%).
## Appendix A Implementation Details
**Masking strategy.** Different from [42] which deploys a biased sampling strategy using semantic parts, we aim to verify the effectiveness of RAE and R-MAE without changing the distribution of masked images. Therefore, during the pre-training stage, we simply follow the random uniform masking strategy as used in MAE [32]. To ensure the task on the region side is meaningful, we first sample the mask applied to the image, then sample from region maps that have _at least_ one visible foreground patch.
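A small sketch of this sampling procedure (ours; the exact sampler used in the paper may differ in details such as how empty candidate sets are handled):

```python
import numpy as np

def sample_mask_and_regions(region_patches, num_regions=8, mask_ratio=0.75, rng=None):
    """region_patches: (R, N, p*p) binary patches of all R candidate regions of one image."""
    rng = rng or np.random.default_rng()
    R, N, _ = region_patches.shape
    visible = rng.permutation(N)[: int(N * (1 - mask_ratio))]   # uniform random mask, as in MAE

    # Keep regions with at least one visible foreground patch, then sample k with repetition.
    has_fg = region_patches[:, visible].sum(axis=(1, 2)) > 0
    candidates = np.flatnonzero(has_fg)                 # a real sampler would resample the mask
    chosen = rng.choice(candidates, size=num_regions, replace=True)  # if this set were empty
    return visible, chosen
```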
To best describe our implemented model of RAE and R-MAE in detail, we resort to a more mathematical formulation of the problem and our solutions below.
**Basic notations.** We denote \(R\in\mathbb{R}^{H\times W\times k}\) as the region maps corresponding to the input image, where \(k\) is the number of regions, and \(H,W\) are the dimensions of the input. RAE first patchifies \(R\), and then masks \(R\) with a ratio of \(\beta_{\text{R}}\). The patch size \(p\) used in the regions is the same as the input image. The full sequence length is denoted by \(N=\frac{H}{p}\cdot\frac{W}{p}\).
**RAE batch variant.** The RAE batch variant processes each region independently in the _batch_ dimension. Note that the image features are shared among all \(k\) different regions.
Given \(R{=}\{R_{i}\}_{i=1}^{k},R_{i}\in\mathbb{R}^{H\times W}\), our region encoder projects each visible patch of \(R_{i}\) into a region embedding:
\[v_{\text{{enc}}_{i}}=\mathrm{R}\!-\!\mathrm{Encoder}(v_{R_{i}}), \tag{1}\]
where \(v_{R_{i}}\in\mathbb{R}^{N\cdot(1-\beta_{\text{R}})\times(p\cdot p)}\) are visible patches of \(R_{i}\), and \(v_{\text{{enc}}_{i}}\in\mathbb{R}^{N\cdot(1-\beta_{\text{R}})\times p_{\text {E}}}\) is the output of the region encoder.
We then take the sum of the image features \(v^{\prime}_{\text{{penc}}}\) and \(v^{\prime}_{\text{{enc}}_{i}}\), and feed it to the region decoder for prediction:
\[v^{\prime}_{\text{{enc}}_{i}} =\mathrm{MaskFill}\left(f(v_{\text{{enc}}_{i}}),\texttt{[mask]}\right) \tag{2}\] \[v_{\text{{dec}}_{i}} =\mathrm{R}\!-\!\mathrm{Decoder}\left(v^{\prime}_{\text{{enc}}_ {i}}+v^{\prime}_{\text{{penc}}}\right) \tag{3}\]
where \(v^{\prime}_{\text{{penc}}}\in\mathbb{R}^{N\times p_{\text{D}}}\) denotes the image features from the pixel encoder filled with the [mask] token. Similarly, \(v^{\prime}_{\text{{enc}}_{i}}\in\mathbb{R}^{N\times p_{\text{D}}}\) denotes the region embeddings filled with the [mask] token. Here, \(f:p_{\text{E}}\to p_{\text{D}}\) denotes the linear projection and \(v_{\text{{dec}}_{i}}\in\mathbb{R}^{N\times p_{\text{D}}}\) is the region decoder output which is then used to predict masked patches of \(R_{i}\).
While preserving the _permutation equivariance4_ of \(k\) region maps, the RAE batch variant can be computationally expensive and resource-intensive (_i.e._, the total number of FLOPs increases linearly w.r.t. \(k\)).
Footnote 4: If one permutes the order for the \(k\) input regions, the output will be shuffled in the exactly same order.
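A condensed sketch of Eqs. (1)-(3) (module names are placeholders and the visible-patch bookkeeping is simplified), which makes the linear cost in \(k\) explicit:

```python
import torch

def rae_batch_variant(region_encoder, region_decoder, proj, pixel_feats, region_vis, mask_token):
    """pixel_feats: (B, N, p_D) pixel-encoder features filled with [mask] tokens.
    region_vis: (B, k, N_vis, p*p) visible patches of k regions; cost grows linearly with k."""
    B, k, N_vis, _ = region_vis.shape
    N = pixel_feats.shape[1]
    outs = []
    for i in range(k):                                  # each region passes through separately
        enc = region_encoder(region_vis[:, i])          # Eq. (1): (B, N_vis, p_E)
        full = mask_token.expand(B, N, -1).clone()      # Eq. (2): fill masked slots with [mask]
        full[:, :N_vis] = proj(enc)                      # (assumes visible patches come first)
        outs.append(region_decoder(full + pixel_feats))  # Eq. (3): add pixel features, decode
    return torch.stack(outs, dim=1)                      # (B, k, N, p_D)
```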
**RAE channel variant.** Here, we merge \(k\) region maps in the _channel_ dimension, resulting in an input sequence of visible patches \(v_{R}\in\mathbb{R}^{N\cdot(1-\beta_{\text{R}})\times(k\cdot p\cdot p)}\). This can be seen as converting region maps \(R\in\mathbb{R}^{H\times W\times k}\) into an image of \(k\) channels. The region encoder takes \(v_{R}\) as its input to generate region embeddings:
\[v_{\text{{enc}}}=\mathrm{R}\!-\!\mathrm{Encoder}(v_{R}), \tag{4}\]
where \(v_{\text{{enc}}}{\in}\mathbb{R}^{N\cdot(1-\beta_{\text{R}})\times p_{\text{E}}}\) is the region encoder's output.
We then add image features from the pixel encoder to the region embeddings from the region encoder. The augmented visual features are passed into the region decoder in order to make predictions for masked region patches:
\[v^{\prime}_{\text{{enc}}} =\mathrm{MaskFill}\left(f(v_{\text{{enc}}}),\texttt{[mask]}\right), \tag{5}\] \[v_{\text{{rec}}} =\mathrm{R}\!-\!\mathrm{Decoder}\left(v^{\prime}_{\text{{enc}}}+v^{ \prime}_{\text{{penc}}}\right), \tag{6}\]
where \(v^{\prime}_{\text{{enc}}}\in\mathbb{R}^{N\times p_{\text{D}}}\) is the region embeddings filled with the [mask] token and \(v_{\text{{dec}}}\in\mathbb{R}^{N\times p_{\text{D}}}\) is the output of the region decoder.
By treating \(R\) as an image of \(k\) channels, the channel variant demonstrates great efficiency during the pre-training process. This variant, however, fails to deal with the permutation equivariance between \(k\) regions - the shuffling of the outputs is _not_ guaranteed given shuffled inputs. We also use this variant for our approximation of MultiMAE [5], which treats additional modalities as a single spatial entity.
**RAE length variant.** Inspired by the design of object queries in the DETR decoder [13], the RAE length variant encodes each region map into a single vector using the region encoder. The region queries will be concatenated along the sequence _length_ dimension as follows:
\[v_{\text{{rec}}_{i}} =\mathrm{AvgPool}\left(\mathrm{R}\!-\!\mathrm{Encoder}(v_{R_{i}})\right), \tag{7}\] \[v_{\text{{emb}}} =\mathrm{Concat}(v_{\text{{enc}}_{1}},...,v_{\text{{enc}}_{k}}), \tag{8}\]
where \(v_{R_{i}}\in\mathbb{R}^{N\cdot(1-\beta_{\text{R}})\times(p\cdot p)}\) are visible patches of \(R_{i}\), \(v_{\text{{enc}}_{i}}\in\mathbb{R}^{p_{\text{E}}}\) is the region embedding of \(i\)-th region, \(v_{\text{{emb}}}\in\mathbb{R}^{k\times p_{\text{E}}}\) denotes the region queries, and \(\mathrm{AvgPool}\) is the average pooling operation.
Different from the pixel decoder, the region decoder contains three sub-layers in each block: self-attention, cross-attention, and feed-forward [55]. In addition, we use a \(\mathrm{Neck}\) module to provide cross-attention with information from pixels as context. The blocks in \(\mathrm{Neck}\) share the same design as the ones in the pixel decoder:
\[v_{\text{{context}}}=\mathrm{Neck}(v^{\prime}_{\text{{penc}}}), \tag{9}\]
where \(v^{\prime}_{\text{{penc}}}\) is the image features filled with [mask] tokens and \(v_{\text{{context}}}\in\mathbb{R}^{N\times p_{\text{D}}}\) is the output of \(\mathrm{Neck}\). The region decoder then decodes region queries with context information:
\[v_{\text{{query}}}=\mathrm{R}\!-\!\mathrm{Decoder}(f(v_{\text{{emb}}}),v_{\text{{ context}}}), \tag{10}\]
where \(v_{\text{{query}}}\in\mathbb{R}^{k\times p_{\text{D}}}\) is the output of the region decoder. Since masked region autoencoding predicts \(R\in\mathbb{R}^{k\times H\times W}\)
during the pre-training, we modify the cross-attention sub-layer of the last region decoder layer to expand each region embedding in \(v_{\text{query}}\) into a region map as follows (see Fig. 4):
\[v_{\text{map}_{i,j}}=v_{\text{query}_{i}}+W_{V}\,v_{\text{context}_{j}},\qquad i=1,\ldots,k,\ j=1,\ldots,N, \tag{11}\]

where \(W_{V}\) denotes the value projection of the cross-attention sub-layer; that is, instead of taking an attention-weighted sum over the values, the \(i\)-th region query is added to every value vector, which expands it spatially over all \(N\) positions. A small MLP head is then applied to \(v_{\text{map}_{i}}\in\mathbb{R}^{N\times p_{\text{D}}}\) to predict the masked patches of \(R_{i}\).
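Putting Eqs. (7)-(10) and the expansion step together, a compact sketch of the length variant (module names are placeholders; `expand_head` stands for the broadcast-add module of Fig. 4):

```python
import torch

def rae_length_variant(region_encoder, proj, neck, region_decoder, expand_head,
                       pixel_feats, region_vis):
    """pixel_feats: (B, N, p_D) filled with [mask] tokens; region_vis: (B, k, N_vis, p*p)."""
    B, k, N_vis, _ = region_vis.shape
    enc = region_encoder(region_vis.flatten(0, 1))       # Eq. (7): encode each region separately
    queries = proj(enc.mean(dim=1)).view(B, k, -1)       # average-pool, concat along length, Eq. (8)
    context = neck(pixel_feats)                           # Eq. (9): pixel context for cross-attention
    queries = region_decoder(queries, context)            # Eq. (10): self-attn / cross-attn / FFN blocks
    return expand_head(queries, context)                  # broadcast-add expansion, (B, k, N, p*p) logits
```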
## Appendix B More Comparisons
**MAE baselines.** We first show the comparison of MAE with different base learning rates: 1.5e-4 in [32] and 1e-4 in our study. Here, models are pre-trained either on ImageNet [22] with 1600 epochs, or on COCO (train2017)/COCO++ (train2017 + unlabeled2017) with 4k epochs. All other settings are set as default. Tab. 8 shows that MAE with 1e-4 rate is able to reproduce ViTDet [43]. The only reason for this change is better pre-training stability which allows us to incorporate additional loss from RAE. Our R-MAE shows further improvements beyond Tab. 8.
**Larger backbones.** Tab. 9 shows the scaling trend of model size when pre-trained on ImageNet. Overall, the gains can hold at ViT-L [24], despite even more negligible computational overheads from RAE with larger backbones.
**Better regions.** To further validate the design of RAE, we explore better regions beyond the ground-truths in COCO. To this end, we simply use the off-the-shelf segmentation model SAM [39] to generate regions and replace FH. With a larger region decoder and mask ratio 0.6, RAE can achieve _better_ results than MAE with _less_ compute in FLOPs (Tab. 10). While it's still unclear why COCO ground-truths fail, it shows better regions can indeed be leveraged in RAE.
**ImageNet classification.** To give a more complete assessment, we also evaluate our pre-trained models on ImageNet classification. To be consistent with MAE [32], we train R-MAE on ImageNet for 1600 epochs. It can be seen from Tab. 11 that our R-MAE achieves the same performance as MAE when fine-tuned end-to-end. Interestingly, the linear probing performance of R-MAE lags behind MAE by a large margin. This observation indicates that our R-MAE is more focused on local patterns rather than global average features suited for image classification.
**Additional visualizations.** We provide extra qualitative results of our pre-trained models in Fig. 9 and Fig. 10.
| pre-train settings | region | FLOPs | AP\({}^{\text{b}}\) | AP\({}^{\text{m}}\) | mIoU |
|---|---|---|---|---|---|
| MAE | - | 9.7b | 50.1 | 44.6 | 45.9 |
| RAE, default | FH | 4.7b | 47.2 | 41.8 | 42.1 |
| RAE, \(p_{\text{D}}\)=256 | FH | 4.8b | 47.6 | 42.2 | 42.9 |
| RAE, \(p_{\text{D}}\)=256 | SAM | 4.8b | 49.9 | 44.2 | 46.0 |
| RAE, \(p_{\text{D}}\)=256, \(\beta_{\text{I}}\)=\(\beta_{\text{R}}\)=.6 | SAM | 7.3b | **50.6** | **45.1** | **46.8** |

Table 10: Exploring **better regions** from SAM [39] to validate RAE. We simply swap FH regions with off-the-shelf SAM ones, and with a larger decoder and changes in mask ratios, we find RAE alone can achieve better results with less compute.
| pre-train learning rate | COCO: AP\({}^{\text{b}}\) | AP\({}^{\text{m}}\) | COCO++: AP\({}^{\text{b}}\) | AP\({}^{\text{m}}\) | ImageNet: AP\({}^{\text{b}}\) | AP\({}^{\text{m}}\) |
|---|---|---|---|---|---|---|
| MAE w/ 1.5e-4 | 49.9 | 44.4 | 51.6 | 45.7 | 51.6 | 45.9 |
| MAE w/ 1e-4 | 50.1 | 44.6 | 51.5 | 45.9 | 51.8 | 46.1 |

Table 8: MAE with **different base learning rates**. For ImageNet w/ 1.5e-4, we directly cite the results from ViTDet [43], while others are from our own experiments. Our default setting (w/ 1e-4), chosen due to better stability, can reproduce _all_ the MAE results.
| pre-train | ViT-Base: AP\({}^{\text{b}}\) | AP\({}^{\text{m}}\) | mIoU | ViT-Large: AP\({}^{\text{b}}\) | AP\({}^{\text{m}}\) | mIoU |
|---|---|---|---|---|---|---|
| MAE | 51.8 | 46.1 | **47.9** | 55.6 | 49.3 | 52.3 |
| R-MAE | **52.3** | **46.4** | 47.5 | **55.8** | **49.7** | **52.5** |

Table 9: **Larger backbones** pre-trained on ImageNet. The gains from R-MAE can hold despite less relative computation overheads.
Figure 10: **Additional qualitative results on COCO val2017, following the same format as Fig. 3. See there for explanations.** |
2310.06826 | Finding cliques and dense subgraphs using edge queries | We consider the problem of finding a large clique in an Erd\H{o}s--R\'enyi
random graph where we are allowed unbounded computational time but can only
query a limited number of edges. Recall that the largest clique in $G \sim
G(n,1/2)$ has size roughly $2\log_{2} n$. Let $\alpha_{\star}(\delta,\ell)$ be
the supremum over $\alpha$ such that there exists an algorithm that makes
$n^{\delta}$ queries in total to the adjacency matrix of $G$, in a constant
$\ell$ number of rounds, and outputs a clique of size $\alpha \log_{2} n$ with
high probability. We give improved upper bounds on
$\alpha_{\star}(\delta,\ell)$ for every $\delta \in [1,2)$ and $\ell \geq 3$.
We also study analogous questions for finding subgraphs with density at least
$\eta$ for a given $\eta$, and prove corresponding impossibility results. | Endre Csóka, András Pongrácz | 2023-10-10T17:56:12Z | http://arxiv.org/abs/2310.06826v2 | # Finding cliques and dense subgraphs using edge queries
###### Abstract
We consider the problem of finding a large clique in an Erdos-Renyi random graph where we are allowed unbounded computational time but can only query a limited number of edges. Recall that the largest clique in \(G\sim G(n,1/2)\) has size roughly \(2\log_{2}n\). Let \(\alpha_{\star}(\delta,\ell)\) be the supremum over \(\alpha\) such that there exists an algorithm that makes \(n^{\delta}\) queries in total to the adjacency matrix of \(G\), in a constant \(\ell\) number of rounds, and outputs a clique of size \(\alpha\log_{2}n\) with high probability. We give improved upper bounds on \(\alpha_{\star}(\delta,\ell)\) for every \(\delta\in[1,2)\) and \(\ell\geq 3\). We also study analogous questions for finding subgraphs with density at least \(\eta\) for a given \(\eta\), and prove corresponding impossibility results.
_Keywords and phrases:_ Graph algorithms, random graphs, cliques, dense subgraphs, adaptive algorithms
_MSC2020 codes:_ 05C80, 05C85, 68Q87, 68W20
## 1 Introduction
Finding a subgraph with certain properties in a graph is a central topic of theoretical computer science. It is well-known that finding a maximum clique or a Hamiltonian cycle are NP-complete problems [12]. Even approximating the size of the maximum clique within a given factor is hard in standard computational models [8].
Ferber et al. proposed the Subgraph Query Problem in [9, 10]. The general question is to find a subgraph in \(G(n,p)\) with high probability that satisfies a given monotone graph property by querying as few pairs as possible, where a query is simply checking whether the given pair constitutes an edge. In some sources, the requirement that the algorithm succeeds with high probability is replaced by the equivalent requirement that the algorithm succeeds with probability at least \(1/2\). In [9, 10] Hamiltonian cycles and long paths were considered in sparse Erdos-Renyi graphs. In the same setup, Conlon et al. [4] studied the problem of finding a fixed subgraph (e.g., a clique of given size). For the related Planted Clique Problem, see [11, 15, 17, 18].
A natural special case of the Subgraph Query Problem is the Maximum Clique Query Problem (MCQP) introduced in [7]: for a given \(p\in(0,1)\), what is the size of the largest clique that we can find in \(G(n,p)\) with high probability by using at most \(n^{\delta}\) queries (\(\delta\in[1,2]\))? It turns out that the parameter \(p\) is not so important (as long as it is a fixed constant): it is usually set to \(p=1/2\). The present paper also works under this assumption, although the results could be generalized to arbitrary \(p\in(0,1)\).
The size of the largest clique in \(G(n,1/2)\) is asymptotically \(2\log n\) with high probability, where \(\log\) is the base \(2\) logarithm; for a more precise estimate, see [16, 3, 14]. This classical result answers the question for \(\delta=2\): if we are allowed to query all edges, we can find the maximum clique in the graph, and it has approximately \(2\log n\) vertices with high probability. Note that the complexity of the algorithm is only measured in the number of queries. In particular, the actual runtime of the algorithm is irrelevant: even though finding the maximum clique is an NP-complete problem, which means that to our best knowledge it likely requires an exponentially large amount of time in standard computational models, we still view this as a quadratic algorithm in our framework.
In the \(\ell\)-adaptive variant of MCQP, the queries are divided into \(\ell\) rounds for some \(\ell\in\mathbb{N}\). The algorithm makes the queries within a round simultaneously. The effect of this modification is that within a round, we have to make a decision for several moves ahead, only taking into consideration the results of the previous rounds. Informally speaking, we cannot adapt our next choice for the queried pair if another pair in the same round yielded an unfavorable result. The original version of the problem can be viewed as the infinitely adaptive variant, that is, \(\ell=\infty\).
The first non-trivial upper bound was shown by Feige et al. in [7] for the \(\ell\)-adaptive version. They proved that for all \(\delta\in[1,2)\) and \(\ell\in\mathbb{N}\) there exists a constant \(\alpha<2\) such that it is impossible to find a clique of size \(\alpha\log n\) in \(\ell\) rounds using \(n^{\delta}\) queries altogether. That is, \(\alpha_{\star}(\delta,\ell)<2\) for all \(\ell\in\mathbb{N}\). Soon after, an improvement was shown in [1] by Alweiss et al. They studied the fully adaptive variant (\(\ell=\infty\)) of MCQP, and proved that \(\alpha_{\star}(\delta,\infty)\leq 1+\sqrt{1-(2-\delta)^{2}/2}\). Clearly, this is also an upper bound for the \(\ell\)-adaptive variant for any \(\ell\in\mathbb{N}\). To date, this has been the strongest known upper bound, except for the well-understood \(\ell=2\) case and some estimates for \(\ell=3\). These special cases were investigated in [6] by Feige and Ferster. In the \(\ell=2\) case they showed that \(\alpha_{\star}(\delta,2)=4\delta/3\) for \(\delta\in[1,6/5]\) and \(\alpha_{\star}(\delta,2)\leq 1+\sqrt{1-(2-\delta)^{2}}\) for \(\delta\in[6/5,2]\).
In this paper, we improve on the idea introduced by Feige and Ferster in [6]. A monotone increasing function \(\gamma:\mathbb{N}\cup\{\infty\}\to\mathbb{R}^{+}\) is defined in Section 2 as a solution of a combinatorial problem. Using this function, we re-prove the results of [1] for \(\ell=\infty\) and [6] for \(\ell=2\), and obtain stronger results for \(\ell=3\) and non-trivial estimates for all \(\ell\geq 4\).
**Theorem 1**.: _For every \(\delta\in[1,2]\) and \(\ell\geq 3\), including \(\ell=\infty\), we have_
\[\alpha_{\star}\left(\delta,\ell\right)\leq 1+\sqrt{1-\frac{(2-\delta)^{2}}{4 \gamma(\ell)}}. \tag{1}\]
_Furthermore, for \(\ell=2\) the same estimate applies for \(\delta\in[6/5,2]\), and \(\alpha_{\star}\left(\delta,\ell\right)\leq 4\delta/3\) for \(\delta\in[1,6/5]\)._
We compute some values of the function \(\gamma\) precisely, namely \(\gamma(1)=0\), \(\gamma(2)=1/4\), \(\gamma(3)=3/8\), and \(\gamma(\infty)=1/2\). For \(\ell\in\mathbb{N},\ell\geq 3\) we prove the upper bound \(\gamma(\ell)\leq 1/2-1/(3\cdot 2^{\ell-1}-4\ell+8)\); see Theorem 7. In particular, \(\gamma(4)\leq 7/16=0.4375\). The bounds provided by Theorem 7 are probably not tight for \(\ell\geq 4\); for \(\ell=4\), the best lower bound we could prove is \(\gamma(4)\geq(9+\sqrt{3})/26\approx 0.41277\), which we also do not believe to be tight. Nevertheless, putting these values and estimates into Theorem 1 yields the following more concrete result.
**Corollary 2**.: _For every \(\delta\in[1,2]\) and \(\ell\geq 3\) we have_
\[\alpha_{\star}\left(\delta,\ell\right)\leq 1+\sqrt{1-\frac{(2-\delta)^{2}}{2-1/ (3\cdot 2^{\ell-3}-\ell+2)}}. \tag{2}\]
In particular, \(\alpha_{\star}\left(\delta,3\right)\leq 1+\sqrt{1-\frac{2}{3}(2-\delta)^{2}}\). For instance, \(\alpha_{\star}\left(1,3\right)\leq 1+1/\sqrt{3}\approx 1.577\) by Corollary 2. The earlier best estimate, proven in [6], was \(\alpha_{\star}\left(1,3\right)\leq 1.62\).
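The closed-form bound of Corollary 2 is straightforward to evaluate. The following short Python sketch (purely illustrative, not part of the argument; it merely restates the formula above, with \(4\gamma(\ell)\) replaced by the bound of Theorem 7) reproduces the value \(1+1/\sqrt{3}\approx 1.577\) for \(\delta=1\), \(\ell=3\).

```python
from math import sqrt

def alpha_upper(delta, l):
    """Right-hand side of Corollary 2 for a finite number of rounds l >= 3."""
    g = 2 - 1 / (3 * 2**(l - 3) - l + 2)   # equals 4*gamma(l) with the bound of Theorem 7
    return 1 + sqrt(1 - (2 - delta)**2 / g)

print(alpha_upper(1, 3))   # approx. 1.5774, i.e. 1 + 1/sqrt(3)
print(alpha_upper(1, 4))   # the corresponding l = 4 bound (using gamma(4) <= 7/16)
```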
We define the Maximum Dense Subgraph Query Problem (MDSQP), a natural generalization of MCQP. Given \(\delta\in[1,2]\), \(\ell\in\mathbb{N}\cup\{\infty\}\), and \(\eta\in(1/2,1]\), the problem is to find the largest possible subgraph of \(G(n,1/2)\) with edge density at least \(\eta\), with high probability, by using at most \(n^{\delta}\) queries (and unlimited computational time). A recent result of Balister et al. [2] determines the size of the largest subgraph in \(G(n,1/2)\) having edge density at least \(\eta\) (with high probability). For \(\eta\in(1/2,1]\), it is asymptotically \(\frac{2}{1-H(\eta)}\log n\), where \(H(\eta)=-\eta\log\eta-(1-\eta)\log(1-\eta)\) is the Shannon entropy. Note that this is consistent with the above discussion: cliques in a graph are exactly the subgraphs with edge density \(\eta=1\), and indeed \(\frac{2}{1-H(1)}\log n=2\log n\). Just as in the Maximum Clique Query Problem, we only expect to achieve this size \(\frac{2}{1-H(\eta)}\log n\) by an \(\ell\)-adaptive algorithm using \(n^{\delta}\) queries if \(\delta=2\), that is, when the whole graph is uncovered. The natural problem is to determine \(\alpha_{\star}\left(\delta,\ell,\eta\right)\), the supremum of all \(\alpha\) such that an appropriate \(\ell\)-adaptive algorithm using \(n^{\delta}\) queries finds a subgraph in \(G(n,1/2)\) with density at least \(\eta\) and size \(\alpha\log n\). For a lower bound, a greedy algorithm using a linear number of queries (i.e., \(\delta=1\)) was presented in [5] by Das Sarma et al. Their method could provide a general lower estimate; however, it is hard to make it explicit. So rather than proving a formula, they focused on a numerical result, and showed that \(\alpha_{\star}\left(1,\infty,0.951\right)\geq 2\). In other words, there is a fully adaptive algorithm using a linear number of queries that finds a subgraph in \(G(n,1/2)\) with size at least \(2\log n\) and density at least \(0.951\) with high probability.
Using the same techniques as above for the MCQP, we prove the following upper estimate for \(\alpha_{\star}\left(\delta,\ell,\eta\right)\).
**Theorem 3**.: _Let \(\delta\in[1,2]\), \(\ell\geq 3\), including \(\ell=\infty\), and \(\eta\in(3/4,1]\). Given an \(\alpha\), we define \(m_{1}\) as the smallest solution of the equation \(4\gamma(\ell)m\left(1+\log\frac{\eta\alpha^{2}/2-2\gamma(\ell)m^{2}}{\alpha^{ 2}/2-2\gamma(\ell)m^{2}}\right)=2-\delta\) on \([0,\alpha/2]\); if there is no such solution, then \(m_{1}=\alpha_{1}=\infty\). Let \(m_{2}=\alpha/2\). Let \(\alpha_{i}\) be the largest solution of the equation_
\[(\alpha^{2}/2-2\gamma(\ell)m_{i}^{2})\left(1-H\left(\frac{\eta\alpha^{2}/2-2 \gamma(\ell)m_{i}^{2}}{\alpha^{2}/2-2\gamma(\ell)m_{i}^{2}}\right)\right)- \alpha+(2-\delta)m_{i}=0\]
_for \(i=1,2\). Let \(\alpha_{0}=\min(\alpha_{1},\alpha_{2})\). Then \(\alpha_{\star}\left(\delta,\ell,\eta\right)\leq\alpha_{0}\)._
In contrast to Theorem 1, this upper bound is only given implicitly. Even telling when \(\alpha_{0}=\alpha_{2}\) holds, either because \(m_{1}=\infty\) or because \(\alpha_{1}>\alpha_{2}\), seems to be challenging. In MCQP, the analogous degenerate condition only applies when \(\ell=2\) and \(\delta\in[1,6/5]\), yielding the exceptional case in Theorem 1. Nevertheless, this formula can be used to obtain numerical results up to any prescribed precision, in principle. For instance, this shows that a linear, fully adaptive algorithm (\(\delta=1,\ell=\infty\)) cannot find a subgraph of size \(2\log n\) whose density is at least \(0.98226\) with high probability. Or, to complement the above lower estimate, it also shows that \(\alpha_{\star}\left(1,\infty,0.951\right)\leq 2.48227\). The trivial upper bound for \(\alpha_{\star}\left(1,\infty,0.951\right)\) is \(\frac{2}{1-H(0.951)}<2.7861\), since there is no subgraph in \(G(n,1/2)\) with density at least \(0.951\) and size at least \(2.7861\log n\) with high probability.
Note that Theorem 3 is a generalization of Theorem 1: if \(\eta=1\), then \(m_{1}=\frac{2-\delta}{4\gamma(\ell)}\), provided that this value is less than \(\alpha/2\). Then the defining equation of \(\alpha_{1}\) reduces to the same equation \(\alpha^{2}/2-2\gamma(\ell)m_{1}^{2}-\alpha+(2-\delta)m_{1}=0\) that yields the formula in Theorem 1. Furthermore, as \(\delta\to 2\), we have \(m_{1}\to 0\), and then \(\alpha_{0}\) tends to the solution of the equation \((\alpha^{2}/2)(1-H(1-\eta))-\alpha=0\). Thus \(\alpha_{0}\to\frac{2}{1-H(\eta)}\), which is the trivial upper bound. Hence, for any \(\delta\in[1,2)\), the upper bound provided by Theorem 3 is strictly smaller than the size of the largest subgraph with density at least \(\eta\) (divided by \(\log n\)), making it a meaningful estimate.
## 2 A combinatorial problem
We pose a question concerning labeled graphs that is closely related to the \(\ell\)-adaptive Maximum Clique Query Problem and Maximum Dense Subgraph Query Problem: an upper estimate to this question yields an upper estimate to both problems.
**Question 4**.: _Given \(\ell\in\mathbb{N}\cup\{\infty\},\ n\in\mathbb{N}\), a labeling \(\lambda:E(K_{n})\rightarrow\{1,\ldots,\ell\}\) of the edges of the complete graph \(K_{n}\), and a perfect matching \(\mathcal{M}\) in \(K_{n}\), we say that an edge \(uv\) is critical if \(\lambda(uv)\) is strictly less than the maximum of the labels of the two edges in \(\mathcal{M}\) covering \(u\) and \(v\). For fixed \(\ell,n\), labeling \(\lambda\) and perfect matching \(\mathcal{M}\), let \(\gamma(\ell,n,\lambda,\mathcal{M})\) be the ratio of critical edges in the \(\binom{n}{2}\) edges. For each \(\ell\in\mathbb{N}\), find_
\[\gamma(\ell)=\limsup_{n\rightarrow\infty}\ \max_{\lambda}\ \min_{\mathcal{M}} \gamma(\ell,n,\lambda,\mathcal{M}).\]
**Remark 5**.: _Using the language of graph limits [13], Question 4 has the following equivalent reformulation. This equivalence also implies that \(\limsup\) in Question 4 can be replaced by \(\lim\)._
_Given \(\ell\in\mathbb{N}\), and a measurable labeling \(\lambda:[0,1]\times[0,1]\rightarrow\{1,\ldots,\ell\}\) of the edges of the complete graph. Or for \(\ell=\infty\), \(\lambda:[0,1]\times[0,1]\rightarrow[0,1]\). Given a measure-preserving bijection \(\mathcal{M}:[0,1]\rightarrow[0,1]\), we say that an edge \((u,v)\in[0,1]\times[0,1]\) is critical if \(\lambda(u,v)<\max\left(\lambda\big{(}u,\mathcal{M}(u)\big{)},\ \lambda\big{(}v,\mathcal{M}(v)\big{)}\right)\). For fixed \(\ell\), measurable labeling \(\lambda\), and measure-preserving bijection \(\mathcal{M}\), let \(\gamma(\ell,\lambda,\mathcal{M})\) be the measure of critical edges. For each \(\ell\in\mathbb{N}\), find_
\[\gamma(\ell)=\max_{\lambda}\ \min_{\mathcal{M}}\gamma(\ell,\lambda,\mathcal{M}).\]
Obviously, \(0=\gamma(1)\leq\gamma(2)\leq\cdots\leq\gamma(\infty)\). If \(\ell=\infty\), it is not worth using the same label twice (in a finite complete graph), hence the problem can be rephrased as follows. Consider all \(\binom{n}{2}!\) orders of the edges of \(K_{n}\). Given a matching \(\mathcal{M}\), an edge \(uv\) is critical if at least one of the edges in \(\mathcal{M}\) covering \(u\) and \(v\) appears later in the ordering. Then \(\gamma(\infty,n,\lambda,\mathcal{M})\) is the ratio of critical edges, and \(\gamma(\infty)=\limsup_{n\rightarrow\infty}\ \max_{\lambda}\ \min_{ \mathcal{M}}\gamma(\infty,n,\lambda,\mathcal{M})\).
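For small \(n\), the quantities in Question 4 can be computed by brute force directly from the definition. The following Python sketch (purely illustrative, not part of the argument; the labeling is passed as a dictionary on unordered pairs of vertices) enumerates all perfect matchings of \(K_{n}\) and returns \(\min_{\mathcal{M}}\gamma(\ell,n,\lambda,\mathcal{M})\).

```python
from itertools import combinations

def perfect_matchings(vertices):
    """Enumerate all perfect matchings of an even-sized list of vertices."""
    if not vertices:
        yield []
        return
    v, rest = vertices[0], vertices[1:]
    for i, u in enumerate(rest):
        for m in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield [(v, u)] + m

def min_critical_ratio(n, label):
    """min over perfect matchings M of the ratio of critical edges:
    uv is critical if label(uv) < max(label of the M-edges covering u and v)."""
    edges = list(combinations(range(n), 2))
    best = 1.0
    for M in perfect_matchings(list(range(n))):
        at = {}
        for a, b in M:
            at[a] = at[b] = label[frozenset((a, b))]
        crit = sum(1 for u, v in edges
                   if label[frozenset((u, v))] < max(at[u], at[v]))
        best = min(best, crit / len(edges))
    return best

# All labels distinct (the l = infinity case), lexicographic order, n = 6:
n = 6
lex = {frozenset(e): i for i, e in enumerate(combinations(range(n), 2))}
print(min_critical_ratio(n, lex))   # tends to 1/2 as n grows (Proposition 6)
```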
Given a perfect matching \(\mathcal{M}\) in \(K_{n}\), we call \(b\) the \(\mathcal{M}\)_-neighbor_ of \(a\) if \(ab\in\mathcal{M}\). Moreover, the \(\mathcal{M}\)_-pair_ of an edge \(e=uv\notin\mathcal{M}\) is the edge linking the \(\mathcal{M}\)-neighbor of \(u\) and the \(\mathcal{M}\)-neighbor of \(v\). Note that this is a proper pairing of non-matching edges. The \(e\)_-switch_ of \(\mathcal{M}\) is the operation that replaces the two edges of \(\mathcal{M}\) covering the same quadruple of vertices as \(e\) and its \(\mathcal{M}\)-pair \(e^{\prime}\) by the edges \(e\) and \(e^{\prime}\). This produces a new perfect matching of \(K_{n}\).
We first solve the \(\ell=\infty\) case by using a similar idea as that in the proof of [1, Lemma 13], except we need to define the perfect matching in a more complicated way.
**Proposition 6**.: \(\gamma(\infty)=1/2\)
Proof.: Given an ordering of the set of edges of a graph \(G\), let \(G_{\prec e}\) be the initial segment of the total order up to the edge \(e\), not including \(e\). We construct a perfect matching \(\mathcal{M}\) with edges \(m_{1}\prec m_{2}\prec\cdots\prec m_{n/2}\) such that for every \(1\leq k\leq n/2\) the graph \(K_{n}[m_{1},m_{2},...,m_{k}]_{\prec m_{k}}\) has no perfect matching of size \(k\), where \(G[m_{1},m_{2},...,m_{k}]\) denotes the clique of \(G\) spanned by the listed edges. We can construct \(\mathcal{M}\) recursively in decreasing order of \(k\). Namely, we delete the edges of \(K_{n}\) in decreasing order, and whenever the size of the maximum matching decreases, we add that edge to the matching and delete its two endpoints from the graph.
Assume that a non-matching edge \(e\) and its \(\mathcal{M}\)-pair \(e^{\prime}\) are both critical. Let \(m_{i}\prec m_{k}\) be the two matching edges in \(\mathcal{M}\) that cover the same four points as \(e\) and \(e^{\prime}\). Then \(K_{n}[m_{1},m_{2},...,m_{k}]_{\prec m_{k}}\) has a perfect matching, obtained as the result of the \(e\)-switch restricted to \(K_{n}[m_{1},m_{2},...,m_{k}]\), a
contradiction. Hence, at most one of each \(\mathcal{M}\)-pair of edges can be critical, yielding the upper bound \(\gamma(\infty)\leq 1/2\).
For the lower bound, enumerate the \(n\) vertices of \(K_{n}\), and let \(\lambda\) be the lexicographical order of the edges: \(12,13,14,\ldots,1n,23,24,\ldots,2n,\ldots,(n-1)n\). Let \(\mathcal{M}\) be any perfect matching. If \(uv\in\mathcal{M}\) for some \(u<v\), then it makes exactly the edges \(iv\) and \(uj\) critical for all \(1\leq i\leq u-1\) and \(1\leq j\leq v-1\), \(j\neq u\). That is, the edge \(uv\in\mathcal{M}\) makes exactly \(u+v-3\) edges critical. As each number between \(1\) and \(n\) appears exactly once as an endpoint in a matching edge, the sum of all these expressions \(u+v-3\) for edges \(uv\in\mathcal{M}\) is \(\frac{n(n+1)}{2}-\frac{3n}{2}\), which is asymptotically the number of edges. This is the number of critical edges with multiplicity: each critical edge \(e\) contributes one or two into this sum, depending on whether the label \(\lambda(e)\) is less than only one or both labels of edges in \(\mathcal{M}\) covering the endpoints of \(e\). As all multiplicities are at most two and they add up to (approximately) the number of edges, (approximately) at least half of the edges must be critical.
We note that the matching \(1n,2(n-1),\ldots,(n/2)(n/2+1)\) produces approximately \(n^{2}/4\) critical edges (about half the number of edges) in the construction where edges are labeled in lexicographical order. This is exactly the matching defined in the first half of the proof of Proposition 6.
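The lower-bound construction in the proof of Proposition 6 can be checked numerically; the small Python sketch below (purely illustrative, not part of the argument) counts the critical edges of the lexicographic labeling with respect to the matching \(1n,2(n-1),\ldots,(n/2)(n/2+1)\).

```python
from itertools import combinations

n = 200                                                                # any even n
rank = {e: i for i, e in enumerate(combinations(range(1, n + 1), 2))}  # lexicographic labels
M = {i: n + 1 - i for i in range(1, n + 1)}                            # matching 1n, 2(n-1), ...

def lab(u, v):
    return rank[(min(u, v), max(u, v))]

crit = sum(1 for u, v in combinations(range(1, n + 1), 2)
           if lab(u, v) < max(lab(u, M[u]), lab(v, M[v])))
print(crit, n * n / 4)   # about n^2/4 critical edges, i.e. roughly half of all pairs
```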
Proposition 6 is already strong enough to reprove the upper bound on the fully adaptive clique problem that was shown in [1]. Moreover, it provides the upper bound \(\gamma(\ell)\leq 1/2\) for all \(\ell\in\mathbb{N}\). Now we improve on this estimate: this is going to yield a better upper bound for the \(\ell\)-adaptive MCQP for \(\ell\geq 3\) than the state of the art (and reproves the best known estimates for \(\ell=2\)).
**Theorem 7**.:
* \(\gamma(2)=1/4\)
* \(\gamma(3)=3/8\)
* \(\gamma(\ell)\leq 1/2-1/(3\cdot 2^{\ell-1}-4\ell+8)\) _for_ \(\ell\geq 4\)
The rough idea of the proof of Theorem 7 is to find a matching such that
1. at most half of those edges are critical that link matching edges with different labels, and
2. significantly less than half of those edges are critical which link matching edges of the same label.
It is not surprising that if both goals are fulfilled, then the critical edge ratio is pushed below \(1/2\) by some fixed constant (depending on \(\ell\)). The next lemma is the crucial tool to achieve this second goal.
**Lemma 8**.: _Let \(k\in\mathbb{N}\) be a fixed number. For an \(x\in\mathbb{N}\), consider the graph on \(2x\) vertices with \(x\) disjoint edges, colored red. Let \(\beta_{k}(x)\) be the largest number of blue edges that can be added to the red perfect matching in the graph so that_
* _there are no alternating cycles, and_
* _there are no alternating paths containing at least_ \(k\) _blue edges._
_Then \(\beta_{k}(x)=(1-1/k)x^{2}+O_{k}(x)\) as \(x\to\infty\)._
Proof.: For the lower estimate, we show two different constructions: one for even and one for odd \(k\). If \(k\) is even, let us partition the set of red edges into \(k/2\) subsets of roughly equal size: that is, the cardinality of any two should differ by at most \(1\). The possible differences between these sets only contribute to the \(O_{k}(x)\) error term, so we may assume that each subset contains exactly \(2x/k\) red edges. The sets of vertices covered by these sets of edges are \(X_{1},\ldots,X_{k/2}\), each containing \(4x/k\) vertices. From each pair of vertices that are red neighbors, we pick one and call it the left vertex of the edge; the other one is the right vertex of the edge. Thus in \(X_{i}\) there are \(2x/k\) left vertices and \(2x/k\) right vertices. The left vertices are all linked by blue edges, contributing \(\binom{x}{2}=x^{2}/2+O(x)\) blue edges. Two right vertices are never blue neighbors. A right vertex \(u\) is linked to a left vertex \(v\) iff the index of the set containing \(u\) is larger than that of \(v\). See the left picture in Figure 1 for an illustration.
There are \(\binom{k/2}{2}=k(k-2)/8\) pairs \((i,j)\) with \(1\leq i<j\leq k/2\), and for all such pairs there are \(4x^{2}/k^{2}\) left-right blue edges between \(X_{i}\) and \(X_{j}\), contributing \((1/2-1/k)x^{2}\) blue edges. Thus there are \((1-1/k)x^{2}+O(x)\) blue edges altogether.
Assuming there is an alternating cycle in this graph, let us pick a red edge in that cycle, and consider its right endpoint. The red edge must be followed by a blue one, and right vertices are only linked to left vertices in the graph by blue edges. Moreover, this blue neighbor of the right vertex is in a lower index set. Thus, the next vertex in the cycle must be on the left, and in a lower index set. Then we have to follow up by the only available red edge, which means that we move to the right side once again. Hence, vertices in any alternating cycle alternate between the left and right side, and the right vertices along the cycle are in sets of descending index. Such a descending walk cannot be circular, a contradiction.
The longest alternating path containing the largest number of blue edges starts off at the right of \(X_{1}\), followed by the red pair of this vertex, then crosses to the right side of \(X_{2}\), followed by the red pair of that vertex, etc. When we enter the right side of \(X_{k/2}\), we move to the pair of that vertex. Then we can pick any other vertex at the left side of \(X_{k/2}\), as there are blue edges between
left vertices. Then we have to move to the red neighbor of the last vertex, and walk backwards in a similar zig-zag fashion until arriving at the right side of \(X_{1}\). There are \(2(k/2-1)\) cross edges between left and right in such a path, and one more blue edge in \(X_{k/2}\), which is \(k-1\) blue edges altogether. See the right picture in Figure 1 for an illustration.

Figure 1: Left: construction for k=6. Right: an alternating path containing the most blue edges.
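The blue-edge count of the even-\(k\) construction can be verified directly; the sketch below (purely illustrative, assuming for simplicity that \(k/2\) divides \(x\)) counts the blue edges and compares the total with \((1-1/k)x^{2}\).

```python
from itertools import combinations

def blue_edges_even_k(k, x):
    """Blue-edge count of the Lemma 8 construction for even k
    (x red edges split into k/2 groups of 2x/k each; assumes (k/2) | x)."""
    g = k // 2                       # number of groups X_1, ..., X_{k/2}
    per = x // g                     # red edges per group (= 2x/k)
    left_left = x * (x - 1) // 2     # all pairs of the x left vertices are blue
    # right vertex in group j linked to left vertex in group i for i < j
    right_left = sum(per * per for i, j in combinations(range(g), 2))
    return left_left + right_left

k, x = 6, 300
print(blue_edges_even_k(k, x), (1 - 1 / k) * x**2)   # 74850 vs 75000, an O(x) gap
```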
The construction for odd \(k\) is somewhat more roundabout. This time there are \((k+1)/2\) sets \(X_{1},\ldots,X_{(k-1)/2},X_{(k+1)/2}\), and the last one is half as big as the rest. That is, the number of vertices in \(X_{i}\) is \(2x_{i}\), where \(x_{1}=\cdots=x_{(k-1)/2}=2x/k\) and \(x_{(k+1)/2}=x/k\). Otherwise, the construction is the same, except that there are no blue edges linking two left vertices of \(X_{(k+1)/2}\). The argument that this graph contains no alternating cycle is the same as before. A longest path can still make its way from \(X_{1}\) up to \(X_{(k+1)/2}\) in a zig-zag motion, but it must turn back immediately without gaining an edge on the left side in \(X_{(k+1)/2}\), as there are no blue edges linking left vertices of \(X_{(k+1)/2}\). That is, when we reach a left vertex in \(X_{(k+1)/2}\), the best we can do is to drop down to the left of \(X_{(k-1)/2}\), and zig-zag all the way down to \(X_{1}\). Hence, the largest number of blue edges in an alternating path is \(2(k-1)/2=k-1\). The union of the first \((k-1)/2\) sets \(X_{1},\ldots,X_{(k-1)/2}\) is the same as the construction for the even number \(k-1\) on \((k-1)x/k\) red edges, thus there are \((1-1/(k-1))((k-1)x/k)^{2}+O(x)=(k-2)(k-1)x^{2}/k^{2}+O(x)\) blue edges in that induced subgraph. In addition, all \(2x/k\) vertices in \(X_{(k+1)/2}\) have \((k-1)x/k\) blue neighbors, contributing \(2(k-1)x^{2}/k^{2}\) blue edges. Thus there are \(((k-2)(k-1)+2(k-1))x^{2}/k^{2}+O(x)=(1-1/k)x^{2}+O(x)\) blue edges in this graph.
We prove the upper estimate by induction on \(k\). Clearly, \(\beta_{1}(x)=0\), which is consistent with the formula for \(k=1\). If \(k=2\), then there cannot be any red edges both of whose endpoints have blue degree at least \(2\). Indeed, if there were such a red edge, then we could match the two endpoints with different blue neighbors, yielding an alternating path with two blue edges. Thus there are \(O(x)\) blue edges incident to vertices with blue degree at most \(1\), and every red edge contains such a vertex. At worst, all other vertices are linked by blue edges, which yields \(\binom{x}{2}+O(x)=x^{2}/2+O(x)\) blue edges altogether, consistently with the formula for \(k=2\). Let \(k\geq 3\) and assume that the assertion holds for all smaller values of \(k\).
Let \(G\) be a graph with maximum number \(\beta_{k}(x)\) of blue edges satisfying the requirements. If an alternating path in \(G\) ends in a blue edge, we can always extend it by the red edge incident to its last vertex. The only obstruction to the addition of this edge would be if the other endpoint of the red edge coincided with the starting vertex of the path. However, that would yield an alternating cycle. Hence, paths with the most blue edges in them are exactly the longest paths in \(G\) (after adding red edges at the end, if necessary). Let \(P\) be a longest alternating path in \(G\). We may assume that \(P\) contains \(k-1\) blue edges, otherwise the formula for \(\beta_{k-1}(x)\) would apply, yielding at least as strong an upper estimate as the one claimed for \(\beta_{k}(x)\). In particular, there are \(k\) red edges in \(P\).
Each endpoint \(u\) of \(P\) has all blue neighbors in \(P\), as otherwise the path could be extended by a blue edge. The red neighbor of \(u\), that is, the second vertex in the path starting from \(u\), cannot be a blue neighbor of \(u\). In any other red edge contained in the path there is a vertex \(v\) such that if \(uv\) were a blue edge, then it would form an alternating cycle together with the segment of \(P\) from \(u\) to \(v\). Hence, each endpoint of \(P\) has blue degree at most \(k-1\) in \(G\).
Let \(D\) be the set of vertices in \(G\) that are incident to a red edge \(e\) which has at least one endpoint of blue degree at most \(k-1\). Let \(d\) be the number of red edges in \(D\). Delete the \(2d\) vertices of \(D\) from \(G\) to obtain the graph \(G^{\prime}\). Let \(P^{\prime}\) be a longest path in \(G^{\prime}\). By repeating the same argument as above, \(P^{\prime}\) starts and ends in a red edge. As we deleted all vertices from \(G\) that could be endpoints of longest paths, \(P^{\prime}\) contains at most \(k-2\) blue edges.
Seeking a contradiction, assume that \(P^{\prime}\) contains exactly \(k-2\) blue edges. We can repeat the above argument to show that each of the two endpoints \(u,v\) of \(P^{\prime}\) has blue degree at most \(k-2\)
in \(G^{\prime}\). As \(u,v\notin D\), they both have blue degree at least \(k\) in \(G\). Thus both \(u\) and \(v\) are linked to at least two points in the deleted set \(D\) of vertices by blue edges of \(G\); see Figure 2. Let \(w\in D\) be such a vertex in \(D\) linked to \(u\) by a blue edge. Out of the at least two blue neighbors of \(v\) in \(D\), there must be at least one vertex \(z\neq w\). Then we can extend the path \(P^{\prime}\) by the blue edges \(uw\) and \(vz\) to obtain an alternating path with \(k\) blue edges in \(G\), a contradiction.
Hence, there is no alternating path in \(G^{\prime}\) containing at least \(k-2\) blue edges. Clearly, there is also no alternating cycle in \(G^{\prime}\), as that would be an alternating cycle in \(G\). Thus the conditions of the lemma apply to \(G^{\prime}\) with fixed constant \(k-2\). Therefore, there are at most \(\beta_{k-2}(x-d)\) blue edges in \(G^{\prime}\). One endpoint of every red edge in \(D\) contributes at most \(k-1\) further blue edges; this is at most \(d(k-1)\) blue edges altogether, which we are simply going to estimate by \(kx\) from above. Not counting these blue edges again, the remaining \(d\) points in \(D\) can only be linked to each other and to the \(2x-2d\) vertices in \(G^{\prime}\), contributing at most \(\binom{d}{2}+2d(x-d)\leq 2dx-\frac{3}{2}d^{2}\) further blue edges. Hence,
\[\beta_{k}(x)\leq\beta_{k-2}(x-d)+2xd-\frac{3}{2}d^{2}+kx\]
for some \(0\leq d\leq x\). According to the induction hypothesis, there is a \(c_{k-2}\in\mathbb{R}\) such that \(\beta_{k-2}(x-d)\leq(1-1/(k-2))(x-d)^{2}+c_{k-2}(x-d)\). Thus
\[\beta_{k}(x)\leq(1-1/(k-2))(x-d)^{2}+c_{k-2}(x-d)+2xd-\frac{3}{2}d^{2}+kx\leq\]
\[\frac{k-3}{k-2}(x-d)^{2}+2xd-\frac{3}{2}d^{2}+(c_{k-2}+k)x\]
for some \(0\leq d\leq x\). The derivative of this quadratic function with respect to the variable \(d\) is \(\frac{2k-6}{k-2}(d-x)+2x-3d=\frac{2}{k-2}x-\frac{k}{k-2}d\), thus the maximum is attained at \(d=2x/k\); cf. the constructions for the lower bound. By substituting \(d=2x/k\) into the expression, we obtain the upper bound
\[\beta_{k}(x)\leq\frac{k-3}{k-2}\frac{(k-2)^{2}}{k^{2}}x^{2}+4x^{2}/k-6x^{2}/k^ {2}+O_{k}(x)=\]
\[\left(\frac{k^{2}-5k+6}{k^{2}}+\frac{4k}{k^{2}}-\frac{6}{k^{2}}\right)x^{2}+O_ {k}(x)=(1-1/k)x^{2}+O_{k}(x).\]
Figure 2:
We need a final technical lemma before proving the main result of this section.
**Lemma 9**.: _Let \(X_{1}\cup X_{2}\cup\cdots\cup X_{\ell}=K_{n}\) be a partition of the vertex set of the labeled complete graph \(K_{n}\) with \(\ell\) labels so that in a maximum matching \(\mathcal{M}\) the matching edges with label \(t\) are in \(X_{t}\). Let \(|X_{t}|=2x_{t}\). Assume that between different \(X_{i}\) and \(X_{j}\) at most half the edges are critical, and within each \(X_{t}\) there are at most \((1-1/k_{t})x_{t}^{2}+O_{\ell}(x_{t})\) critical edges for some \(k_{t}\geq 1\). Let \(S=\sum\limits_{t=1}^{\ell}k_{t}\). Then the number of critical edges is at most \(\left(\frac{1}{2}-\frac{1}{2S}\right)\binom{n}{2}+O_{\ell}(n)\)._
Proof.: The errors add up to \(O_{\ell}(n)\), so we disregard them. We need to solve the following conditional optimization problem: Under the conditions \(0\leq x_{t}\) for all \(t\) and \(x_{1}+x_{2}+\cdots+x_{\ell}=n/2\), find the maximum of
\[\sum\limits_{1\leq i<j\leq\ell}2x_{i}x_{j}+\sum\limits_{t=1}^{\ell}(1-1/k_{t} )x_{t}^{2}.\]
Note that \(\sum\limits_{1\leq i<j\leq\ell}2x_{i}x_{j}=\sum\limits_{i\neq j}x_{i}x_{j}= \sum\limits_{t=1}^{\ell}x_{t}(n/2-x_{t})=(n/2)\cdot\sum\limits_{t=1}^{\ell}x_ {t}-\sum\limits_{t=1}^{\ell}x_{t}^{2}=n^{2}/4-\sum\limits_{t=1}^{\ell}x_{t}^{2}\). This observation leads to a simplified equivalent formulation of our task: Under the conditions \(0\leq x_{t}\) for all \(t\) and \(x_{1}+x_{2}+\cdots+x_{\ell}=n/2\), find the maximum of
\[n^{2}/4-\sum\limits_{t=1}^{\ell}x_{t}^{2}/k_{t}.\]
An application of Lagrange multipliers shows that the maximum is attained at \(x_{t}=(n/2)\cdot(k_{t}/S)\). This yields the optimum
\[n^{2}/4-\sum\limits_{t=1}^{\ell}(n^{2}/4)(k_{t}^{2}/S^{2})/k_{t}=(n^{2}/4) \left(1-\sum\limits_{t=1}^{\ell}k_{t}/S^{2}\right)=(n^{2}/4)(1-1/S)=\left( \frac{1}{2}-\frac{1}{2S}\right)\binom{n}{2}+O(n).\]
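The constrained optimization at the end of this proof can also be checked numerically for a concrete choice of the \(k_{t}\); the sketch below (purely illustrative, not part of the argument, using random feasible points rather than Lagrange multipliers) compares random values of the objective with the claimed optimum.

```python
import numpy as np

def check_lemma9_optimum(ks, n=1.0, trials=200000, seed=0):
    """Compare random feasible values of n^2/4 - sum_t x_t^2/k_t
    (with x_t >= 0, sum x_t = n/2) against the claimed maximum n^2/4 * (1 - 1/S)."""
    ks = np.array(ks, dtype=float)
    S = ks.sum()
    rng = np.random.default_rng(seed)
    xs = rng.dirichlet(np.ones(len(ks)), size=trials) * (n / 2)   # random feasible points
    vals = n**2 / 4 - (xs**2 / ks).sum(axis=1)
    claimed = n**2 / 4 * (1 - 1 / S)       # attained at x_t = (n/2) * k_t / S
    return vals.max(), claimed

print(check_lemma9_optimum([1, 4, 2, 1]))  # the random search stays below the claimed optimum
```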
Lemma 8 and Lemma 9 together outline the following strategy to estimate \(\gamma(\ell)\) from above. Assume that given any \(\ell\)-labeling of \(K_{n}\), we can find a (red) perfect matching such that at most half of the edges between different \(X_{i}\) and \(X_{j}\) are critical (blue), and within an \(X_{t}\), there is no alternating cycle and no alternating path with \(k_{t}\) blue edges. Then \(\gamma(\ell)\leq\frac{1}{2}-\frac{1}{2S}\), where \(S=\sum\limits_{t=1}^{\ell}k_{t}\).
Proof of Theorem 7.: Let \(\lambda:E(K_{n})\rightarrow\{1,2,\ldots,\ell\}\) be a labeling of the edges of a complete graph. We assign weights to the edges, depending on their label. Label 1 edges have weight 0, and label 2 edges have weight 1. Then for a small \(\varepsilon>0\), the weight assigned to label 3 edges is \(2+\varepsilon\), to label 4 edges it is \(4+2\varepsilon+\varepsilon^{2}\), etc. In general,
* label 1 edges have weight 0, and
* for \(t\geq 2\), label \(t\) edges have weight \(\sum\limits_{s=0}^{t-2}2^{t-2-s}\varepsilon^{s}\).
Let \(\mathcal{M}\) be a perfect matching of minimum weight. Let \(X_{t}\) be the set of vertices covered by the \(x_{t}\) edges of label \(t\) in \(\mathcal{M}\). Then putting \(|X_{t}|=2x_{t}\) we have \(x_{1}+x_{2}+\cdots+x_{\ell}=n/2\).
We use the terminology introduced before Proposition 6. There is no \(\mathcal{M}\)-pair of critical edges \(e,e^{\prime}\) between different \(X_{t}\). Indeed, the weights are non-negative, monotone increasing, and the
weight assigned to each label is more than twice the weight assigned to the previous one. Hence, the sum of weights of \(e\) and \(e^{\prime}\) would be strictly less than that of the two matching edges covering the same quadruple of vertices, and then the \(e\)-switch would decrease the total weight. Thus at most half of the edges running between different \(X_{t}\) are critical.
We now estimate the number of critical edges in each \(X_{t}\). For all \(\ell\geq 2\), we define a vector \(c_{\ell}\) of length \(\ell\). For small values of \(\ell\) these vectors are \(c_{2}=(1,1)\), \(c_{3}=(1,2,1)\), \(c_{4}=(1,4,2,1)\), \(c_{5}=(1,8,6,2,1)\), \(c_{6}=(1,16,14,6,2,1)\). The precise definition is
* \(c_{2}=(1,1)\), and
* for all \(\ell\geq 3\), \(c_{\ell}[1]=c_{\ell}[\ell]=1\), \(c_{\ell}[2]=2^{\ell-2}\), and for all \(3\leq t\leq\ell-1\) we have \(c_{\ell}[t]=2^{\ell-t+1}-2\).
Our goal is to show that the number of critical edges in \(X_{t}\) is at most \((1-1/c_{\ell}[t])x_{t}^{2}+O_{\ell}(x_{t})\), so that we can apply Lemma 9. According to Lemma 8, it is enough to show that if we restrict the matching \(\mathcal{M}\) to \(X_{t}\) (red edges), and color edges of label less than \(t\) blue, then there is no alternating cycle and there is no alternating path with \(c_{\ell}[t]\) blue edges in this red and blue subgraph with vertex set \(X_{t}\).
Clearly, there is no alternating cycle in this subgraph: by switching the red edges of that cycle in \(\mathcal{M}\) to the blue edges of that cycle, we would decrease the total weight of the perfect matching. For \(t=1\), there cannot be a critical edge in \(X_{t}\) because there is no label less than \(1\). For \(t=\ell\), again, there cannot be a critical edge \(e\) in \(X_{t}\) because the \(e\)-switch would decrease the total weight. Hence, \(c_{\ell}[1]=c_{\ell}[\ell]=1\) is justified: there is no alternating path with \(1\) blue edge either in \(X_{1}\) or in \(X_{\ell}\). For \(t=2\), an alternating path with \(2^{\ell-2}\) blue edges would have \(2^{\ell-2}+1\) red edges. The total weight of red edges in such a path is \(2^{\ell-2}+1\). We propose to switch these red edges in \(\mathcal{M}\) to the blue ones together with the edge linking the endpoints of the path. At worst, the endpoints are linked by a label \(\ell\) edge. As all blue edges have label \(1\), and consequently weight \(0\), the total weight of these \(2^{\ell-2}+1\) edges is the weight of the label \(\ell\) edge, that is, \(\sum\limits_{s=0}^{\ell-2}2^{\ell-2-s}\varepsilon^{s}\). This is barely more than \(2^{\ell-2}\) if \(\varepsilon\) is small enough, that is, less than \(2^{\ell-2}+1\). Hence, the switch along this cycle (the path together with the edge linking the ends) decreases the total weight of the matching. Thus there cannot be an alternating path with \(2^{\ell-2}\) blue edges in \(X_{2}\), justifying the formula \(c_{\ell}[2]=2^{\ell-2}\). Finally, for \(3\leq t\leq\ell-1\), we proceed in a similar fashion: assume that there is an alternating path with \(c_{\ell}[t]=2^{\ell-t+1}-2\) blue edges in \(X_{t}\). Such a path contains \(2^{\ell-t+1}-1\) red edges and \(2^{\ell-t+1}-2\) blue edges. The total weight of the red edges is
\[(2^{\ell-t+1}-1)\sum\limits_{s=0}^{t-2}2^{t-2-s}\varepsilon^{s}=\sum\limits_{ s=0}^{t-2}(2^{\ell-1-s}-2^{t-2-s})\varepsilon^{s}.\]
The total weight of the blue edges together with the edge linking the endpoints is largest if all blue edges have label \(t-1\) and the added edge has label \(\ell\). If this is the case, then the total weight is
\[\sum\limits_{s=0}^{\ell-2}2^{\ell-2-s}\varepsilon^{s}+(2^{\ell-t+1}-2)\sum\limits_{s=0}^{t-3}2^{t-3-s}\varepsilon^{s}=\sum\limits_{s=0}^{t-3}(2^{\ell-1-s}-2^{t-2-s})\varepsilon^{s}+\sum\limits_{s=t-2}^{\ell-2}2^{\ell-2-s}\varepsilon^{s}.\]
The coefficients of \(\varepsilon^{s}\) in the two sums coincide for \(0\leq s\leq t-3\). The first difference occurs for \(s=t-2\): in the red sum, the coefficient of \(\varepsilon^{t-2}\) is \(2^{\ell-t+1}-1\), and in the "blue" sum it is \(2^{\ell-t}\). The latter is always smaller than the former as \(t\leq\ell-1\). Thus for a small enough \(\varepsilon\) we could once again improve the total weight of the matching, a contradiction. Note that for any fixed \(\ell\), only finitely many requirements were made for \(\varepsilon\), and all of them hold on an open interval with left endpoint
zero and a positive right endpoint. Hence, for each \(\ell\in\mathbb{N}\) there is a small enough \(\varepsilon=\varepsilon(\ell)>0\) that meets all requirements.
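For a concrete pair \((\ell,\varepsilon)\), the weight inequalities used above can be checked mechanically; the following sketch (purely illustrative, with the ad hoc choice \(\ell=5\), \(\varepsilon=0.01\)) verifies that each label weighs more than twice the previous one and that the switches described above would indeed decrease the total weight.

```python
def w(t, eps):
    """Edge weight used in the proof of Theorem 7 (label 1 has weight 0)."""
    return 0 if t == 1 else sum(2**(t - 2 - s) * eps**s for s in range(t - 1))

l, eps = 5, 0.01
# each label's weight is more than twice the previous one:
print(all(w(t + 1, eps) > 2 * w(t, eps) for t in range(1, l)))
# in X_2: a path with 2^(l-2) blue (label-1) edges has 2^(l-2)+1 red (label-2) edges,
# and its red weight exceeds the worst-case blue weight plus one closing label-l edge:
print((2**(l - 2) + 1) * w(2, eps) > w(l, eps))
# in X_t, 3 <= t <= l-1: a path with 2^(l-t+1)-2 blue (label t-1) edges is also too heavy:
for t in range(3, l):
    red = (2**(l - t + 1) - 1) * w(t, eps)
    blue = (2**(l - t + 1) - 2) * w(t - 1, eps) + w(l, eps)
    print(t, red > blue)
```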
Let \(S_{\ell}=\sum\limits_{t=1}^{\ell}c_{\ell}[t]\). According to Lemma 9 we have \(\gamma(\ell)\leq\frac{1}{2}-\frac{1}{2S_{\ell}}\). An elementary calculation yields that \(S_{2}=2\) and that for all \(\ell\geq 3\) we have \(S_{\ell}=3\cdot 2^{\ell-2}-2\ell+4\). This translates to the upper bounds \(\gamma(2)\leq 1/4\) and \(\gamma(\ell)\leq 1/2-1/(3\cdot 2^{\ell-1}-4\ell+8)\) for \(\ell\geq 3\). In particular, \(\gamma(3)\leq 3/8\).
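The vectors \(c_{\ell}\), the sums \(S_{\ell}\), and the resulting upper bounds are immediate to tabulate; the short check below (purely illustrative) reproduces \(\gamma(3)\leq 3/8\) and \(\gamma(4)\leq 7/16\).

```python
def c_vector(l):
    """The vector c_l from the proof of Theorem 7."""
    if l == 2:
        return [1, 1]
    return [1, 2**(l - 2)] + [2**(l - t + 1) - 2 for t in range(3, l)] + [1]

for l in range(2, 7):
    S = sum(c_vector(l))
    print(l, c_vector(l), S, 0.5 - 1 / (2 * S))   # l = 3 gives 3/8, l = 4 gives 7/16
```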
For the lower estimates \(\gamma(2)\geq 1/4\) and \(\gamma(3)\geq 3/8\), we provide two constructions. For \(\ell=2\), partition the set of vertices of \(K_{n}\) into two subsets \(U_{1}\) and \(U_{2}\), where \(|U_{1}|=n/4\) and \(|U_{2}|=3n/4\). Edges in \(U_{i}\) have label \(i\), and edges between the two sets have label \(1\). Given a perfect matching \(\mathcal{M}\), let \(nx\) be the number of edges of \(\mathcal{M}\) between \(U_{1}\) and \(U_{2}\). Clearly \(x\leq 1/4\), and there are \(n(1/8-x/2)\) edges of \(\mathcal{M}\) in \(U_{1}\) and \(n(3/8-x/2)\) edges of \(\mathcal{M}\) in \(U_{2}\). Critical edges must have label \(1\), thus they have to lie between \(U_{1}\) and \(U_{2}\) such that the endpoint in \(U_{2}\) is covered by one of the \(n(3/8-x/2)\) edges of \(\mathcal{M}\) in \(U_{2}\). That is, there are \(n(3/4-x)\) possibilities for the endpoint in \(U_{2}\), and \(n/4\) possibilities for the endpoint in \(U_{1}\). Hence, the number of critical edges is \(n^{2}(3/4-x)/4\), which attains its minimum \(n^{2}/8\) at \(x=1/4\).
For \(\ell=3\), let us partition the vertex set of \(K_{n}\) into three subsets \(U_{1},U_{2},U_{3}\) such that \(|U_{1}|=n/8,|U_{2}|=n/4,|U_{3}|=5n/8\). Edges in \(U_{1}\cup U_{2}\) are labeled \(1\) and edges in \(U_{3}\) are labeled \(3\). Edges between \(U_{i}\) and \(U_{3}\) are labeled \(i\) for \(i=1,2\).
Let \(\mathcal{M}\) be a perfect matching. Let \(nx_{ij}\) be the number of edges in \(\mathcal{M}\) between \(U_{i}\) and \(U_{j}\) for \(1\leq i<j\leq 3\). Clearly \(0\leq x_{12},x_{13},x_{23}\) and \(x_{12}+x_{13}\leq 1/8,x_{12}+x_{23}\leq 1/4\).
There are \(n(1/8-x_{12}-x_{13})\) points in \(U_{1}\) covered by matching edges in \(U_{1}\), \(n(1/4-x_{12}-x_{23})\) points in \(U_{2}\) covered by matching edges in \(U_{2}\), and \(n(5/8-x_{13}-x_{23})\) points in \(U_{3}\) covered by matching edges in \(U_{3}\). Let \(nc_{ij}\) denote the number of vertices in \(U_{i}\) that are incident to an edge in \(\mathcal{M}\) with label \(j\). Then
\[c_{11}=1/8,c_{12}=0,c_{13}=0;\]
\[c_{21}=1/4-x_{23},c_{22}=x_{23},c_{23}=0;\]
\[c_{31}=x_{13},c_{32}=x_{23},c_{33}=5/8-x_{13}-x_{23}.\]
For simplicity, we count the non-critical edges, and only up to an \(o(n^{2})\) error. All \(\frac{25}{128}n^{2}\) label \(3\) edges are non-critical. All label \(2\) edges are between \(U_{2}\) and \(U_{3}\). Such an edge is non-critical iff both of its endpoints are covered by a label \(1\) or a label \(2\) matching edge. Hence, there are \((c_{21}+c_{22})(c_{31}+c_{32})n^{2}=\frac{x_{13}+x_{23}}{4}n^{2}\) such edges. Finally, there are several sources of non-critical label \(1\) edges. There are \(c_{11}c_{31}n^{2}=\frac{x_{13}}{8}n^{2}\) between \(U_{1}\) and \(U_{3}\), \(c_{11}c_{21}n^{2}=(\frac{1}{32}-\frac{x_{23}}{8})n^{2}\) between \(U_{1}\) and \(U_{2}\), \(\frac{c_{11}^{2}}{2}n^{2}=\frac{1}{128}n^{2}\) in \(U_{1}\), and \(\frac{c_{21}^{2}}{2}n^{2}=(\frac{1}{32}-\frac{x_{23}}{4}+\frac{x_{23}^{2}}{2})n^{2}\) in \(U_{2}\). Thus the number of critical edges is
\[\left(\frac{1}{2}-\frac{25}{128}-\frac{x_{13}+x_{23}}{4}-\frac{x_{13}}{8}- \left(\frac{1}{32}-\frac{x_{23}}{8}\right)-\frac{1}{128}-\left(\frac{1}{32}- \frac{x_{23}}{4}+\frac{x_{23}^{2}}{2}\right)\right)n^{2}=\]
\[\left(\frac{15}{64}-\frac{3}{8}x_{13}+\frac{1}{8}x_{23}-\frac{1}{2}x_{23}^{2 }\right)n^{2}\]
We need to find the minimum of this function subject to the constraints \(0\leq x_{12},x_{13},x_{23}\) and \(x_{12}+x_{13}\leq 1/8,x_{12}+x_{23}\leq 1/4\). We may assume that \(x_{12}=0\), as the function does not depend on \(x_{12}\), and setting \(x_{12}=0\) only weakens the constraints. Thus \(0\leq x_{13},x_{23}\) and \(x_{13}\leq 1/8,x_{23}\leq 1/4\). For a fixed \(x_{23}\), it is clearly advantageous to pick the largest possible \(x_{13}\), that is, \(x_{13}=1/8\). Then the revised optimization problem is to find the minimum of \(\left(\frac{3}{16}+\frac{1}{8}x_{23}-\frac{1}{2}x_{23}^{2}\right)n^{2}=\left(\frac{3}{16}+\frac{1}{8}x_{23}(1-4x_{23})\right)n^{2}\) subject to the constraint \(0\leq x_{23}\leq 1/4\). On this interval, we have
\(x_{23}(1-4x_{23})\geq 0\) and equality holds iff \(x_{23}=0\) or \(x_{23}=1/4\). Thus the minimum is attained at these two points, and the minimal value of \(\left(\frac{3}{16}+\frac{1}{8}x_{23}(1-4x_{23})\right)n^{2}\) is \(\frac{3}{16}n^{2}\sim\frac{3}{8}\binom{n}{2}\). It is easy to see that \(x_{12}=0\) is necessary to obtain this optimum, as otherwise we lose by having to set \(x_{13}<1/8\).
Hence, the minimum ratio of critical edges is \(3/8\), and it is attained by exactly two different (families of) matchings. We can pair up all \(n/8\) vertices of \(U_{1}\) with vertices in \(U_{3}\), and match every other vertex within its own \(U_{i}\). Alternatively, we can pair up all \(n/8\) vertices of \(U_{1}\) with vertices in \(U_{3}\), pair up all \(n/4\) vertices of \(U_{2}\) with vertices in \(U_{3}\), and match the remaining vertices of \(U_{3}\) among themselves.
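The small optimization above is easy to confirm numerically; a grid search (purely illustrative, not part of the argument) over the feasible region with \(x_{12}=0\) recovers the minimum \(3/16\), i.e. the ratio \(3/8\) of critical edges.

```python
import numpy as np

# minimise 15/64 - (3/8) x13 + (1/8) x23 - (1/2) x23^2
# over 0 <= x13 <= 1/8, 0 <= x23 <= 1/4 (the x12 = 0 case above)
x13 = np.linspace(0, 1 / 8, 501)
x23 = np.linspace(0, 1 / 4, 501)
X13, X23 = np.meshgrid(x13, x23)
F = 15 / 64 - 3 / 8 * X13 + 1 / 8 * X23 - 1 / 2 * X23**2
print(F.min(), 3 / 16)   # the minimum 3/16 corresponds to the critical-edge ratio 3/8
```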
The second matching at the end of the proof has the exact structure that is necessary for the upper estimate to be sharp: \(|X_{1}|=|X_{3}|=n/4,|X_{2}|=n/2\), exactly half of the edges between different \(X_{i}\) and \(X_{j}\) are critical, and exactly a quarter of edges in \(X_{2}\) are critical (and no other edges). The quarter of the edges in \(X_{2}\) that are critical form a clique which contains exactly one endpoint of every matching edge in \(X_{2}\). It is somewhat surprising that there is also a completely different matching in the extremal structures that yields the same ratio of critical edges. Perhaps this is a symptom of the existence of another proof method for the upper bound, which finds a different matching that coincides with the first one in the extremal structures. Finding such a proof might lead to better estimates for \(\gamma(\ell)\) when \(\ell\geq 4\).
Another possible way to improve the upper bound for \(\ell\geq 4\) is to analyze the structure suggested by the proof. It seems that this structure is never optimal. We believe that the number \(c_{\ell}[2]=2^{\ell-2}\) should be replaced by \(c_{\ell}[2]=3\) when \(\ell\geq 4\). In particular, if \(\ell=4\), this would yield the vector \(c_{4}=(1,3,2,1)\) rather than \(c_{4}=(1,4,2,1)\), and the upper estimate \(\gamma(4)\leq 3/7\approx 0.4286\) (see Lemma 9) rather than the current one \(\gamma(4)\leq 7/16=0.4375\) provided by Theorem 7. The best lower bound we have found is \(\gamma(4)\geq(9+\sqrt{3})/26\approx 0.41277\). This is obtained by the following construction. Let \(a_{1}=(8-2\sqrt{3})/52,a_{2}=(9+\sqrt{3})/52,a_{3}=(1+3\sqrt{3})/52,a_{4}=(34- 2\sqrt{3})/52\). Let \(n\) be large and consider four sets \(A_{1},A_{2},A_{3},A_{4}\) of size \(a_{1}n,a_{2}n,a_{3}n,a_{4}n\), respectively. (Obviously, rounding these numbers to integers does not introduce a significant error.) Every edge incident to a vertex in \(A_{1}\) or \(A_{2}\) has label 1, except for edges between \(A_{2}\) and \(A_{4}\) which have label 2. Edges in \(A_{3}\) have label 2, those in \(A_{4}\) have label 4, and edges between \(A_{3}\) and \(A_{4}\) have label 3. There are two optimal maximum matchings in this labeled graph yielding the above critical edge ratio \((9+\sqrt{3})/26\approx 0.41277\). This can be shown for example by an elaborate Fourier-Motzkin elimination. It is not unlikely that \(\gamma(4)\) is strictly between \((9+\sqrt{3})/26\) and \(3/7\).
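As a quick sanity check of the \(\ell=4\) construction (purely illustrative): the four parts indeed partition the vertex set, and the claimed ratio evaluates to approximately \(0.41277\).

```python
from math import sqrt

a = [(8 - 2 * sqrt(3)) / 52, (9 + sqrt(3)) / 52,
     (1 + 3 * sqrt(3)) / 52, (34 - 2 * sqrt(3)) / 52]
print(sum(a))              # the part sizes a_1, ..., a_4 sum to 1
print((9 + sqrt(3)) / 26)  # the claimed critical-edge ratio, approx. 0.41277
```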
## 3 Cliques
In this section, we prove Theorem 1. Let \(\delta\in[1,2]\) and \(\ell\geq 2\) (including \(\infty\)). Assume that an \(\ell\)-adaptive algorithm using \(n^{\delta}\) queries finds a clique of size \(\alpha\log n\) with high probability.
We summarize the main strategy, already introduced in [6]. Given a (non-negative integer) \(m\leq\alpha/2\), we encode a subgraph of size at most \(\alpha\log n\) by a tuple whose first \(M\approx m\log n\) coordinates are in \(\{1,\ldots,n^{\delta}\}\) and the remaining \((\alpha-2m)\log n\) coordinates are in \(\{1,\ldots,n\}\). The first \(M\) entries are steps in the process, and the rest are vertices of the graph. Such a tuple encodes the subgraph whose vertices are the set \(U\) of endpoints of the edges queried in the given \(m\log n\) steps together with the remaining \((\alpha-2m)\log n\) vertices. In a _good_ tuple
* the encoded graph has \(\alpha\log n\) vertices, that is, there is no overlap in the above description of vertices,
* every edge was queried at some point during the process in the subgraph spanned by these \(\alpha\log n\) vertices, and
* the \(M\) edges are independent, and in the subgraph spanned by their \(2M\) vertices these edges form a perfect matching yielding a minimum number of critical edges, where the label of an edge is the round in which it was queried.
**Proposition 10**.: _Assume that it is possible to find a clique of size \(\alpha\log n\) with \(\ell\) adaptive rounds and \(n^{\delta}\) queries in \(G(n,1/2)\) for any large enough \(n\) with high probability. Then for all \(0\leq m\leq\alpha/2\) we have_
\[\alpha^{2}/2-\alpha-2\gamma(\ell)m^{2}+(2-\delta)m\leq 0.\]
Proof.: The number of possible tuples that encode a subgraph is
\[(n^{\delta})^{m\log n}\cdot n^{(\alpha-2m)\log n}=2^{(\alpha+(\delta-2)m)\log ^{2}n}.\]
The probability that a given tuple encodes a clique is at most \(2^{-\left(\binom{\alpha\log n}{2}-C\right)}\), where \(C\) is the number of critical edges in the subgraph spanned by the \(2M=2m\log n\) vertices. As \(C\leq\gamma(\ell)\binom{2M}{2}\), this probability is at most
\[2^{-\left(\binom{\alpha\log n}{2}-C\right)}\leq 2^{-\left(\binom{\alpha\log n }{2}-\gamma(\ell)\binom{2m\log n}{2}\right)}\leq 2^{-(\alpha^{2}/2-2\gamma( \ell)m^{2})\log^{2}n+O(\log n)}.\]
Using the trivial estimate that the probability of a union of events is at most the sum of the probabilities of the events yields
\[\mathbb{P}(\text{the algorithm finds a clique of size }\alpha\log n)\leq\]
\[2^{(\alpha+(\delta-2)m)\log^{2}n}\cdot 2^{-(\alpha^{2}/2-2\gamma(\ell)m^{2}) \log^{2}n+O(\log n)}=\]
\[2^{-(\alpha^{2}/2-\alpha-2\gamma(\ell)m^{2}+(2-\delta)m)\log^{2}n+O(\log n)}.\]
Hence, if \(\alpha^{2}/2-\alpha-2\gamma(\ell)m^{2}+(2-\delta)m>0\) for some \(0\leq m\leq\alpha/2\), then the above probability would be asymptotically \(0\), a contradiction.
Proof of Theorem 1.: Given \((\delta,\ell)\), we are looking for the minimum \(\alpha\) such that there is a \(0\leq m\leq\alpha/2\) that makes the left hand side of the inequalities in Proposition 10 positive. To this end, we first find the maximum of the expression in \(m\in[0,\alpha/2]\), and then compute the largest \(\alpha\) for which the expression is still non-positive for that \(m\). In the fully adaptive case \(\ell=\infty\), we have \(\gamma(\ell)=1/2\), thus the derivative of the left hand side \(f(m)=\alpha^{2}/2-\alpha-m^{2}+(2-\delta)m\) is \(\frac{\partial}{\partial m}(\alpha^{2}/2-\alpha-m^{2}+(2-\delta)m)=-2m+(2-\delta)\), which has a unique root at \(m_{0}=\frac{2-\delta}{2}\). Since \(\delta\in[1,2)\), as long as \(\alpha\geq 1\), the linear function \(-2m+(2-\delta)\) is positive at \(m=0\) and non-positive at \(m=\alpha/2\). Hence, under the assumption \(\alpha\geq 1\), the unique root \(m_{0}=\frac{2-\delta}{2}\) is in the interval \([0,\alpha/2]\).
Substituting \(m=m_{0}\) in the expression yields \(f(m_{0})=\alpha^{2}/2-\alpha+(2-\delta)^{2}/4\). The roots of this quadratic function are \(1\pm\sqrt{1-(2-\delta)^{2}/2}\). Thus the largest \(\alpha\) for which the expression is non-positive is \(\alpha=1+\sqrt{1-(2-\delta)^{2}/2}\), which is therefore the desired upper bound. Note that it was justified to substitute \(m_{0}\), as this \(\alpha\) is indeed at least \(1\), therefore \(m_{0}\in[0,\alpha/2]\).
We argue similarly in the \(\ell\)-adaptive case. This time \(f(m)=\alpha^{2}/2-\alpha-2\gamma(\ell)m^{2}+(2-\delta)m\), whose derivative is \(\frac{\partial}{\partial m}(\alpha^{2}/2-\alpha-2\gamma(\ell)m^{2}+(2-\delta)m )=-4\gamma(\ell)m+(2-\delta)\). The unique root of this linear function is \(m_{0}=\frac{2-\delta}{4\gamma(\ell)}\). Once again, \(f^{\prime}(0)=2-\delta>0\). At the other endpoint of the interval \([0,\alpha/2]\), we have \(f^{\prime}(\alpha/2)=-2\gamma(\ell)\alpha+(2-\delta)\). It is unclear whether this is necessarily non-positive at the interesting values; for example, if \(\delta=1\) and \(\ell=2\), making \(\gamma(\ell)=1/4\), then \(\alpha\) would have to be at least \(2\) to make this expression non-positive. However, as the maximum clique in \(G(n,1/2)\) has size \(2\log n\), this cannot provide us with a meaningful upper bound. So we carry
on with the calculation as before, substituting \(f(m_{0})\) and computing the optimal \(\alpha\), and then we check whether \(-2\gamma(\ell)\alpha+(2-\delta)\) is non-positive in that optimum.
Substituting \(m=m_{0}\) in the expression yields \(f(m_{0})=\alpha^{2}/2-\alpha+\frac{(2-\delta)^{2}}{8\gamma(\ell)}\). The larger root is \(\alpha=1+\sqrt{1-\frac{(2-\delta)^{2}}{4\gamma(\ell)}}\). Observe that the expression under the square root is non-negative as \(\delta\in[1,2)\) and \(\gamma(\ell)\geq 1/4\) for all \(\ell\geq 2\). As noted above, this is only justified if \((2-\delta)\leq 2\gamma(\ell)\alpha\). If \(\ell=2\) then \(\gamma(\ell)=1/4\), that is, we need to check if \(2-\delta\leq\frac{1}{2}+\frac{1}{2}\sqrt{1-(2-\delta)^{2}}\), or equivalently whether \(3-2\delta\leq\sqrt{1-(2-\delta)^{2}}\). The left hand side is decreasing and the right hand side is increasing in \(\delta\), and they are equal when \(\delta=6/5\). Thus the substitution \(m=m_{0}\) is justified for \(\delta\in[6/5,2)\). For \(\ell=2\) and \(\delta\in[1,6/5]\), the best estimate we can obtain is by substituting \(m=\alpha/2\) into the function \(f\) and optimizing for \(\alpha\). Then \(f(\alpha/2)=\alpha^{2}/2-2\gamma(2)\alpha^{2}/4-\alpha+(2-\delta)\alpha/2=\frac{3}{8}\alpha^{2}-\frac{\delta}{2}\alpha\) with larger root \(4\delta/3\).
Now assume that \(\ell\geq 3\); then \(\gamma(\ell)\geq 3/8\) according to Theorem 7. We show that in this case the above \(\alpha=1+\sqrt{1-\frac{(2-\delta)^{2}}{4\gamma(\ell)}}\) satisfies the inequality \(2-\delta\leq 2\gamma(\ell)\alpha\) for any \(\delta\in[1,2)\), thereby justifying the substitution \(m=m_{0}\). Once again, the left hand side is decreasing and the right hand side is increasing in \(\delta\). Thus it is enough to verify the inequality for \(\delta=1\), that is, \(1\leq 2\gamma(\ell)\left(1+\sqrt{1-\frac{1}{4\gamma(\ell)}}\right)\). The right hand side is increasing as a function of \(\gamma(\ell)\). Thus it is enough to check the inequality for \(\gamma(\ell)=3/8\), in which case the right hand side is approximately \(1.183\).
## 4 Dense subgraphs
We show how the same techniques can be applied to prove estimates for the Maximum Dense Subgraph Query Problem.
**Proposition 11**.: _Assume that it is possible to find a subgraph with edge density \(\eta\in(3/4,1]\) of size \(\alpha\log n\) with \(\ell\) adaptive rounds and \(n^{\delta}\) queries in \(G(n,1/2)\) for any large enough \(n\) with high probability. Then for all \(0\leq m\leq\alpha/2\) we have_
\[(\alpha^{2}/2-2\gamma(\ell)m^{2})(1-H(p))-\alpha+(2-\delta)m\leq 0\]
_where \(p=\frac{\eta\alpha^{2}/2-2\gamma(\ell)m^{2}}{\alpha^{2}/2-2\gamma(\ell)m^{2}}\)._
Proof.: We follow the argument of the proof of Proposition 10. The number of possible tuples that encode a subgraph is
\[(n^{\delta})^{m\log n}\cdot n^{(\alpha-2m)\log n}=2^{(\alpha+(\delta-2)m)\log ^{2}n}.\]
Let \(C\) be the number of critical edges in the subgraph spanned by the \(2M=2m\log n\) vertices. Let \(p^{\prime}\) be such that \(p^{\prime}\left(\binom{\alpha\log n}{2}-C\right)=\eta\binom{\alpha\log n}{2}-C\). That is, \(p^{\prime}=\frac{\eta\binom{\alpha\log n}{2}-C}{\binom{\alpha\log n}{2}-C}=1-\frac{(1-\eta)\binom{\alpha\log n}{2}}{\binom{\alpha\log n}{2}-C}\). Let \(p^{\prime\prime}\) be the expression we obtain by increasing \(C\) in the defining formula of \(p^{\prime}\) to \(\gamma(\ell)\binom{2M}{2}\). That is, \(p^{\prime\prime}=1-\frac{(1-\eta)\binom{\alpha\log n}{2}}{\binom{\alpha\log n}{2}-\gamma(\ell)\binom{2M}{2}}\). Note that \(p^{\prime}\geq p^{\prime\prime}>1/2\) for \(n\) large enough. Indeed, since \(C\leq\gamma(\ell)\binom{2M}{2}\), \(m\leq\alpha/2\), \(\eta>3/4\), and \(\gamma(\ell)\leq 1/2\), we have \(p^{\prime}\geq p^{\prime\prime}\) and
\[p^{\prime\prime}=1-\frac{(1-\eta)\binom{\alpha\log n}{2}}{\binom{\alpha\log n} {2}-\gamma(\ell)\binom{2M}{2}}=1-\frac{(1-\eta)(\alpha^{2}\log^{2}n)/2+O(\log n )}{(\alpha^{2}\log^{2}n)/2-2\gamma(\ell)M^{2}+O(\log n)}=\]
\[1-\frac{(1-\eta)(\alpha^{2}\log^{2}n)/2}{(\alpha^{2}\log^{2}n)/2-2\gamma(\ell) m^{2}\log^{2}n}+O(1/\log n)=1-\frac{(1-\eta)\alpha^{2}/2}{\alpha^{2}/2-2\gamma(\ell)m^{2}}+O( 1/\log n)\geq\]
\[1-\frac{(1-\eta)\alpha^{2}/2}{\alpha^{2}/2-m^{2}}+O(1/\log n)\geq 1- \frac{(1-\eta)\alpha^{2}/2}{\alpha^{2}/2-\alpha^{2}/4}+O(1/\log n)=1-\frac{(1- \eta)\alpha^{2}/2}{\alpha^{2}/4}+O(1/\log n)=\] \[1-2(1-\eta)+O(1/\log n)\to 2\eta-1>1/2.\]
Due to the definition of \(p^{\prime}\), if the encoded tuple encodes a subgraph with density at least \(\eta\), then at least a proportion \(p^{\prime}\) of pairs not constituting a critical edge in the set of size \(\alpha\log n\) turned out to be edges of \(G(n,1/2)\). These are independent events with probability \(1/2\), hence we can use the estimate that this probability is at most \(2^{-\left(\binom{\alpha\log n}{2}-C\right)(1-H(p^{\prime}))}\). If we decrease \(p^{\prime}\) in this estimate to \(p^{\prime\prime}\), a number still at least \(1/2\), then the expression increases, since the Shannon entropy function \(H\) is monotone decreasing on \([1/2,1]\). Moreover, we can once again replace \(C\) by \(\gamma(\ell)\binom{2m\log n}{2}\), further increasing the upper estimate of the above probability, yielding the weaker upper bound \(2^{-\left(\binom{\alpha\log n}{2}-\gamma(\ell)\binom{2m\log n}{2}\right)(1-H( p^{\prime\prime}))}\). Since
\[p^{\prime\prime}=1-\frac{(1-\eta)\alpha^{2}/2}{\alpha^{2}/2-2\gamma(\ell)m^{ 2}}+O(1/\log n)=\frac{\eta\alpha^{2}/2-2\gamma(\ell)m^{2}}{\alpha^{2}/2-2 \gamma(\ell)m^{2}}+O(1/\log n)\sim p\]
as \(n\to\infty\), and because the function \(H\) is continuous, this upper bound is
\[2^{-\left(\binom{\alpha\log n}{2}-\gamma(\ell)\binom{2m\log n}{2}\right)(1-H(p^{\prime\prime}))}=2^{-\left(\left(\alpha^{2}\log^{2}n\right)/2-2\gamma(\ell)m^{2}\log^{2}n\right)(1-H(p^{\prime\prime}))\cdot(1+o(1))}=\]
\[2^{-\left(\alpha^{2}/2-2\gamma(\ell)m^{2}\right)(1-H(p))\log^{2}n\cdot(1+o(1))}.\]
Using the trivial estimate that the probability of a union of events is at most the sum of the probabilities of the events yields
\[\mathbb{P}\left(\text{the algorithm finds a subgraph of size $\alpha\log n$ with density at least $\eta$}\right)\leq\]
\[2^{(\alpha+(\delta-2)m)\log^{2}n}\cdot 2^{-\left(\alpha^{2}/2-2\gamma(\ell)m^{ 2}\right)(1-H(p))\log^{2}n\cdot(1+o(1))}=\]
\[2^{-\left((\alpha^{2}/2-2\gamma(\ell)m^{2})(1-H(p))-\alpha+(2-\delta)m\right)\log^{2}n\cdot(1+o(1))}.\]
Hence, if \((\alpha^{2}/2-2\gamma(\ell)m^{2})(1-H(p))-\alpha+(2-\delta)m>0\) for some \(0\leq m\leq\alpha/2\), then the above probability would be asymptotically \(0\), a contradiction.
We note that the assumption \(\eta\in(3/4,1]\) in Proposition 11 could be relaxed to the condition that \(p>1/2\) for \(n\) large enough. We have seen in the proof of Proposition 11 that the original assumption implies this condition, as \(p\) is asymptotically \(p^{\prime\prime}\) and \(p^{\prime\prime}>1/2\) for \(n\) large enough. However, switching the condition \(\eta\in(3/4,1]\) to "\(p>1/2\) for \(n\) large enough" would make the already complex phrasing of Theorem 3 even more roundabout.
Proof of Theorem 3.: The strategy is similar to the proof of Theorem 1. Given \((\delta,\ell,\eta)\), we are looking for the minimum \(\alpha\) such that there is a \(0\leq m\leq\alpha/2\) that makes the left hand side of the inequality in Proposition 11 positive (non-negative). To this end, we first find the maximum of the expression in \(m\in[0,\alpha/2]\), and then compute the minimum \(\alpha\) that makes the expression non-negative for that \(m\).
Let \(f(m)=(\alpha^{2}/2-2\gamma(\ell)m^{2})(1-H(p))-\alpha+(2-\delta)m\), where \(p\) is short for \(\frac{\eta\alpha^{2}/2-2\gamma(\ell)m^{2}}{\alpha^{2}/2-2\gamma(\ell)m^{2}}\). This function is continuous and defined on the bounded, closed interval \([0,\alpha/2]\), thus it has a maximum. The derivative is \(f^{\prime}(m)=-4\gamma(\ell)m(1+\log p)+(2-\delta)\). As \(f^{\prime}(0)>0\) whenever \(\delta\in[1,2)\), the maximum cannot be at the left endpoint of the domain interval. The maximum might be attained at the right endpoint \(\alpha/2\). If this is not the case, then the maximum is in an inner point where the derivative is zero. The third derivative is \(f^{\prime\prime\prime}(m)=\frac{32\alpha^{2}\gamma(\ell)^{2}m(1-\eta)(-16\gamma(\ell)^{2}m^{4}+3\alpha^{4}\eta-4\alpha^{2}\gamma(\ell)m^{2}(1+\eta))}{(\alpha^{2}-4\gamma(\ell)m^{2})^{2}(-4\gamma(\ell)m^{2}+\eta\alpha^{2})^{2}}\).
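For the reader's convenience, we record a short verification of the stated formula for \(f^{\prime}(m)\); this computation is ours, and \(\log\) denotes the binary logarithm, matching \(H\). Writing \(A=\alpha^{2}/2\) and \(g=\gamma(\ell)\), we have \(1-p=\frac{(1-\eta)A}{A-2gm^{2}}\), hence \(\frac{dp}{dm}=-\frac{4gm(1-p)}{A-2gm^{2}}\), and \(H^{\prime}(p)=\log\frac{1-p}{p}\). Therefore
\[\frac{d}{dm}\Big[(A-2gm^{2})(1-H(p))\Big]=-4gm(1-H(p))+4gm(1-p)\log\frac{1-p}{p}=-4gm(1+\log p),\]
using \(1-H(p)=1+p\log p+(1-p)\log(1-p)\); adding the derivative of \(-\alpha+(2-\delta)m\) yields \(f^{\prime}(m)=-4\gamma(\ell)m(1+\log p)+(2-\delta)\).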
Every factor in the expression for \(f^{\prime\prime\prime}(m)\) is clearly positive except for \(-16\gamma(\ell)^{2}m^{4}+3\alpha^{4}\eta-4\alpha^{2}\gamma(\ell)m^{2}(1+\eta)\). By using \(\gamma(\ell)\leq 1/2\) and \(m\leq\alpha/2\), the first term is at least \(-\alpha^{4}/4\) and the last term is at least \(-\alpha^{4}(1+\eta)/2\), so we obtain
\[-16\gamma(\ell)^{2}m^{4}+3\alpha^{4}\eta-4\alpha^{2}\gamma(\ell)m^{2}(1+\eta)\geq(10\eta-3)\alpha^{4}/4>0.\]
Hence, the derivative \(f^{\prime}(m)\) is convex, and in particular, it has at most two roots. If \(f^{\prime}(m)\) has only one root, then it must be the locus of the maximum of \(f(m)\). If \(f^{\prime}(m)\) has two roots, then due to the convexity of \(f^{\prime}(m)\), the first one is a local maximum and the second one is a local minimum of \(f(m)\). Thus only the first one can be the locus of the global maximum of \(f(m)\), justifying the definition of \(m_{1}\) in the assertion.
Because \(m\leq\alpha/2\), clearly we have \((\alpha^{2}/2-m^{2})(1-H(p))-\alpha+(2-\delta)m\to\infty\) as \(\alpha\to\infty\). Hence, for any given \(m\), the expression is non-negative for every \(\alpha\) that is at least its largest root in \(\alpha\). Given any substitution into \(m\), this largest root is therefore an upper estimate for \(\alpha_{\star}\left(\delta,\ell,\eta\right)\). Thus the minimum of the two candidates \(\min(\alpha_{1},\alpha_{2})\) yields the stronger upper bound.
Computer-assisted numerical calculations (setting \(\delta=1\)) suggest that, just as in the MCQP, in the MDSQP the root \(m_{1}\) of the derivative \(f^{\prime}(m)\) is always in the interval \([0,\alpha/2]\) as long as \(\ell\geq 3\). For \(\ell=2\), this is not the case. For instance, if \(\delta=1\), then there is a threshold \(\eta_{0}\approx 0.936\) such that if \(\eta\leq\eta_{0}\) then \(m_{1}\in[0,\alpha/2]\), but if \(\eta>\eta_{0}\) then \(m_{1}\notin[0,\alpha/2]\). The calculations suggest that (given \(\delta=1,\ell=2\)) whenever \(m_{1}\in[0,\alpha/2]\) (that is, if \(\eta\leq\eta_{0}\)), then \(\alpha_{1}<\alpha_{2}\), thus \(\alpha_{1}\) provides the stronger estimate. The following graphs represent the best upper bound provided by Theorem 3. See a slightly more elaborate explanation below.
For the first diagram, we numerically approximated \(\alpha_{0}\) for \(\eta\) between \(0.75\) and \(0.99\), with step size \(0.01\). For the second diagram, we numerically approximated \(\alpha_{0}\) for \(\eta\) between \(0.98\) and \(0.999\), with step size \(0.001\). In both cases, we added the values for \(\eta=1\) estimated by Theorem 1; that is, \(1+1/\sqrt{2}\approx 1.707\) for \(\ell=\infty\), \(1+1/\sqrt{3}\approx 1.577\) for \(\ell=3\), and \(4/3\) for \(\ell=2\). Moreover, at \(\eta=1\) the trivial upper bound \(2\) was added to the graph. The four graphs represent the trivial upper bound \(2/(1-H(\eta))\), the upper bound \(\alpha_{1}\) for \(\delta=1\) and \(\ell=\infty\), the upper bound \(\alpha_{1}\) for \(\delta=1\) and \(\ell=3\), and the best upper bound according to the above explanation for \(\delta=1\) and \(\ell=2\) (that is, for \(\eta\leq 0.936\) we use \(\alpha_{1}\), and for \(\eta>0.936\) we use \(\alpha_{2}\)). The estimates are significantly below the trivial upper bound if \(\eta\) is close to \(1\). As \(\eta\) approaches \(0.75\), the gap slightly increases between each estimate and the trivial upper bound.
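For reproducibility, the following is a minimal Python sketch (our own, assuming SciPy is available; it is not the code used for the diagrams) of how such values can be approximated: for fixed \(\eta\), \(\delta\), and \(\gamma(\ell)\), maximise the expression of Proposition 11 over \(m\in[0,\alpha/2]\) and bisect over \(\alpha\) for the smallest value where this maximum becomes non-negative.

```
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def H(x):
    # binary entropy, with the convention H(0) = H(1) = 0
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def f(m, alpha, gamma, eta, delta=1.0):
    # left-hand side of the inequality in Proposition 11
    A = alpha**2 / 2 - 2 * gamma * m**2
    p = (eta * alpha**2 / 2 - 2 * gamma * m**2) / A
    return A * (1 - H(p)) - alpha + (2 - delta) * m

def max_f(alpha, gamma, eta, delta=1.0):
    # maximise f over m in [0, alpha/2]
    res = minimize_scalar(lambda m: -f(m, alpha, gamma, eta, delta),
                          bounds=(0.0, alpha / 2), method="bounded")
    return -res.fun

def alpha_bound(gamma, eta, delta=1.0):
    # smallest alpha with max_f >= 0; the bracket [1, 50] is assumed to contain it
    return brentq(lambda a: max_f(a, gamma, eta, delta), 1.0, 50.0)

# e.g. alpha_bound(3 / 8, 0.9) approximates the bound for delta = 1 and ell = 3
```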
We provide some further justification for computing \(\alpha_{1}\) rather than \(\alpha_{2}\) for \(\ell=\infty\) and \(\ell=3\). In the \(\delta=1,\ell=\infty\) case it is easy to see why \(\alpha_{2}\) is irrelevant. So assume that \(m=\alpha/2\). As \(\gamma(\infty)=1/2\), the equation in Theorem 3 simplifies to \((\alpha^{2}/4)\left(1-H\left(2\eta-1\right)\right)-\alpha/2=0\), which
has the unique solution \(\alpha_{2}=\frac{2}{1-H(2\eta-1)}\). This estimate is even worse than the trivial bound \(\frac{2}{1-H(\eta)}\), since \(1/2\leq 2\eta-1\leq\eta\), making \(H\left(\eta\right)\leq H\left(2\eta-1\right)\). In the \(\delta=1,\ell=3\) case we only provide numerical justification. As \(\gamma(3)=3/8\), the equation in Theorem 3 simplifies to \((5/16)\alpha^{2}\left(1-H\left((8\eta-3)/5\right)\right)-\alpha/2=0\), which has the unique solution \(\alpha_{2}=\frac{8}{5(1-H((8\eta-3)/5))}\). This function is sketched in the following diagrams together with \(\alpha_{1}\) (for \(\delta=1,\ell=3\)) in the same way as before.
These graphs suggest that for \(\delta=1\) and \(\ell=3\), the substitution \(m=m_{2}(=\alpha/2)\) never yields a better estimate than the stationary point \(m=m_{1}\).
We can compare \(\alpha_{1}\) and \(\alpha_{2}\) for \(\delta=1,\ell=2\) similarly. In this case, \(\alpha_{2}=\frac{4}{3(1-H((4\eta-1)/3))}\). The two diagrams are harder to distinguish than in the previous case. So we also provide the following numerical results (the data used to prepare the diagram on the right):
As a final remark, we explain why the requirement we worked with throughout the paper, namely that the algorithm should succeed with high probability, is equivalent to the seemingly weaker requirement that the probability of success should be at least \(1/2\). Assume that there is an algorithm which finds a clique (or a subgraph with edge density at least \(\eta\)) of size \(\alpha\log n\) in \(G(n,1/2)\)
with probability at least \(1/2\). Then we can partition the underlying set of \(G(n,1/2)\) into \(\log n\) subsets of size roughly \(n/\log n\). The algorithm finds a clique (or a subgraph with edge density at least \(\eta\)) of size \(\alpha\log(n/\log n)\) in each subset with probability at least \(1/2\). As \(\alpha\log(n/\log n)\sim\alpha\log n\), this yields an algorithm that finds a solution of size asymptotically \(\alpha\log n\) with probability at least \(1-1/n\). Indeed, the probability that the original algorithm fails in all \(\log n\) subsets is at most \(2^{-\log n}=1/n\). This argument can be generalized to some other subgraph query problems, as well. It does not work when we are looking for a global structure such as a Hamiltonian cycle; nevertheless, the statement itself (that the two requirements are equivalent) can be true in such a setup as well.
## Acknowledgements
This paper was initiated at the Focused Workshop on Networks and Their Limits held at the Erdos Center (part of the Alfred Renyi Institute of Mathematics) in Budapest, Hungary in July 2023. We thank the organizers, Miklos Abert, Istvan Kovacs, and Balazs Rath, for putting together an excellent event, and the participants of the workshop for helpful discussions. We are especially thankful to Miklos Racz for posing the main problem and providing useful insights. The workshop was supported by the ERC Synergy grant DYNASNET 810115. The authors were supported by the NRDI grant KKP 138270.
|
2302.09904 | WW-FL: Secure and Private Large-Scale Federated Learning | Federated learning (FL) is an efficient approach for large-scale distributed
machine learning that promises data privacy by keeping training data on client
devices. However, recent research has uncovered vulnerabilities in FL,
impacting both security and privacy through poisoning attacks and the potential
disclosure of sensitive information in individual model updates as well as the
aggregated global model. This paper explores the inadequacies of existing FL
protection measures when applied independently, and the challenges of creating
effective compositions.
Addressing these issues, we propose WW-FL, an innovative framework that
combines secure multi-party computation (MPC) with hierarchical FL to guarantee
data and global model privacy. One notable feature of WW-FL is its capability
to prevent malicious clients from directly poisoning model parameters,
confining them to less destructive data poisoning attacks. We furthermore
provide a PyTorch-based FL implementation integrated with Meta's CrypTen MPC
framework to systematically measure the performance and robustness of WW-FL.
Our extensive evaluation demonstrates that WW-FL is a promising solution for
secure and private large-scale federated learning. | Felix Marx, Thomas Schneider, Ajith Suresh, Tobias Wehrle, Christian Weinert, Hossein Yalame | 2023-02-20T11:02:55Z | http://arxiv.org/abs/2302.09904v3 | # HyFL: A Hybrid Approach For Private
###### Abstract
As a distributed machine learning paradigm, federated learning (FL) conveys a sense of privacy to contributing participants because training data never leaves their devices. However, gradient updates and the aggregated model still reveal sensitive information. In this work, we propose HyFL, a new framework that combines private training and inference with secure aggregation and hierarchical FL to provide end-to-end protection and facilitate large-scale global deployments. Additionally, we show that HyFL strictly limits the attack surface for malicious participants: they are restricted to data-poisoning attacks and cannot significantly reduce accuracy.
## 1 Introduction
Federated learning (FL) as proposed by Konecny et al. (2016); McMahan et al. (2017) is a leading paradigm in distributed machine learning; in a cross-device setting, FL allows thousands or more clients to participate in a training process. To improve _scalability_ for large-scale deployments, also hierarchical FL has been proposed (Bonawitz et al., 2019; Lin et al., 2020; Yang, 2021), which layers multiple levels of aggregators.
One of the primary benefits of FL outlined in McMahan et al. (2017) is the (perceived) _privacy_ of training data and thus increased user engagement: as participating clients train the model locally and transfer only gradient updates to an aggregator, training data never leaves the clients' devices. However, it was shown that these gradient updates still leak a significant amount of information (Zhu et al., 2019; Geiping et al., 2020). Hence, _secure aggregation_ protocols have been introduced that either ensure that a single aggregator sees only blinded (or _masked_) values (Bonawitz et al., 2017; Bell et al., 2020), use a distributed aggregator based on secure multi-party computation (MPC) (Fereidooni et al., 2021; Ben-Itzhak et al., 2022), or by adding noise can guarantee differential privacy (DP) (Quadrhiri and Abdelhadi, 2022).
Unfortunately, there are still three pressing issues:
* **P1:** Malicious participants can perform _attacks_ (e.g., backdoor (Bagdasaryan et al., 2020; Xie et al., 2020), model-poisoning (Wang et al., 2020; Fang et al., 2020), or data-poisoning attacks (Biggio et al., 2012; Tolpegin et al., 2020)) to manipulate the aggregated model.
* **P2:** Recently, serious concerns about privacy vulnerabilities when using secure aggregation with a _single_ aggregator have been raised (So et al., 2021; Boenisch et al., 2021, 2022; Fowl et al., 2022; Wen et al., 2022).
* **P3:** Research has shown that with unrestricted access to the aggregated model, it is still possible to extract traces of the original training data (Pasquini et al., 2022; Boenisch et al., 2023).
#### Our Contributions
In this paper, we address all of the issues outlined above in a unified framework called HyFL that enables private and robust distributed machine learning at scale. Our framework is based on a novel abstraction that also captures existing regular and hierarchical FL architectures in a _hybrid_ manner. One key property of HyFL is that we achieve _complete model privacy_.
Briefly, in our framework, FL participants use secret-sharing techniques to securely outsource training data to _distributed training clusters_ that are based on MPC. The participants might then leave and only sporadically return to provide more training data - this makes our framework robust against real-world issues such as drop-outs, requires no interaction between clients, and relieves resource-constrained (mobile or edge) devices from significant workload. The trained models are then aggregated across all training clusters using one or multiple levels of distributed aggregators. For secure distributed aggregation, we again utilize MPC. Note that after aggregation, models are not publicly released but handed back in secret-shared form to training clusters for the next training iteration. After training is completed, known secure inference protocols can be used to allow private queries (Mann et al., 2022) in a controlled (potentially rate-limited) way. This architecture design addresses issues P2 and P3.
We observe that a neat property of our framework is the strictly limited attack surface: malicious participants are restricted to data-poisoning attacks as there is no possibility to access and manipulate the model itself. We show experimentally that state-of-the-art data-poisoning attacks in the suggested hierarchical configuration are less effective than in plain FL. Furthermore, we implement and evaluate different robust aggregation schemes to further mitigate the effect of such attacks; for this, we additionally propose new heuristics that improve the efficiency for the corresponding MPC implementation. This addresses issue P1.
Finally, we implement all HyFL components based on Meta's CrypTen MPC framework (Knott et al., 2021) and evaluate the performance when training neural networks for standard image classification tasks in realistic network settings and using GPU-accelerated AWS EC2 instances.
In summary, we provide the following contributions:
* New scalable (hierarchical) FL framework called HyFL that achieves complete model privacy, supports resource-limited mobile or edge devices, and significantly limits the attack surface for malicious participants.
* Analysis of data-poisoning attacks by malicious participants with new efficiency improvements for secure robust aggregation.
* Open-source implementation and evaluation of HyFL on standard image classification tasks.
In Tab. 1, we furthermore clarify how HyFL distinguishes itself from related works. In addition to this concise summary, we provide a detailed overview in §A.3.
Table 1: Comparison of HyFL with related FL schemes. Columns: Categories, Representative Work(s), Method, Privacy (\(\mathcal{S}\)) w.r.t. GM and LM, Privacy (\(\mathcal{C}\)) w.r.t. GM, Defense, No Client, Dropout.
## 2 HyFL Framework
We now present the details of our HyFL framework, which aims to address multiple key requirements: complete model privacy, scalability, support for resource-constrained (mobile/edge) devices, reduction of attack surface, ability to defend against multiple remaining threats, and high levels of user engagement. Additionally, our framework seeks to capture various proposed architectures for FL in a single abstraction. We illustrate our framework in Fig. 1 and detail the underlying algorithm in Alg. 1.
### HyFL Architecture
Our HyFL framework is based on a three-layer architecture and extends the established hierarchical FL paradigm (Bonawitz et al., 2019; Lin et al., 2020; Yang, 2021). In hierarchical FL, clients are initially organized into clusters, and their data is aggregated at cluster level. This cluster-level data is then further aggregated globally, resulting in an additional layer of aggregation.
We pursue a hierarchical approach as it facilitates large-scale deployments: Firstly, it can effectively model the taxonomy of trust among clients in real-world scenarios, such as trust among members of the same region or country (Marti & Garcia-Molina, 2006). Secondly, it distributes the workload among multiple clusters instead of relying on a central aggregator. Furthermore, this type of hierarchy is ubiquitous in real-world scenarios such as P2P gaming, organizations, and network infrastructures (Subramanian et al., 2002; Hong & Varghese, 2019; Lin et al., 2020).
HyFL utilizes a different approach for model training than prior works: they involve interactive collaboration among clients to preserve privacy during model training and thus are costly for a large number of clients (Sav et al., 2021). Another existing method is pairwise training among clients and a representative, followed by an aggregation step similar to FL (Mandal & Gong, 2019); however, this approach only preserves privacy among clients and not against the aggregator, assuming no collusion to violate privacy.
We provide details for each layer in HyFL next. We focus on the sequence of operations performed by the entities in our architecture (cf. Fig. 1) for training over \(T\) iterations while considering necessary MPC protocols and setup requirements. Since we have a generic design, MPC protocols are abstracted and their specifics are given in Tab. 6 in §B.1. The notations used in HyFL are listed in Tab. 2.
#### 2.1.1 Layer III: Clients
This layer is composed of \(\mathsf{N}_{\mathcal{M}}\) distinct sets of clients \(\mathcal{C}_{i}^{\mathsf{U}}\) (with \(i\in[\mathsf{N}_{\mathcal{M}}]\)), called _clusters_, which are formed based on specific criteria relevant to the application (e.g., European Union (EU) member states for the EU smart metering scheme (Cuijpers & Koops, 2013; Commission, 2014)). Similar to
Figure 1: Three-layer architecture in HyFL for federated training of a machine learning model.
standard FL, only a random subset of clients, denoted by \(\mathcal{C}_{i}\subseteq\mathcal{C}_{i}^{\text{U}}\), will be selected by the training algorithm in an iteration \(t\in[1,T]\).
During iteration \(t\), each client \(\mathsf{C}_{i}^{j}\in\mathcal{C}_{i}\) (with \(j\in[\mathsf{N}_{\mathcal{C}_{i}}]\)) holding data \(D_{t}^{\mathsf{C}_{i}^{j}}\) uses the Share protocol to securely distribute its data to a set of cluster servers \(\mathcal{M}_{i}\). As detailed in §2.1.2, \(\mathcal{M}_{i}\) constitute a representative group of high-performance servers that clients have a sufficient level of trust in. HyFL allows clients to share input and then leave at any time. They can also rejoin the system later and provide additional data in the next iteration they get selected. Hence, the clusters \(\mathcal{C}_{i}^{\text{U}}\) are dynamic and change with each iteration.
Our method differs from the standard concept of "data residing at the clients" in FL, but we expect it to not negatively impact user engagement as data remains within the users' trust zone. Additionally, the reduced computational load allows for the use of resource-constrained devices in training complex models and eliminates the need for shared-key setup among clients, making it easier to handle dropouts.
#### 2.1.2 Layer II: MPC Clusters
The second layer consists of \(\mathsf{N}_{\mathcal{M}}\) sets of distributed training servers \(\mathcal{M}_{i}\) (with \(i\in[\mathsf{N}_{\mathcal{M}}]\)), called _MPC clusters_, with each \(\mathcal{M}_{i}\) corresponding to the cluster \(\mathcal{C}_{i}^{\text{U}}\) in Layer III. In iteration \(t\), Layer I servers (denoted by \(\mathcal{G}\)) initiate ML training by sharing the current global model \(W_{t-1}\) among servers in \(\mathcal{M}_{i}\). As will be discussed in §2.1.3, \(W_{t-1}\) is also in a secret-shared form among \(\mathcal{G}\), represented by \(\langle W_{t-1}\rangle_{\mathcal{G}}\). To account for varying availability and trustworthiness of servers across regions, MPC clusters in HyFL may use different MPC configurations and differ, e.g., in their corruption threshold and security model (Evans et al., 2018). Therefore, \(\mathcal{G}\) uses the Reshare protocol to convert the secret shares of \(\langle W_{t-1}\rangle_{\mathcal{G}}\) to those of \(\mathcal{M}_{i}\), i.e., \(\langle W_{t-1}\rangle_{\mathcal{M}_{i}}\).
Given \(\langle W_{t-1}\rangle_{\mathcal{M}_{i}}\), servers in \(\mathcal{M}_{i}\) use Train to employ MPC-based PPML techniques for private ML training (Knott et al., 2021; Keller and Sun, 2022) on the cumulative data from all clients in the cluster \(\mathcal{C}_{i}\), denoted by \(\langle D_{t}\rangle_{\mathcal{M}_{i}}\). This data may include leftover data from the same cluster in the previous iteration. Furthermore, by utilizing a larger pool of training data, we can leverage the known benefits of batching, resulting in faster convergence (Goyal et al., 2017; Bottou et al., 2018). After completing training, servers in \(\mathcal{M}_{i}\) utilize Reshare to secret-share the updated model with the Layer I servers, i.e., \(\langle W_{t}^{i}\rangle_{\mathcal{G}}\).
To preserve the system's integrity, the servers for each MPC cluster must be chosen with care to ensure that clients are willing to share their data among the servers and that not all of the servers are colluding. One possible option is to build non-profit partnerships, such as in the MOC alliance (Zink et al., 2021), where organizations with mutual distrust can securely co-locate servers in the same data center with high-speed network connections. Alternatively, trusted entities like government
\begin{table}
\begin{tabular}{c l} \hline \hline Nota. & Description \\ \hline MPC & Secure Multi-Party Computation. \\ \(\mathcal{G}\) & Set of Global MPC Servers. \\ \(\mathcal{M}_{i}\) & Set of MPC Servers in the \(i\)-th cluster. \\ \(\mathcal{C}_{i}^{\text{U}}\) & Set of clients in the \(i\)-th cluster. \\ \(\mathcal{C}_{i}\) & Selected clients in the \(i\)-th cluster. \\ \(\mathsf{N}_{s}\) & Size of the set \(s\in\{\mathcal{G},\mathcal{M}_{i},\mathcal{C}_{i}\}\). \\ \(\mathsf{N}_{\mathcal{M}}\) & Total number of clusters. \\ \(W_{t}\) & Global model available at round \(t\). \\ \(\mathsf{GS}_{i}\) & Layer I MPC Global Server; \(\mathsf{GS}_{i}\in\mathcal{G}\). \\ \(\mathsf{CS}_{i}^{j}\) & Layer II MPC Cluster Server; \(\mathsf{CS}_{i}^{j}\in\mathcal{M}_{i}\). \\ \(\mathsf{C}_{i}^{j}\) & Layer III client; \(\mathsf{C}_{i}^{j}\in\mathcal{C}_{i}\). \\ \hline \hline \end{tabular}
\end{table}
Table 2: Notation used in HyFL.
organizations with limited infrastructure can host their servers in confidential cloud computing environments (Russinovich et al., 2021).
#### 2.1.3 Layer I: Global Servers
The top layer consists of a set of MPC servers \(\mathcal{G}\), named _Global Servers_, that _securely_ aggregate trained models from all the MPC clusters in Layer II, similarly to a standard FL scheme with a distributed aggregator (Fereidooni et al., 2021). Concretely, given the locally trained models \(W_{t}^{i}\) for \(i\in[\mathbb{N}_{\mathcal{M}}]\), servers in \(\mathcal{G}\) execute the secure aggregation protocol Agg (Mansouri et al., 2023) to compute the updated global model in secret-shared form, i.e., \(\left\langle W_{t}\right\rangle_{\mathcal{G}}\). Global servers \(\mathcal{G}\) use the Reshare protocol to distribute the aggregated model \(W_{t}\) to each of the Layer II clusters \(\mathcal{M}_{i}\) to start the next iteration (\(t+1\)).
### HyFL - The Complete Picture
Alg. 1 provides the model training algorithm in HyFL. Though HyFL has a three-layer architecture, it can easily accommodate more levels of hierarchy depending on the size of the deployment. For this, additional layers of MPC clusters can be added between layers I and II, with the clusters performing secure aggregation instead of PPML training. Existing schemes for global model privacy, such as Mandal & Gong (2019) and Fereidooni et al. (2021), only protect the model from either clients or aggregator servers, leaving the possibility of collusion especially in a cross-device setting. HyFL addresses this issue by keeping the global model in a secret-shared fashion, ensuring that no single entity or group of colluding entities (up to an allowed corruption threshold) can access the model. This provides a stronger sense of privacy and also protection against unauthorized use or misuse, such as a client disclosing the trained model to another organization for further training or commercial use.
```
0:\(\mathcal{G},\mathcal{M},\mathcal{C}\) # \(\mathcal{M}=\bigcup_{i}\mathcal{M}_{i}\), \(\mathcal{C}=\bigcup_{i}\mathcal{C}_{i}\); \(i\in\mathbb{N}_{\mathcal{M}}\)
0:\(W_{0},\{D^{\mathcal{C}}\}_{\mathcal{C}\in\mathcal{C}}\) # \(W_{0}-\text{initial model, }D-\text{client C's data}\)
0:\(\left\langle W_{T}\right\rangle\) # \(W_{T}-\text{global model after }T\text{ iterations}\)
1:initialize:\(\left\langle W_{0}\right\rangle_{\mathcal{G}}\leftarrow\textsc{Share}(W_{0}, \mathcal{G})\)
2:for each training iteration \(t\in[1,T]\)do
3:for all\(i\in\mathbb{N}_{\mathcal{M}}\)do
4:\(\left\langle W_{t-1}\right\rangle_{\mathcal{M}_{i}}\leftarrow\mathcal{G}. \textsc{Reshare}(\left\langle W_{t-1}\right\rangle_{\mathcal{G}},\mathcal{M}_ {i})\) # in parallel
5:\(\mathcal{C}_{i}\leftarrow\mathcal{M}_{i}.\textsc{Sample}(\mathcal{C}_{i}^{\text{U}},t)\) # \(\mathcal{C}_{i}^{\text{U}}-\text{total clients in }i\text{-th cluster}\)
6:for all \(j\in[\mathsf{N}_{\mathcal{C}_{i}}]\) do
7:\(\left\langle D_{t}^{\mathsf{C}_{i}^{j}}\right\rangle_{\mathcal{M}_{i}}\leftarrow\mathsf{C}_{i}^{j}.\textsc{Share}(D_{t}^{\mathsf{C}_{i}^{j}},\mathcal{M}_{i})\)
8:endfor
9:\(\left\langle D_{t}\right\rangle_{\mathcal{M}_{i}}\leftarrow\bigcup_{j\in[\mathsf{N}_{\mathcal{C}_{i}}]}\left\langle D_{t}^{\mathsf{C}_{i}^{j}}\right\rangle_{\mathcal{M}_{i}}\bigcup\left\langle D_{t-1}\right\rangle_{\mathcal{M}_{i}}\) # \(D_{0}=0\)
10:\(\left\langle W_{t}^{i}\right\rangle_{\mathcal{M}_{i}}\leftarrow\mathcal{M}_{i}. \textsc{Train}(\left\langle W_{t-1}\right\rangle_{\mathcal{M}_{i}},\left\langle D _{t}\right\rangle_{\mathcal{M}_{i}})\)
11:\(\left\langle W_{t}^{i}\right\rangle_{\mathcal{G}}\leftarrow\mathcal{M}_{i}. \textsc{Reshare}(\left\langle W_{t}^{i}\right\rangle_{\mathcal{M}_{i}}, \mathcal{G})\)
12:endfor
13:\(\left\langle W_{t}\right\rangle_{\mathcal{G}}\leftarrow\mathcal{G}.\textsc{ Agg}(\left\{\left\langle W_{t}^{i}\right\rangle_{\mathcal{G}}\right\}_{i\in[\mathbb{N}_{ \mathcal{M}}]})\)
14:endfor
```
**Algorithm 1** HyFL (Training)
### Private Inference in HyFL
In HyFL, after the defined number of training iterations \(T\) are completed, the MPC clusters begin to function as clusters for ML inference. Here, we again utilize PPML techniques to enable clients to query their clusters in a privacy-preserving way (Knott et al., 2021; Mann et al., 2022). Consider the scenario where client C holding query \(Q\) wants to use the inference service on a model \(W\) that is secret shared with a cluster \(\mathcal{M}_{k}\). This is accomplished by C generating \(\left\langle Q\right\rangle_{\mathcal{M}_{k}}\) using Share, followed by cluster servers in \(\mathcal{M}_{k}\) invoking Predict on \(\left\langle W\right\rangle_{\mathcal{M}_{k}}\) and \(\left\langle Q\right\rangle_{\mathcal{M}_{k}}\) to generate the inference result in secret-shared form. Finally, \(\mathcal{M}_{k}\) reveals the result to C using Reveal protocol.
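For concreteness, the following is a minimal single-process CrypTen-style sketch of this query flow; it is our own illustration, the toy model stands in for \(W\), and in a real deployment the model and the query are secret-shared across the servers in \(\mathcal{M}_{k}\) rather than held by one process.

```
import torch
import crypten

crypten.init()  # set up the CrypTen runtime (single process in this sketch)

# plaintext PyTorch model standing in for the trained global model W
torch_model = torch.nn.Sequential(torch.nn.Linear(784, 10))
model = crypten.nn.from_pytorch(torch_model, torch.empty(1, 784))
model.encrypt()   # the cluster only ever holds W in secret-shared form
model.eval()

# client C secret-shares its query Q (Share) ...
query = crypten.cryptensor(torch.rand(1, 784))

# ... the cluster evaluates the model on the shared query (Predict) ...
encrypted_prediction = model(query)

# ... and the result is revealed only to the client (Reveal)
prediction = encrypted_prediction.get_plain_text()
```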
### Abstraction of Existing FL Schemes
Our three-layer HyFL architecture (cf. Fig. 1) consolidates many existing FL frameworks (cf. Tab. 3). This abstraction simplifies comparisons and facilitates advanced hybrid designs, such as incorporating differential privacy.
Standard FL with a single aggregator (Single\(\mathcal{S}\)) (McMahan et al., 2017) is a variant of HyFL, where each Layer III cluster \(\mathcal{C}_{i}\) consists of only one client that also serves as the MPC cluster server \(\mathcal{M}_{i}\) in Layer II. Thus, it is sufficient to conduct ML training without privacy concerns and then send the results to a single global server \(\mathcal{G}\) in Layer I for aggregation. The case of distributed aggregators (Multi\(\mathcal{S}\)) (Fereidooni et al., 2021) follows similarly, except secure aggregation being performed at Layer I with multiple (\(\mathsf{N}_{\mathcal{G}}>1\)) global servers. Finally, existing hierarchical FL schemes (Yang, 2021) share a similar three-layer architecture with HyFL, but have a single server at both the global and cluster level (\(\mathsf{N}_{\mathcal{G}}=1\), \(\mathsf{N}_{\mathcal{M}_{i}}=1\)). While HyFL employs PPML training at the cluster-server level, hierarchical FL uses secure aggregation. Additionally, clients in the hierarchical FL approach perform local model training, as opposed to data sharing in HyFL.
## 3 Performance Evaluation
We evaluate the practical performance of HyFL in terms of computation and communication overhead empirically.
#### Implementation
We implement HyFL based on the CrypTen framework developed by Meta (Knott et al., 2021). CrypTen provides a TensorFlow/PyTorch-style interface but implements operations based on secure multi-party computation (MPC) with GPU support. Specifically, CrypTen implements semi-honest arithmetic and Boolean two- and multi-party protocols that use a third "helper" party to generate correlated randomness. CrypTen provides a "simulation" mode where the specified computation is performed on a single node in plaintext yet simulates all effects that computation in MPC would have on the results (e.g., due to limited fixed-point precision and function approximations). We leverage this mode to efficiently evaluate HyFL accuracy and later the impact of data-poisoning attacks; yet we run the full MPC computation to obtain realistic run-time and communication measurements. In all our experiments, the fixed-point precision in CrypTen is set to 22 decimal bits (the maximum developer-recommended number).
We use CrypTen to implement (I) private training on Layer II and (II) distributed aggregation on Layer I. CrypTen out of the box supports private inference between Layer II and III, which, however, is not the focus of our evaluation. We extend CrypTen with an identity layer to enable model conversions and re-sharing. Additionally, we extend the implementation of convolutional layers to enable full GPU-accelerated training for such model architectures. Moreover, we provide the necessary code to orchestrate the various parties and components, thereby creating a unified simulation framework.
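As a rough illustration of the Layer II training step, the following sketch follows CrypTen's tutorial-style training interface; it is our own, the toy model, synthetic data, and hyperparameters are placeholders, and in HyFL the training data arrives secret-shared from the clients rather than being generated locally.

```
import torch
import crypten

crypten.init()

# toy stand-in for the model W_{t-1} received from Layer I
net = torch.nn.Sequential(torch.nn.Linear(784, 10))
model = crypten.nn.from_pytorch(net, torch.empty(1, 784))
model.encrypt()
model.train()

criterion = crypten.nn.CrossEntropyLoss()
learning_rate = 0.05

# placeholders for the pooled, secret-shared cluster data <D_t> (one-hot labels)
x_enc = crypten.cryptensor(torch.rand(80, 784))
labels = torch.nn.functional.one_hot(torch.randint(0, 10, (80,)), 10).float()
y_enc = crypten.cryptensor(labels)

for _ in range(5):  # local epochs per training iteration
    output = model(x_enc)
    loss = criterion(output, y_enc)
    model.zero_grad()
    loss.backward()
    model.update_parameters(learning_rate)
```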
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Scheme & & Layer I & Layer II & Layer III & Remark \\ \hline Aggregation & \(\mathsf{N}_{s}\) & \(\mathsf{N}_{\mathcal{G}}=1\) & \(\mathsf{N}_{\mathcal{M}_{i}}=1\) & \(\mathsf{N}_{c_{i}}=1\) & \(\mathcal{M}_{i}=\mathcal{C}_{i}\) \\ \cline{2-6} (Single\(\mathcal{S}\)) & Role & (S.)Agg. & & ML Training & \\ \hline Aggregation & \(\mathsf{N}_{s}\) & \(\mathsf{N}_{\mathcal{G}}>1\) & \(\mathsf{N}_{\mathcal{M}_{i}}=1\) & \(\mathsf{N}_{c_{i}}=1\) & \(\mathcal{M}_{i}=\mathcal{C}_{i}\) \\ (Multi\(\mathcal{S}\)) & Role & S.Agg. & & ML Training & \\ \hline Hierarchical & \(\mathsf{N}_{s}\) & \(\mathsf{N}_{\mathcal{G}}=1\) & \(\mathsf{N}_{\mathcal{M}_{i}}=1\) & \(\mathsf{N}_{c_{i}}>1\) & \(\mathcal{M}_{i}\neq\mathcal{C}_{i}\) \\ \cline{2-6} FL & Role & (S.)Agg. & (S.)Agg. & ML Training & \\ \hline Hybrid FL (HyFL) & \(\mathsf{N}_{s}\) & \(\mathsf{N}_{\mathcal{G}}>1\) & \(\mathsf{N}_{\mathcal{M}_{i}}>1\) & \(\mathsf{N}_{c_{i}}>1\) & \(\mathcal{M}_{i}\neq\mathcal{C}_{i}\) \\ \cline{2-6}
**This Work** & Role & S.Agg. & PPML Training & Data Sharing & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Abstraction of existing FL schemes (cf. Tab. 1) using our HyFL architecture (cf. Fig. 1). \(\mathcal{S}\) denotes aggregation server(s), S.Agg. denotes secure aggregation, and (S.)Agg. marks secure aggregation as optional. See Tab. 2 for other notations.
#### Setup
Plaintext FL and CrypTen-based HyFL _simulations_ are run on a single computing platform equipped with two Intel Xeon Platinum 8168 CPUs, 1.5TB RAM, and 16 NVIDIA Tesla V100 GPUs. To provide realistic results for a _distributed MPC deployment_ with two computational and one helper party, we use three Amazon AWS g3s.xlarge instances with 4 vCPUs and 8GB GPU memory on a NVIDIA Tesla M60. These instances are located in the same AWS availability zone (due to high costs associated with routing traffic between different zones), yet we simulate intra- and inter-continental network connections by setting the bandwidth and latency to 1Gbps/100Mbps and 20ms/100ms, respectively.
#### Tasks
Following prior work (Ben-Itzhak et al., 2022), we evaluate HyFL on two standard image classification tasks: recognizing (I) hand-written digits using LeNet trained on MNIST (LeCun et al., 1998) and (II) objects in one of 10 classes using ResNet9 trained on CIFAR10 (Krizhevsky, 2009). We simulate 1000 clients from which 100 are randomly selected per round. For plain FL, we use batch size 8 and learning rate 0.005, and train locally for 5 epochs before central aggregation. For HyFL, we simulate 10 Layer II clusters that use a correspondingly _scaled_ batch size of 80 and a learning rate of 0.05 (Goyal et al., 2017).
#### Overview
Using the implementation and setups described above, we run an empirical accuracy evaluation to answer the following questions:
* **Q1:** What is the accuracy difference between FL (McMahan et al., 2017) and HyFL (in plaintext)?
* **Q2:** What is the impact on accuracy for HyFL when moving from plaintext to (simulated) MPC?
* **Q3:** What are the run-time and communication overheads of (MPC-based) HyFL compared to FL?
In the following, we describe how we answer the individual questions and discuss our results.
#### Q1 - FL vs HyFL
In Fig. 2, we compare the validation accuracy of FL and HyFL for image classification tasks for 500 rounds. Here, we note that HyFL converges significantly faster than regular FL, e.g., after 500 rounds of training ResNet9 on CIFAR10, HyFL reaches 85.68% validation accuracy, whereas regular FL only reaches 65.95%. We attribute this to HyFL pooling training data at cluster level and thus being able to exploit the known benefits of batching (Goyal et al., 2017; Bottou et al., 2018). Plots for up to 2000 epochs can be found in Fig. 6 in App. B.2.3.
Figure 2: Validation accuracy for FL and HyFL for 500 iterations (top: LeNet/MNIST, bottom: ResNet9/CIFAR10).
#### Q2 - Impact of MPC
In Fig. 3, we compare the plaintext validation accuracy (cf. Q1) to our CrypTen simulation to measure the impact of MPC (i.e., fixed-point arithmetic with 22 bits of decimal precision and truncation). Here, we can only provide results for LeNet/MNIST, as ResNet9 training on GPU in CrypTen is currently not supported due to limitations in the backward pass implementation. While there is a slight difference in initial rounds, both implementations quickly converge to almost the same validation accuracy, with only a small difference on the order of 0.1%.
#### Q3 - MPC Overhead
Finally, we study the overhead of MPC for secure training and aggregation. For this, we measure the run-times and communication for one iteration of LeNet/MNIST training (i.e., 5 local epochs) in AWS for one cluster (with 1Gbps bandwidth and 20ms latency) and one iteration of global aggregation (with 100Mbps bandwidth and 100ms latency). The training on cluster level takes 315.17s and requires 5279.25MB inter-server communication, which is multiple orders of magnitude overhead compared to local plaintext training in PyTorch (which only takes 0.07s). The aggregation over 10 cluster inputs is very efficient with 0.023s run-time and has no communication overhead since only linear operations are required, which can be conducted locally over shares in MPC.
Additional overhead that must be considered for clients is sharing data with the training cluster servers. In our setup, clients on expectation have to upload 3.31MB and 9.86MB in total for 500 rounds of training for MNIST and CIFAR10, respectively. Furthermore, we have to account for sharing the trained models from training clusters to the aggregation servers. Given the number of model parameters and CrypTen sharing semantics, each training cluster must transfer 0.49MB and 39.19MB per server for LeNet and ResNet9, respectively. This clearly shows that it is significantly more efficient for participants to upload their training data in secret-shared form compared to down- and uploading model parameters for each training round. Note that in our evaluation setup, training clusters and the aggregation layer use the same MPC configuration, hence no interactive re-sharing is necessary.
## 4 Attacks
Malicious FL participants can try to manipulate the global model to either produce specific outputs for specific inputs or simply degrade the overall accuracy. These attacks are referred to as backdoor (Bagdasaryan et al., 2020; Xie et al., 2020) and poisoning attacks (Tian et al., 2022), respectively. In terms of poisoning attacks, the two options are to perform data poisoning (Biggio et al., 2012; Tolpegin et al., 2020) or model poisoning (Wang et al., 2020; Fang et al., 2020). Since models in our setting are not available to clients at any time, malicious participants are _inherently_ limited to manipulating the training data they provide. This rules out the entire class of more powerful model-poisoning attacks (Bhagoji et al., 2019). Hence, we evaluate the effectiveness of state-of-the-art data-poisoning attacks in the HyFL setting as well as possible mitigations.
Figure 3: Validation accuracy for FL and HyFL in plaintext and MPC (CrypTen simulation) for LeNet/MNIST training.
### Data-Poisoning Attacks
In data-poisoning attacks, malicious clients can perform arbitrary manipulations to the training data. State-of-the-art attacks are based on label flipping, where clients keep the legitimate training samples, yet exchange the associated labels according to different strategies.
Specifically, we consider the following attacks (cf. App. B.2): random (RLF), static (SLF), dynamic (DLF), and targeted (TLF) label flipping. RLF changes the labels of samples at random (Xiao et al., 2012). In SLF, labels are swapped following a fixed assignment (Fang et al., 2020; Shejwalkar et al., 2022). In DLF, the attacker trains a surrogate model locally; this is then used to flip the label of each sample to the least probable output and thus can be considered the most powerful attack (Shejwalkar et al., 2022). Finally, TLF changes all labels from a source class to a specified target class (Tolpegin et al., 2020).
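To make these concrete, the following is a minimal sketch of the three simpler strategies; these are our own illustrative implementations, the exact label assignments used in the cited works may differ, and DLF additionally requires a locally trained surrogate model to pick the least probable class, which we omit here.

```
import torch

def random_label_flip(labels, num_classes):
    # RLF: replace each label by a uniformly random class
    return torch.randint(0, num_classes, labels.shape)

def static_label_flip(labels, num_classes):
    # SLF: fixed assignment, here the common choice c -> (num_classes - 1) - c
    return (num_classes - 1) - labels

def targeted_label_flip(labels, source, target):
    # TLF: relabel every sample of the source class as the target class
    poisoned = labels.clone()
    poisoned[labels == source] = target
    return poisoned
```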
### Robust Aggregation Schemes
The most common FL aggregation scheme is "FedAvg", which simply computes a (weighted) average of all inputs (cf. App. A.3). In contrast, _robust_ aggregation schemes detect and then exclude outliers, and are thus a suitable mitigation against data poisoning. An overview of such schemes is given in Shejwalkar et al. (2022). From the surveyed schemes, we identify "FLTrust" (Cao et al., 2021) and "Trimmed Mean" (TM) (Yin et al., 2018) as the most MPC-efficient ones.
_FLTrust_ assumes the aggregator has access to a clean training set and can train the global model of the previous iteration on that; then it measures the cosine similarity of the own training result against the inputs of participants and excludes the least similar ones.
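A plaintext sketch of the FLTrust rule of Cao et al. (2021), as we understand it (our own illustration; in our evaluation the scheme runs obliviously in CrypTen): each input is rescaled to the magnitude of the aggregator's own update and weighted by the ReLU of its cosine similarity, so dissimilar inputs effectively receive zero weight.

```
import torch
import torch.nn.functional as F

def fltrust_aggregate(client_updates, server_update):
    # client_updates: (num_inputs, num_params); server_update: (num_params,)
    trust = F.relu(F.cosine_similarity(client_updates, server_update.unsqueeze(0), dim=1))
    scale = server_update.norm() / client_updates.norm(dim=1).clamp(min=1e-12)
    scaled = client_updates * scale.unsqueeze(1)
    if trust.sum() <= 0:
        return server_update  # no trusted input; fall back to the aggregator's update
    return (trust.unsqueeze(1) * scaled).sum(dim=0) / trust.sum()
```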
_Trimmed Mean_ for each coordinate computes the mean across the provided gradient updates and excludes the values that deviate the most in either direction of the mean. For our experiments, the number of excluded coordinates corresponds to the maximum assumed poison rate in the system (e.g., when assuming at most 20% of clients are corrupted, we discard the top and bottom 20%). Performing this aggregation obliviously in MPC requires implementing costly sorting to determine the ranking in each coordinate.
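In plaintext, the coordinate-wise rule looks as follows (our own sketch; the per-coordinate sorting is exactly what makes the oblivious MPC version costly):

```
import torch

def trimmed_mean(updates, trim_k):
    # updates: (num_inputs, num_params); per coordinate, drop the trim_k largest
    # and trim_k smallest values and average the remaining ones
    sorted_vals, _ = torch.sort(updates, dim=0)
    return sorted_vals[trim_k: updates.shape[0] - trim_k].mean(dim=0)
```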
We observe that, intuitively, data poisoning in contrast to model poisoning does not result in specific coordinates producing extreme outliers. Hence, we propose a heuristic "Trimmed Mean Variant" that computes the mean and ranking only for a small _randomly sampled subset_ of coordinates. Then, during aggregation, it excludes those gradient updates that occurred the most as outliers in the sample. We detail the algorithm of our variant in Alg. 2.
```
0:\(\mathcal{W}=\{W^{i}\}_{i\in[\mathsf{N}_{\mathcal{M}}]}\), \(\alpha\), \(\beta\), \(\gamma\) # \(\gamma=|W^{i}|\), \(\alpha\) - trim threshold, \(\beta\) - sample size
0:\(\langle W_{\mathrm{AGG}}\rangle\) # aggregated model after removing outliers
1:initialize:\(\mathcal{Z}\leftarrow\emptyset\) # set of outliers
2:\(\mathcal{I}\leftarrow\) sample random \(\beta\) indices from \([1,\gamma]\).
3:\(\mathcal{U}\leftarrow\) TM-List\((\mathcal{W}^{\mathcal{I}},\alpha)\) # \(\mathcal{W}^{\mathcal{I}}\) - truncated \(\mathcal{W}\) with only indices in \(\mathcal{I}\); TM-List performs the Trimmed Mean algorithm and returns the \(2\alpha\) outlier values (top and bottom \(\alpha\)) for each index in \(\mathcal{I}\), \(|\mathcal{U}|=2\alpha\beta\)
4:\(\mathcal{V}\leftarrow\) TopK-Hitter\((\mathcal{U},2\alpha)\) # returns list of \(2\alpha\) indices that occur most frequently in \(\mathcal{U}\)
5:for all \(i\in\mathcal{V}\) do
6:\(\mathcal{Z}\leftarrow\mathcal{Z}\bigcup\{W^{i}\}\)
7:endfor
8:\(\langle W_{\mathrm{AGG}}\rangle\leftarrow\) Agg\((\mathcal{W}\setminus\mathcal{Z})\)
```
**Algorithm 2** Our Trimmed Mean (TM) Variant in HyFL
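For readability, a plaintext sketch of Alg. 2 (our own illustration; in HyFL these steps run obliviously in CrypTen, and the parameters mirror the algorithm: trim_k corresponds to \(\alpha\) and sample_size to \(\beta\)):

```
import torch

def trimmed_mean_variant(updates, trim_k, sample_size):
    # updates: (num_inputs, num_params)
    num_inputs, num_params = updates.shape
    idx = torch.randperm(num_params)[:sample_size]            # the random index set I
    ranking = torch.argsort(updates[:, idx], dim=0)           # per-coordinate ranking of inputs
    outliers = torch.cat([ranking[:trim_k], ranking[-trim_k:]]).flatten()  # TM-List
    counts = torch.bincount(outliers, minlength=num_inputs)
    drop = torch.topk(counts, 2 * trim_k).indices             # TopK-Hitter: most frequent outliers
    keep = torch.ones(num_inputs, dtype=torch.bool)
    keep[drop] = False
    return updates[keep].mean(dim=0)                          # Agg over the remaining inputs
```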
### Evaluation
We now empirically evaluate the impact of data-poisoning attacks on HyFL considering different (robust) aggregation schemes. For this, we implement all four attacks in our framework and add CrypTen-based implementations of the three robust aggregation schemes. Using the setup described in §3, we want to answer the following questions:
* **Q4:** What is the impact of data-poisoning attacks on the accuracy of FL and HyFL using FedAvg and robust aggregation schemes?
* **Q5:** What is the run-time and communication overhead for different robust aggregation schemes in MPC?
* **Q6:** How does our TM variant compare to regular TM w.r.t. accuracy and MPC performance?
#### Q4 - Attack Impact
For our evaluation, we consider three poison rates (0.01, 0.1, 0.2) and three distributions of attackers: in the _equally-distributed_ setting, we assume that on expectation each cluster has the same number of malicious clients; in the _focused_ setting, we assume that malicious clients are concentrated on as few clusters as possible while there is still an honest majority in each cluster (a standard assumption in FL); finally, in the _cluster-focused_ setting, we see what happens if we lift the honest-majority assumption and concentrate all malicious clients in as few clusters as possible. In Fig. 4, we study the effectiveness of the most powerful DLF data-poisoning attack on both regular FL and HyFL when training ResNet9 on CIFAR10 in the equally distributed and focused setting. Results for less powerful attacks, the unrealistic cluster-focused setting, and training of LeNet on MNIST can be found in App. B.2.4.
For the fairly aggressive 0.2 poison rate, we see in both visualized attacker distributions a significant negative impact of the DLF attack on FL when using FedAvg with drops below 30% accuracy. However, these can be successfully mitigated with robust aggregation schemes. While there is also negative impact on HyFL, especially in the focused setting, the accuracy even with FedAvg never drops below that of FL. And even though robust aggregation schemes help to slightly smoothen the curve, we conclude that _applying defenses in HyFL against data-poisoning attacks can be considered optional but not strictly necessary_.
#### Q5 - Robust Aggregation in MPC
We evaluate the run-time and communication overhead of our FLTrust and TM implementation in CrypTen in Tab. 4. The run-time overhead for both robust aggregation schemes compared to FedAvg is four to five orders of magnitude. Also, FLTrust requires 5\(\times\) more run-time and communication than TM. Given that both produce fairly similar results when applied to HyFL, the overhead for FLTrust seems not warranted.
#### Q6 - Trimmed Mean Variant
In Fig. 5, we additionally compare the effectiveness of our TM variant to the original TM (Yin et al., 2018) for three sample sizes (10, 100, and 1000). It turns out that our heuristic approach barely reduces the effectiveness, even with aggressive parameters. In fact, in the focused setting, the TM variant outperforms the original. This is because our variant
\begin{table}
\begin{tabular}{l r r r} \hline \hline Parameter & FedAvg & Trimmed Mean & FLTrust \\ \hline Comm. (in MB) & 0 & 1021.59 & 5329.19 \\ Time (in s) & 0.023 & 326.57 & 1713.44 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Communication (Comm.) and run-time (Time) for various aggregation schemes in CrypTen.
Figure 4: Validation accuracy for FL and HyFL with FedAvg, FLTrust, and trimmed mean as aggregation schemes under DLF attack for three different poison rates (top: equally distributed, bottom: focused setting).
completely excludes gradient updates of (poisoned) outliers, whereas in regular trimmed mean, those poisoned updates might still be considered for some coordinates. Results for all other settings are presented in App. B.2.5.
In Tab. 5, we also provide run-times and communication results for our optimizations. Compared to the original with 1021.59MB of communication, we can see an improvement by two orders of magnitude with a communication of 11.90MB for the variant with 100 random samples. However, we see a higher and fairly stable run-time across all three examined variants. This is because the algorithm for determining the overall ranking of outliers across coordinates increases the number of MPC communication rounds compared to the original. In the studied inter-continental WAN setting, this has severe impact but does not correspond to actual compute time.
Overall, if HyFL is combined with a robust aggregation scheme, our TM variant offers an excellent trade-off between accuracy and MPC overhead compared to significantly more expensive FLTrust and the original TM.
## 5 Conclusion
In this work, we presented HyFL, a novel unified abstraction and framework for (hierarchical) federated learning that provides complete model privacy, faster convergence, smaller attack surface and better resilience against poisoning attacks than regular FL.
As part of future work, we plan to investigate potential further performance improvements by incorporating quantization techniques for private training (Keller and Sun, 2022) and secure aggregation (Ben-Itzhak et al., 2022).
|
2301.10905 | Uncovering Optimal Attached Eddies in Wall-bounded Turbulence | Townsend's attached eddy hypothesis decomposes the logarithmic layer of high
Reynolds number turbulent boundary layers as a field of randomly distributed
self-similar eddies that are assumed to be attached to the wall and obey a
scaling relationship with the wall-distance. The attached eddy model has
emerged as an elegant and successful theory to explain the physics, structure
and statistics of the logarithmic layer in wall turbulence. Building on the
statistical framework of Woodcock and Marusic (2015), the present work details
quantitative results on the structure of the attached eddies and their impact
on velocity moment predictions. An inverse problem is posed to infer the ideal
eddy contribution function that yields precise first and second order velocity
moments obtained from direct numerical simulations. This ideal function is
further simplified and is used to guide the proposal of a hairpin-type
prototypical eddy which is shown to yield reasonable predictions of the
velocity moments. This hairpin-type structure is improved upon by a) solving a
constrained optimization problem to infer an idealized eddy shape, and b)
inferring the circulation distribution of a hairpin packet. The associated
forward and inverse modeling procedure is general enough to serve as a
foundation to model the flow beyond the log layer and the codes are open
sourced to the community to aid further research. | Karthik Duraisamy | 2023-01-26T02:27:43Z | http://arxiv.org/abs/2301.10905v3 | # Uncovering Optimal Attached Eddies in Wall-bounded Turbulence
###### Abstract
Townsend's attached eddy hypothesis decomposes the logarithmic layer of high Reynolds number turbulent boundary layers as a field of randomly distributed self-similar eddies that are assumed to be attached to the wall and obey a scaling relationship with the wall-distance. The attached eddy model has emerged as an elegant and successful theory to explain the physics, structure and statistics of the logarithmic layer in wall turbulence. Building on the statistical framework of Woodcock & Marusic (2015), the present work details quantitative results on the structure of the attached eddies and their impact on velocity moment predictions. An inverse problem is posed to infer the ideal eddy contribution function that yields precise first and second order velocity moments obtained from direct numerical simulations. This ideal function is further simplified and is used to guide the proposal of a hairpin-type prototypical eddy which is shown to yield reasonable predictions of the velocity moments. This hairpin-type structure is improved upon by a) solving a constrained optimization problem to infer an idealized eddy shape, and b) inferring the circulation distribution of a hairpin packet. The associated forward and inverse modeling procedure is general enough to serve as a foundation to model the flow beyond the log layer and the codes are open sourced to the community to aid further research.
Attached eddy hypothesis, Turbulent flows, Near-wall turbulence, Inverse modeling
## 1 Introduction
Wall-bounded turbulent flows remain a fundamental problem in fluid mechanics, and the structure of the near-wall and logarithmic regions at high Reynolds numbers has been the subject of sustained theoretical, experimental, and computational effort. A particularly influential idea is Townsend's attached eddy hypothesis (Townsend 1976), which postulates that the energy-containing motions in the logarithmic layer can be represented by a hierarchy of geometrically self-similar eddies attached to the wall. This picture provides a kinematic framework that links the statistics of the velocity field to the structure of the eddies populating the logarithmic
region. The attached eddy model (AEM) was given a firm mathematical footing by Perry & Chong (1982), and further developed by Marusic and co-workers over the past 25 years (Perry & Marusic 1995; Marusic _et al._ 2013; Woodcock & Marusic 2015; de Silva _et al._ 2016_b_). Among other useful features, the AEM is able to explain scaling behaviors of velocity moments, provide an explanation for uniform momentum zones (de Silva _et al._ 2016_a_), and serve as a predictive model for the von-Karman constant as a function of Reynolds number. A review of the theory and developments of the AEM of wall turbulence can be found in Marusic & Monty (2019). Over the past few years, theoretical and numerical analyses (e.g. McKeon (2019); Lozano-Duran & Bae (2019)) have added further credibility to this theory. Despite the success of the AEM, it is pertinent to remember that it is fundamentally a statistical theory, and other hypotheses (e.g Davidson _et al._ (2006); Davidson & Krogstad (2009)) can also be used to explain the statistical characteristics of the log layer.
The attached eddy hypothesis is based on the principle that the physics and statistical properties of the logarithmic layer can be explained by considering geometrically self-similar eddies that extend from the wall. A key assumption is that the length scale of each individual eddy follows a probability distribution that is a function of the distance from the wall. Hence, the term 'attached' alludes to the fact that every eddy can be assumed to be randomly placed on the wall. The AEM is effectively an inviscid theory, yet the range of scales is set by the Reynolds number.
A major advance was made by Woodcock & Marusic (2015) (henceforth W&M) who established a rigorous statistical foundation for AEM. They provide a complete derivation for _all_ the velocity moments and demonstrate logarithmic scaling relationships therein. They were also able to provide expressions for the skewness and flatness of the wall-normal and spanwise fluctuations as a function of the Reynolds number. While variants of the AEM continue to be developed in the literature (e.g. Hwang & Eckhardt (2020)), we consider W&M as the starting point of our exploration and exclusively consider zero pressure gradient boundary layers. Particularly, the following questions are addressed:
(i) Townsend's original work introduced the concept of the 'eddy intensity function', later formalized by W&M as the 'eddy contribution function', defined to be proportional to the average (in the streamwise and spanwise directions) of the individual and pairwise velocity components. Velocity moments can then be described by a weighted integral of the eddy contribution function over all eddy sizes. While Townsend gave remarkably insightful descriptions of the nature of this function (Figure 5.7, page 155 of Townsend (1976)), its precise form is not typically discussed in the literature. We provide a quantitative characterization of this function by solving an inverse problem that infers its shape to precisely match first- and second-order velocity moments obtained from DNS data. A simple model of the eddy contribution function is proposed and the resulting statistics are also described.
(ii) Since the AEM invokes self-similarity, the mechanics of the velocity fluctuations is governed by the prototypical eddy shape. W&M utilize an eddy that has a complex shape (Figure 1 in W&M), presumably configured using insight from DNS and/or PIV fields. It is also unclear whether the W&M eddy satisfies the Navier-Stokes equations. This work assumes extremely simple line-vortex-based eddies and uses them as building blocks for quantitative development.
(iii) The sensitivity of the statistics and velocity moments to the eddy structure is unknown. In this work, we solve a constrained optimization problem to extract optimal eddy shapes. It is noted that Perry & Chong (1982) provide valuable yet _qualitative_ insight on the impact of simple eddy shapes. The role of individual vs packets of eddies is also not well-quantified. In this work, we attempt to characterize the impact of both of these prototypes on the statistics of interest.
(iv) The AEM is one of the most elegant and successful theories that explains the physics, structure and statistics in turbulence. While the theory has always provided valuable insight and W&M have provided a firm statistical foundation, in the author's opinion, the AEM remains somewhat esoteric to a sizeable fraction of the fluid mechanics community. The author hopes that the building-block nature of the present work, and the fact that the associated forward and inverse modeling tools are available to the community 1, will make this topic more accessible and serve as a starting point for further research.
Footnote 1: [https://github.com/CaslabUM/AttachedEddy](https://github.com/CaslabUM/AttachedEddy)
## 2 Statistics of attached eddies
We begin with a short description of the attached eddy hypothesis and modeling. This presentation generally follows W&M with a slightly different pedagogy. Consider a collection of \(n\) eddies (Figure 1) of length scale \(h_{e,i}\) that are placed on the wall at locations \(\mathbf{x}_{e,i}\). Since these are attached eddies, it is implicit that only the streamwise and spanwise components of \(\mathbf{x}_{e,i}\) are variable. Define \(\mathbf{h}_{e}\triangleq\{h_{e,1},h_{e,2},..,h_{e,n}\}\) and \(\mathbf{X}_{e}\triangleq\{\mathbf{x}_{e,1},\mathbf{x}_{e,2},..,\mathbf{x}_{e,n}\}\). Assuming self-similarity and linearity, a quantity \(q\) (e.g. spanwise velocity) evaluated at a location \(\mathbf{x}\) can be determined using the superposition \(q(\mathbf{x},\mathbf{X}_{e},\mathbf{h}_{e},n)\triangleq\sum_{i=1}^{n}q\left(\frac{\mathbf{x}-\mathbf{x}_{e,i}}{h_{e,i}}\right).\)
Restricting our attention to a streamwise and spanwise square wall patch of side \(2L\) in which the eddies are assumed to be independently and uniformly distributed, and assuming that the range of eddy length scales follows a probability density function \(p(h)\), the expectation of \(q\) in this region is given by
\[q(\mathbf{x},n)\triangleq\mathbb{E}_{\mathbf{X}_{e},\mathbf{h}_{e}}[q(\mathbf{ x},\mathbf{X}_{e},\mathbf{h}_{e},n)]=\frac{1}{4L^{2}}\int_{h_{min}}^{h_{max}} \int_{-L}^{L}\int_{-L}^{L}\sum_{i=1}^{n}q\left(\frac{\mathbf{x}-\mathbf{x}_{e,i}}{h}\right)p(h)d\mathbf{x}_{e,i}dh. \tag{1}\]
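As an illustrative aside (not part of the original derivation), the expectation in Eq. (1) can be checked by direct Monte-Carlo sampling. The Python sketch below uses a hypothetical induced-quantity kernel and anticipates the \(1/h^{3}\) eddy-size distribution introduced later; all numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_single(x, y, z):
    # hypothetical kernel: contribution of one unit eddy centred at the origin
    return z * np.exp(-(x**2 + y**2 + z**2))

def sample_h(n, h_min, h_max):
    # inverse-transform sampling from p(h) = (2/h^3) / (1/h_min^2 - 1/h_max^2)
    u = rng.uniform(size=n)
    return 1.0 / np.sqrt(1.0/h_min**2 - u*(1.0/h_min**2 - 1.0/h_max**2))

def q_mean(z, n, L, h_min, h_max, n_real=2000):
    # Monte-Carlo estimate of Eq. (1): average the superposed contributions of
    # n eddies with uniformly random wall positions and randomly drawn sizes
    total = 0.0
    for _ in range(n_real):
        xe = rng.uniform(-L, L, size=(n, 2))
        h = sample_h(n, h_min, h_max)
        total += np.sum(q_single(-xe[:, 0]/h, -xe[:, 1]/h, z/h))
    return total / n_real

print(q_mean(z=0.5, n=50, L=20.0, h_min=0.3, h_max=3.0))
```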
Homogenizing in the \(x,y\) directions, assuming \(L\) is large enough that each eddy centered at the origin has a negligible induced contribution outside an area \(4L^{2}\) (refer to Campbell's
Figure 1: Schematic of discrete representation of attached eddies with \(n=21\) and \(m=3\).
theorem, Rice (1944), and the appendix of W&M), it can be shown that
\[q(z,n)\triangleq\mathbb{E}_{x,y}\left[q(\mathbf{x},n)\right]\approx\frac{n}{4L^{ 2}}\int_{h_{min}}^{h_{max}}\int_{-L}^{L}\int_{-L}^{L}q\left(\frac{\mathbf{x}}{h }\right)p(h)dxdydh.\]
Note that in the above equation, the entire field is written as a function of _one_ prototypical eddy, which is scaled by the probability density function of the eddy sizes \(p(h)\) and the eddy density \(n/(4L^{2})\). If the mean eddy density is \(\beta\), then using Poisson's law, the expected value (over all numbers of eddies) is
\[Q(z)\triangleq\mathbb{E}_{n}[q(z,n)]=\beta\int_{h_{min}}^{h_{max}}\int_{-L}^{L }\int_{-L}^{L}q\left(\frac{\mathbf{x}}{h}\right)p(h)dxdydh. \tag{2}\]
Note that all the \(q\)'s defined above are random variables, yet \(Q(z)\) is a deterministic quantity. Now we are in a position to define the mean streamwise velocity \(U(z)\) as a superposition of eddies of various sizes \(h\). This can be written in terms of the induced velocity field \(u_{1}(\cdot)\) of one prototypical eddy.
\[U(z) \triangleq\beta\int_{h_{min}}^{h_{max}}\int_{-L}^{L}\int_{-L}^{L}u_ {1}\left(\frac{\mathbf{x}}{h}\right)p(h)dxdydh+U_{\infty}\] \[=\beta\int_{h_{min}}^{h_{max}}p(h)h^{2}\left[\int_{-L/h}^{L/h} \int_{-L/h}^{L/h}u_{1}\left(\frac{\mathbf{x}}{h}\right)d\left(\frac{x}{h} \right)d\left(\frac{y}{h}\right)\right]\,dh+U_{\infty}\] \[=\beta\int_{h_{min}}^{h_{max}}p(h)h^{2}I_{1}\left(\frac{z}{h} \right)dh+U_{\infty}, \tag{3}\]
where the mean flow eddy contribution function is
\[I_{1}\left(\frac{z}{h}\right)\triangleq\int_{-L/h}^{L/h}\int_{-L/h}^{L/h}u_{1 }\left(\frac{\mathbf{x}}{h}\right)d\left(\frac{x}{h}\right)d\left(\frac{y}{h} \right). \tag{4}\]
Similarly, we can define the Reynolds stress tensor as
\[R_{ij}(z)\triangleq\beta\int_{h_{min}}^{h_{max}}p(h)h^{2}I_{ij}\left(\frac{z}{h}\right)dh,\] \[\text{where }I_{ij}\left(\frac{z}{h}\right)\triangleq\int_{-L/h}^{L/h}\int_{-L/h}^{L/h}u_{i}\left(\frac{\mathbf{x}}{h}\right)u_{j}\left(\frac{\mathbf{x}}{h}\right)d\left(\frac{x}{h}\right)d\left(\frac{y}{h}\right).\]
Note the presence of an additional freestream velocity in the definition of \(U(z)\). This is required because we are working with induced velocity fluctuations. The final piece we need is the probability distribution of the eddy sizes. Using insight from Townsend (1976) and Perry & Chong (1982), W&M propose that \(p(h)\propto 1/h^{3}\) and thus \(p(h)=\frac{2/h^{3}}{1/h_{\min}^{2}-1/h_{\max}^{2}}\), with the note that \(h_{\max}=\delta\), i.e., the boundary layer thickness, and \(h_{\min}\) is set by the friction Reynolds number \(Re_{\tau}\). For notational simplicity, all length and velocity scales should be assumed to be in wall units henceforth.
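Before proceeding, it may help to see the forward model of Eq. (3) written out numerically. The Python sketch below assumes the \(1/h^{3}\) eddy-size distribution above; the contribution function, eddy density and size range are hypothetical placeholders rather than calibrated values, and the quadrature is a plain Riemann sum.

```python
import numpy as np

def p_h(h, h_min, h_max):
    # p(h) = (2/h^3) / (1/h_min^2 - 1/h_max^2) on [h_min, h_max], zero elsewhere
    norm = 1.0/h_min**2 - 1.0/h_max**2
    return np.where((h >= h_min) & (h <= h_max), 2.0/(h**3 * norm), 0.0)

def mean_velocity(z, I1, beta, h_min, h_max, U_inf=0.0, n_h=4000):
    # Eq. (3): U(z) = beta * int p(h) h^2 I1(z/h) dh + U_inf
    h = np.linspace(h_min, h_max, n_h)
    dh = h[1] - h[0]
    return beta * np.sum(p_h(h, h_min, h_max) * h**2 * I1(z/h)) * dh + U_inf

def I1(zeta):
    # placeholder contribution function; any decaying shape works for illustration
    return -np.exp(-zeta)

print(mean_velocity(z=200.0, I1=I1, beta=0.02, h_min=100.0, h_max=5000.0))
```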
## 3 The ideal eddy contribution function and a practical model
We now address the following question: What should the 'perfect' eddy contribution function look like? Townsend and Perry & Chong have provided insight into the behavior of this function, but the current objective is to be precise. Towards this end, consider a
discrete set of eddy sizes \(\tilde{\mathbf{h}}=\{h_{1},h_{2},h_{3},..,h_{m}\}=\{h_{\min},h_{\min}+\Delta h,h_{ \min}+2\Delta h,..,h_{\max}\}\) and a similar discretization of the wall-normal coordinate \(\tilde{\mathbf{z}}=\{z_{1},z_{2},z_{3},..,z_{m}\}=\{h_{\min},h_{\min}+\Delta h,h_ {\min}+2\Delta h,..,h_{\max}\}\). Then, the discretized velocity moments are
\[U(z_{r};\mathbf{d}) =\beta\sum_{l=1}^{m}p(h_{l})h_{l}^{2}\left[\sum_{q}\sum_{p}u_{1}\left(\frac{x_{p},y_{q},z_{r}}{h_{l}};\mathbf{d}\right)\left(\frac{\Delta x}{h_{l}}\right)\left(\frac{\Delta y}{h_{l}}\right)\right]\Delta h+U_{\infty}\] \[R_{ij}(z_{r};\mathbf{d}) =\beta\sum_{l=1}^{m}p(h_{l})h_{l}^{2}\left[\sum_{q}\sum_{p}u_{i}\left(\frac{x_{p},y_{q},z_{r}}{h_{l}};\mathbf{d}\right)u_{j}\left(\frac{x_{p},y_{q},z_{r}}{h_{l}};\mathbf{d}\right)\left(\frac{\Delta x}{h_{l}}\right)\left(\frac{\Delta y}{h_{l}}\right)\right]\Delta h.\]
This can be represented as a matrix vector product in the form \(\mathbf{U}(\tilde{\mathbf{z}})-U_{\infty}=\mathbf{Ab}\), where \(A_{ij}=I_{1}(z_{i}/h_{j})\) and \(b_{i}=\beta p(h_{i})h_{i}^{2}\Delta h\). We would now like to extract these coefficients. Towards this end, we define the eddy contribution function at selected locations, and interpolate for the values in other locations in a piecewise linear fashion. In other words
\[\mathbf{A}=\begin{bmatrix}I_{1}(1)&I_{1}(h_{1}/h_{2})&I_{1}(h_{1}/h_{3})&..&I_ {1}(h_{1}/h_{m-1})&I_{1}(h_{1}/h_{m})\\ I_{1}(h_{2}/h_{1})&I_{1}(1)&I_{1}(h_{2}/h_{3})&..&I_{1}(h_{2}/h_{m-1})&I_{1}(h_ {2}/h_{m})\\..&..&..&..&..\\ I_{1}(h_{m}/h_{1})&I_{1}(h_{m}/h_{2})&I_{1}(h_{m}/h_{3})&..&I_{1}(h_{m}/h_{m-1}) &I_{1}(1)\end{bmatrix}\]
\[\triangleq\begin{bmatrix}c_{1}&c_{m+1}&c_{m+2}&..&c_{2m-2}&c_{2m-1}\\ c_{2}&c_{1}&I_{1}(h_{2}/h_{3})&..&I_{1}(h_{2}/h_{m-1})&I_{1}(h_{2}/h_{m})\\..&..&..&..&..\\ c_{m}&I_{1}(h_{m}/h_{2})&I_{1}(h_{m}/h_{3})&..&I_{1}(h_{m}/h_{m-1})&c_{1}\end{bmatrix}.\]
The unknown \(I_{1}(\cdot)\) values are then interpolated from the nodal locations \(\mathbf{c}\). Note that there are more unknowns (\(2m\)) than equations (\(m\)), and so while it is possible to determine unknowns that lead to a perfect match of reference (DNS or experimental) velocity profiles, a choice has to be made on the problem formulation. We choose to define the following least norm problem:
\[\{\mathbf{c}_{\text{opt}},\beta_{\text{opt}}\}=\min_{\mathbf{c},\beta}|| \mathbf{U}_{ref}\left(\tilde{\mathbf{z}}\right)-U_{\infty}-\mathbf{Ab}||_{2} ^{2}.\]
Figure 2 shows the optimal eddy contribution functions inferred from the channel flow data (Lee & Moser, 2015) at friction Reynolds number \(Re_{\tau}\approx 5200\). Figure 3 confirms that the first and second velocity moments are perfectly reproduced.
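For concreteness, a stripped-down version of this inverse problem is sketched below in Python. It exploits the fact that the model is linear in the nodal values, so the minimum-norm least-squares solution is obtained with a single `lstsq` call; the interpolation is performed in \(\log(z/h)\) purely for convenience and \(\beta\) is treated as known, so the sketch differs in detail from the formulation above and is independent of the tools released with this work.

```python
import numpy as np

def infer_contribution_function(U_ref, U_inf, h, beta, p):
    """Infer nodal values of I1 so that the AEM mean flow reproduces U_ref.
    h: eddy sizes (reused as wall-normal locations z); p: p(h) evaluated at h."""
    dh = h[1] - h[0]
    b = beta * p * h**2 * dh                        # b_j = beta p(h_j) h_j^2 dh
    ratio = np.add.outer(np.log(h), -np.log(h))     # log(z_i / h_j)
    nodes = np.unique(np.concatenate([ratio[0, :], ratio[:, 0]]))
    # the model U - U_inf = A b is linear in the nodal values, so assemble the
    # design matrix column by column from piecewise-linear basis functions
    M = np.column_stack([np.interp(ratio, nodes, e) @ b for e in np.eye(len(nodes))])
    c, *_ = np.linalg.lstsq(M, U_ref - U_inf, rcond=None)  # minimum-norm solution
    return nodes, c
```

Applied to a reference mean-velocity profile, the recovered nodal values play the role of the optimal contribution function shown in Figure 2.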
Consider a collection of attached eddies that results in a simple eddy contribution function of the form shown in Figure 4.
Figure 2: Optimal influence functions for \(Re_{\tau}\approx 5200\) for the mean flow (left) and Reynolds stresses (right, with red=streamwise; green=spanwise; blue=wall-normal, and black=shear).
It will be shown below that it is indeed possible to construct an attached eddy that corresponds to such a function. Using this influence function, one can reconstruct the mean streamwise velocity as
\[U(z) =\frac{2\beta}{1/h_{\min}^{2}-1/h_{\max}^{2}}\int_{h_{min}}^{h_{max}}\frac{I_{1}\left(z/h\right)}{h}dh+U_{\infty}\] \[=\frac{2\beta}{1/h_{\min}^{2}-1/h_{\max}^{2}}\int_{z/h_{max}}^{z/h_{min}}\frac{I_{1}\left(z/h\right)}{z/h}d(z/h)+U_{\infty}\] \[\approx\frac{2\beta}{1/h_{\min}^{2}-1/h_{\max}^{2}}\left[\int_{z/h_{max}}^{1}\frac{-a_{0}}{z/h}d(z/h)+\int_{1}^{a_{1}/a_{2}}\frac{a_{1}-a_{2}z/h}{z/h}d(z/h)\right]+U_{\infty}\] \[=\frac{2\beta}{1/h_{\min}^{2}-1/h_{\max}^{2}}\left[a_{0}\log[z/h_{\max}]+a_{1}\log[a_{1}/a_{2}]-a_{1}+a_{2}\right]+U_{\infty}.\]
It is thus clear that the Karman constant can be constructed as
\[\kappa=\frac{1/h_{\min}^{2}-1/h_{\max}^{2}}{2a_{0}\beta}.\]
Note that W&M use a Taylor series approximation on a generic eddy to derive an alternate expression for \(\kappa\).
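A quick numerical check of this identity takes only a few lines of Python; all parameter values below are hypothetical and serve only to verify that the closed-form slope equals \(1/(\kappa z)\).

```python
import numpy as np

a0, a1, a2 = 1.0, 2.0, 2.0                  # hypothetical model coefficients
beta, h_min, h_max = 0.02, 100.0, 5000.0    # hypothetical eddy density and size range
norm = 1.0/h_min**2 - 1.0/h_max**2

def U(z, U_inf=0.0):
    # closed form derived above for the piecewise-linear contribution function
    return 2*beta/norm * (a0*np.log(z/h_max) + a1*np.log(a1/a2) - a1 + a2) + U_inf

kappa = norm / (2*a0*beta)
z = np.array([200.0, 400.0, 800.0])
slope = (U(1.001*z) - U(z)) / (0.001*z)     # finite-difference dU/dz
print(np.allclose(slope, 1.0/(kappa*z), rtol=1e-3))   # True: slope is 1/(kappa z)
```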
Now, we answer the question whether it is possible to recreate the above hypothetical influence function using an attached eddy. Consider a square hairpin (Figure 5) with unit circulation as the attached eddy, along with its image across the \(z=0\) plane to enforce
Figure 4: A hypothetical model of the eddy influence function corresponding to the mean streamwise velocity (blue dashed lines) compared to the optimal influence function.
Figure 3: Reference (symbols) vs optimal attached eddy statistics for \(Re_{\tau}\approx 5200\).
no-penetration. It is remarked that this eddy has a square structure, yet the leg of the eddy aligned with the wall is canceled by an equal and opposite image vortex pair, and is thus consistent with the kinematics. Using the Biot-Savart law to compute the induced velocity \(u_{1}\) in Eqn. 4, Figure 6 shows that such a simple attached eddy can reproduce the mean streamwise velocity extremely accurately. Further, the second moments are also seen to be reasonably well-predicted as shown in Figure 6.
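A minimal implementation of such an eddy is sketched below: each side of the hairpin is a straight vortex filament whose induced velocity follows from the Biot-Savart law for a finite segment, and the wall is represented by an image filament of opposite circulation. The corner coordinates are hypothetical, and the wall-parallel closing leg is omitted since it is cancelled by its image.

```python
import numpy as np

def segment_velocity(p, a, b, gamma=1.0, eps=1e-10):
    # Biot-Savart induced velocity at point p of a straight filament from a to b
    r1, r2, r0 = p - a, p - b, b - a
    cross = np.cross(r1, r2)
    denom = np.dot(cross, cross)
    if denom < eps:                 # field point (nearly) on the filament axis
        return np.zeros(3)
    k = gamma/(4*np.pi*denom) * (np.dot(r0, r1)/np.linalg.norm(r1)
                                 - np.dot(r0, r2)/np.linalg.norm(r2))
    return k * cross

def hairpin_velocity(p, corners, gamma=1.0):
    # open polyline of filaments plus its image in z = 0 with opposite circulation
    mirror = np.array([1.0, 1.0, -1.0])
    u = np.zeros(3)
    for a, b in zip(corners[:-1], corners[1:]):
        u += segment_velocity(p, a, b, gamma)
        u += segment_velocity(p, a*mirror, b*mirror, -gamma)
    return u

c = np.cos(np.pi/4)   # unit square hairpin inclined at 45 degrees (hypothetical layout)
corners = np.array([[0.0, -0.5, 0.0], [c, -0.5, c], [c, 0.5, c], [0.0, 0.5, 0.0]])
print(hairpin_velocity(np.array([0.3, 0.0, 0.2]), corners))
```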
## 4 The optimal attached eddy
To assess whether the above attached eddy shape can be improved to yield more accurate statistics, we pose a constrained optimization problem. The prototypical attached eddy is now represented using 20 parameters as shown in Fig. 7. It is noted that the shape is symmetric across the \(y=0\) plane, and a mirror image is used across the \(z=0\) plane to satisfy inviscid boundary conditions.
The following optimization problem is solved
\[\mathbf{d}_{\text{opt}} =\min_{\mathbf{d},\beta}(||\mathbf{U}_{ref}\left(\mathbf{\bar{z}} \right)-\mathbf{U}(\mathbf{\bar{z}};\mathbf{d})||_{2}^{2}+||\mathbf{R}_{11, ref}\left(\mathbf{\bar{z}}\right)-\mathbf{R}_{11}(\mathbf{\bar{z}};\mathbf{d})||_{2}^ {2}+||\mathbf{R}_{22,ref}\left(\mathbf{\bar{z}}\right)-\mathbf{R}_{22}(\mathbf{ \bar{z}};\mathbf{d})||_{2}^{2}\] \[+||\mathbf{R}_{33,ref}\left(\mathbf{\bar{z}}\right)-\mathbf{R}_ {33}(\mathbf{\bar{z}};\mathbf{d})||_{2}^{2}+||\mathbf{R}_{13,ref}\left( \mathbf{\bar{z}}\right)-\mathbf{R}_{13}(\mathbf{\bar{z}};\mathbf{d})||_{2}^ {2})\] \[\text{s.t. }\mathbf{Cd}\leqslant\mathbf{f}\text{ and }\mathbf{d}_{l} \leqslant\mathbf{d}\leqslant\mathbf{d}_{u},\]
where the constraints \(\mathbf{C}\) and \(\mathbf{f}\) are designed to ensure that \(d_{20}>d_{17}>d_{14}>d_{11}>d_{8}>d_{5}>d_{2}>0\), and \(1>d_{19}>d_{16}>d_{13}>d_{10}>d_{7}>d_{4}>0\). The bounds \(\mathbf{d}_{l},\mathbf{d}_{u}\) ensure that the \(x\) and \(y\) extents of the prototypical eddy are bounded. Sequential quadratic programming is used to solve the above constrained optimization problem. The results of the optimization are shown in Figures 8 and 9, and the improvement over the simpler eddy (Figure 6) is noticeable.
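The sketch below illustrates how such a constrained fit can be set up with sequential quadratic programming in SciPy. The forward model, reference profile, bounds and starting point are all placeholders (the actual objective sums the mean-flow and Reynolds-stress mismatches produced by the 20-parameter line-vortex eddy), and only the first chain of ordering constraints is written out.

```python
import numpy as np
from scipy.optimize import minimize

z = np.linspace(100.0, 1000.0, 30)           # wall-normal points (wall units)
U_ref = 2.44*np.log(z) + 5.2                 # illustrative reference profile

def forward_model(d, z):
    # placeholder for the AEM statistics generated by the 20-parameter eddy
    return d[0]*np.log(z) + d[1]

def objective(d):
    return np.sum((forward_model(d, z) - U_ref)**2)

# ordering constraints d20 > d17 > ... > d2 > 0, written as g(d) >= 0
pairs = [(19, 16), (16, 13), (13, 10), (10, 7), (7, 4), (4, 1)]
cons = [{'type': 'ineq', 'fun': lambda d, i=i, j=j: d[i] - d[j]} for i, j in pairs]
cons.append({'type': 'ineq', 'fun': lambda d: d[1]})
bounds = [(0.0, 5.0)]*20                     # hypothetical box bounds on the shape

res = minimize(objective, x0=np.linspace(0.1, 2.0, 20),
               method='SLSQP', bounds=bounds, constraints=cons)
print(res.fun)
```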
Packets of hairpin eddies were also considered; optimizing packets with uniform circulation strengths did not yield meaningful results, and thus the circulation strengths of the individual hairpins in the packet (Figure 9) were optimized and found to follow a roughly quadratic distribution with hairpin size. It is noted that, in contrast to the above square hairpins, single or stacked lambda vortices did not yield accurate predictions of the statistics.
We conclude by stating that there are several directions for future work, including a) a more complete consideration of statistics such as the structure function, high-order moments,
Figure 6: Velocity moments and eddy influence function. Lines: Optimal eddy influence function. Symbols: Square hairpin at 45\({}^{\circ}\). For the right figures, colors follow Figure 2.
Figure 7: Representation of attached eddy using 20 parameters.
and the energy spectrum; b) assessment of the impact of the prototypical eddy in explaining physical characteristics of the flow; c) use of more sophisticated topological optimization and richer parametrizations of the eddies; and d) extension to other scenarios including boundary layers with pressure gradients and in combination with detached eddies. Further, we note that opportunities exist in modeling flow regions beyond the log layer. In fact, the current inference approach and predictions already include the outer layer.
**Acknowledgement** This research was funded by NASA grant # 80NSSC18M0149 (Technical monitor: Dr. Gary Coleman).
Figure 8: Velocity moments and eddy influence function. Lines: Optimal eddy influence function. Symbols: Optimal attached eddy. For the right figures, colors follow Figure 2.
Figure 9: Prototypical eddies: Optimal attached eddy (left), Hairpin packet (right).
## Declaration of interests
The author reports no conflict of interest.
|
2305.13723 | PIEClass: Weakly-Supervised Text Classification with Prompting and
Noise-Robust Iterative Ensemble Training | Weakly-supervised text classification trains a classifier using the label
name of each target class as the only supervision, which largely reduces human
annotation efforts. Most existing methods first use the label names as static
keyword-based features to generate pseudo labels, which are then used for final
classifier training. While reasonable, such a commonly adopted framework
suffers from two limitations: (1) keywords can have different meanings in
different contexts and some text may not have any keyword, so keyword matching
can induce noisy and inadequate pseudo labels; (2) the errors made in the
pseudo label generation stage will directly propagate to the classifier
training stage without a chance of being corrected. In this paper, we propose a
new method, PIEClass, consisting of two modules: (1) a pseudo label acquisition
module that uses zero-shot prompting of pre-trained language models (PLM) to
get pseudo labels based on contextualized text understanding beyond static
keyword matching, and (2) a noise-robust iterative ensemble training module
that iteratively trains classifiers and updates pseudo labels by utilizing two
PLM fine-tuning methods that regularize each other. Extensive experiments show
that PIEClass achieves overall better performance than existing strong
baselines on seven benchmark datasets and even achieves similar performance to
fully-supervised classifiers on sentiment classification tasks. | Yunyi Zhang, Minhao Jiang, Yu Meng, Yu Zhang, Jiawei Han | 2023-05-23T06:19:14Z | http://arxiv.org/abs/2305.13723v2 | PromptClass: Weakly-Supervised Text Classification with Prompting Enhanced Noise-Robust Self-Training
###### Abstract
Recently proposed weakly-supervised text classification settings train a classifier using the label name of each target class as the only supervision. Such weakly-supervised settings have been gaining increasing attention since they can largely reduce human annotation efforts compared to fully-supervised and semi-supervised settings. Most existing methods follow the strategy that first uses the label names as static features to generate pseudo labels, which are then used for classifier training. While reasonable, such a commonly adopted framework suffers from two limitations: (1) words can have different meanings in different contexts, so using label names for context-free matching can induce very noisy pseudo labels; and (2) the errors made in the pseudo label generation stage will directly propagate to the classifier training stage without a chance of being corrected. In this paper, we propose a new method, PromptClass, consisting of two modules: (1) a pseudo label acquisition module that uses zero-shot prompting of pre-trained language models (PLM) to get pseudo labels based on contextualized text understanding, and (2) a noise-robust self-training module that iteratively trains the classifier and updates pseudo labels by utilizing two PLM fine-tuning strategies that regularize each other. Extensive experiments show that PromptClass achieves overall better performance than existing strong baselines on four benchmark datasets and even achieves similar performance to fully-supervised classifiers on sentiment classification tasks.
## 1 Introduction
Text classification is a fundamental text mining task with a wide range of downstream applications, such as question answering Rajpurkar et al. (2016), sentiment analysis Tang et al. (2015), and event detection Zhang et al. (2022). Earlier works train text classifiers in a fully-supervised manner that requires a substantial amount of training data Zhang et al. (2015); Yang et al. (2016), which are generally costly to obtain. Later, semi-supervised settings are studied to reduce the labeling efforts by leveraging a few labeled samples and abundant unlabeled data for classifier training Miyato et al. (2017); Xie et al. (2020); Chen et al. (2020). However, they still require at least dozens of labeled samples per class, involving human annotations and domain knowledge. To completely eliminate the need for labeled training samples, weakly-supervised text classification settings Meng et al. (2018, 2020); Wang et al. (2021) are proposed, which aim to train text classifiers using the label names of target classes as the only supervision. Such settings are intriguing especially when obtaining high-quality labels is prohibitively expensive.
With the advancements of pre-trained language models (PLM) such as GPT Radford et al. (2019); Brown et al. (2020) and BERT Devlin et al. (2019), directly prompting PLMs in a zero-shot manner becomes a valid approach for text classification without labeled data Schick and Schutze (2021). The general idea of prompt-based methods is to use a task-specific template that converts the classification task to a token prediction problem which is in the same format as the pre-training task. This allows PLMs to perform classification using the generic knowledge acquired through pre-training. For example, given the input _"Best pizza ever! It was_ [MASK]."_ where the underlined part is the prompt, a PLM such as BERT can predict that _"good"_ is more likely to appear in the masked position than _"bad"_, by contextualizing the masked token within the entire sentence. However, such methods need a very large PLM for competitive performance on the downstream task, which is hard to deploy Brown et al. (2020); Wei et al. (2022). Also, directly prompting PLMs does not utilize any task-specific information hidden in the unlabeled data that can benefit classifier training.
Another line of studies that is tailored for
weakly-supervised text classification aims to train a moderate-size classifier with a task-specific _unlabeled_ corpus. Given the label names, these methods first acquire class-indicative keywords using PLMs Meng et al. (2020); Wang et al. (2021) or corpus-level co-occurrence features Zhang et al. (2021). The keywords are then used to generate pseudo-labeled documents for fine-tuning the final classifier. Despite their promising performance, the aforementioned weakly-supervised methods may suffer from two major limitations. **First**, these methods are keyword-driven by using class-indicative keywords as static context-free features to generate pseudo labels. However, the meanings of words are highly dependent on their specific contexts, so using them as static features can lead to very noisy pseudo labels. Such an issue is more serious for abstract classes like sentiments that require understanding rhetoric. For example, a food review _"It is to die for!"_ contains the keyword _"die"_ which itself is negative, but the entire review expresses a strong positive sentiment, and keyword-driven methods will likely struggle in these cases. **Second**, most of the existing methods are two-stage by conducting pseudo label acquisition and text classifier training in two successive steps. Although different pseudo label acquisition methods are explored to improve their quality (e.g., masked language modeling Meng et al. (2020), clustering of PLM embeddings Wang et al. (2021), and large textual entailment models trained with external data Park and Lee (2022)), there is still a large performance gap between weakly-supervised and fully-supervised classifiers. The reason is that erroneous pseudo labels in the first stage will propagate to and harm the classifier training stage without a chance to be corrected.
To address the limitations of existing works, in this paper, we propose PromptClass: **Prompting** Enhanced Noise-Robust Self-Training for Weakly-Supervised Text **Class**ification. PromptClass consists of two modules. (1) Pseudo label acquisition via PLM prompting. By designing a task-specific prompt, the model can infer the class label of documents based on the entire input sequence, which is thus contextualized and beyond static keyword features. In this work, besides the well-studied prompting method using PLMs pre-trained with the masked language modeling task (MLM) (e.g., BERT Devlin et al. (2019), RoBERTa Liu et al. (2019)), we also explore a different prompting method for a discriminative pre-trained language model, ELECTRA Clark et al. (2020), and compare them in the experiments. (2) Iterative classifier training and pseudo label expansion. In each iteration, we train text classifiers using the current pseudo labels and then use the confident predictions to re-select the pseudo labels. In this way, we can gradually expand the pseudo labels which can be used to train better text classifiers. To avoid erroneous pseudo labels accumulated during the iterative process, we propose to utilize two PLM fine-tuning strategies, head token fine-tuning and prompt-based fine-tuning, as two complementary views of the data: One captures the semantics of the entire sequence while the other interprets the contexts based on the prompts. We use the two views as regularization of each other and further apply model ensemble to improve the noise robustness of the pseudo label expansion process.
To summarize, the contributions of this paper are as follows:
* We propose to use PLM prompting to get pseudo labels for the weakly-supervised text classification task instead of static keyword-based features.
* We explore the prompting method of a discriminative PLM on the text classification task and compare it with standard prompting methods for MLM.
* We propose a noise-robust self-training method to iteratively train text classifiers and update pseudo labels. To deal with noisy pseudo labels, we utilize two fine-tuning strategies of PLMs to regularize each other and apply model ensemble to enhance the quality of pseudo labels.
* On four benchmark datasets, PromptClass overall performs better than strong baselines and even achieves a similar performance of fully-supervised methods.
## 2 Preliminaries
In this section, we first give a formal definition of the weakly-supervised text classification task. Then, we briefly introduce two different fine-tuning strategies of pre-trained language models for text classification.
### Problem Definition
The weakly-supervised text classification task aims to train a text classifier using label names as the only supervision. Formally, given a set of documents \(\mathcal{D}=\{d_{1},\ldots,d_{n}\}\) and \(m\) target classes \(\mathcal{C}=\{c_{1},\ldots,c_{m}\}\) with their associated label names \(l(c)\), our goal is to train a text classifier \(F\) that can classify a document into one of the classes in \(\mathcal{C}\). For example, we may classify a collection of news articles using the label names "politics", "sports", and "technology". Notice that there are previous works using more than one topic-indicative keyword or a few labeled documents as supervision, whereas here, we follow the _extremely weak supervision_ setting (Wang et al., 2021) and only use the sole surface name of each class as supervision.
### Pre-Trained Language Model Fine-Tuning
Recently, Transformer-based large language models achieve remarkable performance on downstream tasks by first pre-training on large corpora to capture generic knowledge and then fine-tuning with task-specific data. There are generally two fine-tuning strategies for the sequence classification tasks: head token fine-tuning and prompt-based fine-tuning. See Figure 1 for some examples.
**Head Token Fine-Tuning.** PLMs like BERT and RoBERTa add an additional [CLS] token at the beginning of the input sequence and it can be fine-tuned for sequence classification tasks to capture the information of the entire sequence. To fine-tune for a downstream task, the contextualized representation of the [CLS] token \(\mathbf{h}^{\text{CLS}}\) of a document \(d\) is fed into a linear classification head \(g\) to get
\[p(c|d)=\text{Softmax}(g(\mathbf{h}^{\text{CLS}})). \tag{1}\]
Then, given the training examples \(\{(d_{i},c_{i})\}\), the PLM model and the randomly initialized classification head (normally a single linear layer) are optimized with the cross-entropy loss:
\[\mathcal{L}^{\text{head}}=-\sum_{i}\log p(c_{i}|d_{i}). \tag{2}\]
Because the PLM is not pre-trained for any specific downstream task, the [CLS] token embedding does not contain the necessary information if not fine-tuned. Besides, the randomly initialized classification head also needs to be trained. Therefore, normally the head token fine-tuning needs a substantial amount of labeled data for training. Otherwise, the model can easily overfit the training data given a large number of parameters to update. For example, existing weakly-supervised text classification methods use class-indicative keywords to assign pseudo labels to documents which are then used to fine-tune a PLM using its [CLS] token.
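For reference, a minimal PyTorch/HuggingFace sketch of head token fine-tuning is given below; the checkpoint name, learning rate and toy batch are placeholders rather than the configuration used in this work.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()

def train_step(texts, labels):
    # one step of Eq. (2): the [CLS] representation feeds a randomly initialised
    # linear head, trained with cross-entropy against the (pseudo) labels
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    out = model(**batch, labels=torch.tensor(labels))
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()

print(train_step(["best pizza ever!", "terrible service."], [1, 0]))
```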
**Prompt-Based Fine-Tuning.** To close the gap between PLM's pre-training task and the downstream applications, prompt-based fine-tuning is proposed to convert the input and output of the downstream task to a similar form of the pre-training task. Because of the similarity between pre-training and fine-tuning tasks, prompt-based fine-tuning only needs a small set of examples to achieve competitive performance with head token fine-tuning. For common PLMs pre-trained with masked language modeling (e.g., BERT, RoBERTa), prompt-based fine-tuning uses a template to convert an input sequence into a cloze-style task. Each class also associates with one or more verbalizers, and PLM will predict the likelihood of each verbalizer for the masked position. For example, for a sentiment classification task, a template \(\mathcal{T}^{\text{MLM}}\) can transform a document \(d\) as:
\[\mathcal{T}^{\text{MLM}}(d)=d\,\text{It was [MASK]}.\]
Given \(\mathcal{T}^{\text{MLM}}(d)\) as input, the pre-trained PLM and its pre-trained MLM head \(f\) will generate a probability distribution over its vocabulary, indicating the likelihood of each token appearing in the masked position,
\[p(w|d)=\text{Softmax}(f(\mathbf{h}^{\text{MASK}})). \tag{3}\]
The probability of predicting a class \(c\), assuming its label name \(l(c)\) as its only verbalizer, is the probability of its verbalizer \(p(l(c)|d)\). During fine-tuning, the PLM and its MLM head can be trained with standard cross-entropy loss.
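The verbalizer scoring can be written compactly with HuggingFace; the sketch below shows the scoring step in a zero-shot setting (prompt-based fine-tuning would simply back-propagate the cross-entropy loss through these scores). The checkpoint, template wording and verbalizer strings are placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base").eval()
verbalizers = {"positive": " good", "negative": " bad"}   # one verbalizer per class

@torch.no_grad()
def score_verbalizers(doc):
    # fill the template "d It was [MASK]." and read the MLM distribution at the mask
    text = f"{doc} It was {tokenizer.mask_token}."
    batch = tokenizer(text, return_tensors="pt")
    pos = (batch["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    probs = model(**batch).logits[0, pos].softmax(-1)
    return {c: probs[tokenizer.encode(v, add_special_tokens=False)[0]].item()
            for c, v in verbalizers.items()}

print(score_verbalizers("Best pizza ever!"))
```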
There are also other types of PLMs with different pre-training objectives, such as auto-regressive PLM (Radford et al., 2019) and discriminative PLM (Clark et al., 2020). Because of their various pre-training tasks, different prompting and fine-tuning methods are needed to convert a downstream task to the form of the corresponding pre-training task. In the next section, we will introduce how we perform zero-shot prompting and fine-tuning with a discriminative pre-trained language model for the text classification task.
## 3 Methodology
To tackle the problems of existing methods for weakly-supervised text classification, we introduce our method, PromptClass, in this section, which
contains two major modules: (1) _zero-shot prompting for pseudo label acquisition_ that obtains pseudo labels with PLM's ability to understand the text and goes beyond static keyword-based features, and (2) _iterative classifier training and pseudo label expansion_ that iteratively updates pseudo labels by using two PLM fine-tuning strategies as two complementary views of data for more noise robustness. Figure 2 shows an overview of PromptClass.
### Zero-Shot Prompting for Pseudo Label Acquisition
Most current weakly-supervised text classification methods use a set of static class-indicative keywords to assign pseudo labels to unlabeled documents by either direct string matching Meng et al. (2018) or calculating static class embeddings Wang et al. (2021). However, keywords can only provide limited supervision with very low coverage, given that most of the documents do not contain any class-indicative keywords. Also, a document containing keywords does not necessarily indicate that it belongs to the corresponding class because the keywords can mean differently in different contexts. Such issues are more serious for abstract classes that involve more rhetoric, such as sentiment classification. For example, a food review _"It is to die for!"_ does not have any single keyword indicating the positive sentiment and even contains the word _"die"_ that seems negative, but we can still infer its strong positive sentiment based on our contextualized text understanding beyond static keywords.
To tackle the problem of existing methods and acquire pseudo labels beyond context-free keyword features, we propose to apply _zero-shot prompting_ of PLMs. The prompt-based method aims to close the gap between the pre-training task of PLM and its downstream applications, so we can directly use a pre-trained but not fine-tuned PLM with prompts to get high-quality pseudo labels for the text classification task. Also, prompting the PLM guides it to understand the entire context, and thus its predictions are contextualized.
In this work, we utilize a discriminative pre-trained language model, ELECTRA Clark et al. (2020), for zero-shot prompting. During pre-training, ELECTRA uses an auxiliary model to generate training signals and trains the main model to denoise it. More specifically, a small Transformer model called a "generator" is trained with masked language modeling to replace the randomly masked tokens of the input text, and then the main Transformer model called a "discriminator" is trained to predict whether each token in the corrupted example is original or replaced by the generator.1
Footnote 1: Please refer to Clark et al. Clark et al. (2020) for more details.
Recent studies have shown the potential of ELECTRA in prompt-based methods Xia et al. (2022); Yao et al. (2022); Li et al. (2022). Figure 1 (right) shows an example. To use prompts for text classification, we can fill in a template \(\mathcal{T}^{\text{ELECTRA}}\) with a document \(d\) and one of the label names \(l(c)\). The template is designed in a way that the correct label name should be the "original" token of this constructed input while the wrong ones are "replaced". Take the sentiment classification task as an example. If we want to classify whether a restaurant review \(d\) expresses a positive or negative sentiment given their label names "good" and "bad", we can construct the following two input
Figure 1: Examples of different fine-tuning strategies on the text classification task. (left) Head token fine-tuning randomly initializes a linear classification head and directly predicts class distribution using the [CLS] token, which needs a substantial amount of training data. (middle) Prompt-based fine-tuning for MLM-based PLM converts the document into the masked token prediction problem by reusing the pre-trained MLM head. (right) Prompt-based fine-tuning for ELECTRA-style PLM converts documents into the replaced token detection problem by reusing the pre-trained discriminative head. Given a document, one input example is constructed for each label name.
sequences to ELECTRA,
\[\mathcal{T}^{\text{ELECTRA}}(d,\text{good}) =d\,\text{It was good}.\] \[\mathcal{T}^{\text{ELECTRA}}(d,\text{bad}) =d\,\text{It was bad}.\]
The constructed inputs are individually fed into a pre-trained ELECTRA discriminator and its discriminative classification head \(f\) to get the probability of being original for each label name,
\[p(l(c)|d)=\text{Sigmoid}(f(\mathbf{h}^{l(c)})). \tag{4}\]
The confidence of document \(d\) belonging to a class \(c\) is the normalized probability,
\[p(c|d)=\frac{p(l(c)|d)}{\sum_{c^{\prime}\in\mathcal{C}}p(l(c^{\prime})|d)}. \tag{5}\]
After getting the predictions for all the documents in \(\mathcal{D}\), we take the top \(t^{0}\) percentage of the documents with the highest confidence scores as our initial pseudo label pool \(\mathcal{P}^{0}\).
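A minimal sketch of this zero-shot step with the HuggingFace ELECTRA discriminator is shown below. The checkpoint, template and label names are placeholders, and the sign convention (the released discriminator head scores tokens as "replaced", so originality is taken as one minus the sigmoid) is an assumption about that implementation rather than part of the method description above.

```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

tokenizer = AutoTokenizer.from_pretrained("google/electra-base-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator").eval()
label_names = {"positive": "good", "negative": "bad"}

@torch.no_grad()
def originality(doc, name):
    # Eq. (4): score of the label name being "original" in the filled template
    batch = tokenizer(f"{doc} It was {name}.", return_tensors="pt")
    tok = tokenizer.encode(name, add_special_tokens=False)[0]
    pos = (batch["input_ids"][0] == tok).nonzero()[-1, 0]   # template slot = last occurrence
    return 1.0 - torch.sigmoid(model(**batch).logits[0, pos]).item()

def zero_shot(doc):
    raw = {c: originality(doc, n) for c, n in label_names.items()}
    total = sum(raw.values())
    return {c: s/total for c, s in raw.items()}             # Eq. (5) normalisation

print(zero_shot("Best pizza ever!"))
# rank all documents by their best class score and keep the top t^0 percent as P^0
```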
### Iterative Classifier Training and Pseudo Label Expansion
After getting the initial pseudo labels, existing methods will directly fine-tune (using the head token) a text classifier with such labels to get the final classifier. However, since the pseudo labels are noisy, the performance of the final classifier is limited by the quality of pseudo labels, which leads to a large performance gap between weakly-supervised and fully-supervised settings. Therefore, inspired by the self-training method for semi-supervised learning, we propose to iteratively train a text classifier and use its confident predictions to find more high-quality pseudo labels, which can help to train an even better classifier.
However, unlike semi-supervised settings where the initial labels are perfect, here we only have noisy pseudo labels \(\mathcal{P}^{0}\) from the last step. When we train a text classifier with noisy data as supervision, the classifier will likely predict those wrongly labeled examples wrong with high confidence again. Therefore, if we strictly follow the standard self-training method, the noise will stay and accumulate in the pseudo label pool. To tackle such a challenge, in our framework, we propose to utilize prompt-based fine-tuning as a denoiser in the process of iterative head token fine-tuning of text classifiers. The idea is similar to a co-training framework Blum and Mitchell (1998), where two views of the same data complement each other by expanding the label pool in turn. In this way, the two views of data can mutually enhance each other by providing different training signals toward the same task. Here, the head token fine-tuning behaves like a _sequence-level view_ of documents by capturing the information of the entire input document, while prompt-based fine-tuning serves as a
Figure 2: Overview of the PromptClass framework. Given the unlabeled corpus, (1) it first applies zero-shot prompting of PLM to acquire the initial pseudo label pool \(\mathcal{P}^{0}\). (2) It then iteratively trains text classifiers and updates the pseudo label pool. At each iteration, it first uses current pseudo labels for head token fine-tuning whose top predictions are used for multiple prompt-based fine-tuning. Then, the top predictions of all models are combined to get the updated pseudo labels \(\mathcal{P}^{i}\).
token-level view_ by focusing more on the context surrounding the label name (or masked token if using MLM) in the prompt. However, as just stated, we need to deal with the noisy pseudo labels, so we propose a different way to use two views than co-training. In our framework, while expanding pseudo labels, the two views of data (i.e., two fine-tuning methods) are used as _regularization_ of each other to filter out potentially wrong labels and ensure the quality of pseudo labels.
Specifically, for iteration \(i\), we first use the head token to fine-tune a text classifier, \(F^{i}_{0}:\mathcal{D}\rightarrow\mathcal{C}\), using the current pseudo labels \(\mathcal{P}^{i-1}\) in a fully-supervised way. After training, we use the classifier to make a prediction on each document to get \((d_{j},F^{i}_{0}(d_{j}))\) and a confidence score \(cf^{i}_{0}(d_{j})\), which is the normalized probability of the prediction \(F^{i}_{0}(d_{j})\). Then, we rank the predictions based on their confidence scores and select the top \(t_{i}\) percent of them whose confidence scores are greater than a threshold \(p\) as candidate pseudo labels \(\mathcal{P}^{i}_{0}\). The percentage \(t_{i}\) increases linearly with the iteration number, \(t_{i}=i\cdot s\), where \(s\) is a hyperparameter.
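The selection rule for one view can be summarised by the small helper below; the default values of \(s\) and \(p\) are placeholders.

```python
import numpy as np

def select_candidates(confidences, predictions, iteration, s=0.05, p=0.9):
    """Top-t_i, above-threshold candidate pool for one classifier view:
    t_i = iteration * s (capped at 1), and confidence must also exceed p."""
    t_i = min(iteration * s, 1.0)
    order = np.argsort(-confidences)
    keep = order[: int(t_i * len(confidences))]
    keep = keep[confidences[keep] > p]
    return {(int(j), int(predictions[j])) for j in keep}    # (document id, class) pairs
```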
Because \(\mathcal{P}^{i}_{0}\) can be noisy, we then utilize prompt-based fine-tuning as a second view to improve the quality of pseudo labels. Since the prompt-based method converts the downstream task into the same form as the pre-training task and reuses the pre-trained classification head, it only requires a small amount of data to achieve competitive performance with head token fine-tuning. This allows us to apply model ensemble by fine-tuning multiple individual prompt-based classifiers, which is shown to be more noise-robust (Laine and Aila, 2017; Meng et al., 2021). Here, we randomly sample \(r\) subsets of \(\mathcal{P}^{i}_{0}\), \(S_{k}\), each of size \(q\cdot|\mathcal{P}^{i}_{0}|\), and fine-tune \(r\) classifiers \(F^{i}_{k}\), \(k\in\{1,\dots,r\}\), using prompt-based fine-tuning. Because the noisy labels are unlikely to be sampled repeatedly into different subsets, this sampling process will further improve the noise robustness of model ensemble. To fine-tune ELECTRA-style PLMs using prompts, each data example \(d\) will have \(|\mathcal{C}|\) individual input sequences \(\{\mathcal{T}^{\text{ELECTRA}}(d,l(c))\}_{c\in\mathcal{C}}\), and the target class \(F^{i}_{0}(d)\) should be predicted as "original" while all the others "replaced". The model is trained with binary cross entropy loss
\[\begin{split}\mathcal{L}^{\text{ELECTRA}}=-\sum_{d\in S_{k}} \Big{(}\log p(F^{i}_{0}(d)|d)+\\ \sum_{c^{\prime}\neq F^{i}_{0}(d)}\log\big{(}1-p(c^{\prime}|d) \big{)}\,\Big{)}.\end{split} \tag{6}\]
After training, we follow the same process as the classifier \(F^{i}_{0}\) to select the top \(t_{i}\) percentage of most confident predictions by each classifier \(F^{i}_{k}\) as candidate pseudo labels \(\mathcal{P}^{i}_{k}\). Finally, we take the intersection of all the candidate pseudo labels as the final pseudo label pool for this iteration,
\[\mathcal{P}^{i}=\bigcap_{k=0}^{r}\mathcal{P}^{i}_{k}. \tag{7}\]
The intersection operation can be interpreted as follows: a document and its assigned class belong to \(\mathcal{P}^{i}\) only when it is consistently predicted as the same class and its confidence is ranked top \(t_{i}\%\) by **all** the classifiers \(F^{i}_{k}\). Therefore, we can ensure to include only those most confident ones into the pseudo label pool to alleviate the error accumulation problem.
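The pooling step of Eq. (7) amounts to a set intersection over (document, class) pairs; a sketch, assuming each candidate pool is a dict mapping document index to predicted class:

```python
def intersect_pools(pools):
    """Keep a (doc, class) pair only if every classifier's candidate pool
    contains the same document with the same predicted class (Eq. 7)."""
    merged = {}
    common_docs = set.intersection(*[set(p.keys()) for p in pools])
    for doc in common_docs:
        classes = {p[doc] for p in pools}
        if len(classes) == 1:        # consistent prediction across all views
            merged[doc] = classes.pop()
    return merged
```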
Finally, we will repeat this iterative process by \(T\) full iterations to get the last pseudo label pool \(\mathcal{P}^{T}\). It will then be used for head token fine-tuning of the classifier at iteration \(T+1\) which will be the final classifier of PromptClass.
## 4 Experiments
### Experiment Setup
#### 4.1.1 Datasets
We use four publicly available benchmark datasets for the weakly-supervised text classification task:
* **AGNews**(Zhang et al., 2015) is for topic classification of news from AG's corpus.
* **20News**(Lang, 1995)2 is for topic classification of news corpus. Footnote 2: [http://qwone.com/~jason/20Newsgroups/](http://qwone.com/~jason/20Newsgroups/)
* **Yelp**(Zhang et al., 2015) is for sentiment polarity classification of business reviews, adapted from the Yelp Dataset Challenge in 2015.
* **IMDB**(Maas et al., 2011) is for sentiment polarity classification of movie reviews from IMDB.

We follow previous works to use Micro-F1 and Macro-F1 as the evaluation metrics. Table 1 shows data statistics, the label names, and the prompts we used for PromptClass. We use the same label names as previous works (Meng et al., 2020; Wang et al., 2021).
#### 4.1.2 Compared Methods
We compare the following methods on the weakly-supervised text classification task. The first two baselines (WeSTClass and ConWea) are seed-driven and take at least three keywords for each
class as input. Other methods only use the label names as supervision. We also include a fully supervised baseline as a reference.
* **WeSTClass**Meng et al. (2018) trains a CNN classifier with pseudo documents generated based on keyword embeddings and then applies self-training on the unlabeled documents.
* **ConWea**Mekala and Shang (2020) utilizes a pre-trained language model to get pseudo labels using contextualized representations of keywords. It then trains a text classifier and uses the results to further expand the keyword sets.
* **LOTClass**Meng et al. (2020) uses a pre-trained language model to discover class-indicative keywords and then fine-tunes the PLM using self-training with the soft labeling strategy.
* **XClass**Wang et al. (2021) first expands the class-indicative keyword sets to help estimate class and document representations. Then, a clustering algorithm is used to generate pseudo labels for fine-tuning a text classifier.
* **ClassKG**Zhang et al. (2021) constructs a keyword graph with co-occurrence relations and self-train a sub-graph annotator, from which pseudo labels are generated for classifier training and the predictions are used to update keywords iteratively.
* **RoBERTa (0-shot)** is the zero-shot prompting results when using pre-trained RoBERTa Liu et al. (2019).
* **ELECTRA (0-shot)** is the zero-shot prompting results when using pre-trained ELECTRA Clark et al. (2020).
* **PromptClass** is our proposed method with prompting-based initial pseudo label acquisition and noise-robust self-training. We include three versions of PromptClass with different combinations of backbone PLMs:
* **ELECTRA+BERT** uses ELECTRA for prompting and BERT for head token fine-tuning for a fair comparison with baselines using BERT as the backbone model.
* **RoBERTa+RoBERTa** uses RoBERTa as backbone models for the entire framework to compare with baselines using only MLM-based PLM.
* **ELECTRA+ELECTRA** uses ELECTRA as backbone models for the entire framework.
* **Fully Supervised** is a fully supervised BERT classifier fine-tuned with training labels.
#### 4.1.3 Implementation Details
We use pre-trained ELECTRA-base-discriminator, BERT-base-uncased, and RoBERTa-base as the backbone models for the corresponding versions of PromptClass. The classification head for head token fine-tuning is a single linear layer. The training batch size is 32 for both head token fine-tuning and prompt-based fine-tuning. We train 5 epochs and use AdamW Loshchilov and Hutter (2017) as the optimizer for all the fine-tuning tasks. The peak learning rate is \(1e-5\) for prompt-based fine-tuning of RoBERTa and \(2e-5\) for prompt-based fine-tuning of ELECTRA and all head token fine-tuning, with linear decay. For Yelp and IMDB that have only two classes, to avoid overfitting when the number of pseudo labels is small, we freeze the first 11 layers of the PLM for fine-tuning in the first several iterations and only fine-tune the full model for the final classifier. The model is run on one NVIDIA RTX A6000 GPU. The threshold for initial pseudo label acquisition is \(t^{0}=10\%\). During the iterative process, the coefficient for the increasing size of pseudo labels is \(s=20\%\) and the threshold of confidence score is \(p=0.95\). We randomly sample \(r=3\) subsets of size \(q=1\%\) of the candidate pseudo labels for prompt-based fine-tuning and model ensemble. The number of full iterations \(T\) is 5 for AGNews, Yelp and IMDB; for 20News that is harder, we run until the number of pseudo labels does not increase, which takes 8 full iterations.
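For readability, the hyperparameters listed above can be consolidated into a single configuration; the key names below are ours, not taken from the released code:

```python
# Assumed consolidation of the settings described in Sec. 4.1.3.
HPARAMS = {
    "batch_size": 32,
    "epochs": 5,
    "lr_prompt_roberta": 1e-5,
    "lr_prompt_electra": 2e-5,
    "lr_head_token": 2e-5,
    "t0_initial_pool": 0.10,   # initial pseudo-label selection ratio
    "s_pool_growth": 0.20,     # per-iteration growth of the selection ratio
    "p_confidence": 0.95,      # confidence threshold
    "r_ensemble": 3,           # number of prompt-based classifiers
    "q_subset": 0.01,          # fraction of candidates sampled per classifier
    "T_iterations": 5,         # 8 for 20News
}
```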
### Experimental Results
Table 2 shows the evaluation results of all methods. PromptClass overall achieves better performance than the compared baselines. It even achieves similar results to the fully supervised baseline on Yelp and IMDB. We can observe that: (1) the ELECTRA+BERT model already outperforms most of the baselines that also use BERT to fine-tune their final classifiers, which shows the effectiveness of
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Dataset** & **Classification Type** & **\# Docs** & **\# Classes** & **Label Names** & **Prompt** \\ \hline
**AGNews** & News Topic & 120,000 & 4 & politics, sports, business, technology & [MASK] News: \textless{}doc\textgreater{} \\
**20News** & News Topic & 17,871 & 5 & computer, sports, science, politics, religion & [MASK] News: \textless{}doc\textgreater{} \\
**Yelp** & Business Review Sentiment & 38,000 & 2 & good, bad & \textless{}doc\textgreater{} It was [MASK]. \\
**IMDB** & Movie Review Sentiment & 50,000 & 2 & good, bad & \textless{}doc\textgreater{} It was [MASK]. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Datasets overview.
our proposed method. (2) ClassKG as an iterative method is the strongest keyword-driven baseline and even achieves better results on 20News than PromptClass. However, it takes drastically longer time to run. Figure 3 shows the run time on 20News. ClassKG takes more than 30 hours while PromptClass takes only 3 hours to achieve similar results. (3) ELECTRA (0-shot) already achieves comparable results to some simple baselines, confirming our idea that using contextualized text understanding can lead to high-quality pseudo labels. Although RoBERTa (0-shot) does not perform well on AGNews, after the iterative classifier training process, the full model achieves the best performance, demonstrating the effectiveness of the iterative process of PromptClass. (4) ELECTRA overall performs better than RoBERTa, especially on the sentiment classification task, but RoBERTa achieves better results on AGNews.
To explain why PromptClass can achieve similar performance to the fully supervised method, we find that there are some errors in the ground-truth labels which could affect the performance of the fully supervised model if used as training data. For example, the following review in Yelp is labeled as negative but predicted as positive by PromptClass: "_My husband had an omelette that was good. I had a BLT, a little on the small side for $10, but bacon was great. Our server was awesome!"_. Because PromptClass only includes the most confident predictions as pseudo labels, it can ensure the quality of its training examples and thus make the correct prediction.
### Ablation Study
To study the effects of each proposed component of PromptClass, we further compare our full model with its three ablations:
* **Two-Stage** is a two-stage version of our methods which directly trains the final text classifier using the pseudo labels obtained from zero-shot prompting (i.e., the head token classifier from the first iteration of PromptClass).
* **Single-View ST (Self-Training)** does not utilize prompt-based fine-tuning as the second view during the iterative process. It thus follows the standard self-training method by using the confident predictions of each iteration's head token classifier as the updated pseudo labels for the next iteration.
* **Co-Training** uses the two views of data (i.e., two PLM fine-tuning strategies) to update the pseudo labels in turn with their confident predictions, while in PromptClass the two views are used to regularize each other.
All the compared methods are based on the ELECTRA+ELECTRA version of PromptClass with the same hyperparameters as described in Sec. 4.1.3.
Table 3 shows the performance of PromptClass and its ablations on four datasets. We can observe that: (1) our full model PromptClass consistently
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{2}{c}{**AGNews**} & \multicolumn{2}{c}{**20News**} & \multicolumn{2}{c}{**Yelp**} & \multicolumn{2}{c}{**IMDB**} \\ & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 \\ \hline
**WeSTClass** & 0.823 & 0.821 & 0.713 & 0.699 & 0.816 & 0.816 & 0.774 & - \\
**ConWea** & 0.746 & 0.742 & 0.757 & 0.733 & 0.714 & 0.712 & - & - \\
**LOTClass** & 0.869 & 0.868 & 0.738 & 0.725 & 0.878 & 0.877 & 0.865 & - \\
**XClass** & 0.857 & 0.857 & 0.786 & 0.778 & 0.900 & 0.900 & - & - \\
**ClassKG\({}^{\dagger}\)** & 0.881 & 0.881 & 0.811 & **0.820** & 0.918 & 0.918 & 0.888 & 0.888 \\
**RoBERTa (0-shot)** & 0.581 & 0.529 & 0.507\({}^{\ddagger}\) & 0.445\({}^{\ddagger}\) & 0.812 & 0.808 & 0.784 & 0.780 \\
**ELECTRA (0-shot)** & 0.810 & 0.806 & 0.558 & 0.529 & 0.820 & 0.820 & 0.803 & 0.802 \\ \hline
**PromptClass** & & & & & & & & \\
**ELECTRA+BERT** & 0.884 & 0.884 & 0.789 & 0.791 & 0.919 & 0.919 & 0.905 & 0.905 \\
**RoBERTa+RoBERTa** & **0.895** & **0.895** & 0.755\({}^{\ddagger}\) & 0.760\({}^{\ddagger}\) & 0.920 & 0.920 & 0.906 & 0.906 \\
**ELECTRA+ELECTRA** & 0.884 & 0.884 & **0.816** & 0.817 & **0.957** & **0.957** & **0.931** & **0.931** \\ \hline
**Fully Supervised** & 0.940 & 0.940 & 0.965 & 0.964 & 0.957 & 0.957 & 0.945 & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of all compared methods on four datasets measured by Micro-F1 and Macro-F1, with the best score boldfaced and the second best score underlined. \({}^{\dagger}\) Because ClassKG uses more than one keyword on some datasets in its original setting, we re-run it with its official implementation using only the label names for a fair comparison. Other baseline results come from (Meng et al., 2020) and (Wang et al., 2021) with missing values marked as -. \({}^{\ddagger}\) The results are influenced by RoBERTa’s tokenizer as the label name “religion” is separated into two tokens, so we need to change it to a sub-optimal label name.
Figure 3: Run time (in hours) on 20News. ClassKG takes much longer time than other methods.
outperforms all of its ablations, showing the effectiveness of each ablated component. (2) By removing the iterative pseudo label expansion process, the Two-Stage model performs worse than PromptClass by \(3\sim 6\%\), meaning that the erroneous pseudo labels in the first stage will affect the final classifier training if not corrected. However, the Two-Stage version already achieves comparable results to strong keyword-driven baselines, which shows the power of zero-shot PLM prompting on the text classification task. (3) The Single-View Self-Training model performs similarly to the Two-Stage model and even worse on IMDB. This proves that, with the noisy pseudo labels, the standard self-training strategy can cause the error accumulation problem and harm the classifier training. (4) The Co-Training model performs much better than the previous two ablations, meaning that utilizing two PLM fine-tuning methods as two views of data can improve the pseudo label quality. However, it still performs worse than PromptClass, showing that using two views to regularize each other can further improve the noise robustness.
### Study of the Iterative Process
To study the iterative process of PromptClass, we show the performance of PromptClass and its Single-View Self-Training ablation when varying the number of full iterations from 1 to 5 in Figure 4. From the figure we can see that, although Single-View Self-Training may perform better than PromptClass when the quantity of pseudo labels is small at the beginning, after five iterations, PromptClass consistently outperforms it on all datasets. The reason is that the quality of pseudo labels becomes more crucial when the number of pseudo labels increases. Therefore, the performance of Single-View Self-Training does not increase much during the iterative process due to its error accumulation problem, while the performance of PromptClass is increasing much faster. For efficiency, we set the number of iterations to 5 for all datasets except for 20News, but running more iterations may further improve the results.
Figure 5 additionally shows the quantity and quality of the pseudo labels at each iteration by PromptClass and Single-View ST. The bars represent the percentage of correct and wrong pseudo labels to the entire corpus \(\mathcal{D}\), and the lines are their quality measured by accuracy (the number of correct pseudo labels over the total number of pseudo labels). We can observe that, the Single-View Self-Training model progressively increases the number of pseudo labels but the quality of pseudo labels drops quickly. On the other hand, PromptClass can keep the quality of pseudo labels during the expansion process. The number of pseudo labels does not increase much in the last two iterations, because PromptClass does not blindly expand the pseudo labels with potential errors. By utilizing two PLM fine-tuning methods and model ensemble, PromptClass only includes the most confident pseudo labels to ensure the quality, which contributes to the superior performance of its final classifier.
### Discussions on PLM Prompting
**Handling Multi-Token Label Names.** As shown in the experiment results (Table 2), the performance of prompting with MLM-based PLMs such as RoBERTa is affected by the tokenizer, because the MLM classification head cannot naturally handle verbalizers (i.e., label names) with multiple tokens. For example, the label name "religion" of 20News is tokenized by RoBERTa into two tokens, "rel" and "igion". Therefore, prompting RoBERTa for multi-token label names requires substantially more work by inserting multiple [MASK] tokens into the template and iteratively predicting the masked tokens. On the other hand, prompting ELECTRA can easily handle multi-token label names (Xia et al., 2022), because the label names are directly encoded in the input. Assume that a label name \(l(c)\) is tokenized into several pieces \(l(c)=\{w_{1},\dots,w_{|l(c)|}\}\). We can estimate the probability of its being original by taking the average
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{2}{c}{**AGNews**} & \multicolumn{2}{c}{**20News**} & \multicolumn{2}{c}{**Yelp**} & \multicolumn{2}{c}{**IMDB**} \\ & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 & Micro-F1 & Macro-F1 \\ \hline
**Two-Stage** & 0.847 & 0.847 & 0.739 & 0.733 & 0.913 & 0.913 & 0.870 & 0.870 \\
**Single-View ST** & 0.871 & 0.871 & 0.736 & 0.737 & 0.912 & 0.912 & 0.846 & 0.846 \\
**Co-Training** & 0.877 & 0.877 & 0.795 & 0.791 & 0.948 & 0.948 & 0.925 & 0.925 \\
**PromptClass** & **0.884** & **0.884** & **0.816** & **0.817** & **0.957** & **0.957** & **0.931** & **0.931** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of PromptClass and its ablations on four datasets measured by Micro-F1 and Macro-F1. All the models are based on the ELECTRA+ELECTRA version.
of the probabilities of each token,
\[p(l(c)|d)=\frac{1}{|l(c)|}\sum_{i=1}^{|l(c)|}p(w_{i}|d) \tag{8}\]
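A sketch of Eq. (8): averaging the per-token "original" probabilities over the positions occupied by a multi-token label name (the scoring interface is assumed, not the authors' code):

```python
def label_name_score(token_probs, label_token_positions):
    """Average the per-token probabilities of being 'original' over the
    positions occupied by the label name l(c) in the prompt (Eq. 8).

    token_probs           : per-token probabilities from the ELECTRA
                            discriminator for one prompted input
    label_token_positions : indices of the tokens of the label name
    """
    vals = [token_probs[i] for i in label_token_positions]
    return sum(vals) / len(vals)
```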
## 5 Related Work

Obtaining labels from domain experts for text data can be a labor-intensive and time-consuming process, thereby attracting significant attention to text classification with limited labeled samples. In an effort to enhance the generalizability of models, many current studies have explored augmentation-based approaches, including the substitution of traditional noise injection methods with data augmentation techniques Xie et al. (2020), back-translation Sennrich et al. (2016), and generation of pseudo training data using hidden states of the model through interpolations Chen et al. (2020) and perturbations Miyato et al. (2017). Additionally, some graph-based methods have been proposed, leveraging graph neural networks Kipf and Welling (2017) or graph embedding learning Tang et al. (2015) to capture semantic relationships between keywords, documents, and label names.
**Weakly-Supervised Text Classification.** Weakly-supervised text classification aims to classify the documents with very limited supervision. Many previous works utilize distant supervision from knowledge bases such as Wikipedia to interpret the document-label semantic relevance Gabrilovich and Markovitch (2007); Chang et al. (2008); Song and Roth (2014). Along this direction, some other supervision signals such as keywords Agichtein and Gravano (2000); Tao et al. (2018); Meng et al. (2018, 2020); Wang et al. (2021); Zhang et al. (2021) and heuristic rules Ratner et al. (2016); Badene et al. (2019); Shu et al. (2020) are explored to reduce the efforts of acquiring any labels or domain-specific data. Recently, the extremely weakly supervised settings, where only the label name of each class is utilized as supervision, are studied and achieve inspiring results Meng et al. (2020); Mekala and Shang (2020); Wang et al. (2021); Zhang et al. (2021). Existing methods are mainly driven by keywords, relying on pre-trained language models (PLMs) to derive static class-indicative keywords, which are then used to generate pseudo labels for classifier training. More specifically, LOTClass Meng et al. (2020) fine-tunes an MLM-based PLM for category prediction and generalizes the model with self-training. ConWea Mekala and Shang (2020) further leverages the seed keywords and the contextualized embeddings to disambiguate the keywords for each class. X-Class Wang et al. (2021) utilizes keywords to obtain static representations of classes and documents and generates pseudo labels with a clustering algorithm. ClassKG Zhang et al. (2021) learns the correlation between keywords by training a GNN over a keyword co-occurrence graph. However, these methods only depend on static keyword features, leading to noisy pseudo-labeled documents for classifier training.
**Prompt-Based Learning.** PLMs Devlin et al. (2019); Radford et al. (2019); Liu et al. (2019) have shown superior performance on various downstream tasks through fine-tuning with task-specific data. Some papers show that PLMs can learn generic knowledge during the pre-training stage and design cloze-style prompts to directly probe its knowledge without fine-tuning Petroni et al. (2019); Davison et al. (2019). Later, task-specific prompts are used to guide PLM fine-tuning and perform well in a low-resource setting for several tasks, such as text classification Han et al. (2021); Hu et al. (2022), relation extraction Chen et al. (2022), and entity typing Ding et al. (2021); Huang et al. (2022). To mitigate the human efforts in prompt engineering, researchers also study automatic methods including prompt search Shin et al. (2020); Gao et al. (2021) and prompt generation Guo et al. (2022); Deng et al. (2022). Soft prompts are also proposed by tuning some randomly initialized vectors together with the input Zhong et al. (2021); Li and Liang (2021); Lester et al. (2021). A recent study also shows that the co-training method can benefit prompt-based learning in a few-shot setting Lang et al. (2022). Besides standard prompting methods for MLM-based PLMs, prompting methods for discriminative PLMs are also studied on few-shot tasks Xia et al. (2022); Yao et al. (2022); Li et al. (2022).
## 6 Conclusion and Future Work
In this paper, we study the task of weakly-supervised text classification that trains a classifier using the label names of target classes as the only supervision. To overcome the limitations of existing keyword-driven methods, we propose PromptClass which consists of two modules: (1) an initial pseudo label acquisition module using zero-shot PLM prompting that assigns pseudo labels based on contextualized text understanding, and (2) an iterative classifier training and pseudo label expansion module that uses two PLM fine-tuning methods as two views to regularize each other for noise robustness. We also explore the prompting method for discriminative PLMs and compare it with the standard prompting methods for MLM-based PLMs. Extensive experiments show that PromptClass can achieve overall better performance than strong baselines, especially on the sentiment classification task where PromptClass achieves similar performance to a fully-supervised baseline.
There are three future directions that can be explored. First, we can extend our method to other forms of text data (e.g., social media) and other abstract classes (e.g., stance detection, morality classification) that require deeper text understanding and keyword-driven methods will likely fail. Second, PromptClass can be integrated with keyword-based methods as two types of training signals to further improve the performance of weakly-supervised text classification. Third, the idea of PromptClass is also generalizable to other text mining tasks with limited supervision, such as named entity recognition and relation extraction.
## Limitations
In this paper, we propose PromptClass, a general method for weakly supervised text classification. We introduce an iterative learning framework by combining two standard PLM fine-tuning methods for noise robustness. Despite its effectiveness shown in the experiments, there is still room for improvement. For example, our learning framework can be integrated with other PLM fine-tuning methods and noise-robust training objectives [14]. Besides, our method uses PLM prompting to acquire pseudo labeled documents. As we only use several popular corpora, verbalizers, and prompts for this task, it may require additional effort to find suitable verbalizers/prompts when working on other domains. Finally, our iterative pseudo label expansion framework requires access to a sufficient amount of unlabeled documents, so it may perform worse if the corpus is too small.
|
2302.03597 | Dual-comb photoacoustic spectroscopy with electro-optic generation for
the analysis of gaseous samples | In this work we present the design and characterization of a dual comb
photoacoustic spectroscopy (DCPAS) set-up for ammonia detection in the near
infrared. The system consists of a dual electro-optic (EO) comb generator that
generates a multiheterodyne beating signal in the gas sample. The input to the
dual EO comb generator is a laser diode tuned to a fixed wavelength within an
absorption feature of ammonia (around 1531.6 nm, 6529 cm-1) and we show how the
dual comb allows us to perform PAS measurements and resolve the absorption
features with high spectral resolution. We present results of the ammonia
absorption line profile reconstruction with a bandwidth of 1 cm-1 and a
resolution of 0.08 cm-1. Moreover, we show that the dual-comb technique based on
electro-optic generation maximally simplifies the optimization of the
multiheterodyne signal according to the characteristics of the photoacoustic
detection module. We present results using a resonant gas cell (pipe shape) and
we show how easily the dual comb optical source is adjusted to generate multiheterodyne beating tones within the band of resonance of the gas cell.
multiherodyne beating tones within the band of resonance of the gas cell. | Marta Ruiz-Llata, Yuliy M. Sanoyan, Oscar E. Bonilla-Manrique, Pablo Acedo, Pedro Martín-Mateos | 2023-02-07T17:07:56Z | http://arxiv.org/abs/2302.03597v1 | ## Title:
## Abstract
In this work we present the design and characterization of a dual comb photoacoustic spectroscopy (DCPAS) set-up for ammonia detection in the near infrared. The system consists of a dual electro-optic (EO) comb generator that generates a multiheterodyne beating signal in the gas sample. The input to the dual EO comb generator is a laser diode tuned to a fixed wavelength within an absorption feature of ammonia (around 1531.6 nm \(\equiv\) 6529 cm-1) and we show how the dual comb allows us to perform PAS measurements and resolve the absorption features with high spectral resolution. We present results of the ammonia absorption line profile reconstruction with a bandwidth of 1 cm-1 and a resolution of 0.08 cm-1. Moreover, we show that the dual-comb technique based on electro-optic generation maximally simplifies the optimization of the multiheterodyne signal according to the characteristics of the photoacoustic detection module. We present results using a resonant gas cell (pipe shape) and we show how easily the dual comb optical source is adjusted to generate multiheterodyne beating tones within the band of resonance of the gas cell.
## 1 Introduction
Photoacoustic spectroscopy (PAS) has been used in different applications for trace gas-sensing due to several features such as high selectivity, low cost, linear response, and wide dynamic range compared to other absorption spectroscopic techniques [1]. One of its key advantages is the measuring system compactness that makes it possible to attain high sensitivity levels using very small sample volumes [2-4]. Moreover, the gas absorption photoacoustic detection module, typically a standard microphone [1], a quartz tuning fork in quartz-enhanced PAS (QEPAS) [3-6], or a cantilever with interferometric readout [7-9], is excitation wavelength independent, making the PAS technique particularly suitable for broadband detection and a very interesting solution for the mid-infrared region of the spectrum, where fundamental molecular vibration and rotation absorption lines are found. Broadband spectral measurement of gas absorption enables accurate measurements in complex systems where interfering substances may be present or environmental parameters cannot be controlled.
The typical methods used in optical transmission based measurements for broadband gas spectroscopy [10] can be also applied for PAS. One of the preferred approaches is the combination of wavelength tuning and wavelength modulation of a semiconductor laser diode [11], but this approach is limited by the tuning range of the semiconductor laser, and the wavelength modulation depth needs to be optimized to the shape of the absorption line, which depends on the temperature and pressure of the gas sample. Another approach is the use of multiple laser sources [12], however those need to be selected to the absorption features of the selected target species and coupling the different optical beams into the detection module may result in a complicated optical set-up.
On the other hand, Fourier-transform infrared (FTIR) spectrometers, in combination with broadband optical sources, have been demonstrated as a powerful tool for PAS [13-15], as they permit wide spectral coverage and very high spectral resolution if, for example, an optical frequency comb (OFC) is used as the light source [16]. However, this technique requires mechanically stable set-ups for optical scanning, which compromise the spectral bandwidth and the acquisition speed. As an alternative to FTIR, dual-comb spectroscopy (DCS) has emerged as a revolutionary tool for broadband spectroscopy with unprecedented accuracy and precision, without requiring bulky, alignment-sensitive mechanical parts [17].
DCS, also known as multiheterodyne spectroscopy, relies on two OFCs. An OFC is an optical source with a broad spectrum consisting of evenly spaced, tightly phase-locked, optical frequencies. In DCS systems, an OFC passes through the gas sample and the transmitted light is heterodyned with a second OFC (the local oscillator OFC) at the detection photodiode. As the local oscillator OFC is coherent with the measurement OFC, but with a slightly different optical frequency spacing, the photodiode signal gives rise to a new comb in the RF domain that contains the spectral information of the measurement OFC. Alternatively, both OFCs can be made to propagate simultaneously through the gas sample. In this case, the interference of the two OFCs in the absorbing medium can generate a photoacoustic signal if the line-by-line beating of the OFCs occurs at acoustic frequencies. This broadband absorption detection scheme has been recently demonstrated for solid and gas samples in a technique known as dual comb photoacoustic spectroscopy (DCPAS) [18][19].
State-of-the-art dual comb generation technologies are mode-locked fiber lasers [21], microresonator-based frequency combs [22], optical frequency combs based on Quantum Cascade Lasers [23] and electro-optic modulation of CW semiconductor lasers (EO combs) [24-28].
This latter approach has attracted growing interest in recent years for its advantageous features: intrinsically high mutual coherence, efficient use of the optical power within a relatively narrow spectral coverage and high power per comb line, unmatched flexibility in configuring the dual-comb parameters, and technological maturity in the near-infrared [29]. Recent examples of prospective spectroscopy-based applications using dual EO combs include space-mounted greenhouse gas detection LIDARs [30], and hyperspectral imaging systems [31] and sensors [32]. In this work we investigate the performance of an electro-optic dual comb generator-based DCPAS system for the detection of ammonia absorption features in the near-infrared using a resonant acoustic detection module. We show that the unique properties of the EO comb generator and its flexibility in configuring the mapping of the optical spectrum to the acoustic spectrum allow resolving spectral signatures with detailed spectral information.
## 2 Material and methods
### Measurement set-up
Figure 1 depicts the block diagram of our experimental set-up. It is composed of a dual OFC generated by electro-optic modulation of a CW semiconductor laser diode (LD) that is used as the excitation of a standard photoacoustic detection module.
The purpose of the dual EO comb generator is to provide broadband detection of target gases using any standard or custom photoacoustic detection module. According to the background theory of DCS and DCPAS, that can be found on references [17] and [18, 19] respectively and are summarized in the next section, the target gas sample will be excited by a signal whose optical spectrum can be defined by a set of frequencies \(\nu_{n}=\nu_{0}+n\cdot\Delta\nu\), with \(n=0,\pm 1,\pm 2...\), and the photoacoustic response will have spectral components at frequencies \(f_{n}=f_{0}+n\cdot\Delta f\), being the amplitude of each n tone proportional to the absorption of the gas sample at the frequency \(\nu_{n}\).
In our set-up the photoacoustic detection module is a gas cell consisting of an aluminum cylinder resonator (a pipe), with a 5-mm input diameter and a length of 88 mm, terminated with two cylindrical buffer volumes as acoustic filters. The acoustic detector is a microphone (Knowles FG-23329), placed at the center of the resonator, and connected to an amplifier (Femto DHPVA-101). In this work, the acquisition and signal processing is based on a lock-in amplifier (Zurich Instruments HF2UI) with two inputs, one for the acoustic signal and the other
Figure 1: Block diagram of the experimental set-up. \(\nu_{0}\) and \(\Delta\nu\) are the free parameters of the optical excitation spectrum. \(f_{0}\) and \(\Delta f\) are the free parameters of the generated acoustic signal.
for a reference of the optical beat note signal from the dual EO comb generator using a photodiode (Thorlabs PDA10CF-EC). The measurements were made with a gas flow of 60 ml/min of ammonia at 5% concentration through the gas cell at atmospheric pressure.
### Dual-comb photoacoustic spectroscopy (DCPAS) method
In DCPAS, as represented in figure 2, a multiheterodyne signal is generated using two mutually coherent OFCs. An OFC is a pulsed light source with an optical spectrum consisting of many equidistant monochromatic tones (comb teeth). The optical characteristics of an OFC are given by the pulse repetition rate, which also sets the comb teeth separation frequency (\(f_{\text{REP}}\)), and the carrier envelope offset frequency (\(f_{\text{CEO}}\)), so the optical frequency of any comb line N (with N a natural number) within the optical span of the OFC can be expressed as \(\nu_{N}=f_{\text{CEO}}+N\,f_{\text{REP}}\). The multiheterodyne signal is generated when two OFCs with slightly different \(f_{\text{REP}}\) are combined and propagated through the sample. An additional condition is that the two OFCs must be mutually coherent, that is to say, there is a constant mismatch between the \(f_{\text{CEO}}\) of the two OFCs. As a result, the optical intensity of the combined combs contains a multiheterodyne term corresponding to the beat notes of each (increasingly offset) pair of comb teeth. In the optical domain, because the difference of the repetition rate between the OFCs (\(\Delta f_{\text{REP}}\)) is very small compared to the repetition rate (\(f_{\text{REP}}\)), we can assume the same optical frequency \(\nu_{n}\) for each pair of comb teeth. In the temporal domain each optical tone with frequency \(\nu_{n}\) is amplitude modulated at the frequency \(f_{n}\), which is the beating frequency of the corresponding pair of comb teeth.
For DCPAS, the optical span of the dual comb is centered around the absorption features of the target molecules, while the multiheterodyne spectrum span is centered within the acoustic bandwidth of the detection module (i.e. within the bandwidth of the microphone in the case of a non-resonant gas cell). Then, when the gas sample absorbs light from the dual comb source it produces a photoacoustic response that reproduces the absorption profile of the sample, mapping the absorption at each optical frequency \(\nu_{n}\) to a different acoustic frequency \(f_{n}\).
Figure 2: Principle of DCPAS: Two OFCs that are very similar in their optical frequencies (\(\Delta f<f_{\text{REP}}\)) are heterodyned within the gas sample. As OFC1 and OFC2 are synchronized (their \(f_{\text{CEO}}\) are locked), acoustic tones are generated with a line spacing equal to \(\Delta f\) and with an amplitude that reproduces the absorption profile of the sample.
In this paper we focus on the investigation of the EO comb generation technique for broadband PAS due to its unique ability to easily adjust the operation parameters depending on the specific application and on the acoustic detection module. Electro-optic comb generators have proved important advantages over conventional mode-locked laser-based architectures. Most relevant advantages for DCPAS are exquisite simplicity, tailored spectral coverage and optical resolution (v\({}_{0}\), \(\Delta\)v), frequency agility, high acquisition rates, and the inherently high mutual coherence between combs, that enables mHz intermode beat note linewidths [19, 29] and virtually unmatched spectral compression ratios (\(\Delta\)v/ \(\Delta\)f). The details of the dual EO comb generator used in our experiments are described in section 3. It will allow the configuration of: v\({}_{0}\), the central optical frequency of the excitation light source; \(f_{0}\), the frequency of the acoustic signal generated by the absorption of the gas sample at v\({}_{0}\); \(\Delta\)v, the optical frequency difference between consecutive tones of the excitation light source (optical resolution); and \(\Delta\)f, the frequency difference between acoustic tones.
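As a minimal numerical sketch of the optical-to-acoustic mapping described above (values follow the ammonia experiment of Section 4; variable names are ours):

```python
import numpy as np

C = 299_792_458.0                    # speed of light, m/s

def dual_comb_grids(lambda0_nm=1531.6, delta_nu_hz=5e9,
                    f0_hz=1820.0, delta_f_hz=5.0, n_teeth=7):
    """Return the optical frequencies nu_n = nu_0 + n*delta_nu and the
    acoustic beat frequencies f_n = f_0 + n*delta_f of the dual comb."""
    nu0 = C / (lambda0_nm * 1e-9)                     # ~195.7 THz
    n = np.arange(-(n_teeth // 2), n_teeth // 2 + 1)  # e.g. -3 ... +3
    optical_hz = nu0 + n * delta_nu_hz
    acoustic_hz = f0_hz + n * delta_f_hz
    return optical_hz, acoustic_hz

# With the default settings the acoustic tones fall at 1805, 1810, ..., 1835 Hz,
# all inside the ~180 Hz-wide resonance of the cell centred at 1820 Hz.
```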
## 3.- System implementation and characterization
### 3.1.- Electro-optic dual comb generator
The dual comb is obtained from a single near-infrared continuous wave DFB laser diode (LD Laser in Fig. 3) whose output is split to generate two OFCs and then recombined. This guarantees intrinsically high mutual coherence since both OFCs are generated from the same CW LD. The central optical frequency of the resulting dual comb (v\({}_{0}\)) is fixed by the LD wavelength, which can be selected by tuning the LD current and temperature.
Each OFC is generated using one electro-optic phase modulator (PM). Each PM is driven by a different signal generator (SG12 and SG22 in Fig. 3), both set to a very close frequency in the GHz range. The phase modulators generate lower and upper sidebands (comb teeth) around the input optical tone, depending the number of modes and their relative amplitudes on the modulation index (RF power). This configuration is the simplest and most straight forward method for EO comb generation and provides OFCs with a reduced number of teeth and poor flatness, however better comb characteristics in terms of number of comb teeth and flatness can be achieved cascading EO modulators [29]. In our set-up the number of comb teeth we generate is in the order of 10, this number has been demonstrated as an optimum compromise between the quality of the concentration retrieval targeting a single spectral signature and the energy per tone, taking into account a limit on the available total power and the required signal to noise ratio [30].
Electro-optic OFCs have other important advantages for DCPAS. One of the advantages is the relatively large mode spacing (compared with mode-locked laser OFCs) and the flexibility to set this parameter by adjusting the modulation frequency of the PMs. In our set-up we make the frequency of SG12 equal to \(\Delta\)v, which is the desired optical spectral resolution, and the frequency of SG22 equal to \(\Delta\)v + \(\Delta\)f, so that each comb tooth pair provides a single independent beating note with a frequency proportional to \(\Delta\)f.
The flexibility of EO comb generation extends not only to the optical-to-acoustic conversion bandwidth, but also to the optical-to-acoustic central frequency conversion, which can be
adjusted independently. This is done by optical frequency shifting the OFCs using an acousto-optic modulator (AOM) placed at each branch of the dual-comb generator, each driven by a different signal generator (SG11 and SG21 in Fig. 3) set to a very close frequency in the tens of MHz range. In our set-up we make the frequency of SG11 equal to 40 MHz, which is the working frequency of the AOMs we used, and the frequency of SG21 equal to 40 MHz + \(f_{0}\). Because the frequency shift is very small compared with the laser frequency (40 MHz compared with \(\nu_{0}=195.7\) THz), we can assume the same optical central frequency at the output of the dual comb generator (\(\nu_{0}\)) and a beating term that generates a photoacoustic response at the frequency \(f_{0}\), which is set according to the detection module we use.
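A sketch of how the four generator frequencies can be derived from the desired spectroscopic parameters (40 MHz is the fixed AOM carrier; names are ours):

```python
def generator_frequencies(delta_nu_hz, delta_f_hz, f0_hz, aom_carrier_hz=40e6):
    """Map (delta_nu, delta_f, f0) to the four RF generator settings."""
    return {
        "SG12_pm_hz": delta_nu_hz,                 # comb 1 tooth spacing
        "SG22_pm_hz": delta_nu_hz + delta_f_hz,    # comb 2 tooth spacing
        "SG11_aom_hz": aom_carrier_hz,             # comb 1 frequency shift
        "SG21_aom_hz": aom_carrier_hz + f0_hz,     # comb 2 frequency shift
    }

# Example of Sec. 4: generator_frequencies(5e9, 5.0, 1820.0)
# -> SG12 = 5 GHz, SG22 = 5 GHz + 5 Hz, SG11 = 40 MHz, SG21 = 40 MHz + 1820 Hz
```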
Mutual coherence between combs is obtained because they share the same optical input and the same RF time base, since all signal generators share the same clock signal. Mutual coherence is a requirement to spectrally resolve the photoacoustic comb, so that each dual comb tooth pair provides a single independent beating frequency. The two OFCs are ultimately combined and divided again into two paths, one of which is sent to a photodiode whose output signal is used as a reference of the multiheterodyne signal and the other provides the light source to the PAS detection module. Figure 3 depicts the components of the dual EO comb generator and its caption specifies the commercial reference of all the components in the set-up.
### Setting the dual EO comb generator for the detection module
The gas cell consists of an aluminum cylinder resonator (a pipe), with a 5-mm diameter and a length of 88 mm, terminated with two cylindrical buffer volumes as acoustic filters. Its theoretical resonance frequency is around 1.8 kHz [33]. To characterize the acoustic profile of the detection module, the output of the microphone placed at the center of the gas cell was connected to a lock-in amplifier (Zurich Instruments HF2U) whose frequency reference is the
Figure 3: Dual EO comb generator set-up: (SG) RF signal generator, (IN) Input port for the laser diode (LD) with polarization control, (AMP) Optical fiber amplifier, (AOM) Acousto-optic modulator (G&H SFO2732), (PM) Ultralow V\(\pi\) phase modulators (EOSPACE PM-5K4-10/ PM-5SES-20), (TIA) Transimpedance amplifier, (REF) Comb reference output, (OUT) Optical dual comb output, (COL) Fiber collimator. All components are fiber-coupled and polarization matching is achieved by polarization maintaining components along the setup. The LD Qphotonics QDFBLD-1530-20 was used for ammonia detection.
same as the RF signal generators of the EO comb. We turned off the phase modulators of the dual EO comb generator so that a single tone heterodyne signal is obtained at the difference of the frequencies of the AOMs (\(f_{0}\)) and we set the laser current to emit within an ammonia absorption line (around 1531.7 nm). Figure 4.a shows the amplitude of the photoacoustic signal when the parameter \(f_{0}\) is tuned from 1700 Hz to 1950 Hz. It can be seen that the output signal depends on the acoustic response of the gas cell, showing its resonance frequency at 1820 Hz and a Q factor equal to 10. Based on this result, it is concluded that for our detection module, the parameter \(f_{0}\) has to be set to \(f_{0}\) = 1820 Hz and the parameter \(\Delta f\) has to be set so that the bandwidth of the acoustic comb is narrower than 180 Hz. The optimum value of \(\Delta f\) will depend on the desired optical resolution (number of comb teeth within the optical spectral bandwidth of interest).
Keeping the PMs turned off, we set \(f_{0}\) = 1820 Hz (at the resonance frequency of the gas cell) and the laser current was swept from 80 mA to 105 mA, keeping a constant temperature of 26\({}^{\circ}\)C. As the output power of the LD depends on the current, also does the amplitude of the heterodyne beat note used to excite the sample, so we used the reference output of the dual EO comb generator (see figure 3) to normalize the signal from the microphone. This reference signal is obtained from the second channel of the lock-in amplifier. It can be seen in Figure 4.b that the absorption profile of the ammonia is reproduced by the amplitude of the photoacoustic signal as expected [34]. This result, fitted to the reference absorption profile of the ammonia obtained from the HITRAN database allowed us to characterize the emission wavelength of the LD with its injection current. It is important to highlight that the tuning range of the LD we used in this experiment is much lower than operation wavelength range allowed by our EO dual-comb generator as it is discussed later.
As a reference of the performance of the set-up we compute the normalized noise equivalent absorption (NNEA): The measured optical power at the output of the fiber collimator was 0.35 mW; the measured average value of the microphone signal when the LD wavelength was tuned to the center of the absorption peak was 615 \(\mu\)V; the standard deviation of the microphone signal when the LD wavelength was tuned out of the absorption line was 1.5 \(\mu\)V, then the signal-to-noise ratio (SNR) was 396; the lock-in amplifier settings provided an equivalent bandwidth of 0.67 Hz. Given the peak absorption coefficient from the HITRAN database equal to 1.1\(\cdot\)10\({}^{-3}\) cm\({}^{-1}\), the resulting NNEA was 1.2\(\cdot\)10\({}^{-9}\) W\(\cdot\)cm\({}^{-1}\cdot\)Hz\({}^{-1/2}\), which is a typical value for PAS.
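The quoted NNEA follows from the measured quantities as shown in this short sketch of the arithmetic:

```python
import math

alpha_peak = 1.1e-3     # peak absorption coefficient from HITRAN, cm^-1
P_opt      = 0.35e-3    # optical power at the collimator output, W
snr        = 396.0      # measured signal-to-noise ratio
bw         = 0.67       # equivalent lock-in detection bandwidth, Hz

alpha_min = alpha_peak / snr                  # minimum detectable absorption, cm^-1
nnea = alpha_min * P_opt / math.sqrt(bw)      # ~1.2e-9 W cm^-1 Hz^-1/2
print(f"NNEA = {nnea:.2e} W cm^-1 Hz^-1/2")
```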
Figure 4: (a) PAS signal obtained when the difference of AOM frequencies (\(f_{0}\)) is swept, PM are turned off and laser wavelength is tuned into an absorption line. (b) PAS signal obtained when the difference of AOM frequencies (\(f_{0}\)) is 1820 Hz, PM are turned off and LD wavelength is swept as its injection current is swept.
## 4.- Results
For broadband PAS we used our dual EO comb generator. Figure 5 shows the dual comb spectra measured with an optical spectrum analyzer (OSA, Yokogawa AQ6370B). The figure shows the spectrum of two dual combs of identical shape that only differ in their central wavenumber. These two dual comb spectra were obtained with identical configuration settings of the dual EO comb generator using a different input wavelength obtained at two different values of injection current of the LD. In this case the modulation frequency of SG12 was 5 GHz and the modulation frequency of SG22 was 5 GHz+5 Hz, both generators set with -16 dBm of RF power. These RF frequencies generate combs with a comb spacing of \(\Delta v=5\) GHz (equivalent to 0.167 cm\({}^{-1}\)). In this figure we can observe that the dual comb generator generates identical combs for given PMs parameters with a central frequency depending on the LD seed wavelength. Given the modulation depth (RF power), we obtain seven dual comb teeth within a -20 dB bandwidth, giving a spectral coverage of 30 GHz (1 cm\({}^{-1}\)) with a resolution of 5 GHz (0.167 cm\({}^{-1}\)).
As stated before, the difference of the modulation frequency of SG12 and SG22 is \(\Delta f=5\) Hz. With these settings the spectral compression ratio \(\Delta v/\Delta f\) equals \(10^{9}\), so that the bandwidth of the resulting acoustic comb is 30 Hz, small enough to fit within the bandwidth of our gas cell having set the difference of the frequencies of the AOMs (SG11 and SG21) to the resonance frequency of the gas cell (\(f_{0}=1820\) Hz). To recover the absorption profile of the gas sample when we use the dual EO comb as excitation source we followed the steps below. First, we measured the amplitude of the microphone signal with the lock-in amplifier at the frequencies \(f_{0}+n\Delta f\) (in this case 1805 Hz, 1810 Hz, 1815 Hz, 1820 Hz, 1825 Hz, 1830 Hz and 1835 Hz), then the amplitude of each acoustic mode is corrected taking into account the gas cell resonance profile obtained at calibration and represented in figure 4.a. Second, the amplitude of each mode is normalized with the corresponding reference comb obtained from the photodiode signal. Finally, each acoustic frequency is mapped to the optical domain given the EO comb generator parameters. Figures 6 and 7 represent the results for the dual combs in figure 5 along with the reference absorption profile obtained by sweeping the LD current (figure 4.b). It can be seen that the generated acoustic comb reproduced the absorption profile of the gas sample with a resolution given by the dual-comb source.
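A sketch of this three-step reconstruction, assuming a Lorentzian model of the calibrated cell response (array and function names are ours, and the default values follow the measurement of Fig. 6):

```python
import numpy as np

def cell_response(f_hz, f_res=1820.0, q=10.0):
    """Lorentzian approximation of the calibrated resonance of Fig. 4a."""
    return 1.0 / np.sqrt(1.0 + q**2 * (f_hz / f_res - f_res / f_hz) ** 2)

def reconstruct_absorption(mic_amps, ref_amps, n, nu0_cm=6529.07,
                           delta_nu_cm=0.167, f0=1820.0, delta_f=5.0):
    """mic_amps / ref_amps: lock-in amplitudes of the acoustic / reference
    tones at f_0 + n*delta_f; n: array of tooth indices (e.g. -3..3).
    Returns (wavenumbers, relative absorption)."""
    f = f0 + n * delta_f
    corrected = mic_amps / cell_response(f)      # step 1: resonance correction
    normalized = corrected / ref_amps            # step 2: power normalization
    wavenumber = nu0_cm + n * delta_nu_cm        # step 3: acoustic -> optical map
    return wavenumber, normalized
```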
Figure 5: Dual comb generated with the following configuration parameters: \(f_{0}=1820\) Hz, \(\Delta v=5\) GHz, \(\Delta f=5\) Hz: The frequency of the RF generators was 5 GHz (SG12), 5 GHz + 5 Hz (SG22), 40 MHz (SG11) and 40 MHz + 1820 Hz (SG21). The center of the spectrum \(v_{0}\) depends on the LD emission wavelength that depends on its injection current: 94.3 mA (blue), and 89 mA (red).
Given the versatility of the EO comb we are able to increase the optical resolution of the measurements by reducing the modulation frequency of the PMs. It is also possible to generate a higher number of modes within the same spectral bandwidth increasing the modulation depth of the PMs. As example, figure 8.a shows the reference comb when the RF power applied to the PMs is increased from -16 dBm (RF power applied for the results represented in figures 5 to 7) to -10 dBm. It can be seen that the amplitude of the comb is reduced as the available total optical power remains constant. As a tradeoff between spectral resolution and SNR, which mainly depends on the optical power at each frequency, we have considered that 10 optical frequencies within the absorption feature offers enough spectral resolution to reconstruct its shape. Figure 8 shows the DCPAS results when the dual EO comb generator is configured to obtain this optimal spectral resolution, the configuration parameters are detailed in the figure caption.
Figure 6: DCPAS results with the following configuration parameters: \(f_{0}\) = 1820 Hz, \(\Delta\)v = 5 GHz and \(\Delta\)f = 5 Hz. (a) The dots are the amplitudes of the reference heterodyne signals measured at the reference photodiode of the EO comb generator. (b) The dots represent the amplitudes of the tones of the photoacoustic signal detected by the microphone. (c) The dots represent the normalized photoacoustic signal after optical - acoustic mapping: In this case \(v_{0}\) = 6529.07 cm-1 (I\({}_{\text{LASER}}\) = 89 mA) and \(\Delta\)v = 0.17 cm-1 (5 GHz). The measured absorption profile of ammonia is represented as reference.
Figure 7: DCPAS results with the following configuration parameters: \(f_{0}\) = 1820 Hz, \(\Delta\)v = 5 GHz and \(\Delta f\) = 5 Hz. (a) The dots are the amplitudes of the reference heterodyne signals measured at the reference photodiode of the EO comb generator. (b) The dots represent the amplitudes of the tones of the photoacoustic signal detected by the microphone. (c) The dots represent the normalized photoacoustic signal after optical - acoustic mapping: In this case \(v_{0}\) = 6528.82 cm-1 (\(I_{\text{LASER}}\) = 94.3 mA) and \(\Delta\)v = 0.17 cm-1 (5 GHz). The measured absorption profile of ammonia is represented as reference.
## 5 Discussion and conclusion:
In this paper we have demonstrated a DCPAS set-up for ammonia detection in the near-infrared using an electro optic dual comb generator. The proposed method permits broadband PAS virtually with any standard or custom detection module and the simultaneous interrogation of the whole spectral range with an easily configurable (very high) optical resolution.
In our system, each comb is generated with a single PM whose input optical frequency is slightly shifted with an AOM. This simple architecture and particular generation method provide us with the ability to spectrally interrogate an absorption signature of the target gas not only, as introduced above, with variable resolution but, most importantly, to perform the
Figure 8: DCPAS results with the following configuration parameters: \(f_{0}=1820\) Hz, \(\Delta\nu=2.5\) GHz and \(\Delta f=5\) Hz. (a) Reference comb (b) Photoacoustic tones (c) Normalized photoacoustic signal after optical - acoustic mapping: \(\nu_{0}=6528.86\) cm-1 and \(\Delta\nu=0.08\) cm-1 (2.5 GHz). The measured absorption profile of ammonia is represented as reference.
optical to acoustic mapping so that all the acoustic modes are generated within the bandwidth of the photoacoustic detection module.
One of the main features of the dual comb system employed is that the two combs generated by the EO modulators typically exhibit relatively narrow spectral bandwidth compared to other more complex comb platforms. This is an advantage for PAS since the available optical power per tooth can be maximized, thus optimizing the signal-to-noise ratio without the need for optical filtering. Compared to PAS systems that use a single optical frequency to excite the sample, when the absorption line is sampled at several frequencies, even if the center frequency of the laser shifts, the shape of the line can be determined from the spectrum of the acoustic signal. This relaxes the need for laser frequency stabilization and the need to control the pressure and temperature of the sample. Interfering absorptions can also be identified.
In this work we have demonstrated the DCPAS technique in the near infrared, where EO components are readily available, their technological maturity is very high and all fiber optic components can be used. Recent advances in EO comb generation architectures in the near infrared, also relevant for PAS, focus on increasing compactness, spectral bandwidth and mutual coherence [11][35]. It is also worth mentioning that the greatest potential of this technique is its direct application in any other region of the spectrum thanks to the indirect measurement of the absorption through the photoacoustic response. In fact, the current implementation of the dual EO comb generator can operate over a wide range of wavelengths and other EO architectures can be used to operate from the visible to the mid infrared [36]. Besides this, frequency conversion techniques can also be employed to directly and easily shift the operation range of EO comb generators to the mid-infrared range, which is extremely interesting for gas detection and identification [37].
The system whose operation has been analyzed in the previous paragraphs has been experimentally validated for the reconstruction of the absorption profile of ammonia around a wavenumber of 6529 cm-1 within a spectral bandwidth of 1 cm-1 and variable resolution, finding 2.5 GHz (0.08 cm-1) to be a good compromise between spectral resolution and power per comb line. We have also validated the dual EO comb generation technique to easily match the frequencies of the multiheterodyne tones within the band of resonance of the gas cell, in this case a tube resonator with a resonance frequency of 1820 Hz and a Q factor equal to 10. The results obtained corroborate not only the technical feasibility of the method, but also its great potential for future developments that incorporate more sophisticated photoacoustic detection modules.
## Acknowledgements
This work is supported by the State Research Agency of Spain under grant PID2020-116439GB-100 and by Madrid Government under personnel grant PEJ-2020-AI/TIC-19407.
|
2306.11464 | One-to-Many Spectral Upsampling of Reflectances and Transmittances | Spectral rendering is essential for the production of physically-plausible
synthetic images, but requires introducing several changes in the content
generation pipeline. In particular, the authoring of spectral material
properties (e.g., albedo maps, indices of refraction, transmittance
coefficients) raises new problems. While a large panel of computer graphics
methods exists to upsample an RGB color to a spectrum, they all provide a
one-to-one mapping. This limits the ability to control interesting color
changes such as the Usambara effect or metameric spectra. In this work, we
introduce a one-to-many mapping in which we show how we can explore the set of
all spectra reproducing a given input color. We apply this method to different
colour changing effects such as vathochromism -- the change of color with
depth, and metamerism. | Laurent Belcour, Pascal Barla, Gael Guennebaud | 2023-06-20T11:39:01Z | http://arxiv.org/abs/2306.11464v2 | # One-to-Many Spectral Upsampling
###### Abstract
Spectral rendering is essential for the production of physically-plausible synthetic images, but requires to introduce several changes in the content generation pipeline. In particular, the authoring of spectral material properties (e.g., albedo maps, indices of refraction, transmittance coefficients) raises new problems. While a large panel of computer graphics methods exists to upsample a RGB color to a spectrum, they all provide a one-to-one mapping. This limits the ability to control interesting color changes such as the Usambara effect or metameric spectra. In this work, we introduce a one-to-many mapping in which we show how we can explore the set of all spectra reproducing a given input color. We apply this method to different colour changing effects such as vathochromism - the change of color with depth, and metamerism.
+
Footnote †: journal: Computer Graphics and Jahn Wiley & Sons Ltd
## 1 Introduction
Spectral rendering has been increasingly used in recent years, due to raising expectations in photo-realism in cinematography [1], or to applications that require predictive results such as in architecture. However, the generation of spectral material properties presents a challenge for artists and designers [11, 12]. To ease the edition of reflectance and transmittance spectra, several spectral upsampling methods have been introduced in computer graphics [14, 15, 16, 17, 18, 19, 20]. They produce spectra from colors, ensuring that physical bounds are achieved (e.g., reflectances must lie in the \([0,1]\) range). All of these methods are restricted to produce a _one-to-one_ conversion: one RGB triplet converts to a single spectrum.
This limitation restricts the possibilities that spectral rendering offers. Indeed, unlike RGB materials, spectral materials offer the possibility to produce subtle color effects, such as metamerism - a change of color due to different illuminants [22]. In the optics community, Metameric blacks [24, 25] have been introduced to explore the space of metamers. In this approach, metameric spectra achieving a given desired color are sampled from a null-space in the target color space. Unfortunately, this requires manipulating physical constraints in a high dimensional null-space, which significantly complicates artistic control. Most importantly, this null-space approach is not easily adapted to deal with non-linear color changes, such as those observed in the Usambara effect (see Figure 1 (a)) - a surprising change of color due to the path length travelled by light in tourmaline gems.
In this paper, we introduce a novel _one-to-many_ approach that enables artists to design non-generic spectra with controlled color effects. The key idea is to build reflectance or transmittance spectra using a small set of basis functions forming a partition of unity (PU), and to express them in chromaticity space. We primarily target non-linear effects: our representation allows us to find many spectra that achieve the same target chromaticity at a unit optical depth, while providing control over the chromaticities at further depths. This is shown in Figure 1(b,c) for a pair of spectra.
A key observation on which we elaborate in Section 3 is that a PU spectral representation is linked to generalized barycentric coordinates in chromaticity space. As demonstrated in Section 4, exploring all the possible barycentric coordinates reconstructing the same target chromaticity point is equivalent to exploring the space of all spectra producing the same chromaticity when integrated with respect to color matching functions of the human visual system. We then use this geometric analogy for the design of the PU basis in Section 5, where we show how to strike a balance between spectral smoothness and color expressivity.
With our one-to-many spectral upsampling approach, we are able to generalize the Usambara effect to any non-generic spectrum that exhibits changes of color with depth, which we suggest to call _vathochromism_, derived from ancient Greek _vathos_ (depth) and _chroma_ (colour).2 We show in Section 6.1 how to build a parametric system in which a user can pick spectra with specific constraints (such as reproducing two given chroma for different optical depths). The same representation also provides control over metamerism, as shown in Section 6.2. We further discuss the differences with Metameric Blacks in Section 7.
Footnote 2: We reserve the usage of the term “Usambara effect” for the typical green-to-red color shift observed in Tourmaline gemstones.
## 2 Previous Work
### Color changes in Nature
Many different kinds of natural materials exhibit changes of color, depending on the angle of view (goniochromism), on temperature (thermochromism) or exposition to light (photochromism) for instance. In all those cases, the reflected or transmitted spectrum is itself changed either due to an alteration of the material itself, or to viewing conditions. In this paper, we are instead interested in materials that exhibit color changes despite the fact that their spectral reflectance or transmittance does _not_ change.
**Metamerism.** Common examples of such materials are those that change color with a change of illumination, called metamers. Two metameric materials can look the same under one illuminant, but will differ when lit by another illuminant. This is explained by the fact that the human visual system integrates the product of light and material on photo-receptors, which is a many-to-one mapping. Another related example of the impact of the illuminant on the appearance of objects is the _Alexandrite effect_ [10]. Alexandrite gems are known to change from green when lit by sunlight to red when lit by candle light. This particular effect has been used in computer graphics by Bergner et al. [1] for visualization purposes.
**Usambara effect.** The Usambara effect was first described for a particular tourmaline found in the Umba valley in Tanzania [10]. It was described as a change of color (from green to red) with an increase of the optical depth of the material. It was later found that other materials (such as topaz and amber) display such behaviour [1, 12]. In this work, we use the term _vathochromism_ for such changes of color with depth.
### Computing Metamers in Optics
**Metameric blacks.** One way to generate a pair of metameric spectra is to add to a first spectrum a spectral curve that corresponds to a black color (i.e., a zero triplet) in the target color space [24]. From the point of view of linear algebra, a metamer is then a point in the null-space of the color space matrix [23, 14]. While this formalism permits the generation of arbitrary metamers, it requires tracking hard constraints: a reflectance spectrum must take its values in the \([0,1]\) range. With finely-discretized spectra, those constraints generate a convex hull of valid spectra in a high dimensional space. Alternate methods can avoid this dimensionality issue by using a blending of measured spectra [15, 16]. However, this comes at a cost: each reconstructed spectrum is necessarily within the convex hull of the measured spectra. In particular, one can only reproduce luminance within this convex hull.
**Applications.** Metameric Blacks have been successfully used for camera calibration [1], reflectance acquisition [17, 18, 2], or printing [19]. However, this approach is too limiting for the artistic control of spectral assets, which is our main focus in this work. Furthermore, working with discretized spectra has the additional drawback that spectral maxima can only occur at spectral bin locations, potentially preventing interesting effects.
### Spectral Representations in Computer Graphics
**Spectral upsampling.** In computer graphics, the use of a spectral renderer requires converting between colors and spectra [10, 11, 23]. Usually, the aim is to convert RGB textures (such as albedo maps, environment maps) to spectral textures with one spectral curve per texel - a _one-to-one mapping_ [2, 21, 22, 23, 24, 25]. The difference between those approaches mostly resides in how spectra are built. For instance, [26] and [12] optimize smooth spectra, [27] and [13] use a database to project colors, while [10] build a parametric family of spectra. All these methods ensure that the resulting upsampled spectra are physically-plausible: they remain in the \([0,1]\) range along the spectral dimension.
**Spectral Compression.** The storage of spectral curves raises additional difficulties. Parametric models (such as the one of [1]) limit storage requirements, even allowing for on-the-fly conversion of RGB assets. However, they severely restrict the family of spectra that can be represented. An alternative is to decompose spectra using moments [1]. With this approach, it is possible to reconstruct a large family of spectra while keeping memory requirements in check. The storage of spectra is orthogonal to our work, as we could choose to use any compression method to store the spectra produced by our approach.
**Fluorescence.** Spectra defined in the visible range can be extended to incorporate fluorescence effects [10]. While it requires dedicated rendering algorithms [13], it expands the range of achievable appearances [13]. While compact representations for fluorescent spectra have been recently introduced in computer graphics (e.g., [13]), we restrict our approach to spectra in the visible range and put fluorescence aside.
### Scope of this Work
Our goal is to extend the computer graphics toolbox with a _one-to-many_ spectral upsampling method tailored to reflectance and transmittance. Contrary to previous work in optics, we do not rely on convex combinations of measured spectra, since our focus is on the artistic control of color-changing effects, for vathochromism and metamerism alike. As described in the next section, we overcome the difficulties raised by the null-space approach of Metameric Blacks by relying on a spectral Partition of Unity.
## 3 Spectral Partition of Unity
In this section, we use a Partition of Unity to define a space of smooth spectra and show how these spectra are related to generalized barycentric coordinates in chromaticity space.
**Partition of Unity.** A Partition of Unity (PU) is a set of \(K\) basis functions \(B_{k}:U\rightarrow\mathbb{R}\) with \(k\in[0,K-1]\) such that:
\[\sum_{k}B_{k}(x)=1,\forall x\in U. \tag{1}\]
We can use a weighted sum of these basis functions to reconstruct or approximate functions. A notable property of a PU is that bounded weights yield bounded reconstructed functions:
\[\forall k\in[0,K-1],\ w_{k}\in[0,1]\Rightarrow f(x)=\sum_{k}w_{k}B_{k}(x)\in[ 0,1]. \tag{2}\]
**Reconstructing Transmission Spectra.** We use a PU created from non-uniform B-splines to produce reflectance or transmittance spectra. The input domain is the set of visible wavelengths \(U=[U_{0},U_{1}]=[385\text{nm},700\text{nm}]\). The energy conservation constraint on reflectance and transmittance spectra is readily met through Equation 2. We will discuss the choice of the number \(K\) of B-spline basis functions, their degree and the positions of their knots later in Section 5. In this section, for the purpose of illustration, we rely on \(K=5\) bases of degree 2 and uniformly spaced knots with knots at the boundaries of \(U\) having a multiplicity of 3, as shown in Figure 2(left). We also work with the sRGB color space.
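As a concrete illustration (not part of the paper's implementation), the following minimal Python sketch builds such a clamped B-spline partition of unity with scipy and checks the properties of Equations 1 and 2; the variable names and the sampling grid are our own choices.

```python
# Minimal sketch: K = 5 quadratic B-spline basis functions forming a PU on U = [385, 700] nm.
import numpy as np
from scipy.interpolate import BSpline

K, degree = 5, 2
U0, U1 = 385.0, 700.0
# Clamped knot vector: uniformly spaced knots, boundary knots with multiplicity degree + 1 = 3.
interior = np.linspace(U0, U1, K - degree + 1)
knots = np.concatenate([[U0] * degree, interior, [U1] * degree])   # K + degree + 1 knots in total

lam = np.linspace(U0, U1, 631, endpoint=False)   # stay strictly inside U to avoid endpoint edge cases
basis = np.stack([BSpline(knots, np.eye(K)[k], degree)(lam) for k in range(K)])

assert np.allclose(basis.sum(axis=0), 1.0)       # partition of unity, Equation 1

w = np.random.default_rng(0).uniform(0.0, 1.0, K)  # arbitrary bounded coefficients
f = w @ basis                                      # reconstructed spectrum f(lambda)
assert 0.0 <= f.min() and f.max() <= 1.0           # boundedness, Equation 2
```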
**Geometric interpretation.** When integrated with respect to the CIE sensitivity functions \(\bar{x}(\lambda)\), \(\bar{y}(\lambda)\) and \(\bar{z}(\lambda)\) shown in Figure 2(right), each basis function corresponds to an XYZ color:
\[\mathbf{B}_{k}=\begin{bmatrix}B_{k,X}\\ B_{k,Y}\\ B_{k,Z}\end{bmatrix}=\int B_{k}(\lambda)\mathbf{s}(\lambda)\mathrm{d}\lambda, \tag{3}\]
with \(\mathbf{s}(\lambda)=[\bar{x}(\lambda),\bar{y}(\lambda),\bar{z}(\lambda)]^{\top}\). Due to the linearity of reconstruction, a weighted sum of PU basis functions yields an XYZ color that is a weighted sum of basis XYZ colors:
\[\mathbf{F}=\begin{bmatrix}F_{X}\\ F_{Y}\\ F_{Z}\end{bmatrix}=\int f(\lambda)\,\mathbf{s}(\lambda)\,\mathrm{d}\lambda=\sum_{k}w_{k}\mathbf{B}_{k}. \tag{4}\]
\(\mathbf{F}\) may then be converted to the xyY color space. Using Equation 4, we directly obtain its luminance \(F_{Y}=\sum_{k}w_{k}B_{k,Y}\). Its chromaticity \(\mathbf{c}\) is slightly more complicated. If we write \(|F|=F_{X}+F_{Y}+F_{Z}\) and similarly \(|B_{k}|=B_{k,X}+B_{k,Y}+B_{k,Z}\), it is given by:
\[\mathbf{c}=\frac{[F_{X},F_{Y}]^{\top}}{|F|}=\frac{\sum_{k}w_{k}\left[B_{k,X}, B_{k,Y}\right]^{\top}}{\sum w_{l}|B_{l}|}.\]
which may be rewritten as:
\[\mathbf{c} = \sum_{k}a_{k}\mathbf{b}_{k}, \tag{5}\] \[a_{k} = \frac{w_{k}|B_{k}|}{\sum w_{l}|B_{l}|}. \tag{6}\]
where the \(\mathbf{b}_{k}=\frac{[B_{k,X},B_{k,Y}]^{\top}}{|B_{k}|}\) denote basis chromaticities.
Our key observation is thus that the chromaticity \(\mathbf{c}\) of a spectrum given by a vector \(\mathbf{w}\) of basis coefficients is obtained as a linear combination of basis chromaticities \(\mathbf{b}_{k}\) where the weights \(a_{k}\) correspond to _homogeneous barycentric coordinates_. This is illustrated in Figure 3, where the basis chromaticities \(\mathbf{b}_{k}\) form a gamut of colors achievable through a given choice of basis functions \(B_{k}(\lambda)\). Depending on that choice, the gamut may only partially overlap the sRGB gamut: this means that there is no \(\mathbf{w}\) that can achieve a chromaticity outside the basis gamut. A vector \(\mathbf{w}\) with only two non-zero contiguous coefficients yields a unique chromaticity point on the gamut boundary, since then only a contiguous pair of barycentric coordinates is non-zero. However, in all other cases, there will be multiple coefficient vectors \(\mathbf{w}\) that map to the same chromaticity point \(\mathbf{c}\). This is because for \(K>3\), the set of \(a_{k}\) describes _generalized_ barycentric coordinates of \(\mathbf{c}\), and is thus _not_ unique. In the next section, we show how to invert this many-to-one mapping.
Figure 2: Left: An example set of \(K=5\) Partition of Unity (PU) basis functions of degree 2 with regularly spaced knots. Right: CIE sensitivity functions \(\bar{x}(\lambda)\), \(\bar{y}(\lambda)\) and \(\bar{z}(\lambda)\) in red, green and blue.
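Assuming tabulated CIE colour matching functions sampled on the same wavelength grid (they are not reproduced here), Equations 3-6 translate into a few lines of Python; this sketch and its helper names (`basis_colors`, `chromaticity_of`) are ours, with `basis` and `lam` taken from the previous sketch and `cmf` an assumed \((3,N)\) array of \(\bar{x},\bar{y},\bar{z}\) samples.

```python
# Sketch of Equations 3-6: XYZ basis colours, basis chromaticities and the w -> (c, F_Y, a) mapping.
import numpy as np

def basis_colors(basis, cmf, lam):
    """basis: (K, N) PU samples, cmf: (3, N) colour matching functions, lam: (N,) uniform grid in nm."""
    dlam = lam[1] - lam[0]
    B = basis @ cmf.T * dlam             # (K, 3) XYZ colour of each basis function, Equation 3
    Bnorm = B.sum(axis=1)                # |B_k| = B_kX + B_kY + B_kZ
    b = B[:, :2] / Bnorm[:, None]        # basis chromaticities b_k
    return B, Bnorm, b

def chromaticity_of(w, B, Bnorm):
    """Map basis coefficients w to chromaticity c, luminance F_Y and barycentric coordinates a."""
    F = w @ B                            # XYZ of the reconstructed spectrum, Equation 4
    a = w * Bnorm / (w @ Bnorm)          # generalized barycentric coordinates, Equation 6
    return F[:2] / F.sum(), F[1], a      # c (Equation 5), F_Y, a
```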
## 4 One-to-many mapping
Our goal in this section is to find the equivalence class of basis coefficients \(\mathbf{w}\) that yields a target chromaticity \(\mathbf{c}\) and luminance \(F_{Y}\). We do this in two stages: we first find the set of generalized barycentric coordinates that achieves the target chromaticity \(\mathbf{c}\); then we show how this maps to an equivalence class of basis coefficients, a subset of which achieves the target luminance \(F_{Y}\).
### Achieving chromaticity
A first condition is that \(\mathbf{c}\) must lie inside the basis gamut or on its boundary. The target chromaticity may then be expressed in terms of generalized homogeneous barycentric coordinates, using:
\[\begin{bmatrix}1&1&\cdots&1\\ b_{0,x}&b_{1,x}&\cdots&b_{K-1,x}\\ b_{0,y}&b_{1,y}&\cdots&b_{K-1,y}\end{bmatrix}\begin{bmatrix}a_{0}\\ a_{1}\\ \cdots\\ a_{K-1}\end{bmatrix}=\begin{bmatrix}1\\ c_{x}\\ c_{y}\end{bmatrix}, \tag{7}\]
with \(\mathbf{b}_{k}=[b_{k,x},b_{k,y}]^{\top}\) and \(\mathbf{c}=[c_{x},c_{y}]^{\top}\).
Since \(\mathbf{c}\) is in the basis gamut, there is at least one triplet of bases whose chromaticity coordinates define a triangle that contains \(\mathbf{c}\). Let's assume that these bases are the first three (one can always re-order the bases to yield such a configuration). One solution to Equation 7 is then \([a_{0},a_{1},a_{2},0,\cdots,0]^{\top}=[\mathbf{a}_{T}^{\top},\mathbf{0}]^{\top}\), with \(\mathbf{a}_{T}\) the vector of triangular barycentric coordinates. Other solutions may then be obtained by adding perturbations to that vector, which may be written \([a_{0}-\Delta a_{0},a_{1}-\Delta a_{1},a_{2}-\Delta a_{2},a_{3},\cdots,a_{K-1}]^{\top}=[(\mathbf{a}_{T}-\Delta\mathbf{a})^{\top},\mathbf{a}_{F}^{\top}]^{\top}\), where \(\mathbf{a}_{F}\) is a (K-3)D vector of barycentric coordinates that represent degrees of freedom to navigate the space of solutions, and \(\Delta\mathbf{a}\) is a 3D offset vector used to preserve the homogeneous barycentric coordinate constraint.
Let us now rewrite Equation 7 with the following matrix form: \([T\ F]\mathbf{a}=[1,\mathbf{c}^{\top}]^{\top}\), where \(T\) (resp. \(F\)) is the matrix corresponding to the first 3 (resp. last \(K-3\)) columns of the left hand side matrix, and \(\mathbf{a}\) is the vector of generalized barycentric coordinates. Since we also have \(T\mathbf{a}_{T}=[1,\mathbf{c}^{\top}]^{\top}\), it follows that:
\[T\Delta\mathbf{a}=F\mathbf{a}_{F}. \tag{8}\]
Now in order to navigate the space of solutions, we need bounds on \(\mathbf{a}_{F}\). Because all its coefficients are barycentric coordinates, we already know that \(\mathbf{0}\leq\mathbf{a}_{F}\leq\mathbf{1}\), with the lower bound trivially corresponding to a zero offset vector (see Equation 8). The upper bound is not a sufficient condition since we must also make sure that \(\mathbf{0}\leq\mathbf{a}_{T}-\Delta\mathbf{a}\leq\mathbf{1}\), or in terms of the offset vector: \(\mathbf{a}_{T}-\mathbf{1}\leq\Delta\mathbf{a}\leq\mathbf{a}_{T}\). Using Equation 8 yields the following vector inequality:
\[\mathbf{a}_{T}-\mathbf{1}\leq M\mathbf{a}_{F}\leq\mathbf{a}_{T}, \tag{9}\]
where \(M=T^{-1}F\) is a \(3\times(K-3)\) matrix. Note that since \(M\) may contain negative coefficients, the lower bound in Equation 9 may end up being used to define the upper bound on \(\mathbf{a}_{F}\).
We rely on an iterative approach to characterize the whole set of solutions by considering each coefficient of \(\mathbf{a}_{F}\) in turn. Let us start with \(a_{3}\), and assume that \(a_{4}=\cdots=a_{K-1}=0\). Equation 9 now becomes \(\mathbf{a}_{T}-\mathbf{1}\leq\mathbf{m}_{0}a_{3}\leq\mathbf{a}_{T}\), with \(\mathbf{m}_{0}=[m_{00},m_{10},m_{20}]^{\top}\) the first column of \(M\). The constraints on offsets are then met by navigating \(a_{3}\) in the \([0,a_{3}^{\max}]\) interval, with the upper bound given by:
\[a_{3}^{\max}=\min_{i\in\{0,1,2\}}\frac{a_{i}+H(m_{i0})-1}{m_{i0}}. \tag{10}\]
The Heaviside function \(H(m)\) is used to take the sign of each matrix component into account: when \(m\leq 0\) (resp. \(m>0\)), the lower (resp. upper) bound is considered. Having chosen the \(n-1\) first coefficients of \(\mathbf{a}_{F}\), the upper bound for the \(n\)th coefficient - assuming the remaining ones are zero - is obtained with a similar formula:
\[a_{3+n}^{\max}=\min_{i\in\{0,1,2\}}\frac{a_{i}+H(m_{in})-1-\sum_{l=0}^{n-1}m_{il}\,a_{3+l}}{m_{in}}. \tag{11}\]
In the general case, a valid solution \(\forall n\in[0..K-4]\) must ensure:
\[a_{3+n}\leq\min_{i\in\{0,1,2\}}\frac{a_{i}+H(m_{in})-1-\sum_{l\neq n}m_{il}\, a_{3+l}}{m_{in}}. \tag{12}\]
For each vector \(\mathbf{a}_{F}\), the offset vector is computed by \(\Delta\mathbf{a}=M\mathbf{a}_{F}\), which yields a generalized homogeneous coordinates vector \(\mathbf{a}\) that achieves the target chromaticity \(\mathbf{c}\). Figure 4 illustrates that process. A triangle that encloses \(\mathbf{c}\) is first selected; then the space of degrees of freedom \(\mathbf{a}_{F}\) is sampled randomly to yield a family of spectra.
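The iterative construction of Equations 10-12 can be sketched as follows; this is our own illustration rather than the authors' code, and it assumes the basis chromaticities `b` (from the previous sketch) have already been re-ordered so that their first three rows form a triangle containing the target `c`.

```python
# Sketch of Section 4.1: sampling one member of the equivalence class of barycentric coordinates.
import numpy as np

def sample_barycentric(b, c, rng):
    K = len(b)
    A = np.vstack([np.ones(K), b.T])                 # 3 x K matrix of Equation 7
    T, F = A[:, :3], A[:, 3:]
    aT = np.linalg.solve(T, np.array([1.0, c[0], c[1]]))   # triangular coordinates a_T
    M = np.linalg.solve(T, F)                        # M = T^{-1} F, shape 3 x (K - 3)
    aF = np.zeros(K - 3)
    for n in range(K - 3):
        bounds = [1.0]                               # barycentric coordinates stay in [0, 1]
        for i in range(3):
            m = M[i, n]
            if abs(m) > 1e-12:                       # Heaviside picks the relevant bound of Equation 9
                bounds.append((aT[i] + (1.0 if m > 0 else 0.0) - 1.0
                               - M[i, :n] @ aF[:n]) / m)
        aF[n] = rng.uniform(0.0, max(0.0, min(bounds)))   # Equations 10-11
    return np.concatenate([aT - M @ aF, aF])         # satisfies A @ a = [1, c_x, c_y]

# e.g. a = sample_barycentric(b, np.array([0.41, 0.42]), np.random.default_rng(1))
```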
### Achieving luminance
Given a vector of generalized barycentric coordinates \(\mathbf{a}\), we now need to invert Equation 6 to retrieve basis coefficients \(\mathbf{w}\). Since any pair of basis coordinates \((a_{i},a_{j})\) is related by an equation of the form \(a_{i}w_{j}|B_{j}|=a_{j}w_{i}|B_{i}|\), the corresponding basis coefficients span a \(K\)D line. If we pick an arbitrary non-zero barycentric coordinate - say \(a_{0}\) - then \(\mathbf{w}\) may be expressed as a function of \(w_{0}\):
\[\mathbf{w}(w_{0})=\begin{bmatrix}1\\ \frac{a_{1}|B_{0}|}{a_{0}|B_{1}|}\\ \vdots\\ \frac{a_{K-1}|B_{0}|}{a_{0}|B_{K-1}|}\end{bmatrix}w_{0}=Lw_{0},\ \ w_{0}\in\left(0,w_{0}^{\max}\right], \tag{13}\]
where the upper bound \(w_{0}^{\max}=\min\left\{1,\frac{a_{0}|B_{1}|}{a_{1}|B_{0}|},\cdots,\frac{a_{0}| B_{K-1}|}{a_{K-1}|B_{0}|}\right\}\) is set to ensure that \(\mathbf{0}\leq\mathbf{w}\leq\mathbf{1}\).
Figure 3: A PU basis \(B_{k}(\lambda)\) defines a gamut in chromaticity space ((a) orange polygon) where each vertex \(\mathbf{b}_{k}\) is a basis element. Any point outside that gamut (e.g., the black cross) is not achievable with the chosen basis, even though it might be inside the sRGB gamut (blue triangle). A spectrum defined by two non-null contiguous basis coefficients (e.g., plain red curve in (b) with individual PU contributions in dashed) yields a chromaticity on the basis gamut boundary (red point). More general spectra (e.g., blue curve in (b)) yield a chromaticity inside the basis gamut (blue point in (a)).
The \(K\)D line of solutions may now be restricted to a single solution by the target luminance constraint, which we write \(\mathbf{w}(w_{0})^{\top}\mathbf{B}_{Y}=F_{Y}\), with \(\mathbf{B}_{Y}=\left[B_{0,Y},\cdots,B_{K-1,Y}\right]^{\top}\) the vector of Y coefficients of basis colors.
Using Equation 13, the value of \(w_{0}\) that _potentially_ achieves the target luminance \(F_{Y}\) is:
\[w_{0}^{\star}=\frac{F_{Y}}{L^{\top}\mathbf{B}_{Y}}. \tag{14}\]
\(F_{Y}\) is effectively achieved if and only if \(w_{0}^{\star}\leq w_{0}^{\max}\). For that reason, only a (possibly empty) subset of barycentric coordinates \(\mathbf{a}\) allows the target luminance to be reached. This is shown in Figure 4: only fully-opaque barycentric samples and their associated spectra achieve \(F_{Y}\) in practice.
However, this subset may be enlarged. Indeed, relying on the bounded property of PU (Equation 2) remains conservative: in some instances, we may use basis coefficients greater than 1 and still obtain energy-conserving spectra. This means that \(\mathbf{w}\) may be scaled in post process to increase the luminance of the reconstructed spectrum. Assuming that \(F_{Y}\) is not achieved (i.e., \(\mathbf{w}(w_{0}^{\max})^{\top}\mathbf{B}_{Y}<F_{Y}\)), we may thus obtain a closer solution in terms of luminance by using:
\[\mathcal{W}(\mathbf{w})=\frac{\mathbf{w}(w_{0}^{\max})}{\max\left(f^{\max}, \frac{\mathbf{w}(w_{0}^{\max})^{\top}\mathbf{B}_{Y}}{F_{Y}}\right)}, \tag{15}\]
where \(f^{\max}=\max_{\lambda}f(\lambda)\), and \(1/f^{\max}\) represents the margin by which the spectrum \(f(\lambda)\) is allowed to be scaled.
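A possible implementation of Equations 13-14 is sketched below (our own helper, with the post-scaling of Equation 15 omitted for brevity); `Bnorm` and `BY` stand for the \(|B_{k}|\) and \(B_{k,Y}\) values computed from the basis colours.

```python
# Sketch of Equations 13-14: from barycentric coordinates a to basis coefficients w hitting F_Y.
import numpy as np

def coefficients_for_luminance(a, Bnorm, BY, FY):
    i0 = int(np.argmax(a))                      # any index with a_k > 0 can serve as reference
    L = (a / a[i0]) * (Bnorm[i0] / Bnorm)       # direction of the K-D line of solutions, Equation 13
    w0_max = 1.0 / L.max()                      # largest w_0 keeping every coefficient in [0, 1]
    w0_star = FY / (L @ BY)                     # Equation 14
    achieved = w0_star <= w0_max                # is the target luminance reachable on this line?
    return L * min(w0_star, w0_max), achieved
```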
Finally, it would be useful to know _a priori_ whether there exists at least one vector \(\mathbf{w}\) of basis coefficients that achieves both the target chromaticity and luminance. A _conservative_ solution is to rely on the vector \(\overline{\mathbf{w}}\) that maximizes luminance under the constraint given by Equation 7, then to check whether \(\mathcal{W}(\overline{\mathbf{w}})^{\top}\mathbf{B}_{Y}\geq F_{Y}\). The vector \(\overline{\mathbf{w}}\) is found by solving the following linear programming problem:
\[\overline{\mathbf{w}}=\max_{\mathbf{w}}\ \mathbf{w}^{\top}\mathbf{B}_{Y}, \tag{16}\] subject to \[\mathbf{0}\leq\overline{\mathbf{w}}\leq\mathbf{1} \tag{17}\] and \[A\,\overline{\mathbf{w}}=\mathbf{0}, \tag{18}\]
where \(A\) is obtained by rewriting Equation 7 in terms of \(\mathbf{w}\):
\[A=\left[T\ F\right]\operatorname{diag}(|\mathbf{B}|)-\begin{bmatrix}1\\ \mathbf{c}\end{bmatrix}|\mathbf{B}|^{\top}, \tag{19}\]
where we have used \(\mathbf{a}=\frac{\operatorname{diag}(|\mathbf{B}|)\,\mathbf{w}}{|\mathbf{B}|^{\top}\mathbf{w}}\) and \(|\mathbf{B}|=\left[|B_{0}|,\cdots,|B_{K-1}|\right]^{\top}\). An exact solution could be obtained by replacing \(\overline{\mathbf{w}}\) by \(\mathcal{W}(\overline{\mathbf{w}})\) in Equation 17, at the cost of a longer computation time.
Once we have found \(\overline{\mathbf{w}}\) (green spectrum in Figure 4), it is trivial to retrieve the corresponding barycentric coordinates \(\overline{\mathbf{a}}\) and degrees of freedom \(\overline{\mathbf{a}}_{F}\) (green diamond in Figure 4). We use \(\overline{\mathbf{w}}\) by default during editing (see the supplemental video).
## 5 Basis design
Until now, we have relied on a small number of basis functions (\(K=5\)) for illustration purposes. Increasing the number \(K\) of bases has the effect of increasing the size of the equivalence class. As shown in Figure 5, this is due to the basis gamut, which encompasses a larger area of the chromaticity diagrams when \(K\) is increased. This has two effects on the _expressivity_ of a given basis. First, any chromaticity in a given RGB gamut (we consider sRGB and Adobe Wide Gamut RGB in the following) can be achieved when it is encompassed by the basis gamut. Increasing the number
Figure 4: _One-to-many mapping. The target color is given by its chromaticity \(\mathbf{c}=\left[0.41,0.42\right]^{\top}\) and luminance \(F_{Y}=0.57\). Left: a triangle (in green) enclosing the target chromaticity \(\mathbf{c}\) (black dot) is picked among the basis gamut (in orange). Middle: the remaining two basis constitute degrees of freedom, which are randomly sampled using our iterative procedure based on barycentric coordinates. Observe the presence of boundaries in this barycentric space, which are required to keep basis coefficients in the \([0,1]\) range. Right: the equivalence class of spectra achieving \(\mathbf{c}\) are retrieved from the degrees of freedom (we use matching colors). Transparent barycentric samples (middle) and spectra (right) indicate that the target luminance \(F_{Y}\) is not achieved. The solution of maximum luminance in the equivalence class is shown with a green diamond in barycentric space (middle), and its spectrum is drawn in green (right)._
of bases extends the latter as shown in Figure 5 (top row). Second, chromaticities on the gamut boundary map to a single pair of non-zero barycentric coordinates; hence a basis gamut larger than the chosen RGB gamut ensures that these singular equivalence classes are avoided.
However, increasing the number \(K\) of bases cannot be done without limits, since plausible reflectance and transmittance spectra should be _smooth_. In addition, a smaller number of bases might be desirable for memory considerations.
In this section, building on the geometric interpretation of the previous sections, we design a set of PU basis functions that finds a trade-off between expressivity and smoothness constraints. We keep a degree of 2 throughout, as there is no need to ensure \(C^{2}\) (or higher) continuity and low-degree splines prevent over-fitting issues.
**Knots warping**: Besides increasing \(K\), we may also control the position of basis knots. To this end, we use a two-parameter family of warping functions to alter the uniform distribution of knots along the \(U=[U_{0},U_{1}]\) interval. We use a warping function \(C_{s,p}:[0,1]\rightarrow[0,1]\) introduced by [11]:
\[C_{s,p}(x)=\begin{cases}\frac{x^{c}}{p^{c-1}}&\text{if }\ x\in[0,p],\\ 1-\frac{(1-x)^{c}}{(1-p)^{c-1}}&\text{otherwise},\end{cases} \tag{20}\]
with \(c=\frac{2}{1+s}-1\). The two parameters \((s,p)\in\left[0,1\right]^{2}\) control the strength of warping and the position where most of the warping occurs. A sequence of warped knots \(\{\kappa_{k}\}\) is then produced using \(\kappa_{k}=U_{0}+C_{s,p}(u_{k})(U_{1}-U_{0})\), where the \(u_{k}\) form a uniform sequence of values in the \([0,1]\) range. Even though a set of \(K\) B-spline basis functions of degree 2 requires \(K+3\) knots, we ignore the first and last two since the boundary knots have a multiplicity of 3. Hence we obtain \(K-1\) knots, with \(\kappa_{0}=U_{0}\) and \(\kappa_{K-2}=U_{1}\) as desired.
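Assuming the exponent in Equation 20 is the constant \(c\) defined just above, the warped knot sequence can be sketched as follows (the function names and the handling of the 100 nm boundary offset discussed below are our own reading of the text, and \(0<p<1\), \(s<1\) is assumed).

```python
# Sketch of the knot warping of Equation 20 and of the resulting clamped knot vector.
import numpy as np

def warp(x, s, p):
    c = 2.0 / (1.0 + s) - 1.0
    x = np.asarray(x, dtype=float)
    lo = x <= p
    out = np.empty_like(x)
    out[lo] = x[lo] ** c / p ** (c - 1.0)
    out[~lo] = 1.0 - (1.0 - x[~lo]) ** c / (1.0 - p) ** (c - 1.0)
    return out

def warped_knots(K, s, p, U0=385.0, U1=700.0, degree=2, offset=100.0):
    u = np.linspace(0.0, 1.0, K - 1)                      # K - 1 warped knots spanning U
    kappa = U0 + warp(u, s, p) * (U1 - U0)
    # duplicated boundary knots, pushed 100 nm outside of U as in the text
    return np.concatenate([[U0 - offset] * degree, kappa, [U1 + offset] * degree])
```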
Figure 6 shows the effect of knots warping on \(K=7\) basis functions and the corresponding basis gamut, demonstrating that knots warping helps achieve a wider gamut without having to increase the number \(K\) of basis functions. The figure also shows that modifying the first and last two knots to be outside of the \(U\) interval has a negligible effect on the basis gamut, which is due to the small values of color matching functions around these boundaries (see Figure 2(right)). We choose to offset these boundary knots by 100nm outside of \(U\), which produces more physically-plausible results when observed outside of the visible range since the reconstructed spectra then gently fade to zero outside of \(U\).
**Expressivity-smoothness trade-offs**: We need to devise metrics to quantify the degree of expressivity of a set of basis functions, as well as the smoothness of the spectra it is able to produce, depending on the positions of its knots and the number \(K\) of bases.
An expressive basis requires a wide gamut that encompasses the chosen RGB gamut as much as possible. We thus rely on what we call the excess area \(\mathcal{A}\), which is the signed area between the basis and sRGB gamuts, normalized by the area between the horseshoe-shaped chromaticity gamut and the RGB gamut. This area is computed by tessellating the region between the RGB and basis gamuts into quads (see Figure 7(left)).
Basis smoothness is directly related to the smoothness of individual basis functions, which depends on both the number \(K\) of bases and their knots \(\{\kappa_{k}\}\). We compute the smoothness of a basis set as \(\mathcal{S}=\min_{k}\text{FWHM}_{k}\), where \(\text{FWHM}_{k}\) is the full width at half maximum of the \(k\)th basis.
As shown in Figure 7, for \(K=7\) bases and the sRGB gamut, the excess area \(\mathcal{A}\) and smoothness \(\mathcal{S}\) criteria evolve differently as a function of \((s,p)\), the parameters of the knots warping function. How this pair of criteria is balanced is arbitrary. In this paper, we usually first pick a number \(K\) of basis functions, and then brute-force find the warping parameters that maximize \(\mathcal{A}\) under the constraint that \(\mathcal{S}\geq 20\)nm. We indicate such a \((s,p)\) pair for \(K=7\) in Figure 7 by a black cross.
Figure 5: _Basis gamut w.r.t. \(K\). For the same chromaticity constraint (green dot, top row), we display randomly generated spectra (bottom row) when increasing the numbers of basis functions (from left to right \(K=\{5,7,9,11\}\)). We compare the basis gamut to both sRGB (blue) and Adobe Wide Gamut RGB (light blue)._
Figure 6: _Knots warping. Warping basis knots alters the basis gamut, here for a set of \(K=7\) basis functions. We assign to each basis a color to clearly locate it in chromaticity space. A strong warping (second column) tends to widen the gamut considerably, but results in one very narrow basis function. Adjusting the position parameter \(p\) (third column) achieves an even wider gamut with a smaller strength parameter \(s\), which results in less narrow bases and captures most of the XYZ space. Displacing the boundary knots outside of the spectral interval \(U\) (last column) has a negligible effect on the basis gamut, even though pairs of basis functions at boundaries are significantly modified._
## 6 Applications
We now present application scenarios where we use our one-to-many mapping to find spectra with interesting visual appearance. Unless otherwise specified, we use warped basis functions with \((s,p)\) parameters determined as described in the previous section.
### Reproducing Vathochromism
Figure 1 demonstrates a reproduction of the Usambara effect using our approach. We use \(K=11\) warped basis functions and sample the equivalence class of spectra that achieves the target chromaticity \(\mathbf{c}=[0.38,0.45]^{\top}\) and luminance \(F_{Y}=0.46\). All such spectra are considered as transmittance spectra at a unit optical depth, which is related to the extinction coefficient \(\sigma_{t}\) of a medium by \(T_{1}(\lambda)=e^{-\sigma_{t}(\lambda)}\). The Beer-Lambert-Bouguer law at increasing depths \(d\) is then given by \(T_{d}(\lambda)=T_{1}(\lambda)^{d}\). For each sample of the equivalence class, we then integrate the corresponding \(T_{d}\) over color matching functions and plot the resulting transmittance curve in the chromaticity diagram. A pair of examples is shown in Figure 1(c), where we have picked two instances of the class that reproduce the Usambara effect - here with an orange color at large optical depths. For rendering, we need to specify \(\sigma_{a}\) and \(\sigma_{s}\), the absorption and scattering coefficients. In Figure 1(b), we use \(\sigma_{s}(\lambda)=T_{1}(\lambda)\) to achieve the target color on single scattering, which yields \(\sigma_{a}(\lambda)=-\log T_{1}(\lambda)-T_{1}(\lambda)\).
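The depth sweep behind Figure 1(c) amounts to the following small sketch (ours, not the paper's code), where `T1`, `cmf` and `lam` are assumed to be sampled on a common uniform wavelength grid.

```python
# Sketch: chromaticity trajectory of T_d = T1**d under the Beer-Lambert-Bouguer law.
import numpy as np

def chromaticity_vs_depth(T1, cmf, lam, depths):
    dlam = lam[1] - lam[0]
    points = []
    for d in depths:
        XYZ = (T1 ** d) @ cmf.T * dlam      # integrate T_d against xbar, ybar, zbar
        points.append(XYZ[:2] / XYZ.sum())  # chromaticity [x, y] at optical depth d
    return np.array(points)

# e.g. curve = chromaticity_vs_depth(T1, cmf, lam, np.linspace(1.0, 10.0, 50))
```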
**Parameterizing the equivalence class.** In the previous example, randomly sampling the equivalence class and then picking spectra that achieve the desired effects only provides indirect control over achieved colors at optical depths \(d>1\). For some applications, a more direct control might be desired. Unfortunately, depending on the choice of basis, not all color appearance choices can be achieved. We provide a more direct control by parametrizing the equivalence class for the specific case of vathochromic transmittance spectra, which we illustrate on a series of unit tests in Figure 8. The main idea is to pick an _a priori_ set of representative spectra from the equivalence class, order them in chromaticity space, and interpolate them to navigate through a subset of relevant spectra. We use spectra formed by all triangles that contain the target chromaticity \(\mathbf{c}\), which naturally tend to result in distinct color appearances since they only rely on three basis functions. We then sample their transmittance curves at an arbitrary optical depth, and order the representative spectra in clockwise order around the equal-energy point \(E=(\frac{1}{3},\frac{1}{3})\) according to chromaticity samples, as illustrated in Figure 8(left). As demonstrated in the accompanying video, when the user specifies a hue, we interpolate between the two closest transmittance curves in the chromaticity diagram.
**Unit tests.** The remainder of Figure 8 shows a test scene composed of a slab of homogeneous transparent and scattering medium, lit by two white point light sources: one behind and one in front2. As in Figure 1, the absorption and scattering coefficients are determined from \(T_{1}(\lambda)\) for each of the seven representative spectra of the equivalence class to render test images. The optical depth of light paths coming from behind is typically short, and exhibits the target chromaticity \(\mathbf{c}\) for all tests as expected. Light paths that come from the light in front instead need to be scattered to reemerge toward the camera and exhibit target hues at greater optical depths.
Footnote 2: Such scenes are typically long to converge in a spectral path tracer. We will provide more converged results in a final version of the paper.
**Vathochromic reflectance.** Vathochromic effects may also occur with reflectance spectra, due to inter-reflections on shiny (typically metallic) materials. Figure 9 illustrates this effect on a crumpled paper model. We use \(K=9\) warped basis functions. In this case, the target chromaticity \(\mathbf{c}=[0.4,0.43]^{\top}\) and luminance \(F_{Y}=0.59\) control the reflectance at normal incidence \(R_{0}\) after a single scattering event. We then use the parametrization shown in Figure 8(left) to span the equivalence class of reflectance spectra, using Schlick's reflectance model [10] to compute \(R_{0}(\lambda)^{d}\) at normal incidence and the corresponding reflectance curves at discrete orders \(d\) of inter-reflection. This allows us to quickly find three different spectra that yield the same appearance in direct lighting, but exhibit the desired targeted hues in inter-reflections.
### Reproducing Metamerism
Our one-to-many sampler also permits the exploration of the space of metameric spectra. Instead of directly using the PU to build a basis gamut in chromaticity space, we premultiply each element of the partition of unity with a target illuminant \(I(\lambda)\):
\[B_{k}^{I}(\lambda)=B_{k}(\lambda)I(\lambda). \tag{21}\]
This defines a different gamut in chromaticity space per illuminant:
\[\mathbf{b}_{k}^{I}=\frac{\left[B_{k,X}^{I},B_{k,Y}^{I}\right]^{\top}}{|B_{k}^ {I}|}. \tag{22}\]
Figure 7: **Warping optimisation.** Top row: the excess area \(\mathcal{A}\) is computed by tesselating the region between the RGB and basis gamuts into quads (left) and adding their signed areas (positive in green, negative in red, mixed in gray). We computed \(\mathcal{A}\) for several values of the \((s,p)\) warping parameters (middle) as well as a smoothness criterion \(\mathcal{S}\) (right). Bottom row: our \((s(K),p(K))\) trade-offs for various numbers \(K\) of basis functions, and two example basis gamuts before (orange) and after (green) warping. We plot \((s(7),p(7))=(0.66,0.39)\) with black crosses in criteria maps.
Figure 11: **Metameric pattern.** We use a binary texture (inset, right) to drive the use of one of two metameric spectra that both appear purple under a D65 illuminant, but differ in color under a F2 illuminant.
Figure 12: **Metameric image.** In this example, we produce a dense set of metamer spectra to hide the photograph of a cat (inset, right) under D65 lighting. For each pixel, we blend 8 spectra using the target image’s gray level. Blending factors are computed as a smooth Partition of Unity on \([0,1]\).
Figure 8: **Parametrization and unit tests.** We use our parametrization based on representative spectra to navigate through equivalence classes of transmittance spectra. The resulting spectra (colored by their hue at a large depth) are used for both absorption and scattering coefficients for a slab of homogeneous medium. The slab is lit by a point light source in front, and another from behind to distinguish first-order scattering (top row) from high-order scattering (bottom row). We scaled the medium to highlight \(d=10\) optical depth points (colored dots in the chromaticity diagram).
Figure 10: **Metameric palette.** We use our method to generate \(32\) spectra that produce the same achromatic color \(F_{Y}=0.8\) under a D65 (black dot and middle column) but provide a variety of colors under a F2 illuminant (blue dots and right column). We evaluate the accessible variability of metameric spectra under such a D65 constraint by random sampling.
Figure 9: **Vathochromic reflectance.** Multiple scattering has the same effect as a change in optical depth: it saturates the reflectance spectrum with a power law. Only this time, the exponents are integers. We use our vathochromic spectra as the \(R_{0}\) component of a Schlick Fresnel in a microfacet model. While single scattering produces the same appearance, multiple scattering depicts a change in tint that we control (from green to yellow).
Now, given a choice of illuminant - say \(D65\), we sample the equivalence class that achieves a target chromaticity \(\mathbf{c}^{D65}\), yielding a set of vectors \(\{\mathbf{w}^{D65}\}\) of basis coefficients. When these vectors are used with the basis functions premultiplied by another illuminant - say \(F2\), they yield _different_ chromaticities since \(\mathbf{b}_{k}^{F2}\neq\mathbf{b}_{k}^{D65}\).
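As an illustration (not the paper's code), the illuminant-premultiplied bases of Equations 21-22 can be evaluated as follows; `I_d65` and `I_f2` denote assumed tabulated illuminant spectra on the grid `lam`, while `basis` and `cmf` are as in the earlier sketches.

```python
# Sketch of Section 6.2: the same coefficients w yield different chromaticities under different illuminants.
import numpy as np

def chromaticity_under(w, basis, illuminant, cmf, lam):
    dlam = lam[1] - lam[0]
    B_I = (basis * illuminant) @ cmf.T * dlam   # illuminant-weighted basis colours, Equation 21
    F = w @ B_I
    return F[:2] / F.sum()

# Two vectors w sampled from the D65 equivalence class of one target chromaticity agree under
# chromaticity_under(w, basis, I_d65, cmf, lam) but generally differ under I_f2: they are metamers.
```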
**Unit tests.** Metameric patches are shown in Figure 10, where the same achromatic color in \(D65\) is shown to correspond to a variety of different chromaticities in \(F2\), using \(K=7\) _non-warped_ basis functions. Each sample of the \(D65\) equivalence class thus yields an element of the palette achievable through metamerism. The greenish trend in the color palette is due to the choice of illuminant \(F2\).
**Hidden patterns and images.** If we assign two different spectra from a metameric palette to two different regions of a surface, we obtain the result shown in Figure 11, where we use \(\mathbf{c}=[0.32,0.25]^{\top}\) and \(F_{Y}=0.8\). Here the spectrum controls the spectral diffuse albedo of a Lambertian material. The pattern is thus hidden under \(D65\) illumination, but revealed under \(F2\). Note that for rendering, we use the original basis functions \(B_{k}(\lambda)\), not the premultiplied ones. A similar effect can be obtained with hidden images, as shown in Figure 12. Here we take a gray-level picture as input, and blend 8 spectra of increasing luminance from the metameric palette of Figure 10 to reproduce luminance gradients. Compared to the tool of [1], our approach has two advantages: 1) their tool finds a single optimal metameric spectrum while we obtain a whole metameric palette; 2) they do not impose smoothness constraints on reflectance spectra while ours are smooth by design.
### Performance
As shown in the supplemental video, and although the prototype is implemented as a mono-threaded Python script, our upsampling method runs in real-time for artistic design. We measured timings for an increasing number of basis elements \(K\) on an Intel i3-6100 CPU at 3.70GHz. We report that our method runs at 4.9ms for \(K=5\), 13.1ms for \(K=7\), 19.3ms for \(K=9\), and 27ms for \(K=11\).
Note that using the Metameric Blacks construction [21] requires tracking a number of constraints equal to twice the number of bins of a discretized spectrum. In the literature, 31 bins are commonly used. We measured that generating the convex hull for a binning of 21 bins already takes 6.3s on average per spectrum when using scipy's interface to the qhull library (with default parameters and double precision)3.
Footnote 3: We do not count occurrences where the algorithm fails to find a solution.
## 7 Discussion and future work
We have introduced a novel method to upsample a color to an equivalence class of spectra through a well-defined one-to-many mapping. It provides another reason to move to spectral rendering besides the production of more photorealistic rendering results: the exploration of new visual effects as well as the imitation of those found in nature (gem stones, oils, etc). We have focused in particular on the generation of vathochromic effects, both in transmission and reflection, generalizing the intriguing Usambara effect. We have also shown how our approach applies to metameric effects. We now discuss its specificities.
**Differences with Metameric Blacks.** The main difference with previous work in optics resides in the way reflectance spectra are represented. Methods that rely on discretized spectra (e.g., [21, 22]) require a small number of spectral bins to be computationally tractable, as discussed in Sections 2 and 6.2, and result in unrealistic spectra (Figure 13). Another difference lies in the ease with which metameric sets may be explored. In our approach, an artist can quickly pick an achievable chromaticity (in the basis gamut), for which a maximum achievable luminance is readily provided through \(\overline{\mathbf{w}}\) (Equation 16). In contrast, with the metameric blacks approach, when a target color results in an empty metameric set, users have to go through trial and error to find a color for which at least one spectrum exists.
An alternative is to rely on measured spectra, such as in the work of Finlayson and Morovic [15]. Similar to Schmitt [14], they reconstruct metamers using barycentric coordinates in the space of spectra. That is, from a set of \(K\) measured spectra \(s_{k}(\lambda)\) they reconstruct \(r(\lambda)=\sum w_{k}s_{k}(\lambda)\) where \(w_{k}\) are positive weights with \(\sum w_{k}=1\) (Equation (27) in their paper). This imposes that \(r(\lambda)\) is in the convex hull of the \(s_{k}(\lambda)\). Our method only imposes validity constraints, \(w_{k}\in[0,1]\). Therefore, we can always reconstruct perfect blacks (\(r(\lambda)=0,\forall\lambda\)), perfect whites (\(r(\lambda)=1,\forall\lambda\)), and achieve any target luminance \(Y\). On the other hand, their method trivially yields physically-realistic spectra, whereas ours is more adapted to artistic exploration.
**Limitations.** An inherent limitation of spectral asset creation, already pointed out by MacAdam [16, 17], is that one needs to trade saturation for luminance. Indeed, saturated spectra necessarily have narrow bands, and our approach is no different in this respect. This limitation might explain the difficulty of creating visually-noticeable vathochromic effects when using microfacet models to control the color of the multiple scattering term (see Figure 14).
A direction of improvement for our method lies in the design of techniques to navigate through equivalence classes. In particular, it would be useful to give an analytical description of bounds imposed by the target luminance \(F_{Y}\) (i.e., the boundary between opaque and transparent points in Figure 4(middle)).
Last, even though the spectra generated by our approach are physically-plausible, they are not physically-realistic by design.
Figure 13: **Comparison with Metameric Blacks (MB).** Both our method and Metameric Blacks permit to sample all possible metamers for a target color (here \(\mathbf{C}=[0.05,0.1,0.01]\)) in their respective equivalence class. However, MB uses a binned representation (here \(21\) bins) which yields unrealistic reflectance spectra.
Real spectra obey physical rules of their own. For instance, the real and imaginary parts of refractive indices are bound by the Kramers-Kronig relations. It would thus be interesting to establish connections with physical models of spectra. In this respect, having a large equivalence class from which to pick spectra closest to physically-realistic ones could be an advantage.
**Future work.** Our method could be used to reproduce and study a number of interesting optical phenomena. The Alexandrite effect, an instance of metamerism, is one famous example: with our approach, we could investigate whether other spectra could potentially create similar effects. Interesting applications could be found in ecology, where the illuminant plays a crucial role in defining habitats. For instance, we could study how families of spectra are affected by lighting at different depths under water or under dense foliage. We would also like to explore the extension of vathochromism to take into account fluorescence effects, which abound in nature.
Finally, we have only considered normal human color vision through the use of CIE sensitivity functions. A captivating direction of future work would be to experiment with sensitivity functions adapted to color blindness, or even to animal vision.
|
2307.09102 | Non-nilpotent Leibniz algebras with one-dimensional derived subalgebra | In this paper we study non-nilpotent non-Lie Leibniz $\mathbb{F}$-algebras
with one-dimensional derived subalgebra, where $\mathbb{F}$ is a field with
$\operatorname{char}(\mathbb{F}) \neq 2$. We prove that such an algebra is
isomorphic to the direct sum of the two-dimensional non-nilpotent non-Lie
Leibniz algebra and an abelian algebra. We denote it by $L_n$, where
$n=\dim_{\mathbb{F}} L_n$. This generalizes the result found in [11], which is
only valid when $\mathbb{F}=\mathbb{C}$. Moreover, we find the Lie algebra of
derivations, its Lie group of automorphisms and the Leibniz algebra of
biderivations of $L_n$. Eventually, we solve the coquecigrue problem for $L_n$
by integrating it into a Lie rack. | Alfonso Di Bartolo, Gianmarco La Rosa, Manuel Mancini | 2023-07-18T09:51:27Z | http://arxiv.org/abs/2307.09102v2 | # Non-nilpotent Leibniz algebras with one-dimensional derived subalgebra
###### Abstract
In this paper we study non-nilpotent non-Lie Leibniz \(\mathbb{F}\)-algebras with one-dimensional derived subalgebra, where \(\mathbb{F}\) is a field with \(\mathrm{char}(\mathbb{F})\neq 2\). We prove that such an algebra is isomorphic to the direct sum of the two-dimensional non-nilpotent non-Lie Leibniz algebra and an abelian algebra. We denote it by \(L_{n}\), where \(n=\dim_{\mathbb{F}}L_{n}\). This generalizes the result found in [11], which is only valid when \(\mathbb{F}=\mathbb{C}\). Moreover, we find the Lie algebra of derivations, its Lie group of automorphisms and the Leibniz algebra of biderivations of \(L_{n}\). Finally, we solve the _coquecigrue problem_ for \(L_{n}\) by integrating it into a Lie rack.
+
Footnote †: The authors are supported by University of Palermo and by the “National Group for Algebraic and Geometric Structures, and their Applications” (GNSAGA – INdAM).
Dipartimento di Matematica e Informatica
Universita degli Studi di Palermo, Via Archirafi 34, 90123 Palermo, Italy
[email protected], ORCID: 0000-0001-5619-2644
[email protected], ORCID: 0000-0003-1047-5993
[email protected], ORCID: 0000-0003-2142-6193
## Introduction
Leibniz algebras were introduced by J.-L. Loday in [19] as a non-skew-symmetric version of Lie algebras. Earlier, such algebraic structures were also considered by A. Blokh, who called them D-algebras [5] for their close connection with derivations. Leibniz algebras play a significant role in different areas of mathematics and physics.
Many results of Lie algebras are still valid for Leibniz algebras. One of them is the _Levi decomposition_, which states that any Leibniz algebra over a field \(\mathbb{F}\) of characteristic zero is the semidirect sum of its radical and a semisimple Lie algebra. This makes clear the importance of the problem of classification of solvable and nilpotent Lie / Leibniz algebras, which has been dealt with
since the early 20th century (see [2], [3], [4], [9], [10], [11], [13] and [14], to give just a few examples).
In [16] and [17] nilpotent Leibniz algebras \(L\) with one-dimensional derived subalgebra \([L,L]\) were studied and classified. It was proved that, up to isomorphism, there are three classes of _indecomposable_ Leibniz algebras with these properties, namely the _Heisenberg_ algebras \(\mathfrak{l}^{A}_{2n+1}\), which are parameterized by their dimension \(2n+1\) and by a matrix \(A\) in canonical form, the _Kronecker_ algebra \(\mathfrak{k}_{n}\) and the _Dieudonne_ algebra \(\mathfrak{d}_{n}\), both parameterized by their dimension only. We want to complete this classification by studying non-nilpotent Leibniz \(\mathbb{F}\)-algebras with one-dimensional derived subalgebra, where \(\mathbb{F}\) is a field with \(\mathrm{char}(\mathbb{F})\neq 2\). Using the theory of non-abelian extensions of Leibniz algebras introduced in [18], we prove that a non-nilpotent non-Lie Leibniz algebra \(L\) with \(\mathrm{dim}_{\mathbb{F}}L=n\) and \(\mathrm{dim}_{\mathbb{F}}[L,L]=1\) is isomorphic to the direct sum of the two-dimensional non-nilpotent non-Lie Leibniz algebra \(S_{2}\), i.e. the algebra with basis \(\{e_{1},e_{2}\}\) and multiplication table given by \([e_{2},e_{1}]=e_{1}\), and an abelian algebra of dimension \(n-2\). We denote it by \(L_{n}\). This generalizes the result found in Theorem 2.6 of [11], where the authors proved that a _complex_ non-split non-Lie Leibniz algebra with one-dimensional derived subalgebra is isomorphic to \(S_{2}\).
We study in detail the properties of the algebra \(L_{n}\) and we compute the Lie algebra of derivations \(\mathrm{Der}(L_{n})\), its Lie group of automorphism \(\mathrm{Aut}(L_{n})\) and the Leibniz algebra of biderivations \(\mathrm{Bider}(L_{n})\).
Finally, we solve the _coquecigrue problem_ for the Leibniz algebra \(L_{n}\). We mean the problem, formulated by J.-L. Loday in [19], of finding a generalization of Lie's third theorem to Leibniz algebras. Using M. K. Kinyon's results for the class of real _split Leibniz algebras_ (see [15]), we show how to explicitly integrate \(L_{n}\) into a Lie rack defined over the vector space \(\mathbb{R}^{n}\).
## 1 Preliminaries
We assume that \(\mathbb{F}\) is a field with \(\mathrm{char}(\mathbb{F})\neq 2\). For the general theory we refer to [1].
**Definition 1.1**.: A _left Leibniz algebra_ over \(\mathbb{F}\) is a vector space \(L\) over \(\mathbb{F}\) endowed with a bilinear map (called _commutator_ or _bracket_) \([-,-]:L\times L\to L\) which satisfies the _left Leibniz identity_
\[[x,[y,z]]=[[x,y]\,,z]+[y,[x,z]]\,,\ \ \forall x,y,z\in L.\]
In the same way we can define a right Leibniz algebra, using the right Leibniz identity
\[[[x,y]\,,z]=[[x,z],y]+[x,[y,z]]\,,\ \ \forall x,y,z\in L.\]
Given a left Leibniz algebra \(L\), the multiplication \([x,y]^{\mathrm{op}}=[y,x]\) defines a right Leibniz algebra structure on \(L\).
A Leibniz algebra that is both left and right is called a _symmetric Leibniz algebra_. From now on we assume that \(\dim_{\mathbb{F}}L<\infty\).
We have a full inclusion functor \(i\colon\mathbf{LieAlg}_{\mathbb{F}}\to\mathbf{LeibAlg}_{\mathbb{F}}\) that embeds Lie algebras over \(\mathbb{F}\) into Leibniz algebras over \(\mathbb{F}\). Its left adjoint is the functor \(\pi\colon\mathbf{LeibAlg}_{\mathbb{F}}\to\mathbf{LieAlg}_{\mathbb{F}}\), which associates to each Leibniz algebra \(L\) the quotient \(L/\operatorname{Leib}(L)\), where \(\operatorname{Leib}(L)\) is the smallest bilateral ideal of \(L\) such that the quotient \(L/\operatorname{Leib}(L)\) becomes a Lie algebra. \(\operatorname{Leib}(L)\) is defined as the subalgebra generated by all elements of the form \([x,x]\), for any \(x\in L\), and it is called the _Leibniz kernel_ of \(L\).
We define the left and the right center of a Leibniz algebra
\[\operatorname{Z}_{l}(L)=\left\{x\in L\,|\,\,[x,L]=0\right\},\,\,\,\operatorname {Z}_{r}(L)=\left\{x\in L\,|\,\,[L,x]=0\right\}.\]
The intersection of the left and right center is called the _center_ of \(L\) and it is denoted by \(\operatorname{Z}(L)\). In general for a left Leibniz algebra \(L\), the left center \(\operatorname{Z}_{l}(L)\) is a bilateral ideal, meanwhile the right center is not even a subalgebra. Furthermore, one can check that \(\operatorname{Leib}(L)\subseteq\operatorname{Z}_{l}(L)\).
The definition of derivation for a Leibniz algebra is the same as in the case of Lie algebras.
**Definition 1.2**.: A linear map \(d\colon L\to L\) is a _derivation_ of \(L\) if
\[d([x,y])=[d(x),y]+[x,d(y)],\,\,\,\forall x,y\in L.\]
An equivalent way to define a left Leibniz algebra \(L\) is to say that the left adjoint maps \(\operatorname{ad}_{x}=[x,-]\) are derivations. Meanwhile, the right adjoint maps \(\operatorname{Ad}_{x}=[-,x]\) are not derivations in general. The set \(\operatorname{Der}(L)\) of all derivations of \(L\) is a Lie algebra with the usual bracket \([d,d^{\prime}]=d\circ d^{\prime}-d^{\prime}\circ d\), and the set \(\operatorname{Inn}(L)\) spanned by the left adjoint maps, which are called _inner derivations_, is an ideal of \(\operatorname{Der}(L)\). Moreover, \(\operatorname{Aut}(L)\) is a Lie group and its Lie algebra is precisely \(\operatorname{Der}(L)\).
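As a small numerical illustration (not taken from the paper), the derivations of a finite-dimensional Leibniz algebra can be computed as the null space of a linear system built from its structure constants; the example below is the two-dimensional non-nilpotent non-Lie algebra with \([e_{2},e_{1}]=[e_{2},e_{2}]=e_{1}\) that appears later as \(S_{2}\), for which this sketch returns a one-dimensional derivation algebra.

```python
# Sketch: Der(L) from structure constants c[i, j, :] encoding [e_i, e_j].
import numpy as np

def derivation_algebra(c, tol=1e-10):
    n = c.shape[0]
    rows = []
    for i in range(n):
        for j in range(n):
            for p in range(n):
                # coefficient of e_p in d([e_i,e_j]) - [d e_i, e_j] - [e_i, d e_j] = 0,
                # with d(e_i) = sum_k D[k, i] e_k; the unknowns are the n*n entries of D.
                row = np.zeros((n, n))
                row[p, :] += c[i, j, :]
                row[:, i] -= c[:, j, p]
                row[:, j] -= c[i, :, p]
                rows.append(row.ravel())
    _, s, Vt = np.linalg.svd(np.array(rows))
    return [v.reshape(n, n) for v in Vt[np.sum(s > tol):]]   # a basis of Der(L)

c = np.zeros((2, 2, 2))
c[1, 0, 0] = 1.0   # [e_2, e_1] = e_1
c[1, 1, 0] = 1.0   # [e_2, e_2] = e_1
print(len(derivation_algebra(c)))   # prints 1: the derivation algebra is one-dimensional
```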
In [19] J.-L. Loday introduced the notion of anti-derivation and biderivation for a Leibniz algebra.
**Definition 1.3**.: A linear map \(D\colon L\to L\) is an _anti-derivation_ of \(L\) if
\[D([x,y])=[x,D(y)]-[y,D(x)],\,\,\,\forall x,y\in L.\]
The space \(\operatorname{ADer}(L)\) of anti-derivations of \(L\) has a \(\operatorname{Der}(L)\)-module structure with the extra multiplication \(d\cdot D=d\circ D-D\circ d\), for any derivation \(d\) and for any anti-derivation \(D\), and one can check that the right adjoint maps \(\operatorname{Ad}_{x}\) are anti-derivations.
**Definition 1.4**.: A _biderivation_ of \(L\) is a pair \((d,D)\in\operatorname{Der}(L)\times\operatorname{ADer}(L)\) such that
\[[d(x)+D(x),y]=0,\,\,\,\forall x,y\in L.\]
The set \(\mathrm{Bider}(L)\) of all biderivations of \(L\) has a Leibniz algebra structure with the bracket
\[[(d,D),(d^{\prime},D^{\prime})]=([d,d^{\prime}],d\cdot D^{\prime})\]
and it is defined a Leibniz algebra homomorphism
\[L\to\mathrm{Bider}(L),\;x\mapsto(\mathrm{ad}_{x},\mathrm{Ad}_{x}).\]
The pair \((\mathrm{ad}_{x},\mathrm{Ad}_{x})\) is called the _inner biderivation_ associated with \(x\in L\), and the set of all inner biderivations of \(L\) forms a Leibniz subalgebra of \(\mathrm{Bider}(L)\).
We recall the definitions of solvable and nilpotent Leibniz algebras.
**Definition 1.5**.: Let \(L\) be a right Leibniz algebra over \(\mathbb{F}\) and let
\[L^{0}=L,\ L^{k+1}=[L^{k},L^{k}],\ \ \forall k\geq 0,\]
be the _derived series of \(L\)_. \(L\) is \(n-\)_step solvable_ if \(L^{n-1}\neq 0\) and \(L^{n}=0\).
**Definition 1.6**.: Let \(L\) be a left Leibniz algebra over \(\mathbb{F}\) and let
\[L^{(0)}=L,\ L^{(k+1)}=[L,L^{(k)}],\ \ \forall k\geq 0,\]
be the _lower central series of \(L\)_. \(L\) is \(n-\)_step nilpotent_ if \(L^{(n-1)}\neq 0\) and \(L^{(n)}=0\).
When \(L\) is two-step nilpotent, it lies in different varieties of non-associative algebras, such as associative, alternative and Zinbiel algebras. In this case we refer to \(L\) as a _two-step nilpotent algebra_ and we have the following.
**Proposition 1.7**.:
1. _If_ \(L\) _is a two-step nilpotent algebra, then_ \(L^{(1)}=[L,L]\subseteq\mathrm{Z}(L)\) _and_ \(L\) _is a symmetric Leibniz algebra;_
2. _If_ \(L\) _is a left nilpotent Leibniz algebra with_ \(\dim_{\mathbb{F}}[L,L]=1\)_, then_ \(L\) _is two-step nilpotent._
In [16] the classification of nilpotent Leibniz algebras with one-dimensional derived subalgebra was established. The classification revealed that, up to isomorphism, there exist only three classes of indecomposable nilpotent Leibniz algebras of this type.
**Definition 1.8**.: [16] Let \(f(x)\in\mathbb{F}\left[x\right]\) be a monic irreducible polynomial. Let \(k\in\mathbb{N}\) and let \(A=(a_{ij})_{i,j}\) be the companion matrix of \(f(x)^{k}\). The _Heisenberg_ algebra \(\mathfrak{l}^{A}_{2n+1}\) is the \((2n+1)\)-dimensional Leibniz algebra with basis \(\{e_{1},\ldots,e_{n},f_{1},\ldots,f_{n},z\}\) and the brackets are given by
\[[e_{i},f_{j}]=(\delta_{ij}+a_{ij})z,\;[f_{j},e_{i}]=(-\delta_{ij}+a_{ij})z,\ \ \forall i,j=1,\ldots,n.\]
When \(A\) is the zero matrix, then we obtain the \((2n+1)-\)dimensional Heisenberg Lie algebra \(\mathfrak{h}_{2n+1}\).
**Definition 1.9**.: [16] Let \(n\in\mathbb{N}\). The _Kronecker_ algebra \(\mathfrak{k}_{n}\) is the \((2n+1)\)-dimensional Leibniz algebra with basis \(\{e_{1},\ldots,e_{n},f_{1},\ldots,f_{n},z\}\) and the brackets are given by
\[[e_{i},f_{i}]=[f_{i},e_{i}]=z,\ \ \forall i=1,\ldots,n\] \[[e_{i},f_{i-1}]=z,[f_{i-1},e_{i}]=-z,\ \ \forall i=2,\ldots,n.\]
**Definition 1.10**.: [16] Let \(n\in\mathbb{N}\). The _Dieudonné_ algebra \(\mathfrak{d}_{n}\) is the \((2n+2)\)-dimensional Leibniz algebra with basis \(\{e_{1},\ldots,e_{2n+1},z\}\) and the brackets are given by
\[[e_{1},e_{n+2}]=z,\] \[[e_{i},e_{n+i}]=[e_{i},e_{n+i+1}]=z,\ \ \forall i=2,\ldots,n,\] \[[e_{n+1},e_{2n+1}]=z,\] \[[e_{i},e_{i-n}]=z,\ \ [e_{i},e_{i-n-1}]=-z,\ \ \forall i=n+2, \ldots,2n+1.\]
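Since all three families above are defined through explicit structure constants, the Leibniz identity can also be confirmed mechanically. The following minimal Python sketch (illustrative only; the variable names and the choice of the Kronecker algebra \(\mathfrak{k}_{3}\) are ours) encodes the brackets of Definition 1.9 and checks the left Leibniz identity on all basis triples.

```python
import numpy as np
from itertools import product

def kronecker_structure(n):
    """Structure constants of the Kronecker algebra k_n with ordered basis
    (e_1, ..., e_n, f_1, ..., f_n, z): the bracket is [b_i, b_j] = C[i, j] * z."""
    dim = 2 * n + 1
    C = np.zeros((dim, dim))
    e = lambda i: i - 1          # position of e_i in the basis
    f = lambda i: n + i - 1      # position of f_i in the basis
    for i in range(1, n + 1):
        C[e(i), f(i)] = C[f(i), e(i)] = 1.0     # [e_i, f_i] = [f_i, e_i] = z
    for i in range(2, n + 1):
        C[e(i), f(i - 1)] = 1.0                 # [e_i, f_{i-1}] = z
        C[f(i - 1), e(i)] = -1.0                # [f_{i-1}, e_i] = -z
    return C

def bracket(C, x, y):
    """[x, y] as a coordinate vector; every product lies in the span of z."""
    v = np.zeros(C.shape[0])
    v[-1] = x @ C @ y            # z is the last basis vector
    return v

def is_left_leibniz(C):
    """Check [x,[y,w]] = [[x,y],w] + [y,[x,w]] on all basis triples."""
    basis = np.eye(C.shape[0])
    return all(
        np.allclose(bracket(C, x, bracket(C, y, w)),
                    bracket(C, bracket(C, x, y), w) + bracket(C, y, bracket(C, x, w)))
        for x, y, w in product(basis, repeat=3)
    )

print(is_left_leibniz(kronecker_structure(3)))   # expected output: True
```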
We want to extend this classification by studying non-nilpotent Leibniz algebras with one-dimensional derived subalgebra.
## 2 Non-nilpotent Leibniz algebras with one-dimensional derived subalgebra
Let \(L\) be a non-nilpotent left Leibniz algebra over \(\mathbb{F}\) with \(\dim_{\mathbb{F}}L=n\) and \(\dim_{\mathbb{F}}\left[L,L\right]=1\). We observe that such an algebra is two-step solvable since the derived subalgebra \(\left[L,L\right]\) is abelian.
It is well known that a non-nilpotent Lie algebra with one-dimensional derived subalgebra is isomorphic to the direct sum of the two-dimensional non-abelian Lie algebra and an abelian algebra (see Section 3 of [12]). Thus we are interested in the classification of non-Lie Leibniz algebras with these properties.
In Theorem 2.6 of [11] the authors prove that a _complex_ non-split non-nilpotent non-Lie Leibniz algebra with one-dimensional derived subalgebra is isomorphic to the two-dimensional algebra with basis \(\{e_{1},e_{2}\}\) and multiplication table \([e_{2},e_{1}]=[e_{2},e_{2}]=e_{1}\). Here we generalize this result when \(\mathbb{F}\) is a general field with \(\mathrm{char}(\mathbb{F})\neq 2\).
**Proposition 2.1**.: _Let \(L\) be a non-nilpotent left Leibniz algebra over \(\mathbb{F}\) with \(\dim_{\mathbb{F}}\left[L,L\right]=1\). Then \(L\) has a two-dimensional bilateral ideal \(S\) which is isomorphic to one of the following Leibniz algebras:_
1. \(S_{1}=\langle e_{1},e_{2}\rangle\) _with_ \([e_{2},e_{1}]=-\left[e_{1},e_{2}\right]=e_{1}\)_;_
2. \(S_{2}=\langle e_{1},e_{2}\rangle\) _with_ \([e_{2},e_{1}]=[e_{2},e_{2}]=e_{1}\)_._
Proof.: Let \(\left[L,L\right]=\mathbb{F}z\). Since \(L\) is not nilpotent, we have
\[\left[L,\left[L,L\right]\right]\neq 0,\]
i.e. \(z\notin\mathbb{Z}_{r}(L)\). Since \([L,L]\) is an abelian algebra, there exists a vector \(x\in L\), linearly independent from \(z\), such that \([x,z]\neq 0\). Thus
\[[x,z]=\gamma z,\]
for some \(\gamma\in\mathbb{F}^{*}\). The subspace \(S=\langle x,z\rangle\) is an ideal of \(L\) and it is not nilpotent. In fact
\[0\neq\gamma z=[x,z]\in[S,[S,S]]\,.\]
Thus \(S\) is a non-nilpotent Leibniz algebra. Using the classification of two-dimensional Leibniz algebras given by C. Cuvier in [8], \(S\) is isomorphic either to \(S_{1}\) or to \(S_{2}\).
**Remark 2.1**.: The algebras \(S_{1}\) and \(S_{2}\) are respectively the Leibniz algebras \(L_{2}\) and \(L_{4}\) of Section 3.1 in [1]. We observe that \(S_{1}\) is a Lie algebra, while \(S_{2}\) is a left Leibniz algebra which is not a right Leibniz algebra.
One can see \(L\) as an extension of the abelian algebra \(L_{0}=L/S\cong\mathbb{F}^{n-2}\) by \(S\) (see [18])
\[0\longrightarrow S\xrightarrow{\;i\;}L\xrightarrow{\;\pi\;}L_{0}\longrightarrow 0\tag{1}\]
It turns out that there exists an equivalence of Leibniz algebra extensions between (1) and the extension \(0\longrightarrow S\xrightarrow{\;i_{2}\;}L_{0}\ltimes_{\omega}S\xrightarrow{\;\pi_{1}\;}L_{0}\longrightarrow 0\),
where \(L_{0}\ltimes_{\omega}S\) is the Leibniz algebra defined on the direct sum of vector spaces \(L_{0}\oplus S\) with the bilinear operation given by
\[[(x,a),(y,b)]_{(l,r,\omega)}=(0,[a,b]+l_{x}(b)+r_{y}(a)+\omega(x,y)),\]
where
\[\omega(x,y)=[\sigma(x),\sigma(y)]_{L}-\sigma([x,y]_{L_{0}})=[\sigma(x),\sigma (y)]_{L}\]
is the Leibniz algebra \(2\)-cocycle associated with (1) and
\[l_{x}(b)=[\sigma(x),i(b)]_{L},\ \ r_{y}(a)=[i(a),\sigma(y)]_{L}\]
define the action of \(L_{0}\) on \(S\); \(i_{1},i_{2},\pi_{1}\) are the canonical injections and projection. The Leibniz algebra isomorphism \(\theta\) is defined by \(\theta(x,a)=\sigma(x)+i(a)\), for every \((x,a)\in L_{0}\oplus S\).
By Proposition 4.2 of [18], the \(2\)-cocycle \(\omega\colon L_{0}\times L_{0}\to S\) and the linear maps \(l,r\colon L_{0}\to\operatorname{gl}(S)\) must satisfy the following set of equations
* (L1) \(l_{x}([a,b])=[l_{x}(a),b]+[a,l_{x}(b)]\);
* (L2) \(r_{x}([a,b])=[a,r_{x}(b)]-[b,r_{x}(a)]\);
* (L3) \([l_{x}(a)+r_{x}(a),b]=0\);
* (L4) \([l_{x},l_{y}]_{\mathrm{gl}(S)}-l_{[x,y]_{L_{0}}}=\mathrm{ad}_{\omega(x,y)}\);
* (L5) \([l_{x},r_{y}]_{\mathrm{gl}(S)}-r_{[x,y]_{L_{0}}}=\mathrm{Ad}_{\omega(x,y)}\);
* (L6) \(r_{y}(r_{x}(a)+l_{x}(a))=0\);
* (L7) \(l_{x}(\omega(y,z))-l_{y}(\omega(x,z))-r_{z}(\omega(x,y))=\omega([x,y]_{L_{0}},z)-\omega(x,[y,z]_{L_{0}})+\omega(y,[x,z]_{L_{0}})\)
for any \(x,y\in L_{0}\) and for any \(a,b\in S\). Notice that these equations were also studied in [6] in the case of Leibniz algebra _split extensions_.
**Remark 2.2**.: The first three equations state that the pair \((l_{x},r_{x})\) is a biderivation of the Leibniz algebra \(S\), for any \(x\in L_{0}\). Biderivations of low-dimensional Leibniz algebras were classified in [20] and it turns out that
* \(\mathrm{Bider}(S_{1})=\{(d,-d)\mid d\in\mathrm{Der}(S_{1})\}\) and \[\mathrm{Der}(S_{1})=\left\{\begin{pmatrix}\alpha&\beta\\ 0&0\end{pmatrix}\Bigg{|}\;\alpha,\beta\in\mathbb{F}\right\};\]
* \(\mathrm{Bider}(S_{2})=\left\{\left(\begin{pmatrix}\alpha&\alpha\\ 0&0\end{pmatrix},\begin{pmatrix}0&\beta\\ 0&0\end{pmatrix}\right)\Bigg{|}\;\alpha,\beta\in\mathbb{F}\right\}\).
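For \(S_{1}\), for instance, the derivation part of this description is immediate to verify: a matrix \(\begin{pmatrix}\alpha&\beta\\ 0&0\end{pmatrix}\), i.e. \(d(e_{1})=\alpha e_{1}\) and \(d(e_{2})=\beta e_{1}\), satisfies

\[d([e_{2},e_{1}])=d(e_{1})=\alpha e_{1}=[\beta e_{1},e_{1}]+[e_{2},\alpha e_{1}]=[d(e_{2}),e_{1}]+[e_{2},d(e_{1})],\]

while writing out the same condition for a general matrix forces its second row to vanish.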
We study now in detail the non-abelian extension (1) in both cases that \(S\) is isomorphic either to \(S_{1}\) or to \(S_{2}\).
### \(S\) is a Lie algebra
When \(S\cong S_{1}\), we have that \(r_{y}=-l_{y}\), for any \(y\in L_{0}\) and the bilinear operation of \(L_{0}\ltimes_{\omega}S_{1}\) becomes
\[[(x,a),(y,b)]_{(l,\omega)}=(0,[a,b]+l_{x}(b)-l_{y}(a)+\omega(x,y)).\]
The linear map \(l_{x}\) is represented by a \(2\times 2\) matrix
\[\begin{pmatrix}\alpha_{x}&\beta_{x}\\ 0&0\end{pmatrix}\]
with \(\alpha_{x}\),\(\beta_{x}\in\mathbb{F}\). From equations (L4)-(L5) it turns out that
\[\omega(x,y)=(\alpha_{x}\beta_{y}-\alpha_{y}\beta_{x})e_{1},\;\;\forall x,y\in L _{0}\]
and the 2-cocycle \(\omega\) is skew-symmetric. Moreover, equations (L6)-(L7) are automatically satisfied and the resulting algebra \(L_{0}\ltimes_{\omega}S_{1}\cong L\) is a Lie algebra. We conclude that \(L\) is isomorphic to the direct sum of \(S_{1}\) and \(L_{0}\cong\mathbb{F}^{n-2}\).
### \(S\) is not a Lie algebra
With the change of basis \(e_{2}\mapsto e_{2}-e_{1}\), \(S_{2}\) becomes the Leibniz algebra with basis \(\{e_{1},e_{2}\}\) and the only non-trivial bracket given by \([e_{2},e_{1}]=e_{1}\). In this new basis, a biderivation of \(S_{2}\) is represented by a pair of matrices
\[\left(\begin{pmatrix}\alpha&0\\ 0&0\end{pmatrix},\begin{pmatrix}0&\beta\\ 0&0\end{pmatrix}\right)\]
with \(\alpha,\beta\in\mathbb{F}\) and the pair \(\left(l_{x},r_{x}\right)\in\mathrm{Bider}(S_{2})\) is defined by \(l_{x}(e_{1})=\alpha_{x}e_{1}\) and \(r_{x}(e_{2})=\beta_{x}e_{1}\), for any \(x\in L_{0}\).
Equation (L4) states that \(\left[l_{x},l_{y}\right]_{\mathrm{gl}(S_{2})}=[\omega(x,y),-]\), with
\[\left[l_{x},l_{y}\right]_{\mathrm{gl}(S_{2})}=l_{x}\circ l_{y}-l _{y}\circ l_{x} =\begin{pmatrix}\alpha_{x}&0\\ 0&0\end{pmatrix}\begin{pmatrix}\alpha_{y}&0\\ 0&0\end{pmatrix}-\begin{pmatrix}\alpha_{y}&0\\ 0&0\end{pmatrix}\begin{pmatrix}\alpha_{x}&0\\ 0&0\end{pmatrix}=\] \[=\begin{pmatrix}\alpha_{x}\alpha_{y}&0\\ 0&0\end{pmatrix}-\begin{pmatrix}\alpha_{x}\alpha_{y}&0\\ 0&0\end{pmatrix}=\begin{pmatrix}0&0\\ 0&0\end{pmatrix},\]
for any \(x,y\in L_{0}\). Thus \(\omega(x,y)\in\mathrm{Z}_{l}(S_{2})=\mathbb{F}e_{1}\).
From equation (L5) we have \(\left[l_{x},r_{y}\right]_{\mathrm{gl}(S_{2})}=\left[-,\omega(x,y)\right]_{S_{2}}\), with
\[\left[l_{x},r_{y}\right]_{\mathrm{gl}(S_{2})}=l_{x}\circ r_{y}-r_{y}\circ l_{ x}=\begin{pmatrix}0&\alpha_{x}\beta_{y}\\ 0&0\end{pmatrix}-\begin{pmatrix}0&0\\ 0&0\end{pmatrix}=\begin{pmatrix}0&\alpha_{x}\beta_{y}\\ 0&0\end{pmatrix}.\]
Thus, for every \(a=a_{1}e_{1}+a_{2}e_{2}\in S_{2}\) and for every \(x,y\in L_{0}\), we have
\[\left[a,\omega(x,y)\right]=\left[l_{x},r_{y}\right](a)=\alpha_{x}\beta_{y}a_{ 2}e_{1}\]
i.e. \(\omega(x,y)=\alpha_{x}\beta_{y}e_{1}\). Finally, equations (L6) and (L7) are identically satisfied.
Summarizing we have
\[\begin{cases}l_{x}\equiv\begin{pmatrix}\alpha_{x}&0\\ 0&0\end{pmatrix}\\ \\ r_{y}\equiv\begin{pmatrix}0&\beta_{y}\\ 0&0\end{pmatrix}\\ \\ \omega(x,y)=\alpha_{x}\beta_{y}e_{1}\end{cases}\]
for every \(x,y\in L_{0}\) and the bilinear operation \([-,-]_{(l,r,\omega)}\) becomes
\[[(x,a),(y,b)]_{(l,r,\omega)}=(0,(a_{2}b_{1}+\alpha_{x}b_{1}+\beta_{y}a_{2}+ \alpha_{x}\beta_{y})e_{1}),\]
for any \(x\), \(y\in L_{0}\) and for any \(a=a_{1}e_{1}+a_{2}e_{2}\), \(b=b_{1}e_{1}+b_{2}e_{2}\in S_{2}\).
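For example, since \(l\) and \(r\) are linear in the \(L_{0}\)-variable (so \(\alpha_{0}=\beta_{0}=0\)), evaluating this operation gives

\[[(0,e_{2}),(0,e_{1})]_{(l,r,\omega)}=(0,e_{1}),\qquad[(x,0),(0,e_{1})]_{(l,r,\omega)}=(0,\alpha_{x}e_{1}),\]
\[[(0,e_{2}),(y,0)]_{(l,r,\omega)}=(0,\beta_{y}e_{1}),\qquad[(x,0),(y,0)]_{(l,r,\omega)}=(0,\alpha_{x}\beta_{y}e_{1}),\]

and these are precisely the non-trivial brackets appearing in the multiplication table below.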
If we fix a basis \(\{f_{3},\ldots,f_{n}\}\) of \(L_{0}\) and we denote by
\[\alpha_{i}=\alpha_{f_{i}},\ \ \beta_{i}=\beta_{f_{i}},\ \ \forall i=3,\ldots,n\]
then \(L\) is isomorphic to the Leibniz algebra with basis \(\{e_{1},e_{2},f_{3},\ldots,f_{n}\}\) and non-zero brackets
\[[e_{2},e_{1}]=e_{1}\] \[[e_{2},f_{i}]=\beta_{i}e_{1},\quad\forall i=3,\ldots,n\] \[[f_{i},e_{1}]=\alpha_{i}e_{1},\quad\forall i=3,\ldots,n\] \[[f_{i},f_{j}]=\alpha_{i}\beta_{j}e_{1},\quad\forall i,j=3,\ldots,n.\]
With the change of basis \(f_{i}\mapsto f_{i}^{\prime}=\dfrac{f_{i}}{\beta_{i}}-e_{1}\), if \(\beta_{i}\neq 0\), we obtain that
\[[e_{2},f_{i}^{\prime}]=e_{1}-[e_{2},e_{1}]=0,\] \[[f_{i}^{\prime},e_{1}]=\gamma_{i}e_{1},\ \text{where }\gamma_{i}= \dfrac{\alpha_{i}}{\beta_{i}},\] \[[f_{i},f_{j}^{\prime}]=\alpha_{i}e_{1}-[f_{i},e_{1}]=0,\] \[[f_{i}^{\prime},f_{j}^{\prime}]=\gamma_{i}e_{1}-\dfrac{1}{\beta _{i}}[f_{i},e_{1}]=0.\]
If we denote again \(f_{i}\equiv f_{i}^{\prime}\) and \(\alpha_{i}\equiv\gamma_{i}\) when \(\beta_{i}\neq 0\), then \(L\) has basis \(\{e_{1},e_{2},f_{3},\ldots,f_{n}\}\) and non-trivial brackets
\[[e_{2},e_{1}]=e_{1},\ \ [f_{i},e_{1}]=\alpha_{i}e_{1},\ \ \forall i=3,\ldots,n.\]
Finally, when \(\alpha_{i}\neq 0\), we can operate the change of basis
\[f_{i}\mapsto\dfrac{f_{i}}{\alpha_{i}}-e_{2}.\]
One can check that the only non-trivial bracket now is \([e_{2},e_{1}]=e_{1}\) and \(L\) is isomorphic to the direct sum of \(S_{2}\) and the abelian algebra \(L_{0}\cong\mathbb{F}^{n-2}\). This allows us to conclude with the following.
**Theorem 2.2**.: _Let \(\mathbb{F}\) be a field with \(\operatorname{char}(\mathbb{F})\neq 2\). Let \(L\) be a non-nilpotent non-Lie left Leibniz algebra over \(\mathbb{F}\) with \(\dim_{\mathbb{F}}L=n\) and \(\dim_{\mathbb{F}}[L,L]=1\). Then \(L\) is isomorphic to the direct sum of the two-dimensional non-nilpotent non-Lie Leibniz algebra \(S_{2}\) and an abelian algebra of dimension \(n-2\). We denote this algebra by \(L_{n}\). _
If we suppose that \(L\) is a _non-split_ algebra, i.e. \(L\) cannot be written as the direct sum of two proper ideals, then we obtain the following result, that is a generalization of Theorem 2.6 of [11] and which is valid over a general field \(\mathbb{F}\) with \(\operatorname{char}(\mathbb{F})\neq 2\).
**Corollary 2.3**.: _Let \(L\) be a non-split non-nilpotent non-Lie left Leibniz algebra over \(\mathbb{F}\) with \(\dim_{\mathbb{F}}L=n\) and \(\dim_{\mathbb{F}}[L,L]=1\). Then \(n=2\) and \(L\cong S_{2}\). _
Now we study in detail the algebra \(L_{n}=S_{2}\oplus\mathbb{F}^{n-2}\) by describing the Lie algebra of derivations, its Lie group of automorphisms and the Leibniz algebra of biderivations. Moreover, when \(\mathbb{F}=\mathbb{R}\), we solve the _coquecigrue problem_ (see [7] and [15]) for \(L_{n}\) by integrating it into a Lie rack.
### Derivations, automorphisms and biderivations of \(L_{n}\)
Let \(n\geq 2\) and let \(L_{n}=S_{2}\oplus\mathbb{F}^{n-2}\). We fix the basis \(\mathcal{B}_{n}=\{e_{1},e_{2},f_{3},\ldots,f_{n}\}\) of \(L_{n}\) and we recall that the only non-trivial bracket is \([e_{2},e_{1}]=e_{1}\). A straightforward application of the algorithm proposed in [20] for finding derivations and anti-derivations of a Leibniz algebra as pairs of matrices with respect to a fixed basis produces the following.
**Theorem 2.4**.:
1. _A derivation of_ \(L_{n}\) _is represented, with respect to the basis_ \(\mathcal{B}_{n}\)_, by a matrix_ \[\left(\begin{array}{cccc|cccc}\alpha&0&0&0&\cdots&0\\ 0&0&0&0&\cdots&0\\ \hline 0&a_{3}&&&\\ 0&a_{4}&&&\\ \vdots&\vdots&&&A\\ 0&a_{n}&&&\end{array}\right)\] _where_ \(A\in\mathrm{M}_{n-2}(\mathbb{F})\)_._
2. _The group of automorphisms_ \(\mathrm{Aut}(L_{n})\) _is the Lie subgroup of_ \(\mathrm{GL}_{n}(\mathbb{F})\) _of matrices of the form_ \[\left(\begin{array}{cccc|cccc}\beta&0&0&0&\cdots&0\\ 0&1&0&0&\cdots&0\\ \hline 0&b_{3}&&&\\ 0&b_{4}&&&\\ \vdots&\vdots&&&B\\ 0&b_{n}&&&\end{array}\right)\] _where_ \(\beta\neq 0\) _and_ \(B\in\mathrm{GL}_{n-2}(\mathbb{F})\)_._
3. _The Leibniz algebra of biderivations of_ \(L_{n}\) _consists of the pairs_ \((d,D)\) _of linear endomorphisms of_ \(L_{n}\) _which are represented by the pair of matrices_ \[\left(\left(\begin{array}{cccc|cccc}\alpha&0&0&0&\cdots&0\\ 0&0&0&0&\cdots&0\\ \hline 0&a_{3}&&&\\ 0&a_{4}&&&\\ \vdots&\vdots&&&A\\ 0&a_{n}&&&\end{array}\right),\left(\begin{array}{cccc|cccc}0&\alpha^{\prime}&0& 0&\cdots&0\\ 0&0&0&\cdots&0\\ \hline 0&a^{\prime}_{3}&&&\\ 0&a^{\prime}_{4}&&&\\ \vdots&\vdots&&&A^{\prime}&\\ 0&a^{\prime}_{n}&&&\end{array}\right)\right)\] _where_ \(A\)_,_\(A^{\prime}\in\mathrm{M}_{n-2}(\mathbb{F})\)_._
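One direction of item 1 can also be confirmed with a short computer algebra check. The following sympy sketch (illustrative only; it fixes \(n=5\) and the symbol names are ours) verifies that every matrix of the stated shape is indeed a derivation of \(L_{5}\).

```python
import sympy as sp

dim = 5                                   # L_5 with ordered basis (e1, e2, f3, f4, f5)
basis = [sp.Matrix([1 if i == k else 0 for i in range(dim)]) for k in range(dim)]

def bracket(x, y):
    # the only non-trivial product of L_n is [e2, e1] = e1
    return x[1] * y[0] * basis[0]

# build a generic matrix of the shape given in Theorem 2.4 (1)
alpha = sp.Symbol('alpha')
a = sp.symbols('a3:6')                    # a3, a4, a5
A = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'A{i}{j}'))
D = sp.zeros(dim, dim)
D[0, 0] = alpha
for i in range(3):
    D[2 + i, 1] = a[i]
    for j in range(3):
        D[2 + i, 2 + j] = A[i, j]

is_derivation = all(
    sp.simplify(entry) == 0
    for x in basis for y in basis
    for entry in (D * bracket(x, y) - bracket(D * x, y) - bracket(x, D * y))
)
print(is_derivation)                      # expected output: True
```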
## 3 The integration of the Leibniz algebra \(L_{n}\)
The _coquecigrue problem_ is the problem formulated by J.-L. Loday in [19] of finding a generalization of Lie's third theorem to Leibniz algebras. Given a real Leibniz algebra \(L\), one wants to find a manifold endowed with a smooth map, which plays the role of the adjoint map for Lie groups, such that the tangent space at a distinguished element, endowed with the differential of this map, gives a Leibniz algebra isomorphic to \(L\). Moreover, when \(L\) is a Lie algebra, we want to obtain the simply connected Lie group associated with \(L\). From now on, we assume that the underlying field of any algebra is \(\mathbb{F}=\mathbb{R}\).
In [15] M. K. Kinyon shows that it is possible to define an algebraic structure, called _rack_, whose operation, differentiated twice, defines on its tangent space at the unit element a Leibniz algebra structure.
**Definition 3.1**.: A _rack_ is a set \(X\) with a binary operation \(\rhd\colon X\times X\to X\) which is left autodistributive
\[x\rhd(y\rhd z)=(x\rhd y)\rhd(x\rhd z),\ \ \forall x,y,z\in X\]
and such that the left multiplications \(x\rhd-\) are bijections.
A rack is _pointed_ if there exists an element \(1\in X\) such that \(1\rhd x=x\) and \(x\rhd 1=1\), for any \(x\in X\).
A rack is a _quandle_ if the binary operation \(\rhd\) is idempotent.
The first example of a rack is any group \(G\) endowed with its conjugation
\[x\rhd y=xyx^{-1},\ \ \forall x,y\in G.\]
We denote this rack by \(\operatorname{Conj}(G)\) and we observe that it is a quandle.
**Definition 3.2**.: A pointed rack \((X,\rhd,1)\) is said to be a _Lie rack_ if \(X\) is a smooth manifold, \(\rhd\) is a smooth map and the left multiplications are diffeomorphisms.
M. K. Kinyon proved that the tangent space \(\operatorname{T}_{1}X\) at the unit element \(1\) of a Lie rack \(X\), endowed with the bilinear operation
\[[x,y]=\frac{\partial^{2}}{\partial s\partial t}\bigg{|}_{s,t=0}\gamma_{1}(s) \rhd\gamma_{2}(t)\]
where \(\gamma_{1},\gamma_{2}\colon[0,1]\to X\) are smooth paths such that \(\gamma_{1}(0)=\gamma_{2}(0)=1\), \(\gamma_{1}^{\prime}(0)=x\) and \(\gamma_{2}^{\prime}(0)=y\), is a Leibniz algebra.
He also solved the coquecigrue problem for the class of _split Leibniz algebras_. Here a Leibniz algebra is said to be _split_ if there exists an ideal
\[\operatorname{Leib}(L)\subseteq I\subseteq\operatorname{Z}_{l}(L)\]
and a Lie subalgebra \(M\) of \(L\) such that \(L\cong(M\oplus I,\{-,-\})\), where the bilinear operation \(\{-,-\}\) is defined by
\[\{(x,a),(y,b)\}=([x,y],\rho_{x}(b))\]
and \(\rho\colon M\times I\to I\) is the action on the \(M\)-module \(I\). \(L\) is said to be the _demisemidirect product_ of \(M\) and \(I\). More precisely, we have the following.
**Theorem 3.3**.: _[_15_]_ _Let \(L\) be a split Leibniz algebra. Then a Lie rack integrating \(L\) is \(X=(H\oplus I,\rhd)\), where \(H\) is the simply connected Lie group integrating \(M\) and the binary operation is defined by_
\[(g,a)\rhd(h,b)=(ghg^{-1},\phi_{g}(b)),\]
_where \(\phi\) is the exponentiation of the Lie algebra action \(\rho\)._
Some years later S. Covez generalized M. K. Kinyon's results, proving that every real Leibniz algebra admits an integration into a _Lie local rack_ (see [7]). More recently, it was shown in [16] that the integration proposed by S. Covez is global for any nilpotent Leibniz algebra. Moreover, when a Leibniz algebra \(L\) is integrated into a Lie quandle \(X\), it turns out that \(L\) is a Lie algebra and \(X=\operatorname{Conj}(G)\), where \(G\) is the simply connected Lie group integrating \(L\).
Our aim here is to solve the coquecigrue problem for the non-nilpotent Leibniz algebra \(L_{n}=S_{2}\oplus\mathbb{F}^{n-2}\). One can check that \(S_{2}\) is a split Leibniz algebra, in the sense of M. K. Kinyon, with \(I=\operatorname{Z}_{l}(S_{2})\cong\mathbb{R}\) and \(M\cong\mathbb{R}\). Thus \(S_{2}\cong(\mathbb{R}^{2},\{-,-\})\) with the bilinear operation defined by
\[\{(x_{1},x_{2}),(y_{1},y_{2})\}=(0,\rho_{x_{1}}(y_{2}))\]
and \(\rho_{x_{1}}(y_{2})=x_{1}y_{2}\), for any \(x_{1},y_{2}\in\mathbb{R}\). It turns out that a Lie rack integrating \(S_{2}\) is \((\mathbb{R}^{2},\rhd)\), where
\[(x_{1},x_{2})\rhd(y_{1},y_{2})=(y_{1},e^{x_{1}}y_{2}),\]
and the unit element is \((0,0)\). Finally, one can check that the binary operation
\[(x_{1},x_{2},x_{3},\ldots,x_{n})\rhd(y_{1},y_{2},y_{3},\ldots,y_{n})=(y_{1},e^{x_{1}}y_{2},y_{3},\ldots,y_{n})\]
defines on \(\mathbb{R}^{n}\) a Lie rack structure with unit element \(1=(0,\ldots,0)\), such that \((\operatorname{T}_{1}\mathbb{R}^{n},\rhd)\) is a Leibniz algebra isomorphic to \(L_{n}\). This result, combined with the ones of Section 4 of [16], completes the classification of Lie racks whose tangent space at the unit element gives a Leibniz algebra with one-dimensional derived subalgebra.
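As a quick consistency check of the last claim (a small sympy sketch, not part of the original argument; the path and symbol names are ours), differentiating the rack operation along two paths through the unit recovers, on the first two coordinates, exactly the bracket \(\{(x_{1},x_{2}),(y_{1},y_{2})\}=(0,x_{1}y_{2})\) of \(S_{2}\) used above.

```python
import sympy as sp

s, t = sp.symbols('s t')
x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

# smooth paths through the unit (0, 0) with velocities (x1, x2) and (y1, y2)
g1 = (s * x1, s * x2)
g2 = (t * y1, t * y2)

# rack operation on the first two coordinates: (a, b) |> (c, d) = (c, exp(a) * d)
rack = (g2[0], sp.exp(g1[0]) * g2[1])

bracket = [sp.diff(comp, s, t).subs({s: 0, t: 0}) for comp in rack]
print(bracket)        # expected output: [0, x1*y2]
```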
|
2305.16883 | Argumentation Schemes for Blockchain Deanonymization | Cryptocurrency forensics became standard tools for law enforcement. Their
basic idea is to deanonymise cryptocurrency transactions to identify the people
behind them. Cryptocurrency deanonymisation techniques are often based on
premises that largely remain implicit, especially in legal practice. On the one
hand, this implicitness complicates investigations. On the other hand, it can
have far-reaching consequences for the rights of those affected. Argumentation
schemes could remedy this untenable situation by rendering underlying premises
transparent. Additionally, they can aid in critically evaluating the probative
value of any results obtained by cryptocurrency deanonymisation techniques. In
the argumentation theory and AI community, argumentation schemes are
influential as they state implicit premises for different types of arguments.
Through their critical questions, they aid the argumentation participants in
critically evaluating arguments. We specialise the notion of argumentation
schemes to legal reasoning about cryptocurrency deanonymisation. Furthermore,
we demonstrate the applicability of the resulting schemes through an exemplary
real-world case. Ultimately, we envision that using our schemes in legal
practice can solidify the evidential value of blockchain investigations as well
as uncover and help address uncertainty in underlying premises - thus
contributing to protect the rights of those affected by cryptocurrency
forensics. | Dominic Deuber, Jan Gruber, Merlin Humml, Viktoria Ronge, Nicole Scheler | 2023-05-26T12:37:55Z | http://arxiv.org/abs/2305.16883v1 | # Argumentation Schemes for Blockchain Deanonymization
###### Abstract
Cryptocurrency forensics became standard tools for law enforcement. Their basic idea is to deanonymise cryptocurrency transactions to identify the people behind them. Cryptocurrency deanonymisation techniques are often based on premises that largely remain implicit, especially in legal practice. On the one hand, this implicitness complicates investigations. On the other hand, it can have far-reaching consequences for the rights of those affected. Argumentation schemes could remedy this untenable situation by rendering underlying premises transparent. Additionally, they can aid in critically evaluating the probative value of any results obtained by cryptocurrency deanonymisation techniques. In the argumentation theory and AI community, argumentation schemes are influential as they state implicit premises for different types of arguments. Through their critical questions, they aid the argumentation participants in critically evaluating arguments. We specialise the notion of argumentation schemes to legal reasoning about cryptocurrency deanonymisation. Furthermore, we demonstrate the applicability of the resulting schemes through an exemplary real-world case. Ultimately, we envision that using our schemes in legal practice can solidify the evidential value of blockchain investigations as well as uncover and help address uncertainty in underlying premises - thus contributing to protect the rights of those affected by cryptocurrency forensics.
Keywords:Argumentation Legal Reasoning Blockchain Analysis.
## 1 Introduction
"Follow the money" is arguably the central investigation strategy for any profit-driven offence [34]. Analysing flows of incriminated money is crucial to understand the business models and inner workings of organised crime groups, the hierarchy of the involved entities, and finally, identifying the groups' members. However, the fight against money laundering is challenging, and criminals utilising virtual currencies as early adopters aggravate the situation even further. While law enforcement agencies need to expend many resources to follow complex transnational flows of fiat currencies, blockchain-based investigations impose even further challenges. These challenges arise from the fact that cryptocurrencies are generally pseudonymous, with some even being anonymous. Bitcoin [17] is arguably
the most famous and widespread cryptocurrency - both for lawful economic purposes and criminal activities [6]. Already in the early days of Bitcoin, it was shown that the currency is not anonymous because it is possible to link multiple pseudonyms belonging to the same person [1, 14, 21]. However, even supposedly anonymous cryptocurrencies, such as Monero [15] or Zcash [35], have been the target of deanonymisation attacks [11, 16]. What all attacks on Bitcoin, Monero, and Zcash have in common is that they are based on partly unreliable assumptions [5]. The reliability of these assumptions determines the quality of the results of an attack. In legal practice, those assumptions are critical for inferring the evidential value of the deanonymisation of a perpetrator. However, no standard practice for deriving and discussing the reliability of those analysis results has been proposed yet. Therefore, we propose argumentation schemes for assessing the reliability of investigations on the Bitcoin blockchain - thus bridging practical cryptocurrency forensics and its scientific analysis.
### Related Work
Argumentation schemes [33] as a way to classify arguments by their underlying principles of convincingness have been influential in the argumentation theory and the artificial intelligence community [12]. They present the various types of arguments as informal deduction rules together with accompanying _critical questions_ to aid a human reasoner in evaluating arguments of the respective type.
Given that expert testimonies, as well as the court process itself, are forms of argumentation, it is not surprising that argumentation schemes were applied to legal processes [2]. Walton [32] gives a detailed overview of the applicability of many argumentation schemes to representing and analysing legal processes. Apart from the argumentation schemes, there are other informal argument schemes like the ones proposed by Wagemans [31]; however, they focus more on the classification of arguments than on human comprehension. There have also been more formal - and even automated - approaches to legal reasoning based on argumentation theory [20, 2]. However, our goal is not to automate parts of the legal process but to aid in evaluating statements about blockchain deanonymisation. While software automates blockchain deanonymisation (e.g. Chainalysis Reactor [10]), in the end, legal decision makers, i.e. humans, need to evaluate the reliability of the obtained findings.
Postulating application-tailored argumentation schemes to capture specialised forms of argument is common practice. Parsons _et al._[18] introduce schemes to reason about trust in entities to specialise arguments building on statements. Another example from the medical field is specific argumentation schemes to reason about treatment choices in order to aid doctors in their decision making and producing automated patient specific recommendations [26, 27].
On the legal side, the evidence must be critically evaluated as investigative measures justified by unreliable results potentially impinge upon the fundamental rights of the suspects [22]. Frowis _et al._[7] provide key requirements that must be satisfied to safeguard the evidential value of cryptocurrency investigations; one of them being reliability. They suggest specific measures to achieve reliability, such
as sharing any information necessary to assess reliability, without discussing how they can be implemented in practice. As a step in that direction, Deuber, Ronge and Ruckert [5] provide a taxonomy for the different assumptions underlying deanonymisation attacks on cryptocurrency users - while only briefly discussing their taxonomy's applicability in legal practice.
### Contribution
In legal practice, the lack of a profound framework means that there is no standard way to reason about the reliability of findings from blockchain-based investigations. Less reliable findings might entail two issues: First, results with low reliability might not establish the degree of suspicion required by subsequent investigative measures and thus render them unlawful. In the worst case, any evidence obtained from unlawful investigations might be inadmissible in court - depending on the exclusionary rules of the respective jurisdictions. Second, even if evidence might be admissible, low reliability corresponds to low evidential value, and thus the evidence might not be sufficient for a conviction. Given that any findings and the blockchain investigation itself are highly abstract for most parties involved, there needs to be a common ground between technical analysts, investigators, and other legal practitioners to assess these findings.
Our contribution is the application of tailored argumentation schemes to assess heuristics employed in investigations based on the Bitcoin blockchain to deanonymise criminal users. The schemes render the taxonomy proposed by Deuber, Ronge and Ruckert [5] broadly accessible and easy to use in practice. By presenting the implicit and explicit premises of those heuristics, our argumentation schemes enable all parties involved in the legal process to assess evidential value systematically. Thus, the schemes can potentially render blockchain-based analyses of Bitcoin transactions more comprehensible and the findings more reliable and conclusive.
## 2 Preliminaries
### Bitcoin (BTC)
Bitcoin [17] is a cryptocurrency. At its core are transactions that, in their most basic form, are payments. In contrast to fiat currencies, Bitcoin employs a decentralised ledger of transactions. Decentralised means that there is no central authority issuing new units of the currency or settling transactions. Instead, parties maintain the ledger in a peer-to-peer network - a network where all parties are clients and servers simultaneously. The transactions are organised in blocks, which is why the ledger is also referred to as a blockchain. Using a consensus mechanism, the network agrees on which blocks, i.e. particularly transactions, should extend the ledger. The network nodes participating in this consensus mechanism are called _miners_.
_Transactions_ consist of a list of inputs and outputs. An output usually states an amount of Bitcoin (\(v\,\mathrm{BTC}\)) and the hash \(h_{\mathsf{pk}}\) of a public key \(\mathsf{pk}\), which is also referred to as address \(a\). The public key is part of a digital signature scheme. Such schemes use public and secret key pairs - anyone can check the validity of a signature with respect to some public key, while only the one knowing the corresponding secret key can create a valid signature. An input is a reference to an output of another transaction, which is uniquely described by the hash \(tx_{hash}\) of that other transaction and the position \(out_{id}\) of the output in the transaction's list of outputs. An example of a transaction with one input and two outputs is given in Fig. 1. Usually, transactions have several in- and outputs. Spending the first output of this transaction with an amount of \(v_{1}\) Bitcoin requires providing a public key \(\mathsf{pk}^{\prime}\) whose hash equals \(h_{\mathsf{pk}_{1}}\) and a signature that verifies under \(\mathsf{pk}^{\prime}\). This mechanism ensures that, in general, there are no unauthorized transactions, as knowledge of the corresponding secret keys is required to issue a transaction. A property of Bitcoin is that the input amount of a transaction is always consumed entirely. Thus, the second output of the transaction might be a so-called _change_ output. A change output pays back to the sender(s) the difference between its input amounts and the amount that the recipient(s) should receive.
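To make the layout above concrete, the following Python sketch models a transaction as a list of inputs and outputs; the class, field and variable names are illustrative only, do not reflect Bitcoin's actual serialisation format, and the hash string and amounts are placeholders.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TxInput:
    tx_hash: str       # hash of the transaction whose output is being spent
    out_id: int        # position of that output in the referenced transaction

@dataclass
class TxOutput:
    value_btc: float   # amount v in BTC
    address: str       # hash of the recipient's public key, i.e. the address a

@dataclass
class Transaction:
    inputs: List[TxInput]
    outputs: List[TxOutput]

# a transaction with one input and two outputs, as in Fig. 1;
# the second output could be a change output paying back to the sender
tx = Transaction(
    inputs=[TxInput(tx_hash="c0ffee...", out_id=0)],
    outputs=[TxOutput(value_btc=0.5, address="h_pk1"),
             TxOutput(value_btc=0.1, address="h_pk2")],
)
print(len(tx.inputs), len(tx.outputs))   # 1 2
```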
_Wallets_ in Bitcoin can be seen as a collection of several addresses which belong to the same entity. On a technical level, a wallet is often referred to as software that generates and stores the private keys corresponding to different addresses and allows creating new addresses and issuing transactions. By only inspecting transactions on the blockchain, it is not immediately obvious which addresses belong to the same wallet.
_CoinJoin_ transactions are a special type of transaction that tries to add anonymity to Bitcoin. The idea is to combine inputs from multiple entities while at the same time having equally valued outputs [13]. In Bitcoin, the concept of having transactions with inputs from multiple users to hinder linking is called _mixing_.
### Bitcoin Investigations
Figure 1: Bitcoin transaction

Research has shown early on that Bitcoin is not anonymous but pseudonymous, as it is possible to cluster addresses that are likely to be controlled by the same entity, referred to as _address clustering_. The most important address-clustering heuristics are the _multi-input heuristic_[1, 14, 21] and the _change-address heuristic_[14, 1, 11]. The multi-input heuristic states that all inputs of a transaction are controlled by the same entity - as already mentioned in Bitcoin's whitepaper [17]. The multi-input heuristic should not be applied to CoinJoin transactions as they are issued by multiple entities by design. The change-address heuristics utilise that change often occurs in Bitcoin (see Section 2.1).
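As a rough illustration of how the multi-input heuristic yields address clusters (a toy sketch, not the proprietary tooling discussed later; the transaction list and the CoinJoin flag below are invented for the example), one can merge the input addresses of every non-CoinJoin transaction with a union-find structure:

```python
from collections import defaultdict

parent = {}

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]    # path compression
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

# toy data: (input addresses of a transaction, suspected CoinJoin?)
transactions = [
    (["addr1", "addr2"], False),
    (["addr2", "addr3"], False),
    (["addr4", "addr5"], True),          # CoinJoin: heuristic must not be applied
]

for inputs, is_coinjoin in transactions:
    if is_coinjoin:
        continue                          # issued by multiple entities by design
    for a in inputs[1:]:
        union(inputs[0], a)

clusters = defaultdict(set)
for a in parent:
    clusters[find(a)].add(a)
print(list(clusters.values()))            # [{'addr1', 'addr2', 'addr3'}]
```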
The main objective of blockchain investigations is _re-identification_, that is to determine the natural or legal person who controls an address cluster. This is especially relevant for law enforcement trying to identify persons connected to flows of incriminated virtual currencies. By tracing such transactions and conducting address clustering, they might identify a single relevant address cluster. As addresses typically do not contain any personally identifiable information, the investigation requires re-identification. To facilitate re-identification, address clusters are usually connected with off-chain information - a process also referred to as _attribution tagging_[7]. As its name implies, the tagged information in attribution tagging can be used to identify the actual entity. In practice, the arguably most important attribution information is that an address cluster is related to some cryptocurrency _exchange_ - a platform to exchange, buy or sell cryptocurrencies - as law enforcement might request the respective customer data from this exchange.
### Legal Background
Many states committed themselves to the fight against cybercrime by ratifying the Convention on Cybercrime [3]. This commitment includes establishing cybercrime offences under domestic law as well as providing investigative measures to enable the prosecution of such offences - while simultaneously protecting fundamental human rights and liberties. The actual balance between the interests of law enforcement and human rights is dictated by the domestic laws of the ratifying states. However, the legal issues discussed in this section are not specific to a particular jurisdiction or legal system. This is illustrated by using the US as an example of a common-law jurisdiction and Germany as an example of a civil-law jurisdiction; both states have ratified the convention. The starting point for our discussion is the following example case of a typical blockchain-based investigation:
**Example.** Investigators seized a darknet marketplace and recovered a local Bitcoin wallet that was presumably used to pay the marketplace's operator. The investigators then used blockchain analysis to discover the wallet which was used by the operator to receive payments. While the discovered operator wallet is a local wallet, the operator is suspected of using another wallet at a cryptocurrency exchange to convert Bitcoin into fiat currency. To prevent that the exchange wallet can be linked to the incriminated local wallet, the operator mixed the funds prior to the transfer. Through blockchain analysis, the investigators nevertheless managed to establish a link between the incriminated local wallet
and the exchange wallet. Next, the investigators issued a request for the disclosure of customer data to the exchange - which collected them as part of their employed Know-Your-Customer policy to comply with anti-money-laundering laws. The goal of this request was to find the natural person that controls the incriminated local wallet. After having identified this suspected operator, the investigators conducted electronic surveillance and executed a search of the suspect's premises.
In summary, the investigative measures used in the example were the blockchain analysis, a request for the disclosure of customer data, electronic surveillance, and a search of premises. In general, such investigative measures have in common that they require a specific degree of suspicion in order to protect the rights of the targeted person.
Under German law, an _initial suspicion_ is sufficient to justify a blockchain analysis (according to Sections 161, 163 German Code of Criminal Procedure (GCCP), [25; 8]) or a request for the disclosure of customer data (according to Section 100j GCCP). An initial suspicion must be based on a conclusive and established factual basis (factual quality). Due to lax requirements, these measures may be directed not only against the suspected person but also against other third parties that might be somehow connected [9; 24]. There are stronger requirements regarding electronic surveillance pursuant to Section 100a GCCP or a search of premises pursuant to Section 102 GCCP. Beyond the mere 'possibility' of the commission of a crime, in these cases, the suspicion of the crime must be specific and individualised (so-called _qualified initial suspicion_) as well as 'probable' [23; 19]. These measures have to be directed only against the accused person [23] and may only involve other persons who are directly connected to the accused person or involved in the crime (see Sections 100a (3) and 103 GCCP).
Under US law, especially the requirements for the analysis of blockchain data and a request for the disclosure of customer data differ significantly from German law. However, this does not affect the legal issues raised by blockchain analyses, as we will point out below. Both blockchain analyses and the request for the disclosure of customer data are not subject to the probable cause requirement of the Fourth Amendment, given that the _third-party doctrine_ applies [30]. However, electronic surveillance and search of premises are subject to the Fourth Amendment and therefore require _probable cause_ as the degree of suspicion. The Fourth Amendment demands the suspicion to be particularised with respect to the person under surveillance, being searched, or specific things to be seized.
The most important legal issue concerning blockchain analysis in practice is whether or not the findings of the analysis can establish the required degree of suspicion for subsequent investigative measures. Therefore, the lower requirements for blockchain analysis or a request for the disclosure of customer data under US law do not matter, as at least subsequent measures - such as searches of premises - require similar degrees of suspicion as under German law. Thus, the only difference under US law is that the legal issue arises later in the investigation.
To illustrate the legal issue, we return to the example of the darknet marketplace operator. Here, a blockchain analysis was used to link an incriminated wallet to an exchange service. Next, disclosure of customer data was requested
from the exchange. Imagine that solely based on the linkage of the wallets, further investigative measures are conducted against the natural person identified by the customer data. If those measures are electronic surveillance or searches of premises, the required suspicion must be particularised against the person targeted by the measures, both under German and US law. If it is unreliable, blockchain analysis might fail to establish this particularised suspicion. Imagine that the analysis is based on the multi-input heuristic, but the heuristic is applied to CoinJoin transactions. In this case, the analysis would definitely yield false positives as CoinJoin transactions are issued by multiple entities by design. False positives might render the individualisation insufficient and thus the respective investigative measure unlawful.
To summarise, certain invasive and targeted investigative measures require a degree of suspicion that is individualised with respect to the target of these measures. Blockchain analysis based on uncertain assumptions might lead to unreliable findings that are not sufficient to establish the individualisation and thus the required degree of suspicion for subsequent investigative measures. If investigative measures are conducted without the necessary degree of suspicion, they are unlawful and thus might render obtained evidence inadmissible - depending on the exclusionary rules of the respective jurisdiction.
### Argumentation Schemes
Argumentation schemes classify arguments by their warrant in the sense of Toulmin [28] - i.e. by their principle of convincingness. They are presented as informal presumptive deduction rules inferring plausible truth of a conclusion from truth of multiple premises [33]. For example, the _Argument from Abductive Inference_ is tailored towards reconstructing the cause \(E\) for a set \(F\) of observed findings.
Premise: \(F\) is a finding or given set of facts.
Premise: \(E\) is a satisfactory explanation of \(F\).
Premise: No alternative explanation \(E^{\prime}\) given so far is as satisfactory as \(E\).
Conclusion: Therefore, \(E\) is plausible as hypothesis.

Scheme 1: Argument from Abductive Inference [33]
In addition to the deduction rule representing the informal shape of the argument, an argumentation scheme specifies _critical questions (CQs)_ as ways to attack an argument based on the scheme. The critical questions aid both the producer and the receiver of arguments by suggesting relevant statements to present or ask about. There are usually critical questions attacking the individual premises or the conclusion of the argument, as well as ones attacking the applicability of the scheme. Consider for example the CQs of the Argument from Abductive Inference:
1. How satisfactory is \(E\) as an explanation of \(F\), apart from the alternative explanations available so far in the dialogue?
2. How much better an explanation is \(E\) than the alternative explanations available so far in the dialogue?
3. How far has the dialogue progressed? If the dialogue is an inquiry, how thorough has the investigation of the case been?
4. Would it be better to continue the dialogue further, instead of drawing a conclusion at this point?
Scheme 1: Critical questions of Argument from Abductive Inference
CQs 1 and 2 are direct attacks on truth of premises of the rule. CQs 3 and 4 are specific attacks based on the idea that there could be other explanations not yet put forth due to the temporal nature of argumentative dialogues.
By making premises and possible flaws of an argument explicit, argumentation schemes aid critical discussion of expert statements by legal decision-makers and other practitioners without the need for deep understanding of the underlying topic. For judging the reliability of a claim from blockchain analysis, it is particularly helpful to have transparency with regards to the underlying assumptions as they have to be judged on a case-by-case basis [5]. This added transparency can also increase the evidential value of such findings if the reliability of dependent information is sufficiently well established.
## 3 Our Argumentation Schemes
In criminal investigations, blockchain analyses are typically conducted to establish a link between an entity and a criminal offence through involved cryptocurrency addresses. As stated in Section 1.1, there exists software that could establish such links in an automated manner. However, the methods used by it, as well as the employed heuristics, remain regularly opaque. Such insufficient traceability is contrary to the requirements of legal proceedings, which require a high degree of explainability and intelligibility. For this purpose, we present a custom argumentation scheme to argue the involvement of an entity in an offence from the control of an address that is connected to that offence (see Scheme 2).
We do not need a custom argumentation scheme to represent linking an entity with an address by requesting data from a cryptocurrency exchange, as this is covered by _Argument from Position to Know_[33]. This standard scheme covers the case, as exchanges typically collect their customers' personal information as part of Know-Your-Customer policies and are thereby in a position to know who the customer using an account is.
To establish a link between addresses, there are software tools implementing various heuristics, such as the multi-input heuristic or change heuristics, which are arguably used by investigators [5]. We pose the _Cluster from Software_ scheme to represent arguments based on such a software tool to establish the link between addresses and thereby forming clusters.
Naturally, it is not enough for a software tool to establish a link between addresses without further explanations and evidence backing that claim. Analysts face a myriad of transactions when conducting blockchain analyses. They must assess the results presented by the software for criminalistic and legal reasons. First, analysts must understand the software's processes to infer investigative leads, find connections, and form hypotheses - tasks that cannot be entirely automated. Second, only when understanding the software's results can analysts apply their knowledge of criminal tactics eventually employed by perpetrators, question the results, and falsify hypotheses they previously posed. Finally, from a legal perspective, the rightfulness of the analysis is crucial, as it affects the lawfulness of further investigations in the pre-trial stages and the evidential value of obtained findings in the actual trial [5]. However, assessing the results would require that the employed deanonymization software discloses the assumptions relied on in the analysis - which is typically not done at all. Therefore, an investigator would back the findings of the software by manual analysis in case the software does not disclose the reasons for linking addresses. To represent the claims from manual analysis, we present two exemplary schemes that capture the use of the multi-input (see Scheme 4) and the change-address heuristic (see Scheme 5), respectively.
Premise: Transaction \(T\) has multiple input addresses
Premise: Entity \(E\) controls some input addresses of \(T\)
Conclusion: Entity \(E\) controls all input addresses of \(T\)
1. Could \(T\) be a CoinJoin transaction?
2. Could it be that another entity \(F\) shares secret keys with \(E\) and thereby can control other or all inputs of \(T\)?
3. Which input addresses of transaction \(T\) does entity \(E\) control? What evidence is there for \(E\) controlling these addresses?
4. Are there other indicators that \(E\) might control other input addresses of \(T\)?
Scheme 4: Cluster from Multi-Input

Premise: Transaction \(T\) has multiple output addresses
Premise: Output address \(C\) is a _change_ address of transaction \(T\)
Premise: Entity \(E\) controls all input addresses of \(T\)
Conclusion: Entity \(E\) also controls _change_ address \(C\)
1. Could \(T\) just have multiple distinct benefactors? Could the change for example be donated to a supported unrelated entity?
2. What evidence is there suggesting that client software was used which generates a fresh change address for every new transaction?
3. Are there other indicators that \(E\) controls address \(C\)?
Scheme 5: Cluster by Change-Address
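Although our aim is not automation, the schemes lend themselves to a simple structured representation that an analyst could attach to a report. The following Python sketch is purely illustrative (the class, field and answer texts are ours, not taken from [33]) and instantiates the Cluster from Multi-Input scheme together with its critical questions:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ArgumentationScheme:
    name: str
    premises: List[str]
    conclusion: str
    critical_questions: List[str]
    answers: Dict[int, str] = field(default_factory=dict)   # CQ number -> answer

    def open_questions(self) -> List[str]:
        """Critical questions the analyst has not yet addressed."""
        return [q for i, q in enumerate(self.critical_questions, start=1)
                if i not in self.answers]

cluster_from_multi_input = ArgumentationScheme(
    name="Cluster from Multi-Input",
    premises=["Transaction T has multiple input addresses",
              "Entity E controls some input addresses of T"],
    conclusion="Entity E controls all input addresses of T",
    critical_questions=[
        "Could T be a CoinJoin transaction?",
        "Could another entity F share secret keys with E?",
        "Which input addresses does E control, and on what evidence?",
        "Are there other indicators that E controls other input addresses of T?",
    ],
)

cluster_from_multi_input.answers[1] = "No equally valued outputs; CoinJoin unlikely."
print(cluster_from_multi_input.open_questions())
```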
For brevity, the argumentation schemes presented in this section only cover the most common Bitcoin blockchain analysis heuristics used in practice and especially do not cover non-blockchain-specific reasoning. For the latter, we can use the vast array of pre-existing schemes [33]. Together, these schemes can be applied to represent reasoning about Bitcoin blockchain investigations in practice, as we will show in Section 4.
## 4 Application in the Wall Street Market Case
In order to illustrate our approach and its practical implications, we present the argumentation behind the investigative results of the proceedings against one of the administrators of the infamous Wall Street Market (WSM). WSM was one of the largest darknet marketplaces on which illegal narcotics, financial data, hacking software as well as counterfeit goods were traded between approximately 2016 and its seizure in 2019 [4]. Besides technical surveillance measures, blockchain-based investigations of Bitcoin transactions conducted by the US Postal Service (USPS) were decisive in identifying the administrators operating the marketplace [29].
The publicly available criminal complaint states that the USPS employed proprietary software of an undisclosed company to conduct its blockchain analyses [29]. Furthermore, neither the exact methods employed during the analyses
nor the involved Bitcoin addresses were specified. Instead, the final results - meaning actual investigative findings in the form of off-chain information - were presented on their own. To prove the correctness, it is merely stated that the software was found to be reliable based on numerous unrelated investigations [29]. This might either suggest the software was utilised as a black box or that the details were (intentionally) not published and kept secret to protect the technical means for tactical reasons. This argumentation might be insufficient to convince legal decision makers of the rightfulness of the findings. Thus, we infer from the criminal complaint which analysis methods the software might have employed and then apply our argumentation schemes to argue the findings.
The blockchain analyses of the USPS constituted the initial lead that enabled the involved law enforcement agencies to identify 'TheOne' - who acted as one of the administrators of the platform [29]. 'TheOne' is believed to be 'X.',1 one of the three defendants, mainly based on the following two findings:
Footnote 1: The defendant’s name has been anonymized by the authors.
First, the investigators could establish a link between the administrator 'TheOne' from WSM and the user 'dudebuy' from _Hansa Market_ by analysing data seized from both platforms. They found that 'TheOne' used the same PGP public key as 'dudebuy' did at the previously operated and meanwhile seized darknet marketplace Hansa Market. As a PGP key pair is a highly individual piece of data used to prove one's identity and encrypt communications, it has to be inferred that those two monikers belong to the same real-world entity. As 'dudebuy' used a wallet \(W2\) as his _refund wallet_ on Hansa Market, the investigators found an entry point to perform financial investigations concerning this perpetrator seeming to operate now as 'TheOne'.

Figure 2: Application of the proposed argumentation schemes to assess the identification of the administrator of the darknet marketplace called Wall Street Market
Here, the investigators could establish suspicion using the _Suspicion through Address Control_ scheme and infer that the owner of wallet \(W2\) seems to be the targeted administrator of the ongoing investigations regarding WSM. This conclusion could be assessed by the evaluation of the critical questions of the scheme. CQ 1 - regarding circumstantial evidence indicating address control - leads to a high degree of confidence, as the investigators resorted to seized user data, including an identical PGP public key. While CQ 2 (address control by somebody else) does not seem to be of relevance to the investigators at this point in time, CQ 3 (nature of the connection to the offence) reveals at least an indirect involvement of the address in the offence in question.
Second, being confident that the owner of wallet \(W2\) is the target, the USPS revealed that other wallets that appeared in the investigations, namely wallets \(W1\) and \(W4\), were funded by transactions originating from wallet \(W2\). As this analysis step is basically a rather typical payment flow analysis, which is also employed in traditional money laundering investigations concerning fiat currencies, it is dispensable to assess it with a newly formulated argumentation scheme. For example, _Argument from Sign_ or _Argument from Abductive Inference_ would be a suitable fit here [33]. Those newly uncovered wallets, in turn, were identified to be the true origin of several payments to various services, which were conducted via a bitcoin payment processing company (BPPC). Prior to these payments, the corresponding funds were supposedly mixed via a commercial mixing service, whose flow of transactions could be 'de-mixed' by the USPS' analysts [29].
Given the fact that no further information regarding the de-mixing is presented in the criminal complaint, we deliberately assume that some sort of software established the link so that the _Cluster from Software_ scheme should be employed to be able to judge the evidential value of this result. The scheme revolves around the mechanism for link establishment (CQ 1), the reliability of the tool itself (CQ 2), human comprehensibility (CQ 3) and additional evidence available (CQs 4 and 5). Here, the most important critical question to pose might be CQ 3, i.e. whether the link could be established by comprehensible reasoning of a human analyst. As the following requests for the disclosure of customer data were based on this link, it must be considered crucial evidence in this early phase of the investigation. In the course of using CQ 3, a human analyst might establish that the link was a result of the multi-input heuristic. As the multi-input heuristic results in false positives when applied to CoinJoin transactions, it is crucial to challenge whether the involved transactions could be CoinJoin transactions - via CQ 1 of the _Cluster from Multi-Input_ scheme. By this example, the practical relevance of our argumentation schemes becomes particularly apparent. Without the schemes, the argumentation would be limited to whether the analysis software was reliable in the past but not whether false positives were actually excluded in the specific case.
By obtaining user records from the BPPC regarding the payment from wallet \(W1\), investigators uncovered an e-mail address, which could be linked to the
aforementioned defendant, as it was actually used alongside his real-world identity 'X'. In addition to that, they uncovered that wallet \(W4\) served as the suspected source for payments for two accounts at a video gaming company, which were also linked to the suspect, as the records obtained by a subpoena suggest. Furthermore, a second link could be established from another wallet \(W5\), in a similar manner, which is considered to be used to pay for a third account linked to the suspect at the gaming company in a similar manner. Wallet \(W5\) was found to be funded by a different wallet that could also be associated with WSM's administrators at a later point in time. While this correlation accumulates reliability, each respective request for the disclosure of customer data might be assessed by employing the _Argument from Position to Know_ scheme [33].
In summary, the USPS's blockchain analyses included the following broader steps: _identification of wallets_, _detection of payments between wallets_, _de-mixing_ and the _association of wallets with off-chain information_ mainly from other darknet marketplaces as well as service providers. While the investigators later found various pieces of evidence in the course of the following investigative actions, these steps were central for the case in order to find a starting point for targeted investigations. We showed that their reliability could be effectively assessed by the utilisation of our argumentation schemes.
## 5 Conclusion
After having demonstrated the usage of several argumentation schemes for blockchain-based investigations, we conclude by presenting use cases in which the schemes will be especially beneficial and by pointing out directions for future work.
As our argumentation schemes allow reasoning about the findings of blockchain-based investigations, we see potential use cases wherever such findings have to be communicated to and assessed by persons involved in respective criminal proceedings. By utilising the schemes, an analyst can clearly articulate the employed heuristics, their individual strengths, and potential weaknesses. This increases the comprehensibility of such analyses and court proceedings for the decision makers, and also eases the documentation for later verification by an expert witness. Given the high requirements regarding the explainability of legal proceedings, this task cannot be achieved by software in an automated manner yet. Therefore, we intend to support them with our argumentation schemes. Nevertheless, our considerations can be prospectively integrated into deanonymization software to increase its explainability. Clear articulation is key to determining the quality of blockchain-based findings, especially if they are not or only weakly supported by other evidence. On the one hand, applying an argumentation scheme and utilising its critical questions enables law enforcement agencies and the preliminary judge to reason about the eventual perpetration of the identified person and therefore establish a certain degree of suspicion to justify further investigative measures. On the other hand, the rights of suspects can be protected by ensuring that the results obtained from blockchain investigations are of quality, can be understood,
independently checked for plausibility by the parties to the proceedings, and are actually able to establish the relevant suspicion required by law.
As a result, we consider the application of argumentation schemes in the context of blockchain-based investigations a supportive mechanism for making sense of the intangible crime scene and highly abstract commission of cybercriminal offences. Our schemes can be a helpful tool for investigators and prosecutors that strive to identify perpetrators, as well as for legal decision makers to answer the question of guilt. Finally, the schemes are a step forward in the direction of harmonising the effectiveness and explainability of high-tech investigations.
Extending this work can be done in multiple directions. Further schemes for other blockchain analysis heuristics or other cybercriminal investigations could be created, as indicated already in Section 3. In addition to that, the critical questions of our schemes could be refined to comprise more specific sub-questions as done for _Argument from Expert Opinion_ in Walton, Reed and Macagno [33] to capture more expert knowledge.
#### 5.0.1 Acknowledgements
This work was supported by DFG (German Research Foundation) as part of the Research and Training Group 2475 "Cybercrime and Forensic Computing" (grant number 393541319/GRK2475/1-2019). Merlin Humml was also supported by DFG project RAND (grant number 377333057). The authors also wish to thank Marie-Helen Maras for fruitful discussions.
|
2307.16022 | Rising Tides: Analytic Modeling of Tidal Effects in Binary Neutron Star
Mergers | The gravitational waves produced by binary neutron star mergers offer a
unique window into matter behavior under extreme conditions. In this context,
we model analytically the effect of matter on the gravitational waves from
binary neutron star mergers. We start with a binary black hole system,
leveraging the post-Newtonian formalism for the inspiral and the
Backwards-one-Body model for the merger. We combine the two methods to generate
a baseline waveform and we validate our results against numerical relativity
simulations. Next, we integrate tidal effects in phase and amplitude to account
for matter and spacetime interaction, by using the NRTidal model, and test its
accuracy against numerical relativity predictions, for two equations of state,
finding a mismatch around the merger. Subsequently, we lift the restriction on
the coefficients to be independent of the tidal deformability, and recalibrate
them using the numerical relativity predictions. We obtain better fits for
phase and amplitude around the merger, and are able to extend the phase
modeling beyond the merger. We implement our method in a new open-source Python
code, steered by a Jupyter Notebook. Our research offers new perspectives on
analytically modeling the effect of tides on the gravitational waves from
binary neutron star mergers. | Alexander O'Dell, Maria C. Babiuc Hamilton | 2023-07-29T16:31:54Z | http://arxiv.org/abs/2307.16022v2 | # Rising Tides: Analytic Modeling of Tidal Effects in Binary Neutron Star Mergers
###### Abstract
The gravitational waves produced by binary neutron star mergers offer a unique window into matter behavior under extreme conditions. In this context, we model analytically the effect of matter on the gravitational waves from binary neutron star mergers. We start with a binary black hole system, leveraging the post-Newtonian formalism for the inspiral and the Backwards-one-Body model for the merger. We combine the two methods to generate a baseline waveform and we validate our results against numerical relativity simulations. Next, we integrate tidal effects in phase and amplitude to account for matter and spacetime interaction, by using the NRTidal model, and test its accuracy against numerical relativity predictions, for two equations of state, finding a mismatch around the merger. Subsequently, we lift the restriction on the coefficients to be independent of the tidal deformability, and recalibrate them using the numerical relativity predictions. We obtain better fits for phase and amplitude around the merger, and are able to extend the phase modeling beyond the merger. We implement our method in a new open-source Python code, steered by a Jupyter Notebook. Our research offers new perspectives on analytically modeling the effect of tides on the gravitational waves from binary neutron star mergers.
**Keywords: binary neutron star mergers, analytical modeling, gravitational waves, tidal deformability**
## 1 Introduction
Neutron stars are extremely dense remnants of massive stars, at the brink of collapsing into a black hole, with masses comparable to that of the sun contained within a diameter of only about twenty kilometers. They are captivating celestial objects, under the action of powerful gravitational and magnetic fields that cannot be replicated on Earth. These exceptional properties make them excellent astrophysical laboratories for investigating the behavior of matter in extreme conditions of density, pressure and temperature. When two neutron stars collide and merge, they emit gravitational waves, accompanied by electromagnetic radiation, matter, and neutrinos, carrying valuable information about their masses, sizes and interior structure.
The first direct detection of a gravitational wave (GW) signal from a binary neutron star (BNS) collision, named GW170817 [1], was accompanied by a gamma-ray burst [2] and ignited a kilonova, a transient powered by the radioactive decay of the heavy elements synthesized in the merger [3]. This twin detection marked the onset of the _golden era_ in neutron star research, alluding to the gold and precious metals synthesized and expelled during the collision, and has inspired intense theoretical investigations. These studies provided us with a wealth of new insights into the yet unknown internal structure of neutron stars [4, 5, 6], the characteristics of their magnetic fields [7], and the outflow of heavy matter during the collision [8].
Unfortunately, simultaneous detections of gravitational waves and light are few and far between. Out of the almost 100 GW events reported to date [9], the overwhelming majority came from binary black hole collisions, and only one other, named GW190425, was produced by an unusually heavy BNS collision [10, 11]. The two BNS mergers detected so far raised many questions, proving again that nature is more complicated than our models [12, 13]. It is thus imperative to revise our current understanding of the physics involved in those models, in order to gain insight on BNS mergers. This is a timely topic, because in the next decade hundreds of such GW events might be detected [14], with the advent of the new generation of gravitational wave observatories, including both ground-based detectors such as the Cosmic Explorer and the Einstein Telescope, and space-borne instruments such as LISA, DECIGO and TianQin [15].
Recent advancements in the analytical modeling of the tidal interactions during BNS mergers have focused on developing post-Newtonian (pN) approximations for the late-inspiral phase, with most models relying on the effective-one-body (EOB) approach [16, 17, 18]. As those approximations become less accurate near merger [19], alternative methods have emerged, notably the closed-form tidal approximants [20, 21, 22, 23, 24, 25], which combines pN, tidal EOB, and numerical relativity data.
In this study, we employ the NRTidal approximant [20, 23], an elegant model that adjusts the analytically calculated binary black hole (BBH) waveforms by adding a closed-form expression to account for the tidal influences in the phase and amplitude of the GW. This model was used for estimating source properties and constraining the equation of state for ultra-dense matter in the first two BNS detections, and is the preferred model for the LIGO Scientific and Virgo Collaborations [26]. Our goal is to analytically model the effect of tides on the GW signal during the inspiral and to accurately extend it through the merger of the BNS systems considered. We aim to
create an open-source, easily replicable code that generates comprehensive analytical templates for gravitational radiation during BNS collisions. By doing so, we facilitate independent consistency checks and assist in defining the domains of validity for the approximation models employed. Ultimately, our work contributes to the development of a universal analytical model encapsulating the BNS dynamics and the effect of tides on GW emission.
This paper unfolds as follows: first, we present the point-particle system that characterizes BBH collisions, and calculate the GW by combining the post-Newtonian formalism for the inspiral evolution with the Backwards-one-Body model for the merger. As baseline GW we use the fully analytical model for BBH collisions we developed in [27] and expanded in [28], based on [29] for the inspiral and [30] for the merger. We prove our model's validity and efficiency through a comparison with numerical relativity (NR), utilizing the SXS:BBH:0180 template from the Simulating eXtreme Spacetimes (SXS) catalogue [31]. We then incorporate the tidal deformability into the point-particle waveform, as polynomial corrections to the phase and amplitude, by using the coefficients proposed in [22, 23]. Next, we test our implementation against SXS:NSNS:0001/0002 for two equations of state [32].
Finally, we push this model past the merger, by performing a new fit to the numerical BNS data for the tidal phase and amplitude, from which we derive updated values for the polynomial coefficients, expanding thus its applicability. Upon determining the new coefficients, we reconstruct the tidal correction in phase and amplitude beyond the merger and analyze the tidal influence on the early stages of the collision, revealing the effect of the matter interaction on the system's orbits.
In this work we consider two neutron stars with total mass \(M=M_{A}+M_{B}\), \(M_{A}\leq M_{B}\) and mass ratio \(q=M_{B}/M_{A}\geq 1\). We express time, space and energy in geometric units (with \(G=c=1\)), in terms of the binary mass \(M\), written as a multiple of the sun mass \(M_{\odot}\).
## 2 The Baseline Model
We start with assembling the baseline analytical model for the calculation of the GWs from a BBH collision, by following the common procedure as we detailed in [27, 28]. First, we split the binary motion in two regions: the weak field, during the inspiral, and the strong field, during the merger, and apply different mathematical formalisms to obtain the waveform for each region. We then generate the complete GW template for the whole binary evolution by matching those two regions in frequency around the last stable orbit, and building the hybrid waveform. Lastly, we compare our model against a numerically generated equal mass BBH collision, to uphold its validity.
### The Baseline Inspiral Model
Let us consider a tight binary system of separation \(r\), in quasi-circular orbit. By defining the reduced mass \(\mu=(M_{A}M_{B})/M\) and the symmetric mass ratio \(\eta=\mu/M\), we reduce it further to a single particle of mass \(\mu\) and position \(r\), orbiting around the mass \(M\) located at the center of mass. We require the system to dissipate GWs and
are led to the balance equation:
\[F(t)=-\frac{dE(t)}{dt}. \tag{1}\]
This formula states that the GW flux \(F(t)\) is emitted at the expense of the orbital energy \(E(t)\), causing the orbit to shrink. Eq.(1) can be rewritten in terms of a small factor \(x_{pN}=(v/c)^{2}\) called post-Newtonian parameter, where \(v\) is the orbital velocity of the binary and \(c\) is the speed of light (see [33, 34, 35, 36]).
\[\frac{dx_{pN}(t)}{dt}=-\frac{F(t)}{dE(t)/dx_{pN}(t)}. \tag{2}\]
Using the post-Newtonian (pN) approximation, we expand the deviation from Newtonian gravity as a perturbation in power series of \(x_{pN}\). Among the many different methods of solving eq.(2), in this work we choose the TaylorT4 approximant, shown to agree best with numerical simulations [37]. Within this method, eq.(2) becomes [27, 29]:
\[\frac{dx_{pN}(t)}{dt}\bigg|^{N/2}=\frac{x_{pN}^{5}(t)}{M}\sum_{j=0}^{N}\xi_{j}\,x_{pN}^{j/2}(t). \tag{3}\]
where \(N/2\) denotes the pN expansion order. In this work we go up to 3.5 in the leading pN order, adding self-force and hereditary correction terms up to 6pN order, to increase the accuracy in modeling the region near the merger [29]. By integrating eq.(3) with the coefficients \(\xi_{j}\) we obtain the evolution of the pN parameter \(x_{pN}\). Next, we use Kepler's third law \(v^{2}=(M\Omega(t))^{2/3}\) where \(\Omega\) is the angular orbital velocity, to obtain the equation for the orbital phase:
\[\frac{d\Phi_{pN}(t)}{dt}=\Omega(t)=\frac{x_{pN}(t)^{3/2}}{M}. \tag{4}\]
We readily integrate eq.(4) to find the time evolution of the orbital inspiral phase.
We reuse Kepler's third law, written as \(v=\sqrt{M/r}\), to extract the orbital separation as \(r=M/x_{pN}\), which we then expand in a power series of \(x_{pN}\) for increased accuracy:
\[r_{pN}(t)=\frac{M}{x_{pN}(t)}\sum_{j=0}^{3}\rho_{j}x_{pN}(t)^{j}. \tag{5}\]
Lastly, we calculate the GW amplitude for optimal orientation of the source,
\[A_{\mathcal{R}}(t)=-2\frac{\mu}{R}\left[\frac{M}{r_{pN}(t)}+r_{ pN}(t)^{2}\left(\frac{d\Phi_{pN}(t)}{dt}\right)^{2}-\left(\frac{dr_{pN}(t)}{ dt}\right)^{2}\right],\] \[A_{\mathcal{I}}(t)=-4\frac{\mu}{R}r_{pN}(t)\frac{d\Phi_{pN}(t)}{ dt}\frac{dr_{pN}(t)}{dt}. \tag{6}\]
Now we have all the pieces necessary to construct the dimensionless strain, defined as:
\[h_{pN}(t)=A_{pN}(t)e^{-2\mathrm{i}\phi_{pN}(t)} \tag{7}\]
Mathematically, the strain is decomposed into two transverse _quadrupolar_ polarization modes, with \(h_{+}\) representing the real part of eq.(7) and \(h_{\times}\) the imaginary part,
\[h_{+,pN}(t) =A_{\mathcal{R}}(t)\cos{(2\phi_{pN})}+\mathrm{i}A_{\mathcal{I}}( t)\sin{(2\phi_{pN})},\] \[h_{\times,pN}(t) =A_{\mathcal{R}}(t)\sin{(2\phi_{pN})}-\mathrm{i}A_{\mathcal{I}}( t)\cos{(2\phi_{pN})}. \tag{8}\]
This is because the GWs compress the space in one direction while simultaneously stretching it in the orthogonal direction, such that the signal goes twice through maxima and minima during one orbital cycle, making the frequency of the GW twice the orbital frequency. This technique, albeit powerful, is valid only if the gravitational field is sufficiently weak and the orbital velocity is smaller than the speed of light.
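To make the structure of this calculation concrete, a minimal Python sketch of eqs.(3)-(4) is given below. It is not the RisingTides implementation itself: the coefficient list `xi` stands in for the TaylorT4 coefficients \(\xi_j\) of eq.(3), of which only the leading-order term \(64\eta/5\) is filled in here, and the stopping value of \(x_{pN}\) near the light ring is an illustrative placeholder.

```python
import numpy as np
from scipy.integrate import solve_ivp

def taylor_t4_rhs(t, y, M, xi):
    """Right-hand side of eqs.(3)-(4); y = [x_pN, Phi_pN]."""
    x, phi = y
    dxdt = (x**5 / M) * sum(c * x**(j / 2.0) for j, c in enumerate(xi))  # eq.(3)
    dphidt = x**1.5 / M                                                  # eq.(4)
    return [dxdt, dphidt]

def stop_near_light_ring(t, y, M, xi):
    # Terminate when x_pN reaches ~M/r_LR; the value 0.25 (r ~ 4M) is a placeholder.
    return y[0] - 0.25
stop_near_light_ring.terminal = True

def inspiral(M, r0, xi, t_max=1.0e4):
    """Evolve the pN parameter and the orbital phase from an initial separation r0."""
    x0 = M / r0  # leading-order Kepler relation x = M / r
    sol = solve_ivp(taylor_t4_rhs, (0.0, t_max), [x0, 0.0], args=(M, xi),
                    events=stop_near_light_ring, rtol=1e-10, atol=1e-12)
    x, phi = sol.y
    omega = x**1.5 / M  # orbital angular frequency, eq.(4)
    return sol.t, x, phi, omega

eta = 0.25                                  # equal-mass binary
xi_demo = [64.0 * eta / 5.0] + [0.0] * 7    # leading order only; see [29] for the full set
t, x, phi, omega = inspiral(M=1.0, r0=15.0, xi=xi_demo)
```

The amplitudes of eq.(6) and the strain of eqs.(7)-(8) then follow directly from \(x_{pN}(t)\), \(\Phi_{pN}(t)\) and the expanded separation of eq.(5).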
### The Baseline Merger Model
Going beyond the pN approximation brings us up against the strong gravitational field around the merger. The transition between the weak and strong field is marked by the innermost stable circular orbit (ISCO), defined as the last stable orbit a particle would have when orbiting around a black hole. When the masses of the celestial objects are comparable, this location is not well defined [38], but can be approximated with the last stable photon orbit, called the light-ring (LR), which is close to the peak of the curvature potential [39]. From there on, through the merger and ringdown, we use the _Backwards-One-Body_ formalism [30]. This technique starts from the perturbed final black hole resulting after collision and builds the GW signal back in time to the end of the inspiral, assuming we know the mass, spin and ringdown frequency of the remnant. In this model, the strain of the GW signal can be modeled analytically as exponentially decaying sinusoids that break free from the LR on null geodesics [40]:
\[h_{BoB}(t)=\sum_{lmn}A_{lmn}e^{\mathrm{i}\omega_{lmn}t}e^{-t/\tau_{lmn}}, \tag{9}\]
where \(l\) is the principal, \(|m|\leq l\) the azimuthal and \(n\) the overtone index of a mode. This kind of perturbation is called _quasinormal_ (QNM) ringing, with frequency \(\omega_{lmn}\) and damping time \(\tau_{lmn}\). The frequency of the lowest (\(l=2,m=1,n=0\)) mode coincides with the orbital frequency, and we will denote it by \(\Omega_{QNM}\). The (\(l=2,m=2,n=0\)) mode can be derived from it through the simple relation \(\omega_{22}=2\Omega_{QNM}\). This mode carries away most of the GW's energy (\(\approx 95\%\)) [41], while the higher harmonics, being much quieter, can usually be neglected. We will drop the (\(l,m,n\)) indices and consider that the strain is well described by the dominant mode.
The BoB model requires the QNM frequency and damping time, which are determined by the spin and mass of the final black hole. We will estimate the final spin of
the resulting black hole with a polynomial of coefficients \(s_{ij}\), as given in [42, 43]:
\[\chi_{f}=\sum_{i,j=0}^{3}s_{ij}\eta^{i}\chi_{\textit{eff}}^{j}. \tag{10}\]
We define an effective spin:
\[\chi_{\textit{eff}}=\frac{M_{A}^{2}\chi_{A}+M_{B}^{2}\chi_{B}}{M_{A}^{2}+M_{B} ^{2}}, \tag{11}\]
with \(\chi_{A,B}=S_{A,B}/M_{A,B}^{2}\) the dimensionless individual spins and \(S_{A,B}\) the spin angular momentum of each black hole entering the merger. In our calculations we pick \(\chi_{A,B}\approx 2\times 10^{-3}\), a mean spin for low-spinning, astronomical neutron stars [44].
For the final mass we use the fit to NR given in [45] for comparable mass binaries,
\[M_{f}=M(1-\tilde{E}_{\textit{GW}})-M_{\textit{disk}} \tag{12}\]
where \(\tilde{E}_{\textit{GW}}=E_{0}+E_{2}\chi_{f}^{2}+E_{4}\chi_{f}^{4}\) is the dimensionless energy released in GWs. For the disk mass we take an upper limit of \(M_{\textit{disk}}\approx 10^{-2}M\)[46]. The coefficients \(E_{i}\) are given in Table III of [45].
Next, we calculate the dominant resonant frequency with a polynomial fit:
\[M_{f}\Omega_{\textit{QNM}}=f_{1}+f_{2}(1-\chi_{f})^{f_{3}}, \tag{13}\]
and use a similar formula for the quality factor:
\[Q=q_{1}+q_{2}(1-\chi_{f})^{q_{3}}. \tag{14}\]
The pairs \((f_{i},q_{i})\) are taken from Table VIII of [41]. Lastly, the damping time is:
\[\tau_{\textit{QNM}}=\frac{2Q}{\omega_{\textit{QNM}}}. \tag{15}\]
The orbital angular frequency \(\Omega_{\textit{BoB}}(t)\) is given in this model by:
\[\Omega_{\textit{BoB}}(t)=\left(\Omega_{i}^{4}+\kappa\left[\tanh\left(\frac{t -t_{0}}{\tau}\right)-\tanh\left(\frac{t_{i}-t_{0}}{\tau}\right)\right]\right)^ {1/4}. \tag{16}\]
Here, \(\Omega_{i}\) is the initial frequency, \(t_{0}\) is the time at which the strain of the GW reaches its peak amplitude, and \(t_{i}\) the initial time, marking the transition between the weak and strong regime. The parameter \(\kappa\) in eq.(16) ensures continuity between the end of the inspiral and the beginning of the merger, and is given by:
\[\kappa=\left[\frac{\Omega_{\textit{QNM}}^{4}-\Omega_{i}^{4}}{1-\tanh\left( \frac{t_{i}-t_{0}}{\tau}\right)}\right]. \tag{17}\]
The essential variable in the BoB model is the initial time \(t_{i}\), which locks the frequency \(\Omega_{i}\) at the beginning of the merger to the frequency at the end of the inspiral:
\[t_{i}=t_{0}-\frac{\tau}{2}\ln\left(\frac{\Omega_{QNM}^{4}-\Omega_{i}^{4}}{2\tau\Omega_{i}^{3}\dot{\Omega}_{i}}-1\right). \tag{18}\]
We obtain the phase \(\Phi_{BoB}(t)\) by integrating eq.(16) between \(t_{i}\) and a final time \(t_{f}\):
\[\Phi_{BoB}(t)=\int_{t_{i}}^{t_{f}}\Omega_{BoB}(t)dt. \tag{19}\]
The amplitude of the GW signal is modeled with the simple function:
\[A_{BoB}(t)=A_{0}\text{sech}\left(\frac{t-t_{0}}{\tau}\right). \tag{20}\]
where \(A_{0}\) is a scaling factor. This amplitude is taken to correspond to the Weyl scalar \(|\Psi_{4}(t)|\), which is related to the strain by the formula
\[\Psi_{4}(t)=\frac{\partial^{2}h_{NR}(t)}{\partial t^{2}}. \tag{21}\]
The model rests upon the assumption that the amplitude changes much more slowly than the phase during the merger, and uses the simple expression for the strain:
\[h_{BoB}(t)=-\frac{A_{BoB}(t)}{\omega_{BoB}(t)^{2}}e^{-2\mathrm{i}\Phi_{BoB}(t )}. \tag{22}\]
where \(\omega_{BoB}(t)=2\Omega_{BoB}(t)\) is the frequency of the dominant mode. Please note that in this model the influence of the proportionality coefficients arising from the double integration of eq.(21) over time is overlooked. In reality, both the magnitude and the peak location of the amplitude are affected by this integration. As a result, we expect this model to yield a slight time deviation in predicting the merger position. We assessed the strengths and weaknesses of the BoB model in a previous work [47].
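The merger piece can be sketched in a few lines, following eqs.(13)-(22). The \((f_i,q_i)\) defaults below are commonly quoted fit values for the dominant \((2,2,0)\) mode and should be checked against Table VIII of [41]; the final mass and spin are assumed to have been obtained beforehand from eqs.(10)-(12), and the inputs \(\Omega_i\), \(\dot{\Omega}_i\) in the usage example are placeholders standing for the values at the end of the inspiral.

```python
import numpy as np

def qnm_frequency_and_damping(M_f, chi_f,
                              f=(1.5251, -1.1568, 0.1292),
                              q=(0.7000, 1.4187, -0.4990)):
    """Eqs.(13)-(15): dominant quasinormal-mode frequency and damping time."""
    omega_qnm = (f[0] + f[1] * (1.0 - chi_f)**f[2]) / M_f
    Q = q[0] + q[1] * (1.0 - chi_f)**q[2]
    return omega_qnm, 2.0 * Q / omega_qnm

def bob_strain(t, t0, Omega_i, dOmega_i, Omega_qnm, tau, A0=1.0):
    """Eqs.(16)-(22); Omega_qnm is the *orbital* QNM frequency, i.e. omega_qnm / 2."""
    # eq.(18): initial time locking the BoB frequency to the end of the inspiral
    t_i = t0 - 0.5 * tau * np.log((Omega_qnm**4 - Omega_i**4)
                                  / (2.0 * tau * Omega_i**3 * dOmega_i) - 1.0)
    # eq.(17): continuity parameter
    k = (Omega_qnm**4 - Omega_i**4) / (1.0 - np.tanh((t_i - t0) / tau))
    # eq.(16): orbital angular frequency
    Omega = (Omega_i**4 + k * (np.tanh((t - t0) / tau) - np.tanh((t_i - t0) / tau)))**0.25
    # eq.(19): phase by cumulative (trapezoidal) integration of Omega
    Phi = np.concatenate(([0.0], np.cumsum(0.5 * (Omega[1:] + Omega[:-1]) * np.diff(t))))
    # eqs.(20)-(22): amplitude (~|Psi_4|) and strain of the dominant mode
    A = A0 / np.cosh((t - t0) / tau)
    h = -A / (2.0 * Omega)**2 * np.exp(-2j * Phi)
    return Omega, Phi, A, h

# Illustrative usage with placeholder end-of-inspiral data:
t = np.linspace(-100.0, 100.0, 2001)
w, tau = qnm_frequency_and_damping(M_f=0.95, chi_f=0.68)
Omega, Phi, A, h = bob_strain(t, t0=0.0, Omega_i=0.1, dOmega_i=1.0e-3,
                              Omega_qnm=w / 2.0, tau=tau)
```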
### The Complete Baseline Model
Let us now assemble the two forms of the GW strain by gluing the waveform around the LR, at the transition between the weak and strong field, estimated by [48]:
\[r_{LR}=2M\left[1+\cos\left(\frac{2}{3}\cos^{-1}(-\chi_{f})\right)\right]. \tag{23}\]
The frequency and phase matching is robust against slight variations around this location, while the continuity of the amplitude is best ensured by choosing \(r_{LR}\approx 4M\), the location of the retrograde photon orbit around a maximally rotating BH.
We stitch the amplitudes of the two models together by first normalizing the BoB amplitude with its peak value and then rescaling the pN amplitude with the normalized BoB amplitude at \(t_{i}\), by using the formula:
\[\bar{A}_{pN}=A_{pN}\frac{\bar{A}_{BoB}(t_{i})}{A_{pN}(t_{i})} \tag{24}\]
We build the complete hybrid model by gluing together the waveforms of the two domains at \(t_{i}\) with the following simple piecewise (step) function:
\[\sigma(t)=\left\{\begin{array}{ll}0&t<t_{i}\\ 1&t\geq t_{i}\end{array}\right. \tag{25}\]
For example, the hybrid orbital frequency of the GW has the form:
\[\Omega_{hyb}(t)=(1-\sigma(t))\Omega_{pN}(t-t_{i})+\sigma(t)\Omega_{BoB}(t-t_{i}), \tag{26}\]
We form the hybrid amplitude, phase and strain of the GW with the same technique.
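As a sketch, eqs.(24)-(26) amount to the following, assuming the inspiral and merger quantities have already been resampled onto a common time grid (that alignment step is not shown).

```python
import numpy as np

def hybridize(t, f_pn, f_bob, t_i):
    """Eqs.(25)-(26): glue an inspiral quantity and a merger quantity at t = t_i."""
    sigma = (t >= t_i).astype(float)     # step function of eq.(25)
    return (1.0 - sigma) * f_pn + sigma * f_bob

def rescale_pn_amplitude(t, A_pn, A_bob, t_i):
    """Eq.(24): normalize the BoB amplitude by its peak and rescale the pN
    amplitude so that the two amplitudes agree at t_i."""
    A_bob_norm = A_bob / np.max(A_bob)
    i = np.searchsorted(t, t_i)
    return A_pn * A_bob_norm[i] / A_pn[i], A_bob_norm
```

The same `hybridize` call is applied in turn to the frequency, the amplitude, the phase and the strain.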
Before moving forward, we must test the accuracy of our implementation with a numerical waveform. We chose SXS:BBH:0180, corresponding to an equal-mass, non-spinning BBH configuration of normalized mass and initial separation of \(18M\) in geometric units [31]. We use the same configuration, but start our analytic model at a separation of \(15M\), which for a typical equal-mass BNS with a total mass of \(2.8M_{\odot}\) translates to a distance of about 45 km between the centers of mass. We start the BoB model with the initial orbital frequency (\(\Omega_{i}\), \(\dot{\Omega}_{i}\)) of the pN model at the end of the inspiral marked by the LR, and build the hybrid baseline model, then align our waveform with the NR template at the amplitude peak, marking the merger.
We plot in Fig. 1 a comparison between the evolution of the \(h_{+}\) strain component obtained with our hybrid model and the \(+\) polarization of the GW strain from the NR simulation, as well as comparisons in phase and amplitude (lower insets). We observe a noticeable phase mismatch between the analytical and the numerical strain as the binary approaches the merger, but a remarkable overlap at the beginning of the inspiral and during the merger. We found that the mismatch during the late inspiral is slightly alleviated if we start the analytical model from a smaller separation, mainly because less phase mismatch accumulates during the pN evolution. In subsequent studies we aim to refine this model by including higher-order post-Newtonian (pN) terms as they become available. The shift in the position of the merger is inherent in the BoB model because, as we explained earlier, it assumes a simplified expression for the amplitude. We analyzed this deviation from the NR predicted waveform at the merger in [47], and in future work we intend to correct the merger amplitude by incorporating the neglected integration coefficients. We do not take into consideration possible differences between the numerical code units and our geometric units, or that the NR simulations might have accumulated numerical errors during the evolution.
The addition of tidal effects to this analytic model will overwrite the time of the merger, because a BNS system reaches merger earlier than its corresponding BBH.
## 3 The BNS Model
Thus far, we have not taken into consideration the internal structure of the neutron stars, which is determined by the equation of state and accounts for the effect of the tidal interactions. All neutron stars in the universe are expected to be described by a single equation of state, but its expression is not currently known, only estimated by theory [49] and experiments [50] in nuclear physics. The tidal effects become important in the late stage of the inspiral, inducing deformations in the stars and driving them to merge faster, thus affecting the emitted GW signal both in phase and in amplitude. It is this effect that will allow us to determine the equation of state corresponding to the dense matter inside neutron stars from direct observations of the amplitude and phase of the gravitational waves emitted during BNS collisions.
We can encode the tidal effects into the tidal correction strain:
\[h_{T}(t)=A_{T}(t)e^{-2\mathrm{i}\phi_{T}(t)} \tag{27}\]
where \(A_{T}\) and \(\phi_{T}\) are the analytic tidal corrections to the GW amplitude and phase. We find the analytical GW strain of the BNS model by multiplying eq.(27) with the strain of the baseline model:
\[h_{\mathit{BNS}}(t)=A_{\mathit{BNS}}(t)e^{-2\mathrm{i}\phi_{\mathit{BNS}}(t) }=h_{\mathit{BBH}}(t)h_{T}(t)=A_{\mathit{BBH}}A_{T}(t)e^{-2\mathrm{i}(\phi_{ \mathit{BBH}}(t)+\phi_{T}(t))}. \tag{28}\]
This equation tells us that we must add the tidal phase correction to the baseline,
\[\phi_{\mathit{BNS}}(t)=\phi_{\mathit{BBH}}(t)+\phi_{T}(t), \tag{29}\]
Figure 1: Comparison between the numerical and the analytic plus component of the strain, matched at peak amplitude. The phase gets slightly out of sync near the merger. The upper inset shows that the strain peaks are slightly displaced, but the merger is well modeled if we overlap them (dotted red curve). The two lower insets contain the comparison in phase (lower left) and amplitude (lower right) between our model and SXS:BBH:0180. Only expected small differences before the merger are visible.
while for the amplitude, we should multiply the tidal correction to the baseline:
\[A_{\text{\it BNS}}(t)=A_{\text{\it BBH}}(t)A_{T}(t). \tag{30}\]
We implement the tidal effects as polynomial functions of the variable \(x=(M\Omega_{\text{\it hyb}})^{2/3}\) and build the analytical expressions for the BNS phase and amplitude provided by the pN and NRTidal models; we then validate our implementation by comparing it to two open-source BNS simulations from the SXS catalogue [32].
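In code, eqs.(28)-(30) reduce to a few array operations; a minimal sketch, assuming the baseline amplitude and phase and the tidal corrections are already evaluated on a common grid, is:

```python
import numpy as np

def bns_strain(A_bbh, phi_bbh, A_T, phi_T):
    """Eqs.(28)-(30): add the tidal phase to the baseline phase and multiply
    the baseline amplitude by the tidal amplitude factor."""
    A_bns = A_bbh * A_T
    phi_bns = phi_bbh + phi_T
    return A_bns * np.exp(-2j * phi_bns), A_bns, phi_bns
```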
### The BNS Tidal Phase
As the stars get closer together, their mutual tidal deformations increase, affecting the binding orbital energy. Looking back at eq.(1), we should not be surprised to see that this will change the flux of gravitational waves emitted as the orbit shrinks. The tidal correction becomes significant only as the stars approach the merger, not entering the pN expansion until the \(5^{th}\) pN order [51]. The correction to the GW phase induced by the tidal deformation of one star due to the gravitational pull produced by its companion is known up to 7.5-pN [52], and can be written as:
\[\phi_{T}=\frac{13x^{2.5}}{8\eta}\kappa_{A,\text{\it eff}}\left(1+c_{A,1}x+c_{A,1.5}x^{1.5}+c_{A,2}x^{2}+c_{A,2.5}x^{2.5}\right)+[A\leftrightarrow B] \tag{31}\]
with \(x\) from the baseline BBH model and the pN tidal phase coefficients \(c_{[A,B],i}\) depending on the mass ratio, as given in [23]. The tidal effects are encoded in \(\kappa_{[A,B],\text{\it eff}}\), named the effective tidal coupling constant, introduced in [20]:
\[\kappa_{\text{\it eff}}=\kappa_{A,\text{\it eff}}+\kappa_{B,\text{\it eff}}=\frac{3}{32}\left(\tilde{\Lambda}_{A}+\tilde{\Lambda}_{B}\right)=\frac{3}{16}\tilde{\Lambda}, \tag{32}\]
where:
\[\tilde{\Lambda}_{A}=\frac{16}{13}\frac{(M_{A}+12M_{B})M_{A}^{4}\Lambda_{A}}{M^{ 5}};\ \ \tilde{\Lambda}_{B}=\frac{16}{13}\frac{(M_{B}+12M_{A})M_{B}^{4}\Lambda_{B}}{M^{ 5}}. \tag{33}\]
are the symmetric mass-weighted tidal deformabilities [53], defined in terms of \(\Lambda_{[A,B]}\), the dimensionless tidal deformability of each star in the binary [54]. Let us take a closer look at this quantity, because it is its dependence on the equation of state and mass that can reveal the internal structure of the neutron stars in the binary, by measuring the phase of their GW emission and fitting it with analytical or numerical templates. In general, the nuclear models give numerical values for the dimensionless tidal deformability between \(10^{2}\) and \(10^{3}\), varying inversely with the mass and the compactness of the star [55]. So-called stiff (hard) nuclear equations of state describe less compact stars, predicting higher values for the tidal deformability, while nuclear models with soft equations of state favor more compact stars and tidal deformabilities towards lower values. The dimensionless tidal deformability calculated from the GW170817 event gave, for a neutron star of mass \(1.4M_{\odot}\), a value somewhere in the middle of this range, namely \(\tilde{\Lambda}\approx 400\)[56].
We now return to the tidal correction to the GW phase, and, as mentioned before, choose the NRTidal model for supplying the expression of the tidal phase evolution. This analytical approximation, first introduced in [20] and subsequently improved in [22, 23], is calibrated to NR, and because of the scarce availability of numerical simulations of BNS systems with unequal mass ratio, includes only non-spinning, equal-mass configurations. In this case, eq.(31) simplifies to a polynomial of constant coefficients:
\[\phi_{T,\mathit{eq}}=\frac{13x^{2.5}}{8\eta}\kappa_{\mathit{eff}}\left(1+\sum _{i=2}^{N}c_{i/2}x^{i/2}\right)=\frac{13x^{2.5}}{8\eta}\kappa_{\mathit{eff}}P (x). \tag{34}\]
In order to fit eq.(34) to NR data, NRTidal replaces the polynomial \(P(x)\) with a rational function \(R(x)\) given by a Pade approximant of constant coefficients:
\[R(x)=\frac{1+n_{1}x+n_{1.5}x^{1.5}+n_{2}x^{2}+n_{2.5}x^{2.5}+n_{3}x^{3}}{1+d_{1 }x+d_{1.5}x^{1.5}+d_{2}x^{2}}, \tag{35}\]
determined from the NR, and restricted to enforce consistency with the analytical coefficients entering the polynomial \(P(x)\).
Using one set of coefficients for \(c_{i}\) and two sets of coefficients for \((n_{i},d_{i})\) from [20, 22, 23], we implement the tidal correction to our hybrid baseline model as:
\[\phi_{T,\mathit{pN}}=\frac{13x^{2.5}}{8\eta}\kappa_{\mathit{eff}}P(x);\ \ \phi_{T,\mathit{F1}(\mathit{F2})}=\frac{13x^{2.5}}{8\eta}\kappa_{\mathit{eff}}R (x). \tag{36}\]
Note that, in contrast to the corresponding equations presented in [20, 22, 23], we found necessary to use a positive sign in eq.(36), in order to obtain the expected behavior for the BNS phase evolution when we use eq.(29) to obtain the analytic BNS phase.
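The sketch below evaluates eqs.(35)-(36) with the positive sign discussed above; the numerical values are the original \(F2\) coefficients listed later in Table 1, and the range of \(x\) is an illustrative choice.

```python
import numpy as np

def pade_R(x, n, d):
    """Eq.(35): rational correction R(x); n = (n1, n1.5, n2, n2.5, n3), d = (d1, d1.5, d2)."""
    num = 1.0 + n[0]*x + n[1]*x**1.5 + n[2]*x**2 + n[3]*x**2.5 + n[4]*x**3
    den = 1.0 + d[0]*x + d[1]*x**1.5 + d[2]*x**2
    return num / den

def tidal_phase_nrtidal(x, kappa_eff, eta, n, d):
    """Eq.(36): tidal phase correction, added to the baseline phase as in eq.(29)."""
    return 13.0 * x**2.5 / (8.0 * eta) * kappa_eff * pade_R(x, n, d)

# Original F2 coefficients (Table 1) and the Gamma = 2 system, Lambda_tilde = 791:
n_F2 = (-15.2452, 31.5423, -80.9260, 312.482, -342.155)
d_F2 = (-20.2372, 39.3962, -5.36163)
kappa_eff = 3.0 / 16.0 * 791.0            # eq.(32)
x = np.linspace(0.03, 0.17, 400)          # x = (M * Omega_hyb)^(2/3) from the baseline
phi_T = tidal_phase_nrtidal(x, kappa_eff, eta=0.25, n=n_F2, d=d_F2)
```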
It is time to check our implementation and test how well the NRTidal model predicts the tidal phase against numerical simulations. We choose two equal-mass, non-spinning BNS systems with two different equations of state [32], publicly available in the SXS GW catalogue [31]. The first numerical GW used is SXS:NSNS:0001, with mass \(M_{1}=2.8\) and an ideal-gas equation of state of polytropic index \(\Gamma=2\), corresponding to a dimensionless tidal deformability \(\tilde{\Lambda}_{\Gamma 2}=791\). The second is SXS:NSNS:0002, with mass \(M_{2}=2.7\) and a piecewise polytropic equation of state for cold dense matter called MS1b [57], with dimensionless tidal deformability \(\tilde{\Lambda}_{\mathit{MS1b}}=1540\). While these tidal deformabilities surpass the one estimated from the GW170817 observation, they are not yet ruled out and are useful as an upper bound. We also note that in the pN and NRTidal models the tidal correction to the phase depends only linearly on the dimensionless tidal deformability, and the coefficients should remain independent of it.
We start with the assumption that at a considerable distance from the merger, the tidal effects are negligible and the BNS signal is virtually indistinguishable from that of the BBH. This allows us to rescale the phase and shift the time of the numerical BNS simulations to start from zero at the reference frequency of each simulation, marking when the initial burst of spurious transient radiation has dissipated in the
NR simulation [31]. Then, we find the time and phase corresponding to the same frequency in the baseline BBH model and align the numerical time and phase with the corresponding values of the analytical model. We stop the analytic fit at the merger frequency, taken where the numeric BNS amplitude reaches its peak.
We plot in Fig. 2 the phase comparison between our analytic baseline hybrid model, SXS:NSNS:0001 (BNS1) and SXS:NSNS:0002 (BNS2). The insets in Fig. 2 contain the analytical models for the BNS phase, obtained by adding \(\phi_{T,pN}\) and \(\phi_{T,\textit{NRfit}}\) from eq.(36) to our baseline model. We observe that the NRTidal phase (dash-dot-dot red) slightly overestimates both the pN (dotted maroon) and the NR phase, and is more accurate for the BNS1 case, corresponding to the smaller tidal deformability. This suggests that the assumption of the coefficients being independent of the tidal deformability may be too restrictive. In Section 4 we will delve into the behavior of the phase around the merger and propose new coefficients for the tidal correction to push the model past the merger.
### The BNS Tidal Amplitude
The amplitude of the GW is extremely small, and its increase is less dramatic than the phase accumulation observed up to the merger, defined as the point of maximum amplitude [58]. This makes the task of accurately modeling analytic tidal contributions to the amplitude more difficult compared to the phase. Taking advantage of the relatively small change in the GW amplitude, we can effectively use a low-order
Figure 2: Phase comparison between the baseline model (solid black), SXS:NSNS:0001 (dashed blue) and SXS:NSNS:0002 (dash-dot cyan) overlapped at the reference frequencies of the NR simulations. The insets contain our analytical BNS phase calculated with the pN (dotted maroon) and NRTidal (dash-dot-dot red) corrections, plotted up to the merger against the numerical phase for the two cases.
polynomial within the context of the pN approximation, of the form [23, 52]:
\[A_{T}(x)=\frac{8M\eta}{D_{L}}\sqrt{\frac{\pi}{5}}x^{6}\kappa_{\mathit{eff},A}( \hat{c}_{A,0}+\hat{c}_{A,1}x)+\left[A\leftrightarrow B\right], \tag{37}\]
where \(D_{L}\) is the distance from the detector to the BNS system and \(\hat{c}_{[A,B],i}\) are the pN tidal amplitude coefficients, given in [23, 52]. Let's assume again a similar-mass binary, where only the symmetric mass ratio \(\eta\) retains the information on the mass ratio. Now, the pN tidal correction in amplitude eq.(37) simplifies to:
\[A_{T,pN}(x)=A_{T,\mathit{eq}}(x)=\frac{M\eta}{21D_{L}}\sqrt{\frac{\pi}{5}}x^{6} \kappa_{\mathit{eff}}(672-11x). \tag{38}\]
As expected, this approximation becomes less reliable as the binary approaches the merger. The NRTidal model corrects it by adding a dependence on \(x\):
\[A_{T,d}(x)=\frac{A_{T,pN}(x)}{1+dx}, \tag{39}\]
where the parameter \(d\) is fixed by the identity [23]:
\[d=\frac{1}{x}\left(\frac{A_{T,pN}(x)}{A_{\mathit{mrg}}}-1\right)\bigg{|}_{x=x_ {\mathit{mrg}}}. \tag{40}\]
We compute the merger amplitude \(A_{\mathit{mrg}}\) using an analytically predicted quasi-universal relation valid at the moment of merger [59], that provides the peak amplitude only as function of the tidal coupling constant \(\kappa_{\mathit{eff}}\)[23]:
\[A_{\mathit{mrg}}=\frac{M\eta}{D_{L}}\frac{1.6498(1+2.5603\times 10^{-2}\kappa_{ \mathit{eff}}-1.024\times 10^{-5}\kappa_{\mathit{eff}}^{2})}{1+4.7278 \times 10^{-2}\kappa_{\mathit{eff}}}. \tag{41}\]
As a cautionary remark, eq.(40) requires that we provide an analytical expression for \(x_{\mathit{mrg}}\) as well, without relying on numerical data. A good approximation is to assume that the stars merge when they come in contact, and \(x_{\mathit{mrg}}\) is given by:
\[x_{c}=\frac{M}{R_{A}+R_{B}}, \tag{42}\]
where \(R_{A,B}\) are the radii of the stars, for which we use the value of \(11.5\,\mathrm{km}\) as specified in [32]. As a check, we also use an alternative route, and calculate \(x_{\mathit{mrg}}=(M\Omega_{\mathit{mrg}})^{2/3}\) from the analytic expression for the merger frequency given in [60]:
\[M\Omega_{\mathit{mrg}}=\sqrt{q}\,\frac{0.178\left(1+3.354\times 10^{-2}\kappa_{\mathit{eff}}+4.315\times 10^{-5}\kappa_{\mathit{eff}}^{2}\right)}{1+7.542\times 10^{-2}\kappa_{\mathit{eff}}+2.236\times 10^{-4}\kappa_{\mathit{eff}}^{2}}. \tag{43}\]
In both cases, we found similar values for \(x_{mrg}\), and chose to use the result calculated with eq.(43) in our implementation. Working within the same assumption that far from the merger the BNS and BBH waveforms have indistinguishable amplitudes, we rescale the numeric BNS amplitude to coincide with the numeric BBH amplitude for a binary with the same total mass at the reference frequency. We take the ratio between the BNS and BBH amplitudes and plot them in Fig. 3. This is not fulfilled exactly, as we see in the inset of Fig. 3, where we plot a comparison of their \(h_{+}\) strain.
Let us now add the tidal amplitude correction to our baseline model, using eq.(30), meaning that at reference frequency, where the amplitudes are equal, we expect \(A_{T,\mathit{ref}}=1\). We capture this behavior for the evolution of our analytic BNS amplitude, by changing \(A_{T}\) to:
\[A_{T}(x)\to 1+A_{T}(x). \tag{44}\]
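A sketch of the amplitude correction, eqs.(38)-(44), is given below; the luminosity distance \(D_L\) is set to one in code units for illustration, and the merger point is located with eq.(43).

```python
import numpy as np

def A_T_pn(x, M, eta, kappa_eff, D_L=1.0):
    """Eq.(38): equal-mass pN tidal amplitude correction."""
    return (M * eta / (21.0 * D_L)) * np.sqrt(np.pi / 5.0) * x**6 * kappa_eff * (672.0 - 11.0 * x)

def A_merger(M, eta, kappa_eff, D_L=1.0):
    """Eq.(41): quasi-universal peak amplitude at merger."""
    return (M * eta / D_L) * 1.6498 * (1.0 + 2.5603e-2 * kappa_eff - 1.024e-5 * kappa_eff**2) \
           / (1.0 + 4.7278e-2 * kappa_eff)

def M_omega_merger(q, kappa_eff):
    """Eq.(43): dimensionless merger frequency M * Omega_mrg."""
    return np.sqrt(q) * 0.178 * (1.0 + 3.354e-2 * kappa_eff + 4.315e-5 * kappa_eff**2) \
           / (1.0 + 7.542e-2 * kappa_eff + 2.236e-4 * kappa_eff**2)

def tidal_amplitude_factor(x, M, eta, q, kappa_eff, D_L=1.0):
    """Eqs.(39)-(40) and (44): the factor 1 + A_T(x) multiplying the baseline amplitude."""
    x_mrg = M_omega_merger(q, kappa_eff)**(2.0 / 3.0)
    d = (A_T_pn(x_mrg, M, eta, kappa_eff, D_L) / A_merger(M, eta, kappa_eff, D_L) - 1.0) / x_mrg
    return 1.0 + A_T_pn(x, M, eta, kappa_eff, D_L) / (1.0 + d * x)
```

The returned factor multiplies the baseline amplitude as in eq.(30).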
We plot in Fig. 4 the amplitudes for SXS:NSNS:0001 (BNS1) and SXS:NSNS:0002 (BNS2) compared with our hybrid analytic baseline amplitude. The insets in Fig. 4 contain the analytical approximations for the BNS amplitude, obtained by multiplying \(A_{T}(x)\) from eq.(44) with our baseline model as in eq.(30), up to the corresponding BNS merger frequency. We observe that both analytical approximations underestimate the steep increase in amplitude around the merger, although this time the NRTidal amplitude (dash-dot-dot red) performs better than the pN (dotted maroon) approximation, which is an expected behavior. We will return to this comparison in Section 4, when we will make a new fit for the analytical amplitude to the numerical data and taper the amplitude past the merger using a Hanning window.
Figure 3: Amplitude comparison between SXS:NSNS:0001 (dashed blue) and SXS:NSNS:0002 (dash-dot cyan) rescaled at the reference frequencies of the NR simulations and divided by the amplitude for SXS:BBH:0180. The inset contains a comparison of the numerical \(h_{+}\) between the BNS and BBH data.
## 4 Modeling the BNS Merger
Up to this point we have limited ourselves to modeling the tidal effects up to the merger, and confirmed that both the pN and NRTidal approximations hold reasonably well in comparison with the two numerical relativity simulations considered, although they were not included in the calibrations [20, 22, 23]. Let us now take a closer look at the behavior of the analytical models for the tidal phase and amplitude as we approach the merger. Indeed, we shall see a noticeable mismatch with the numerical simulations, and will attempt to improve the model by performing our own fit only up to the merger. We will see that, because of the smooth evolution of the phase and amplitude even after the moment of the merger, we can use the new coefficients obtained from our fit to push the model past the location where the stars touch. We succeed in extending the model for the phase beyond the merger, and devise a method to determine how far we can reach, where we end it with a taper and continue with the baseline model. We carry the amplitude up to the merger and terminate it with a Hann taper, to ensure a smooth and continuous transition into the post-merger.
### New Fit for the Tidal Phase
Let us proceed by first taking the difference between the numerical BNS and BBH phases for the two systems considered. This is the _true_ tidal correction (not accounting for the numerical error) that we subsequently use to compare with the analytical models for the tidal phase. We plot in Fig. 5 this comparison between the numerical tidal phase and the three analytical approximations \(\phi_{T,pN}\), \(\phi_{T,F1}\) and \(\phi_{T,F2}\) with the coefficients from [20, 22, 23]. We start at a time about 1000 M before the merger, where the differences between the analytic and numerical phase become noticeable. We then extend the model past the merger, until the approximation exhibits a sharp
Figure 4: Amplitude comparison between the baseline model (solid black), SXS:NSNS:0001 (dashed blue) and SXS:NSNS:0002 (dash-dot cyan) overlapped at the reference frequencies of the NR simulations. The insets contain the amplitudes for the pN (dotted maroon) and NRTidal (dash-dot-dot red) approximations, plotted up to the merger against the numerical BNS amplitude for the two cases.
increase and breaks down. We observe an increased difference between the analytical and numerical tidal phase near the merger. The true tidal phase is larger for the first (BNS1) system compared to the second (BNS2), appearing to scale inversely with the tidal deformability. On the other hand, the analytical phase, which depends linearly on the tidal deformability, doesn't provide an accurate estimation. Specifically, the tidal phase is underestimated for the first system and overestimated for the second.
We assume that the Pade approximant from eq.(35) is complex enough to model the smooth increase of the tidal phase at the merger, and proceed with performing new curve fits of the analytical tidal phase to the numerical tidal phase. We use as initial guesses the coefficients from the pN [20], \(F1\)[22] and \(F2\)[23] NRTidal fits and stop the fit at the merger. Irrespective of the initial fitting coefficients, we obtain comparable sets of new coefficients for the polynomial modeling the tidal interaction of a BNS system with a given deformability. However, in contrast to the original fits, we no longer find a one-size-fits-all set of coefficients; our coefficients do depend on the equation of state considered. We give in Table 1 the average values of the new coefficients obtained, in comparison with the \(F2\) NRTidal coefficients. We need to alert readers that the large differences between our new fitting coefficients and the original ones are primarily due to our efforts to model the highly nonlinear tidal interactions during the merger, which is a step beyond the bounds of the analytical approximation. Furthermore, the assumption made in the original model, that the coefficients are independent of the equation of state, is no longer applicable.
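The refit can be expressed compactly with SciPy's `curve_fit`; the sketch below uses the original \(F2\) coefficients of Table 1 as the initial guess. The arrays `x_num` and `phi_T_num` stand for the numerical tidal phase (the BNS minus BBH phase) sampled up to the merger; here they are replaced by synthetic stand-in data so the snippet runs on its own.

```python
import numpy as np
from scipy.optimize import curve_fit

def make_phi_model(kappa_eff, eta):
    """Eqs.(35)-(36) with the eight Pade coefficients left free."""
    def phi_model(x, n1, n15, n2, n25, n3, d1, d15, d2):
        num = 1.0 + n1*x + n15*x**1.5 + n2*x**2 + n25*x**2.5 + n3*x**3
        den = 1.0 + d1*x + d15*x**1.5 + d2*x**2
        return 13.0 * x**2.5 / (8.0 * eta) * kappa_eff * num / den
    return phi_model

p0 = [-15.2452, 31.5423, -80.9260, 312.482, -342.155,   # original F2 numerator (Table 1)
      -20.2372, 39.3962, -5.36163]                       # original F2 denominator (Table 1)
model = make_phi_model(kappa_eff=3.0 / 16.0 * 791.0, eta=0.25)

# Stand-in for the numerical tidal phase; in practice this is the difference
# between the numerical BNS and BBH phases, cut at the merger.
x_num = np.linspace(0.03, 0.17, 300)
phi_T_num = model(x_num, *p0) * (1.0 + 0.2 * (x_num / 0.17)**4)

popt, pcov = curve_fit(model, x_num, phi_T_num, p0=p0, maxfev=50000)
```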
We use the new fitting coefficients to reconstruct the analytical tidal phase, then we incorporate it into the baseline framework, thus obtaining a new analytical model for the BNS phase that extends beyond the merger. Following this, we develop a
Figure 5: Comparison between the numerical tidal phase correction (solid and dotted black lines) and the analytic pN and two NRTidal models (\(F1\) and \(F2\)) for the tidal phase for the two BNS systems, close to the merger, and extended beyond it. The vertical yellow dotted line marks the merger.
procedure that determines the termination point of our phase, where we apply a Heaviside function, allowing only the phase of the baseline model to continue from then on. First, we calculate the difference between our new analytical tidal phase and the true tidal phase. After that, we determine the time derivative of this difference. Next, we identify potential cutoff points at locations where this derivative switches sign. Lastly, we select the final cutoff point as the last point for which the deviation from the numerical phase is less than \(3\%\), to optimize the fit with the true tidal phase. Here we terminate our analytical tidal phase with a Heaviside function and continue smoothly with the baseline phase, obtaining a complete phase representation.
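One possible realization of this cutoff procedure is sketched below. The 3% threshold follows the text, while the constant offset applied at the cutoff is our reading of "continue smoothly with the baseline phase"; the function names are illustrative.

```python
import numpy as np

def select_cutoff(t, phi_T_analytic, phi_T_numeric, tol=0.03):
    """Find the cutoff: sign changes of the derivative of the residual, keeping
    the last one whose relative deviation from the numerical phase is below tol."""
    residual = phi_T_analytic - phi_T_numeric
    d_res = np.gradient(residual, t)
    candidates = np.where(np.diff(np.sign(d_res)) != 0)[0]
    rel_dev = np.abs(residual) / np.maximum(np.abs(phi_T_numeric), 1e-12)
    good = [i for i in candidates if rel_dev[i] < tol]
    return t[good[-1]] if good else t[-1]

def terminate_phase(t, phi_bns, phi_baseline, t_cut):
    """Heaviside termination of the tidal contribution at t_cut, continuing with
    the baseline phase shifted so that the total phase stays continuous."""
    step = (t >= t_cut).astype(float)
    i = np.searchsorted(t, t_cut)
    offset = phi_bns[i] - phi_baseline[i]
    return (1.0 - step) * phi_bns + step * (phi_baseline + offset)
```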
We plot in Fig. 6 our newly modeled phase. In the main plot we show our new analytic fit for the BNS phase tailored to the two BNS systems, against the true tidal phase. Although we applied the curve fit only up to the merger, we see that it is able to accurately follow the numerical tidal phase beyond the merger. We include in the two insets a comparison between our reassembled tidal phase for the two BNS systems, with the Heaviside taper applied at the cutoff point, and the numerical BNS phase.
### New Fit for the Tidal Amplitude
We use a similar procedure to model the analytical amplitude at the merger and start by calculating the numerical tidal amplitude as the ratio between the numerical BNS and BBH amplitudes for the two systems considered. This is the _true_ tidal amplitude (not accounting for the numerical error) that we use next to compare with the analytical approximations \(A_{T,pN}\) and \(A_{T,d}\). We attempt to improve the amplitude modeling by applying a curve fit for eqs.(38, 39) to the true tidal amplitude up to the merger, using as initial guess the coefficients given in [23]. Again, we obtain similar sets of new coefficients regardless of the model we start with, but depending on the value of the tidal deformability. We give in Table 2 the values of the new coefficients resulting from the new tidal amplitude fit, in comparison with the \(A_{T,d}\) NRTidal coefficients.
With these coefficients, we proceed to reconstruct the new analytical amplitude. This time we cannot track the amplitude beyond the merger, due to its unphysically steep increase. While the model is precise enough to capture the sharp rise in amplitude before the merger, it is too simple to accurately follow its evolution beyond the peak point. To complete the waveform beyond the merger, we taper the amplitude at the merger with a Hann window that decays to zero at the phase cutoff time.
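The amplitude taper can be realized, for instance, as half of a Hann window that starts from the merger value of the amplitude and decays to zero at the phase cutoff time; the sketch below is one such realization.

```python
import numpy as np

def hann_taper(t, A, t_mrg, t_cut):
    """Taper the amplitude from its merger value to zero between t_mrg and t_cut."""
    A_out = np.array(A, dtype=float)
    i_mrg = np.searchsorted(t, t_mrg)
    region = (t >= t_mrg) & (t <= t_cut)
    u = (t[region] - t_mrg) / (t_cut - t_mrg)                    # 0 at merger, 1 at cutoff
    A_out[region] = A[i_mrg] * 0.5 * (1.0 + np.cos(np.pi * u))   # half Hann window
    A_out[t > t_cut] = 0.0
    return A_out
```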
\begin{table}
\begin{tabular}{l l l l} \hline Coefficient & Original \(F2\) fit [23] & New fit, \(\Lambda=791\) & New fit, \(\Lambda=1540\) \\ \hline \(n_{1}\) & \(-15.2452\) & \(-218.719\) & -225.122 \\ \(n_{1.5}\) & \(31.5423\) & \(2139.82\) & \(3220.28\) \\ \(n_{2}\) & \(-80.9260\) & \(-8635.27\) & -18476.6 \\ \(n_{2.5}\) & \(312.482\) & \(16089.4\) & \(46434.9\) \\ \(n_{3}\) & \(-342.155\) & \(-11328.6\) & -42380.1 \\ \(d_{1}\) & \(-20.2372\) & \(-27.0697\) & \(171.216\) \\ \(d_{1.5}\) & \(39.3962\) & \(63.4914\) & -1127.73 \\ \(d_{2}\) & \(-5.36163\) & \(-25.6741\) & \(1765.18\) \\ \hline \end{tabular}
\end{table}
Table 1: New merger fit for tidal phase, dependent on the tidal deformability
We plot in Fig. 7 the comparison between the numerical BNS amplitude and our new amplitude with tides. The two insets display our fit to the numerical tidal amplitude, contrasted with the pN and NRTidal tidal amplitudes for the two BNS systems.
### Complete Analytic BNS Waveforms
With these new fits obtained for the phase and amplitude, we build the complete analytical GW strain for the BNS merger using eq.(28). Although we overlooked the amplitude complexity after the merger, we modeled the phase accurately up to the cutoff time. This allows us to recompute the orbital frequency \(\Omega_{BNS}\) by taking the time derivative of the phase \(\phi_{BNS}\). Then we recalculate \(x_{BNS}\) and follow with the radial separation \(r_{BNS}\), using eq.(5). We also reevaluate the orbital and radial velocities and compare their evolution through the merger. Looking at their ratio we observe that even at the merger, where the radial velocity reaches its maximum, it represents at most 6% of the orbital velocity, after which it falls abruptly. In contrast, the orbital velocity keeps increasing even after the merger, reaching its peak soon thereafter, after which it starts decreasing as well. The domain between merger and cutoff time is very short, extending up to \(240M\) for BNS1 and \(180M\) for BNS2. For the systems
\begin{table}
\begin{tabular}{l l l l} \hline \hline Coefficient & Original fit \(A_{T,d}\)[23] & New fit, \(\Lambda=791\) & New fit, \(\Lambda=1540\) \\ \hline \(d\) & \(-8.8326\) & \(-1.3912\) & -1.8939 \\ \(p\) & 1 & 0.15345 & 0.28388 \\ \hline \hline \end{tabular}
\end{table}
Table 2: New merger fit for tidal amplitude, dependent on the tidal deformability
Figure 6: Comparison between our new analytic fit to the tidal phase (dashed and dashdot green) and the numerical tidal phase (solid and dotted black), for the two BNS systems, up to the merger and then continued beyond it. The insets contain the reconstructed BNS tidal phase for the two BNS systems, extended beyond the merger. The vertical yellow dotted line marks the merger, and the vertical brown dotted line delineates the end point where we applied the tapering.
considered, this is between 2 msec and 3 msec, not enough to inform us on the fate of the remnant. Most likely, we reach the early stages of a rapidly-rotating, tidally deformed, short lived hyper-massive neutron star (HMNS) phase. A HMNS is not stable and continues to emit gravitational radiation at the expense of its angular momentum, slowing down on a timescale ranging from several hundred milliseconds to minutes. The duration of the HMNS phase is determined by its mass, equation of state, rotation rate, and the gravitational wave emission. If it does not lose enough angular momentum through gravitational radiation, it will eventually collapse into a black hole. However, the GW170817 event indicates that a remnant with a total mass of \(2.8M_{\odot}\) will settle as a neutron star, the collapse to a black hole being less likely [4].
Note that our model does not account for the mass ejected due to the tidal interactions during coalescence, or the dynamically ejected mass during the collision-induced shock of the neutron star crust when the stars come in contact. The matter expulsion at the contact interface is called shock ejecta, and represents a significant source of ejecta for systems with similar masses, as considered in this work. Additionally, our model excludes the neutrino-driven wind ejecta emitted by the HMNS remnant as it cools down. The different types of ejecta produced during the BNS evolution form a rotating disk surrounding the remnant. The total mass ejected by the GW170817 event was estimated to be about \(0.04M_{\odot}\), and this is the value we consider in our work.
We plot in Fig. 8 the \(h_{+}\) component of the strain for the baseline model, in comparison with the BNS strain for the two equations of state considered. While our model for the amplitude past the merger is simplified, the phase modeling remains accurate. In the right plot of Fig. 8 we show the evolution of the separation between the neutron stars for the two equations of state considered, continued beyond the merger, depicted by the red dotted circle, in comparison to the separation of the baseline BBH model. We see how the value of the tidal deformability and thus of the equation of state
Figure 7: Comparison between the two numerical BNS amplitudes (solid and dotted black) and our new fit for the amplitudes (dashed and dashdot green) up to the merger, then tapered by a Hann function between merger and the cutoff time for the phase. The insets include our fit to the numerical tidal amplitude, compared with \(A_{T,pN}\) and \(A_{T,d}\). The vertical yellow dotted line marks the merger.
influence the early stages before and beyond the collision, revealing the effect of the matter interaction on the orbits. A larger tidal deformability (dashdot cyan) speeds up the merger of stars, leading to a larger-radius remnant, whereas a smaller value for it (dotted blue) prolongs the inspiral and yields a denser remnant with a smaller radius.
## 5 Conclusion
The gravitational waves emitted during a BNS merger offer insights into the behavior of matter under extreme conditions. To understand the impact of matter on the evolution of such systems and on their gravitational-wave signature, we embarked upon an analytical modeling journey. Initially, we focused on a BBH system, employing the post-Newtonian formalism for the inspiral and the Backwards-one-Body model for the merger. By combining them, we established a baseline waveform, which we validated against numerical relativity simulations, to ensure a robust foundation for our next step. To incorporate the effects of tides, we introduced corrections to the phase and amplitude of the point-particle waveform by using the polynomial expressions proposed by the NRTidal model. We then verified the model's accuracy and efficiency through careful comparison with numerical relativity data for two equations of state.
However, we encountered a mismatch around the merger when solely relying on the NRTidal model. To address this limitation, we lifted the restriction on the coefficients' independence from tidal deformability and recalibrated them with the numerical relativity predictions. By performing new fits to the numerical BNS data, we obtained updated values for the polynomial coefficients. Armed with these new coefficients, we reconstructed the tidal corrections and achieved improved fits for the phase and amplitude, successfully extending the phase modeling beyond the merger. To achieve a comprehensive phase representation, we devised a method to determine its extent, and applied a taper at the end, seamlessly continuing with the baseline model. Regarding the amplitude modeling, we have successfully carried it up to the merger, where
Figure 8: Our analytical strain (left plot) and separation (right plot) for the baseline BBH (solid black), the \(\Lambda=791\) BNS1 (dotted blue) and the \(\Lambda=1540\) BNS2 (dashdot cyan) for the last \(\approx 8\) orbits, extended to the early stages of the collision. The dotted red line marks where the neutron stars touch.
we employed a Hann taper to ensure a smooth and continuous transition into the post-merger. We ended by reconstructing the complete analytical BNS strain and by investigating the tidal influence on the system's orbits around the BNS collision.
We developed RisingTides, a Python code guided by a user-friendly Jupyter Notebook, for the analytical modeling of the tidal effects on the gravitational waves emitted during BNS inspiral and merger. We make our implementation available to the scientific community, to foster collaboration and facilitate future investigations.
In our future work, we aim to enhance our model by incorporating a broader range of numerical relativity BNS simulations encompassing diverse equations of state. We will seek to uncover a consistent pattern for the dependence of the phase on the tidal deformability around the merger, ultimately leading to a universal set of polynomial coefficients. Furthermore, we will explore universal relations for BNS systems, which may offer deeper insights into their behavior and characteristics. Additionally, we are committed to refining the accuracy of the amplitude modeling beyond the merger, thus increasing the overall predictive power of our approach. Our research will contribute to a better understanding of BNS mergers and their gravitational wave signatures.
Supplementary information. We release RisingTides, an open-source Python code for the analytical modeling of tidal effects on gravitational waves from BNS mergers, steered by a user-friendly Jupyter Notebook. Our implementation can be accessed at [https://github.com/mbabiuc/RisingTides.git](https://github.com/mbabiuc/RisingTides.git).
Acknowledgments. The authors wish to acknowledge the Physics Department and the College of Science at Marshall University. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
|
2305.00039 | HermesBDD: A Multi-Core and Multi-Platform Binary Decision Diagram
Package | BDDs are representations of a Boolean expression in the form of a directed
acyclic graph. BDDs are widely used in several fields, particularly in model
checking and hardware verification. There are several implementations for BDD
manipulation, where each package differs depending on the application. This
paper presents HermesBDD: a novel multi-core and multi-platform binary decision
diagram package focused on high performance and usability. HermesBDD supports a
static and dynamic memory management mechanism, the possibility to exploit
lock-free hash tables, and a simple parallel implementation of the If-Then-Else
procedure based on a higher-level wrapper for threads and futures. HermesBDD is
completely written in C++ with no need to rely on external libraries and is
developed according to software engineering principles for reliability and easy
maintenance over time. We provide experimental results on the n-Queens problem,
the de-facto SAT solver benchmark for BDDs, demonstrating a significant speedup
of 18.73x over our non-parallel baselines, and a remarkable performance boost
w.r.t. other state-of-the-art BDDs packages. | Luigi Capogrosso, Luca Geretti, Marco Cristani, Franco Fummi, Tiziano Villa | 2023-03-22T11:15:27Z | http://arxiv.org/abs/2305.00039v1 | # HermesBDD: A Multi-Core and Multi-Platform Binary Decision Diagram Package
###### Abstract
BDDs are representations of a Boolean expression in the form of a directed acyclic graph. BDDs are widely used in several fields, particularly in model checking and hardware verification. There are several implementations for BDD manipulation, where each package differs depending on the application. This paper presents _HermesBDD_: a novel multi-core and multi-platform binary decision diagram package focused on high performance and usability. _HermesBDD_ supports a static and dynamic memory management mechanism, the possibility to exploit lock-free hash tables, and a simple parallel implementation of the If-Then-Else procedure based on a higher-level wrapper for threads and futures. _HermesBDD_ is completely written in C++ with no need to rely on external libraries and is developed according to software engineering principles for reliability and easy maintenance over time. We provide experimental results on the \(n\)-Queens problem, the de-facto SAT solver benchmark for BDDs, demonstrating a significant speedup of \(18.73\times\) over our non-parallel baselines, and a remarkable performance boost w.r.t. other state-of-the-art BDDs packages.
Binary Decision Diagrams, Boolean Functions, Parallel Algorithms, Multi-Platform Package
## I Introduction
Binary decision diagrams (BDDs) were introduced by Akers [1] and developed by Bryant [2] and provide a data structure for representing and manipulating Boolean functions. There are several implementations of BDD packages in the literature (details in Sec. II), but they focus mainly on package performance regarding speedup and memory efficiency. However, we believe that performance is only one aspect to judge a BDD package. Other aspects, such as (not necessarily in order of importance), _functionality_, _robustness_, _reliability_, _portability_, and _documentation_, matter as well. According to these principles, we developed _HermesBDD_: a novel multi-core and multi-platform binary decision diagram package focused on high performance and usability. It supports a static and dynamic memory management mechanism, the possibility to exploit lock-free hash tables, and a parallel implementation of the If-Then-Else procedure based on a higher-level wrapper for threads and futures. Additionally, _HermesBDD_ presents a well-documented source code, it is completely written in C++ with no need to rely on external libraries, and it is developed according to engineering principles such as testability, code coverage, and continuous integration.
We provide experimental results on the \(n\)-Queens problem showing how our multi-core implementation improves the performance over our non-parallel baselines and how the different memory management techniques affect the overall speedup. Finally, we compare _HermesBDD_ with three of the best state-of-the-art BDD libraries, _i.e._, CUDD [3], Sylvan [4], and BuDDy [5], demonstrating a remarkable speedup boost. The experiments demonstrate the effectiveness of the proposed package, but given space constraints, extensive benchmarking will be the subject of future work.
In summary, the contributions of _HermesBDD1_ are:
Footnote 1: [https://luigicapogrosso.github.io/HermesBDD](https://luigicapogrosso.github.io/HermesBDD)
* A computationally faster BDD package, by exploiting multi-threading for parallel processing and concurrent access to a BDD;
* Multi-platform compatibility (Windows, Linux, and macOS) to accommodate integration within tools from different environments;
* Support for a static and dynamic memory management mechanism, the possibility to exploit lock-free hash tables, and a novel parallel implementation of the If-Then-Else procedure based on a higher-level wrapper for threads and futures;
* High usability and robustness by design, thanks to a development based on engineering principles such as code coverage and continuous integration, along with independence from external software to offer high usability, reliability, and easy maintenance over time.
## II Related Work
In this section, we provide an overview of the most widely used BDD libraries. For a survey on early packages see [6].
CUDD [3] stands for Colorado University Decision Diagram. It is a single-core package for the manipulation of BDDs, algebraic decision diagrams (ADDs), and Zero-suppressed binary decision diagrams (ZDDs) written in C, with a C++ wrapper.
BuDDy [5] is a BDD single-core library written in C, with many highly efficient vectorized BDD operations, dynamic variable reordering, automated garbage collection, a C++ interface with automatic reference counting, and much more.
Biddy [7] is a BDD package under GPL license, whose most distinguishing features are its specially designed C interface and a novel implementation of automatic garbage collection.
CacBDD [8] is a single-core C++ BDD package that implements dynamic cache management, which takes into account the hit rate of the computed table and the available memory.
BeeDee [9] is a thread-safe Java library for BDD manipulation. BeeDee allows clients to share a single factory of BDDs with real parallelism and to reduce the memory footprint of their overall execution at a very low synchronization cost.
Sylvan [4] is a parallel BDD library written in C that provides scalable parallel execution of the standard BDD operations. It supports custom decision diagram terminal types, and it also implements operations on a specialized list of decision diagrams for model-checking.
DecisionDiagrams is a single-core implementation of numerous variants of BDDs that is used at Microsoft Research. Written in C#, it currently maintains 100% code coverage. The library is based on a cache-optimized implementation of decision diagrams [10].
Based on the work of Lars Arge [11], Adiar [12] is a single-core BDD package that makes use of time-forward processing to improve the I/O complexity of BDD manipulation. This achieves efficient manipulation of BDDs, even when they outgrow the memory limit of a given machine.
In [13], Miyasaka _et al_. describe a simple BDD package without dynamic variable reordering, which is much faster than a conventional BDD package with reordering. The proposed BDD package is used in logic optimization with permissible functions. Moreover, in [14], Miyasaka presents a framework to compare BDD packages through auto-tuning.
## III Methodology
In this section, we present the algorithms and techniques developed for our efficient parallelization. Due to space constraints, for a technical overview and interesting properties of BDDs, refer to [1, 2].
### _The Multi-Core_ If-Then-Else _Algorithm_
**Algorithm 1** p_ite(). The multi-core If-Then-Else (pseudocode listing).
In _HermesBDD_ we parallelize the If-Then-Else function by treating the two recursive calls as independent tasks. Starting from the task-based parallel flow of [15], we rewrote it with C++ primitives without using external libraries, which allowed us to introduce a more efficient hash table management mechanism (as we will see in Sec. IV-B). In particular, this flow gives us a simple way to use multiple threads through primitives: instead of making a recursive call to execute the If-Then-Else function, start a thread at each recursive step, then wait for the thread to finish. With this implementation, the only synchronization between workers is when the results of suboperations are stored in the unique hash table. This table is shared globally, in order to prevent workers from computing a suboperation that was already finished by some other worker. This technique seems to be the best suited to parallelizing BDDs. We have tried and compared other implementations, _e.g._, parallelism at different levels of the tree, but, given the nature of BDDs, these techniques incur a significant task overhead for a negligible, or often negative, speedup. Alg. 1 shows the pseudocode of our implementation. Furthermore, _HermesBDD_ provides an option to the user, at compile-time, to decide whether to use the multi-core or sequential implementation of the If-Then-Else algorithm.
To this end, we use the C++ std::async() function, which is a high-level wrapper for threads and futures, followed by the matching function to retrieve the results of the computation. Standard C++ provides std::thread(), which is a fairly low-level construct, and so its usage is often more cumbersome and error-prone than desired. Instead, std::async() automatically creates a thread to call the thread function and conveniently returns a std::future object, avoiding the hurdle of manual thread management and decoupling the task from its result.
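The following is a minimal, illustrative sketch (in Python, with hypothetical names; HermesBDD itself is C++ built around std::async) of the task-based recursion just described: one cofactor recursion is submitted as an asynchronous task, the other runs in the current thread, and the results are merged through a globally shared unique table.

```python
# Illustrative sketch only, not HermesBDD's actual C++ code.
from concurrent.futures import ThreadPoolExecutor
import threading

TRUE, FALSE = "1", "0"                    # terminal nodes
unique_table = {}                         # (var, high, low) -> node, shared by all workers
table_lock = threading.Lock()             # the real package uses a lock-free table instead
pool = ThreadPoolExecutor(max_workers=8)

def lookup_or_create(var, high, low):
    """Hash-cons a node so structurally equal nodes are shared (and reduced)."""
    if high == low:                       # redundant test: skip the node entirely
        return high
    key = (var, high, low)
    with table_lock:
        return unique_table.setdefault(key, key)

def top_var(*nodes):
    """Smallest variable index at the root of any non-terminal argument."""
    return min(n[0] for n in nodes if n not in (TRUE, FALSE))

def cofactor(node, var, value):
    """Restrict a node to var = value (only the root level matters here)."""
    if node in (TRUE, FALSE) or node[0] != var:
        return node
    return node[1] if value else node[2]

def ite(f, g, h, depth=0):
    if f == TRUE:
        return g
    if f == FALSE:
        return h
    if g == h:
        return g
    x = top_var(f, g, h)
    hi_args = tuple(cofactor(n, x, 1) for n in (f, g, h))
    lo_args = tuple(cofactor(n, x, 0) for n in (f, g, h))
    if depth < 2:                          # spawn tasks only near the root to limit overhead
        future = pool.submit(ite, *hi_args, depth + 1)
        low = ite(*lo_args, depth + 1)     # computed in the current thread
        high = future.result()             # wait for the asynchronous sub-call
    else:
        high = ite(*hi_args, depth + 1)
        low = ite(*lo_args, depth + 1)
    return lookup_or_create(x, high, low)
```

For example, with variable nodes x1 = (1, TRUE, FALSE) and x2 = (2, TRUE, FALSE), ite(x1, x2, FALSE) builds the BDD of the conjunction of x1 and x2.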
To ensure that the results are canonical reduced BDDs, we use the update_utable\(()\) method, as shown in Alg. 2. Specifically, the lookup_or_create\(()\) function checks atomically if data is already in the unique hash table, and if not, it adds it. Finally, in order to enforce canonicity (_i.e_., complement only on 1 edge), we use the functions \(\textsc{is\_complemented}()\) and complement\(()\).
```
Data: (x, t, e)
if t = e then
    return e
end if
if is_complemented(e) then
    v ← node(x, t, e)
    n ← lookup_or_create(v)
    return complement(n)
else
    v ← node(x, t, e)
    return lookup_or_create(v)
end if
```
**Algorithm 2** update_utable(). Creates a BDD node using the unique hash table to ensure that there are no duplicates.
### _A Lock-Free Unique Hash Table for Multi-Core BDDs_
Traditionally, concurrency issues, such as data race, are solved by locks, providing mutual exclusion. Since blocked processes must wait, locks have a negative impact on the speedup of parallel programs. A lot of literature has been dedicated to developing non-blocking algorithms, specifically, Herlihy _et al_., in [16], distinguish between _lock-free_, _wait-free_, and _lock-less_ algorithms.
In particular, we use a lock-free unique hash table, which is implemented using std::atomic_flag. Based on this class, we build a spinlock in order to protect the critical section. Specifically, a spinlock is a lock that causes a thread trying to acquire it to wait in a loop while repeatedly checking whether the lock is available. The use of spinlocks is particularly recommended when the critical section is supposed to perform a minimal amount of work, _i.e._, the spinlock is held for a very short period of time, as in our case. Also, a spinlock operates faster than a mutex since context switching is reduced. A spinlock does not cause the thread to be preempted; instead, it keeps spinning until the lock on the resource is released.
The pseudocode for inserting a new value in the unique hash table is given in Alg. 3. This computes the hash, calculates the index, acquires the lock, and finally writes the data into the table. The get_from_utable() algorithm works in exactly the same way. This is not reported, since it differs from Alg. 3 only by line 5, where it calls the function that compares the parameters and returns the result value.
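As an illustration only (the actual Alg. 3 is not reproduced here), the Python sketch below mirrors the insert flow just described: compute the hash, derive the index, take a short-lived lock, and write the data. The bucket layout and helper names are assumptions, and HermesBDD itself uses a C++ std::atomic_flag spinlock rather than Python locks.

```python
import threading

NUM_BUCKETS = 1 << 12
buckets = [dict() for _ in range(NUM_BUCKETS)]               # the shared unique table
bucket_locks = [threading.Lock() for _ in range(NUM_BUCKETS)]

def insert_into_utable(node):
    """Insert a (var, high, low) triple and return the stored (canonical) copy."""
    h = hash(node)                        # 1. compute the hash
    index = h % NUM_BUCKETS               # 2. calculate the index
    with bucket_locks[index]:             # 3. acquire the lock (held very briefly)
        return buckets[index].setdefault(node, node)   # 4. write the data

def get_from_utable(node):
    """Lookup counterpart: same flow, but it only compares and returns the value."""
    index = hash(node) % NUM_BUCKETS
    with bucket_locks[index]:
        return buckets[index].get(node)
```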
### _The Memory Management Mechanism_
BDD algorithms are considered memory intensive since they have little computation for each unit of memory access. Hence, memory allocation techniques for the tables play an important role, and have a great effect, on the performance of the implementations of a BDD package. In _HermesBDD_, we implemented both dynamic and static memory allocation techniques in order to exploit fine-grain parallelism. Also in this case, at compile-time, _HermesBDD_ provides an option for the users in order to select the dynamic or the static memory allocation mechanism.
As dynamic memory management, we implemented a simple but effective technique based on doubling the memory space required by the tables. At the beginning of the process, \(M\) bytes of memory are allocated to store \(N\) nodes. If this space is not enough during program execution, more space of size \(M*2\) will be allocated. Since the table is shared by all nodes created by the library, this allows reusing memory. For simplicity and efficiency, there is no strategy for cleaning up the slot table after a given node is removed.
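A minimal sketch of this doubling policy, assuming a simple slot-table interface (an illustrative Python equivalent, not the actual C++ implementation), is given below; slots are never reclaimed, matching the description above.

```python
class DoublingNodeTable:
    def __init__(self, initial_capacity=1 << 10):
        self.capacity = initial_capacity
        self.slots = [None] * self.capacity
        self.next_free = 0

    def allocate(self, node):
        if self.next_free == self.capacity:            # table is full:
            self.capacity *= 2                         # reserve twice the space
            self.slots.extend([None] * (self.capacity - len(self.slots)))
        self.slots[self.next_free] = node
        self.next_free += 1
        return self.next_free - 1                      # index of the stored node
```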
In the static allocation technique, instead, a contiguous slice of \(M\) bytes of memory is reserved at the start of the process. In this case, the variables remain allocated for as long as the program executes or until the function call finishes, and once the memory is allocated, its size cannot change, so this technique is more efficient than dynamic allocation.
## IV Experiments
In this section, we perform quantitative and qualitative analyses to demonstrate the potentiality of our _HermesBDD_ package. Experiments were carried out on a 64-bit 32-core AMD Ryzen Threadripper 1950X CPU 3.4GHz machine. The library is tested for compilation using GCC (minimum required: 10.2), Clang (minimum required: 11.0), and MSVC (minimum required: 19.20).
We ran benchmarks from the \(n\)-Queens problem, using a simple SAT solver based on [17]. In particular, we would like to emphasize that further experiments, as well as systematic comparisons with different parallel tools, will be the subject of future work. Here, also due to space constraints, we focus only on a single well-known problem; nonetheless, we aim to show the potentialities of _HermesBDD_ with no interest in presenting a benchmarking paper on BDDs.
### _The Speedup Latency w.r.t. our Baselines_
Tab. I shows the result on an average of 50 samples of our baselines on the \(n\)-Queens problems with the \(6\times 6\), \(7\times 7\), and \(8\times 8\) chessboards, using the static memory allocation mechanism.
Tab. II shows the result of our multi-core implementation in terms of speedup, where the speedup latency is computed as \(S=T_{ms}(no\_parallel)/T_{ms}(parallel)\), on the same chessboards, using the same number of cores.
Specifically, comparing Tab. I with Tab. II, several facts emerge: _i)_ smaller models (_e.g._, the \(6\times 6\) and \(7\times 7\) chessboards) have lower speedups than larger models, which exhibit the best speedups; _ii)_ this implies that the speedup increases with the size of the Boolean formula; _iii)_ finally, our implementation scales well with the number of cores as long as the problem is sufficiently complex, yielding a speedup of up to \(18.73\times\) in the \(8\times 8\) case.
### _Comparison w.r.t. other BDD Packages_
As a comparative approach, we consider CUDD, Sylvan, and BuDDy, which are three of the most important state-of-the-art BDD packages. Tab. III reports the results on an
average of 50 samples of the \(n\)-Queens problem with an \(8\times 8\) chessboard, using a 32-core machine, and the static memory allocation mechanism. Specifically, we decided to use static memory allocation for this experiment because it is the memory management technique that allows us to push performance to the maximum in terms of computation time.
In summary, as we can see from Tab. III, the combination of using a lightweight approach for parallelism, a lock-free hash table technique, and a static memory allocation yields better performances both for execution time and memory space.
### _The Impact of Memory Allocation_
In this experiment, we evaluate the impact of both static and dynamic memory allocation techniques in terms of execution time and memory space. Specifically, Fig. 1 shows the results of a single test in which we run, sequentially, the \(n\)-Queens problem on a \(6\times 6\), then on a \(7\times 7\), and finally on an \(8\times 8\) chessboard, averaged over 50 samples, on a 32-core machine. As mentioned in Sec. III-A, thanks to the unique hash table, the \(7\times 7\) case benefits from the \(6\times 6\) storage, just as the \(8\times 8\) case benefits from the \(7\times 7\) storage.
The behaviors explained in Sec. III-C are confirmed: a dynamic memory allocation guarantees a more efficient memory occupancy, but the overall execution time is slower than static memory allocation due to the presence of overhead caused by the management of the memory allocation at run time (_e.g._, the increase of execution time in percentage is \(36\%\) for the \(6\times 6\) chessboard, down to \(19\%\) for the \(8\times 8\) chessboard). On the other hand, static memory allocation provides better performance in terms of execution time but at the expense of inefficient memory management (_e.g._, in the case of the \(6\times 6\) chessboard, more MBs are allocated than required).
## V Conclusions
In this paper, we presented _HermesBDD_, a multi-core and multi-platform package for BDD manipulation. We designed and implemented three different algorithms to support a parallel implementation of BDD operations: a multi-core If-Then-Else procedure based on a higher-level wrapper for threads and futures, a lock-free hash table, and both a static and dynamic memory allocation mechanism.
The measured performance demonstrated a significant speedup. The experiments thus validate the proposed package, while more extensive benchmarking will be the subject of future work.
|
2305.18676 | LayerDiffusion: Layered Controlled Image Editing with Diffusion Models | Text-guided image editing has recently experienced rapid development.
However, simultaneously performing multiple editing actions on a single image,
such as background replacement and specific subject attribute changes, while
maintaining consistency between the subject and the background remains
challenging. In this paper, we propose LayerDiffusion, a semantic-based layered
controlled image editing method. Our method enables non-rigid editing and
attribute modification of specific subjects while preserving their unique
characteristics and seamlessly integrating them into new backgrounds. We
leverage a large-scale text-to-image model and employ a layered controlled
optimization strategy combined with layered diffusion training. During the
diffusion process, an iterative guidance strategy is used to generate a final
image that aligns with the textual description. Experimental results
demonstrate the effectiveness of our method in generating highly coherent
images that closely align with the given textual description. The edited images
maintain a high similarity to the features of the input image and surpass the
performance of current leading image editing methods. LayerDiffusion opens up
new possibilities for controllable image editing. | Pengzhi Li, QInxuan Huang, Yikang Ding, Zhiheng Li | 2023-05-30T01:26:41Z | http://arxiv.org/abs/2305.18676v1 | # LayerDiffusion: Layered Controlled Image Editing with Diffusion Models
###### Abstract
Text-guided image editing has recently experienced rapid development. However, simultaneously performing multiple editing actions on a single image, such as background replacement and specific subject attribute changes, while maintaining consistency between the subject and the background remains challenging. In this paper, we propose _LayerDiffusion_, a semantic-based layered controlled image editing method. Our method enables non-rigid editing and attribute modification of specific subjects while preserving their unique characteristics and seamlessly integrating them into new backgrounds. We leverage a large-scale text-to-image model and employ a layered controlled optimization strategy combined with layered diffusion training. During the diffusion process, an iterative guidance strategy is used to generate a final image that aligns with the textual description. Experimental results demonstrate the effectiveness of our method in generating highly coherent images that closely align with the given textual description. The edited images maintain a high similarity to the features of the input image and surpass the performance of current leading image editing methods. _LayerDiffusion_ opens up new possibilities for controllable image editing.
## 1 Introduction
Given a single image of your pet, one can imagine it embarking on a worldwide journey and performing specific actions in any location. Generating such an image is a challenging and fascinating task in image editing. It entails preserving the specific subject's unique characteristics in new backgrounds and ensuring its seamless integration into the scene, harmoniously and naturally, while simultaneously accommodating multiple editing actions.
Recently, significant progress has been made in the development of deep learning-based large-scale text-to-image models [27; 30; 25]. These models can generate high-quality synthetic images based on text prompts, enabling text-guided image editing and producing impressive results. As a result, numerous text-based image editing methods [36; 13; 10; 7; 28; 8; 35] have emerged and evolved. However, such models cannot mimic specific subject characteristics. Even with the most detailed textual descriptions of an object, they may generate instances with different appearances and still struggle to maintain background consistency. Thus, the current leading image editing methods encounter several challenges, including rigid editing limited to specific domain images [22; 13], the inability to simultaneously edit both the background and specific subjects, and the requirement for additional auxiliary input information [28; 21; 3; 5]. These issues hinder the advancement of controllable image editing.
In this paper, we propose a semantic-based layered controlled image editing method, which we call _LayerDiffusion_, to alleviate these issues. By simply inputting textual descriptions of multiple editing
actions, along with the target image and a reference image, we can perform non-rigid editing and attribute modification of specific subjects, generating images consistent with the textual descriptions while maintaining the consistency of the specific subject and background features with the input image. As shown in Fig. 1, we can make a dog jump in a forest or a giraffe lie on a beach, or modify their shapes and attributes in the original scene.
To implement our method, we leverage the robust and high-quality image generation capabilities of a large-scale text-to-image model [27]. Our method comprises a well-defined sequence of steps. Initially, we utilize a mask to eliminate interference from foreground objects effectively. Subsequently, we apply a layered controlled optimization strategy to optimize the text embeddings acquired from the text encoders [24], following the segmentation of the target text. This process aims to generate image backgrounds that exhibit a remarkable similarity to the reference images. Next, we employ a layered diffusion training strategy to fine-tune the model, thereby augmenting its ability to preserve the similarity between the specific subjects, backgrounds, and input images. Finally, during the diffusion process with the fine-tuned model, we adopt an iterative guidance strategy, where a highly constrained text embedding is iteratively employed to denoise the images. Consequently, this generates a final image aligning with the textual description.
We emphasize the contributions of each component in our method through ablation studies and compare our approach with other relevant image editing methods [17; 36; 20], clearly demonstrating superior editing quality. Furthermore, we conduct a user study to subjectively evaluate the quality of the images generated by our method, which aligns most closely with human perception. We summarize our main contributions as follows:
* We propose _LayerDiffusion_. To the best of our knowledge, this is the first image editing method that enables simultaneous editing of specific subjects and backgrounds using a single input image.
* We introduce a novel layered diffusion training framework that enables arbitrary and controllable editing of specific subjects and backgrounds.
* Experimental results demonstrate that our method generates images with highly similar features to the input images.
## 2 Related Work
Image synthesis has recently made significant advancements [1; 12; 22; 26; 39; 2; 11; 32]. With the development of diffusion models [15; 33; 34] in image processing tasks [16; 29; 37; 38; 6], new
Figure 1: Our method achieves layered image editing through text descriptions, enabling simultaneous modifications of backgrounds and specific subjects, such as background replacement, object resizing, and complex non-rigid changes.
text-guided solutions have emerged in the field of image editing and produced impressive results [21; 25; 40; 7; 17]. The powerful generative capabilities of diffusion models enable the generation of numerous high-quality images. Consequently, many image editing tasks [4; 13; 20; 36] no longer require training a large-scale text-to-image model, as pre-trained models can be used for image editing based on textual descriptions. Diffusion models have tremendous potential for image editing tasks guided by text descriptions. Many studies [20; 17; 28; 36; 13] have utilized pre-trained models as generative priors, which can be categorized into two approaches: training-free and fine-tuned methods. SDEdit [20] introduces intermediate noise to an image, which can be augmented with user-provided brush strokes, followed by denoising through a diffusion process conditioned on the desired edit. P2P [13] and PnP [36] utilize cross-attention or spatial features to edit both global and local aspects of an image by directly modifying the text prompt. However, they often preserve the original layout of the source image and struggle with non-rigid transformations.
Fine-tuned methods [17; 28; 18; 10] have also shown remarkable performance. DiffusionCLIP [18] leverages the CLIP [23] model to provide gradients for producing impressive style transfer results. Textual-inversion [10] and Dreambooth [28] fine-tune the model using multiple sets of personalized images, resulting in the synthesis of images depicting the same object in new environments. Imagic [17] fine-tunes the model by optimizing text embeddings and achieves image editing through linear interpolation of text embeddings.
Similarly, our approach leverages target text descriptions to fine-tune the model and enable various image editing operations. Dreambooth [28] and Imagic [17] are methods that resemble our approach. However, Dreambooth requires multiple input images and often fails to produce satisfactory results when dealing with a single image. Imagic, on the other hand, faces challenges in simultaneously performing multiple editing actions, such as editing both the background and specific subjects simultaneously. In contrast, our method allows for simultaneous editing of specific subjects and the background using only a single input image.
## 3 Method
### Preliminaries
Stable Diffusion Models (SDM) [27] is a publicly available text-to-image diffusion model trained on the LAION-5B [31] dataset. Instead of directly operating in the image space, SDM is based on the latent diffusion method, which means the forward and reverse diffusion sampling operate in the latent space. Given the trained autoencoder, the image \(\mathbf{p}\) is converted to a low-dimensional latent variable \(\mathbf{x}\) at each timestep t. SDM also introduces an important modification in the form of text-based
Figure 2: Our method utilizes a layered controlled optimization strategy to refine text embeddings and a layered diffusion strategy to fine-tune the diffusion model. During inference, an iterative guidance strategy is employed to directly generate images aligning with the multiple editing actions described in the input text.
conditioning. During the denoising process, SDM can be conditioned on an additional input vector, which is typically a text encoding produced by a pre-trained CLIP text encoder \(\mathcal{P}\). Specifically, \(\mathcal{P}\) extracts words from a given text prompt \(\mathbf{y}\) and converts them into tokens, denoted by \(\mathbf{e}=\tau_{\phi}(\mathbf{y})\). These tokens are further transformed into text embeddings, which are used to condition the neural network during the training process:
\[\min_{\theta}\mathbb{E}_{\mathbf{x}_{t},\mathbf{x}_{0},\mathbf{\epsilon}\sim\mathcal{N}(0,\mathbf{I})}\left[\left\|\mathbf{\epsilon}_{\theta}\left(\mathbf{x}_{t},t,\tau_{\phi}(\mathbf{y})\right)-\mathbf{\epsilon}\right\|_{2}^{2}\right], \tag{1}\]
Consequently, SDM facilitates the generation of images based on textual input by employing reverse diffusion sampling in the latent space. Instead of relying on \(\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},t)\), the model utilizes a text-conditioned neural network denoted as \(\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},t,\tau_{\phi}(\mathbf{y}))\). We implement the proposed approach in this work by fine-tuning this pre-trained model.
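For illustration, a PyTorch-style sketch of the text-conditioned objective in Eq. (1) is shown below; `unet`, `text_encoder` and `alphas_cumprod` stand in for the pre-trained SDM components and are assumptions for illustration, not the actual API of [27].

```python
import torch
import torch.nn.functional as F

def sdm_training_loss(unet, text_encoder, alphas_cumprod, x0, tokens):
    """One epsilon-prediction step conditioned on the text embeddings tau_phi(y)."""
    t = torch.randint(0, alphas_cumprod.shape[0], (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod.to(x0.device)[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise   # forward diffusion in latent space
    text_emb = text_encoder(tokens)                          # tau_phi(y)
    eps_pred = unet(x_t, t, text_emb)                        # epsilon_theta(x_t, t, tau_phi(y))
    return F.mse_loss(eps_pred, noise)                       # Eq. (1)
```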
### Layered Diffusion
Our approach leverages target text descriptions to facilitate a wide range of image editing actions, including object resizing, property modifications, and background replacement while preserving specific subject details closely tied to the original image. To achieve this, we fine-tune a state-of-the-art diffusion model [27]. Furthermore, we introduce a layered editing method for the background and specific foreground objects. As illustrated in Fig. 2, our method begins by separating the background. We apply a layered controlled optimization strategy to refine the segmentation text embeddings acquired from the text encoders, which come from the target text. Then we identify the optimal text embedding that aligns with the desired target background in proximity to the target text embedding. Subsequently, we employ a layered diffusion strategy to fine-tune the diffusion model. This approach enhances the model's capability to maintain similarity between specific subjects, backgrounds, and input images, allowing for finer control and precision in image editing through parameter adjustments. During the inference stage, we utilize an iterative guidance strategy to directly generate images that align with the multiple image editing actions described in the input text without text embedding interpolation. Each step of the process is outlined in detail below.
#### 3.2.1 Layered controlled optimization
Due to the potential interference of multiple text descriptions, optimizing text embeddings can be unstable during image editing. As a result, previous methods for image editing have often struggled to effectively modify selected object properties and backgrounds simultaneously.
To this end, we aim to separate the background and foreground to reduce interference between the different pieces of textual information. The target text \(T\) is first fed into the Stable Diffusion model [27] to obtain the target image \(O_{t}\). Then \(T\) is decomposed into \(T_{a}\) and \(T_{b}\), which describe the object properties and the background separately and are sent to the text encoder [24] to output the corresponding text embeddings \(\mathbf{e}_{a}\in\mathbb{R}^{C\times N}\) and \(\mathbf{e}_{b}\in\mathbb{R}^{C\times N}\), where \(C\) is the number of tokens, and \(N\) is the token embedding dimension. However, \(\mathbf{e}_{a}\) and \(\mathbf{e}_{b}\) lie in distant embedding spaces, so we cannot directly perform linear interpolation on them. To make \(\mathbf{e}_{a}\) and \(\mathbf{e}_{b}\) match our input image background as much as possible and lie in a close embedding space, we freeze the parameters of the diffusion model and optimize \(\mathbf{e}_{a}\) and \(\mathbf{e}_{b}\) simultaneously using the diffusion model objective [15]. In fact, we can optimize the initial text embedding to make it closer to the target image space (to modify the background) or the reference image space (to modify object properties). This process is controlled by the object mask \(M\) and can be represented as follows:
\[\left[\mathbf{\hat{e}}_{a},\mathbf{\hat{e}}_{b}\right]=\arg\min\mathbb{E}_{\mathbf{x}_{t},\mathbf{\epsilon}\sim\mathcal{N}(0,\mathbf{I})}\left[\left\|M*(\mathbf{\epsilon}-f_{\theta}\left(\mathbf{x}_{t},t,\left[\mathbf{e}_{a},\mathbf{e}_{b}\right]\right))\right\|^{2}\right], \tag{2}\]
where \(M\) is computed by Segment Anything Model (SAM) [19], and \(\mathbf{x}_{t}\) is the noisy version of the input image, and \(f_{\theta}\) means the forward diffusion process using pre-trained diffusion model. The optimized text embeddings make it meaningful to modify the linear interpolation weights of \(\mathbf{\hat{e}}_{a}\) and \(\mathbf{\hat{e}}_{b}\) as follows:
\[\mathbf{e}_{opt}=\alpha*\mathbf{\hat{e}}_{a}+(1-\alpha)*\mathbf{\hat{e}}_{b}, \tag{3}\]
According to the experimental analysis of text embedding interpolation in Imagic [17], we tend to set the weight \(\alpha\) that describes object properties to 0.7.
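A sketch of this layered controlled optimization, under an assumed model interface (`q_sample` for the forward diffusion and `eps` for the frozen noise predictor conditioned on the concatenated embeddings), could look as follows; it is an illustration of Eqs. (2)-(3), not the authors' released code.

```python
import torch
import torch.nn.functional as F

def optimize_layered_embeddings(frozen_model, x0, mask, e_a, e_b,
                                steps=500, lr=1e-3, alpha=0.7):
    e_a = e_a.clone().requires_grad_(True)
    e_b = e_b.clone().requires_grad_(True)
    opt = torch.optim.Adam([e_a, e_b], lr=lr)          # only the embeddings are optimized
    for _ in range(steps):
        t = torch.randint(0, frozen_model.num_timesteps, (x0.shape[0],), device=x0.device)
        noise = torch.randn_like(x0)
        x_t = frozen_model.q_sample(x0, t, noise)       # noisy version of the input image
        eps_pred = frozen_model.eps(x_t, t, torch.cat([e_a, e_b], dim=1))
        loss = F.mse_loss(mask * eps_pred, mask * noise) # Eq. (2), restricted by the mask
        opt.zero_grad()
        loss.backward()
        opt.step()
    return alpha * e_a.detach() + (1 - alpha) * e_b.detach()  # Eq. (3)
```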
#### 3.2.2 Model fine-tuning
We obtain new text embeddings \(\mathbf{e}_{opt}\) by linearly interpolating multiple optimized text embeddings. Due to the limited number of optimization steps, the resulting embeddings may not lead to a consistent representation of the selected objects or background in the input image. Therefore, we propose a layered diffusion strategy to optimize model parameters while freezing the optimized text embeddings \(\mathbf{e}_{opt}\). This enables the model to fit the desired image at optimized text embedding points. To achieve the arbitrary modification and combination of foreground object properties and backgrounds, we employ SAM [19] to derive \(M_{t}\) (object) and \(1-M_{t}\) (background) from \(O_{t}\) and subsequently obtain \(M_{r}\) (object) and \(1-M_{r}\) (background) from the reference image \(O_{r}\). The aforementioned can be achieved by optimizing the following equations:
\[\mathcal{L}_{obj}=\mathbb{E}_{\mathbf{x}_{t},\mathbf{\epsilon}\sim\mathcal{N}(0, \mathbf{I})}\left[\left\|M_{t}*(\mathbf{\epsilon}-f_{\theta}\left(\mathbf{x}_{t},t,e_{opt} \right))\right\|^{2}\right], \tag{4}\]
\[\mathcal{L}_{bg}=\mathbb{E}_{\mathbf{x}_{t},\mathbf{\epsilon}\sim\mathcal{N}(0,\mathbf{I })}\left[\left\|(1-M_{r})*(\mathbf{\epsilon}-f_{\theta}\left(\mathbf{x}_{t},t,e_{opt} \right))\right\|^{2}\right], \tag{5}\]
The total loss can be represented as follows:
\[\mathcal{L}_{total}=\lambda_{1}\mathcal{L}_{obj}+\lambda_{2}\mathcal{L}_{bg}, \tag{6}\]
This approach enables us to manipulate the foreground object and background independently, allowing for precise control over the final output image.
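Under the same assumed model interface as above, one layered fine-tuning step implementing Eqs. (4)-(6) can be sketched as follows; \(\mathbf{e}_{opt}\) is frozen and only the model parameters are updated, with default weights following the \(\lambda_{1}=2\), \(\lambda_{2}=1\) setting reported in the experiments.

```python
import torch
import torch.nn.functional as F

def layered_finetune_step(model, optimizer, x_target, x_reference,
                          mask_t, mask_r, e_opt, lambda1=2.0, lambda2=1.0):
    t = torch.randint(0, model.num_timesteps, (x_target.shape[0],), device=x_target.device)
    noise = torch.randn_like(x_target)

    # Eq. (4): preserve the selected object of the target image
    x_t = model.q_sample(x_target, t, noise)
    loss_obj = F.mse_loss(mask_t * model.eps(x_t, t, e_opt), mask_t * noise)

    # Eq. (5): preserve the background of the reference image
    x_r = model.q_sample(x_reference, t, noise)
    loss_bg = F.mse_loss((1 - mask_r) * model.eps(x_r, t, e_opt), (1 - mask_r) * noise)

    loss = lambda1 * loss_obj + lambda2 * loss_bg       # Eq. (6)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```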
Figure 3: Given a complex text description, the original image (left) is capable of performing multiple editing actions and maintaining similar characteristics of a specific subject. Note that the mask in the bottom left corner is used to change the size of the selected object.
#### 3.2.3 Iterative guidance strategy
We first represent the diffusion process of a pre-trained model as follows:
\[I_{T},I_{T-1},\ldots,I_{0},\qquad I_{t-1}=D(I_{t}|y) \tag{7}\]
where \(D\) represents an update process \(\mathcal{I}\times\mathcal{C}\rightarrow\mathcal{I}\), \(\mathcal{I}\in\mathbb{R}^{H\times W\times C}\) is the image space, \(\mathcal{C}\) is the condition space, and \(y\in\mathcal{C}\) is a text prompt. From \(T\) to \(0\), \(I_{T}\) gradually changes from a Gaussian noise distribution to a desired image guided by \(y\). Nonetheless, due to the significant gap between the initial image and the desired image in our task, applying the base generative diffusion process with fine-tuned models under condition \(y\) (_i.e._, \(\mathbf{e}_{opt}\)) may still sometimes fail to modify object properties, such as actions.
This issue in image editing is due to the lack of a strong constraint corresponding to the text description of the edited attributes in the diffusion process. The network bias leads the diffusion model to favor object properties in the initial image. To address this, we strengthen the object properties by utilizing the decomposed \(\mathbf{\hat{e}_{a}}\) in the diffusion process. Specifically, we perform the following approach:
\[I_{t-1}=\begin{cases}D(I_{t}|\mathbf{\hat{e}_{a}}),&\text{if $t\%2=0$}\\ D(I_{t}|\mathbf{e_{opt}}),&\text{otherwise}\end{cases} \tag{8}\]
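A compact sketch of this alternating conditioning rule is given below; `denoise_step` is an assumed single reverse-diffusion update of the fine-tuned model, conditioned on the given text embedding.

```python
import torch

@torch.no_grad()
def iterative_guidance_sampling(model, e_a_hat, e_opt, shape, num_steps=50):
    x = torch.randn(shape, device=e_opt.device)              # I_T ~ N(0, I)
    for t in reversed(range(num_steps)):
        cond = e_a_hat if t % 2 == 0 else e_opt              # Eq. (8)
        x = model.denoise_step(x, t, cond)                   # I_{t-1} = D(I_t | cond)
    return x
```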
### Implementation details
We adopt the stable diffusion text-to-image model [27] as the baseline for our method. Specifically, we utilize the publicly available v1.4 version of the model, which was pre-trained on the LAION-5B dataset [31] and built upon the latent diffusion model. We first fine-tune the text embeddings with a learning rate of 1e-3 using Adam [9] and perform 500 steps in most of our experiments. Subsequently, we fine-tune the diffusion model itself, using a learning rate of 2e-6 and executing 250 steps. We employ an iterative guidance strategy throughout the diffusion process, starting from random noise. This iterative process consists of 50 iterations by default, resulting in more refined results. For one image, it takes about 2 minutes to run on a single NVIDIA A100 GPU.
## 4 Experiments
### Qualitative Evaluation
We extensively evaluate our approach using images from various domains and categories. Our method involves a simple text prompt-based editing process, allowing for tasks such as background replacement and object property modification. The images utilized in our experiments are copyright-free on the Internet. We employ a layered editing strategy to ensure robustness and controllability in the editing process. This approach enables multiple editing actions simultaneously on the images, demonstrating excellent editing controllability. The probabilistic diffusion model also motivates us to test our method under different random seeds. By employing our layered diffusion strategy, we can generate images that closely match the provided text descriptions while preserving the critical attributes of the original image in most cases. Our method produces multiple editing results from a single text prompt, providing users with a selection of options to choose from.
In Fig. 3, we present some edited images. These images preserve the distinct characteristics of the input image, and they are altered based on text prompts to accommodate a range of editing actions that go beyond mere background replacement and property modification. Our method can execute precise editing actions on the images by leveraging reference background or foreground objects. For instance, we can alter foreground objects based on reference foreground object maps or implement background modifications guided by reference background maps. More results can be found in the supplementary material.
### Comparisons
We primarily compare our proposed image editing method with previous text prompt methods, such as SDEdit [20], Imagic [17], and PnP [36]. It is worth noting that Imagic [17] necessitates fine-tuning of both the network and text embeddings, while our method adopts a similar fine-tuning approach.
As shown in Fig. 4, non-rigid edits, such as jumping and rotation, pose significant challenges in image editing tasks. This complexity leads to the failure of both PnP [36] and SDEdit [20] in performing the editing actions. Additionally, Imagic [17] tends to overfit the original image and text embeddings during training, thereby making accurate image editing difficult, especially when the modified text prompts go beyond attribute editing and involve additional editing actions, such as simultaneous foreground and background editing. In contrast, our approach adopts a layered strategy that allows for the simultaneous execution of multiple editing actions. As a result, our method achieves impressive results in real image editing tasks. The last two columns of Fig. 4 show the edited results generated by employing different random seeds. Our method outperforms others in multitask editing performance.
In Fig. 4, we generate a reference background image from the diffusion model [27], and our layered diffusion approach allows us to make the edited image as close as possible to the reference background image. We can also choose our reference image as long as it is close to the perspective of the original image. We show more results in the supplementary material.
Text-based image editing methods are a relatively new direction, and there is currently no standard benchmark for evaluating our approach. Although Imagic [17] proposes _TEDBench_, it includes only a single non-rigid edit, which is also not fully applicable to our approach. To further assess the quality of our generated results, we utilize the _TEDBench_ dataset to generate over 300 images per method for a preliminary evaluation. The supplementary material includes the text prompts used. We
\begin{table}
\begin{tabular}{l|c c|c c|c c|c} \hline \hline & \multicolumn{2}{c|}{**Settings**} & \multicolumn{2}{c|}{**CLIP score**} & \multicolumn{2}{c|}{**Settings**} & \multicolumn{2}{c}{**CLIP score**} \\ & \(\mathcal{L}_{obj}\) & \(\mathcal{L}_{bg}\) & & & L-c-o & Fine-tune & I-g & \(\alpha\) & \\ \hline (a) & \(\times\) & & 0.28 & (g) & \(\times\) & & & 0.33 \\ (b) & & \(\times\) & 0.32 & (h) & & \(\times\) & & 0.32 \\ (c) & \(\lambda_{1}\) = 1 & & 0.29 & (i) & & \(\times\) & & 0.29 \\ (d) & \(\lambda_{1}\) = 3 & & 0.32 & (j) & & & = 1 & 0.32 \\ (e) & \(\lambda_{1}\) = 1 & \(\lambda_{2}\) = 3 & 0.28 & (k) & & & = 0 & 0.29 \\ (f) & & & **0.35** & (f) & & & & **0.35** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative results with different settings. We report the CLIP score [14] over 300 images.
Figure 4: We present several edited images and compare them with similar image editing algorithms, such as SDEdit [20], Imagic [17], and PnP [36]. Our method generates the best results.
use a CLIP-based text-image similarity score [14; 23], which measures the cosine similarity between the text prompt and the image embeddings. As our method aims to maintain proximity between selected object features before and after non-rigid editing, the CLIP score does not effectively demonstrate the superiority of our approach (see Fig. 5 (f)-(g) and Tab. 1 (f)-(g)). However, it still partially reflects the state of our image editing, such as changes in the motion of the selected object. Tab. 1 provides an approximate representation of the CLIP scores for several methods, and our approach achieves the highest score.
### Ablation Study
In this section, we present a comprehensive analysis of the three modules employed in our method. We utilize the _TEDBench_[17] dataset and generate over 300 images using 20 different random seeds. As an auxiliary objective evaluation metric, we employ the text-image CLIP score [14], which is presented in Fig. 7. Furthermore, we present the specific performance of each component in Tab. 1. As mentioned previously, the CLIP score may not fully capture the suitability of our method as it primarily focuses on the alignment between images and text. For instance, the results of (b), (g), and (i) show high CLIP scores, but their object features significantly differ from the reference images.
As shown in Fig. 5 and Tab. 1, (a) does not utilize \(\mathcal{L}_{obj}\), resulting in a background that matches the reference image, while the properties of the foreground objects differ substantially. On the other hand, (b) demonstrates that \(\mathcal{L}_{bg}\) preserves a more similar background. (c), (d), and (e) analyze the impact of different weights assigned to the two losses, which affect the similarity of the background and foreground objects. In this paper, we mostly set \(\lambda_{1}\) to 2 and \(\lambda_{2}\) to 1, except when \(\lambda_{1}\) is set to 3 for smaller foreground objects. (g), (h), and (i) validate the effectiveness of each of the three modules in our method. (g) enhances the similarity of the background, (h) controls the global features, and (i) significantly increases the percentage of image generation results that satisfy the description text, rising from 43\(\%\) to 81\(\%\).
### User Study
Furthermore, we conduct a user study to evaluate and compare the subjective perception of our method with several other approaches. To ensure a fair comparison, we randomly select ten generated
Figure 5: We present the edited images with different settings. For each setting, we show two generated images using different random seeds. (f) illustrates the final edited results.
images and utilize two random seeds to generate our results. We present two discriminant conditions for each image: background similarity and action similarity. We then ask 20 participants to rate the resulting images on a scale ranging from 1 to 5, with a rating of 5 indicating a very good match and 1 indicating a very poor match. The histogram on the right-hand side of Fig. 7 shows the average scores. Remarkably, our method achieves optimal subjective performance compared to the other methods.
## 5 Limitations
While our approach demonstrates superior performance in achieving controlled image editing and accomplishes remarkable results in tasks involving multiple editing actions, it is essential to acknowledge three significant challenges. (1) Dealing with fine-grained tasks is still challenging for our method, since we rely on a pre-trained text-to-image diffusion model and are affected by the overfitting that occurs during model fine-tuning. Fig. 6 demonstrates that our method can produce artifacts when confronted with intricately detailed textures or facial features. (2) As shown in Fig. 6, another challenge arises when there is a notable disparity in camera angles between the input reference image and the desired edited image, leading to the creation of visually inconsistent scenes. This limitation can be mitigated by incorporating additional descriptions about the camera position in the target text. (3) We need to fine-tune the model to accommodate the reference image. Appropriate fine-tuning of specific parameters may sometimes be required for unconventional or atypical cases to generate good results.
## 6 Conclusion
We propose _LayerDiffusion_, a semantic-based layered image editing method that simultaneously edits specific subjects and backgrounds using a single input image. _LayerDiffusion_ preserves the unique characteristics of the subjects while integrating them seamlessly into new scenes. Extensive experimentation demonstrates that our method generates images closely resembling the features of the input images, surpassing existing approaches in editing quality and controllability. User studies confirm the subjective perception of the generated images, aligning with human expectations. Our contributions include introducing _LayerDiffusion_ as the first method for simultaneous editing of specific subjects and backgrounds. We develop a layered diffusion training framework for controllable image editing, which opens up new possibilities for text-guided image editing tasks. In the future, we may focus on preserving complex textures and facial features.
Figure 6: We present several failure cases, including artifacts on faces and significant disparities in the camera angles of the images.
Figure 7: We compare several image editing methods using the CLIP and subjective user perception scores. Our method achieves a relatively higher score. |
2302.13784 | Solution for the EPO CodeFest on Green Plastics: Hierarchical
multi-label classification of patents relating to green plastics using deep
learning | This work aims at hierarchical multi-label patent classification for patents
disclosing technologies related to green plastics. This is an emerging field
for which there is currently no classification scheme, and hence, no labeled
data is available, making this task particularly challenging. We first propose
a classification scheme for this technology and a way to learn a machine
learning model to classify patents into the proposed classification scheme. To
achieve this, we come up with a strategy to automatically assign labels to
patents in order to create a labeled training dataset that can be used to learn
a classification model in a supervised learning setting. Using said training
dataset, we come up with two classification models, a SciBERT Neural Network
(SBNN) model and a SciBERT Hierarchical Neural Network (SBHNN) model. Both
models use a BERT model as a feature extractor and on top of it, a neural
network as a classifier. We carry out extensive experiments and report commonly used
evaluation metrics for this challenging classification problem. The experiment
results verify the validity of our approach and show that our model sets a very
strong benchmark for this problem. We also interpret our models by visualizing
the word importance given by the trained model, which indicates the model is
capable of extracting high-level semantic information from input documents. Finally,
we highlight how our solution fulfills the evaluation criteria for the EPO
CodeFest and we also outline possible directions for future work. Our code has
been made available at https://github.com/epo/CF22-Green-Hands | Tingting Qiao, Gonzalo Moro Perez | 2023-02-22T19:06:58Z | http://arxiv.org/abs/2302.13784v1 | Solution for the EPO CodeFest on Green Plastics: Hierarchical multi-label classification of patents relating to green plastics using deep learning
###### Abstract
This work aims at hierarchical multi-label patent classification for patents disclosing technologies related to green plastics. This is an emerging field for which there is currently no classification scheme, and hence, no labeled data is available, making this task particularly challenging. We first propose a classification scheme for this technology and a way to learn a machine learning model to classify patents into the proposed classification scheme. To achieve this, we come up with a strategy to automatically assign labels to patents in order to create a labeled training dataset that can be used to learn a classification model in a supervised learning setting. Using said training dataset, we come up with two classification models, a SciBERT Neural Network (SBNN) model and a SciBERT Hierarchical Neural Network (SBHNN) model. Both models use a BERT model as a feature extractor and on top of it, a neural network as a classifier. We carry out extensive experiments and report commonly used evaluation metrics for this challenging classification problem. The experiment results verify the validity of our approach and show that our model sets a very strong benchmark for this problem. We also interpret our models by visualizing the word importance given by the trained model, which indicates the model is capable of extracting high-level semantic information from input documents. Finally, we highlight how our solution fulfills the evaluation criteria for the EPO CodeFest and we also outline possible directions for future work. Our code has been made available at [https://github.com/epo/CF22-Green-Hands](https://github.com/epo/CF22-Green-Hands).
## 1 Introduction
A patent is a type of intellectual property that gives its owner the legal right to exclude others from making, using, or selling an invention for a limited period of time in exchange for publishing an enabling disclosure of the invention. Soon after filing, patent applications are classified following a classification scheme. The International Patent Classification (IPC) and the Cooperative Patent Classification (CPC), which is a more specific version of the IPC, are two of the most commonly used classification schemes. Both the IPC and the CPC are hierarchical classification schemes and patents are classified as deeply as possible, i.e. there is no double classification in "parent" and "child" classes.
It goes without saying that performing this classification manually by humans, e.g. by patent examiners, as it is currently done, is a very time-consuming task. Therefore, it would be very beneficial to be able to perform this classification automatically and there is an increasing amount of effort in using machine learning models for this purpose.
An additional challenge is that the classification schemes are not static. As technology evolves, some technical areas become obsolete while new ones appear. To reflect this, the IPC and CPC are revised periodically and, if needed, updated. Upon updating the classification scheme, classification models also need to be updated to take into account the changes. Just to provide an example, in 2022, several "child" classes were added to the class G06N10/00 for quantum computing, namely G06N10/20, G06N10/40, G06N10/60, G06N10/70 and G06N10/80. This adds a layer of complexity on top of the patent classification task as there is no training data available for the new classes. The obvious solution is to manually check all the patents classified in G06N10/00, assign "child" classes as necessary, and subsequently, update the classification model using this manually labeled data. However, this is a very time-consuming approach and it would be very beneficial to find an alternative way to update the classification model.
In this work, we focus on technologies relating to green plastics. The term "green plastics" refers to a way of achieving a more circular plastics industry, for example by plastics with a reduced or minimized environmental impact or by processes for improved plastics recycling and minimizing plastic waste. In the last couple of decades, the amount of patents disclosing technologies related to green plastics has increased considerably. In order to make the knowledge contained in said patents more readily available to everybody, there is a need for having a patent classification scheme relating to this technology.
We propose a classification scheme, based on [21], for technologies relating to green plastics and an automatic way to assign weak labels to patents so that a machine learning model, based on BERT as a feature extractor and a neural network as a classifier, can be learned in a supervised setting. To the best of our knowledge, this is the first work that applies a weak supervision strategy for the classification of patents in a new classification scheme.
The paper is organized as follows: First, related work is briefly discussed. Second, the classification scheme is introduced and a labeled training dataset is obtained. Consequently, a machine learning model is proposed and trained on the training dataset. Subsequently, several experiments are carried out and the results are reported and discussed. Finally, the fulfillment of the EPO CodeFest criteria is presented and future work is thoroughly discussed.
## 2 Related work
In the last decades, there is a large number of works focusing on building machine learning models for patent classification. Earlier works used traditional machine learning methods such as k-Nearest neighbors (k-NN) [6], support vector machine (SVM) [6; 26; 5], Naive Bayes (NB) [6; 5], k-means clustering [10] and artificial neural networks [24; 8].
In the past decade, deep learning techniques have also been applied to patent classification outperforming previous methods. [15] proposed a deep learning algorithm based on pre-trained word embeddings and convolutional neural networks (CNN). The input to the models consists of the title and the abstract of the patent and the output consists of one or multiple classes at the sub-class label. [9] proposed using BiLSTM initialized with word2vec and a hierarchical attention-based memory unit. The input to the model consists of the title and the abstract of the patent and the output consists of a probability value for each class in the hierarchy.
Recently, transformer-based neural language models such as BERT [4] have outperformed previous state-of-the-art models in several natural language processing tasks. Unsurprisingly, BERT has also been applied to patent classification. [14] proposed to fine-tune a pre-trained BERT model for patent classification. Very recently, [22] proposed to fine-tune a pre-trained BERT model combined with a neural network based hierarchical classifier. In particular, they used SciBERT [1], which is a BERT model trained on a corpus of scientific publications and is hence closer to the patent domain.
It is worthwhile to point out that most of these works focus on learning a model for classifying patent documents locally at a single hierarchical level, i.e. only "child" classes are assigned by the model, as e.g. [15] or [14], or globally by treating all labels independently, e.g. [9]. However, recent work [22] proposes a hierarchical neural network classifier with as many output heads as classes in the hierarchy.
All prior-art works mentioned above assume the existence of a labeled training dataset, i.e. they perform patent classification based on an already existing classification scheme. This is however not
the case in our work as we first need to propose a classification scheme and subsequently, we learn a classification model for the proposed classification scheme.
Weak supervision is an approach for machine learning in which noisy or imprecise labels are provided to learn a model in a supervised learning setting. There are plenty of works using weak supervision in different natural language processing tasks, such as text classification. Recently, [2] proposed multiple weak supervision strategies to label text data automatically. One of the strategies proposed is based on heuristic rules consisting of regular expression matching.
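As a purely hypothetical illustration of such heuristic rules (the class names and patterns below are examples only, not the rules used later in this work), regular expressions can map matched phrases in a document to candidate classes:

```python
import re

WEAK_RULES = {
    "plastic_to_feedstock_recycling": re.compile(r"\b(pyrolysis|gasification|cracking)\b", re.I),
    "bioplastics": re.compile(r"\b(bioplastic|bio-based polymer|polylactic acid)\b", re.I),
}

def weak_labels(patent_text):
    """Return the set of candidate classes whose rule fires on the text."""
    return {label for label, pattern in WEAK_RULES.items() if pattern.search(patent_text)}

# Example: weak_labels("A process for the pyrolysis of mixed plastic waste ...")
# returns {"plastic_to_feedstock_recycling"}.
```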
## 3 Classification scheme
As discussed above, there is no existing classification scheme for technologies relating to green plastics. Therefore, a classification scheme, based on [21], is proposed as shown in Figure 1a. The class definitions are summarized in Table 1. The root node is Y02G for technologies relating to green plastics, which is divided into Y02G10/00 for recycling plastic waste and Y02G20/00 for alternative plastics. Y02G10/00 is in turn divided into Y02G10/10 for plastic waste recovery, including collecting, sorting, separating and cleaning plastic waste; and Y02G10/20 for plastic waste recycling, including recycling methods such as plastic-to-compost, plastic-to-monomer, plastic-to-incineration and plastic-to-energy. Y02G10/20 also has two child classes, which are Y02G10/22 for plastic-to-product recycling, including mechanical recycling, such as melting and reforming thermoplastics, and Y02G10/24 for plastic-to-feedstock recycling, such as cracking, gasification and pyrolysis. Y02G20/00 is divided into Y02G20/10 for bioplastics and Y02G20/20 for designs for easier recycling.
## 4 Solution design
Now that we have a classification scheme, we need to propose a way to classify patents. An option could be to assign keywords to classes and, if any of those keywords are present in a patent, assign the corresponding classes to the patent accordingly. However, such a simple approach has several weaknesses. It is limited to the specified keywords and cannot extend to any synonyms which have not been explicitly specified. Moreover, it is unable to find semantically equivalent expressions which do not match the specified keywords.
In contrast, machine learning models have the capacity to learn high-level semantic representations from data without the need for hand-crafted rules from domain experts. For this reason, we decided to propose a machine learning model.
However, at this point, we do not have a labeled training dataset for our classification scheme. Learning a machine learning classification model in an unsupervised setting (when the training dataset is unlabeled) is not feasible as it would not be possible to control the classes that the model learns. Therefore, we decided to learn a classification model in a supervised setting (when the training dataset is labeled). In the following sections, we explain how we build our labeled training dataset and how we obtain our classification models.
Figure 1: a) Proposed classification scheme for technologies related to green plastics. b) Hierarchical labels: A patent belonging to class Y02G10/22 (shown in green) is considered to also belong to all the ancestor classes (shown in blue) and it is given the label \(l\) = [1, 1, 0, 1, 1, 0, 0, 0, 0]; as opposed to merely \(l\) = [0, 0, 0, 0, 1, 0, 0, 0, 0]. Note that the label list corresponds to the class list [Y02G, Y02G10/00, Y02G10/10, Y02G10/20, Y02G10/22, Y02G10/24, Y02G20/00, Y02G20/10, Y02G20/20]. Best viewed in color.
## 5 Training dataset
To learn a machine learning classification model in a supervised setting, we need to prepare a labeled training dataset \(X\) = {(\(P_{1}\), \(l_{1}\)), (\(P_{2}\), \(l_{2}\)),..., (\(P_{M}\), \(l_{M}\))}, where each patent \(P_{i}\) consists of a sequence of \(N\) words, i.e. \(P_{i}\) = {\(w_{1}\), \(w_{2}\),..., \(w_{N}\)}, and is associated with a label \(l_{i}\). Ideally, the labels would be assigned to the patents manually by humans. However, this is a very time-consuming task and therefore, we explore a way to provide weak labels in an automatic manner.
### Raw dataset
To build the training dataset, we use the EP full text data [16] which contains all publications of EP patent applications and patent specifications from 1978 until the end of January 2022. The dataset comprises over 6 million publications, each containing several fields such as title, abstract, description, claims, language, filing date, publication date, classes, search report, etc. The entire dataset is about 260 gigabytes in size.
EP patents may be published in English, French or German. Since available pre-trained language models are usually monolingual [4; 1], we select only patents written in English. For each patent, the title, the abstract and the description are retained while the other fields are discarded. Some standard text pre-processing steps, such as removing punctuation marks and stop-words, lowercasing and tokenization, are carried out.
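As a rough illustration of this step, a minimal pre-processing function could look like the sketch below; the stop-word list and the whitespace tokenizer are placeholders rather than the exact choices made in our pipeline.

```python
import re
import string

# A tiny illustrative stop-word list; the list used in practice is much larger.
STOP_WORDS = {"the", "a", "an", "of", "and", "or", "to", "in", "for", "is", "are"}

def preprocess(text: str) -> list[str]:
    """Lowercase, strip punctuation, tokenize on whitespace and drop stop-words."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = re.split(r"\s+", text.strip())
    return [t for t in tokens if t and t not in STOP_WORDS]

# Example on a fragment of a patent title:
print(preprocess("A Method for the Recycling of Plastic Waste."))
# ['method', 'recycling', 'plastic', 'waste']
```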
This dataset is unlabeled since the patents are not assigned classes in the classification hierarchy described in Figure 0(a). In the following, we explain the structure of the labels and how to assign them to build a labeled training dataset.
### Hierarchical label definition
According to our classification scheme, there are nine classes and hence, each patent will be assigned a label \(l\) consisting of a 9-dimensional binary vector, where 1 means that the patent belongs to the corresponding class and 0 means that the patent does not belong to the corresponding class. The labels look as follows: \(l\) = [Y02G, Y02G10/00, Y02G10/10, Y02G10/20, Y02G10/22, Y02G10/24, Y02G20/00, Y02G20/10, Y02G20/20], where each element is a binary value associated with the corresponding class.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Class** & **Definition** & **Keywords** \\ \hline Y02G & Green plastics & green+ 4d plastic+ \\ \hline Y02G10/00 & Recycling of plastic waste & recycl+ 4d plastic+ \\ \hline Y02G10/10 & Plastic waste recovery & (plastic+ 3d wast+) or ((collect+ or sort+ or separat+ or clean+) 6d plastic+) \\ \hline Y02G10/20 & Plastic waste recycling & ((recycle+ 4d plastic+) 20d (compost+ or fertili+)) or ((recycle+ 4d plastic+) 20d (depolymer+ or repolymer+)) \\ & & or ((recycle+ 4d plastic+) 4d incinerat+) \\ \hline Y02G10/22 & Plastic-to-product & (plastic+ 4d recycle+) 20d (+melt+ or extrud+ or pellet+) \\ \hline Y02G10/24 & Plastic-to-feedstock & ((feedstock+ 2d recycl+) 20d plastic+) \\ \hline Y02G20/00 & Alternative plastics & alternativ+ 2d plastic+ \\ \hline Y02G20/10 & Bioplastics & bioplastic+ or ((biolog+ or biobagrad+ or biobased+ or compostable+) 4d plastic+) \\ \hline Y02G20/20 & Designs for easier recycling & vitrimer+ or ((covalent+ 2d adapt+) 2d net+) or ((selfheal+ or selfrepair+) 2d polymer+) 20d recycl+ \\ \hline \end{tabular}
\end{table}
Table 1: Definition and keywords for each class in the proposed classification scheme. The symbol ”+” means none or more characters. The expression ”\(w_{1}\)_nd_\(w_{2}\)” means that between words \(w_{1}\) and \(w_{2}\) there may be \(n\) other words and the relative order in the sequence of words between \(w_{1}\) and \(w_{2}\) is irrelevant.
In order to provide more informative labels to the model for training, we propose to use hierarchical labels, meaning that for a patent belonging to a certain class, we also assign all the ancestor classes of said class. The idea behind this is that the model should learn that the given patent belongs to all the corresponding ancestors as well. This is illustrated in Figure 0(b).
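The ancestor propagation can be expressed with a simple parent map; the sketch below is illustrative and assumes the class ordering given in Figure 1b.

```python
CLASSES = ["Y02G", "Y02G10/00", "Y02G10/10", "Y02G10/20", "Y02G10/22",
           "Y02G10/24", "Y02G20/00", "Y02G20/10", "Y02G20/20"]

# Parent of each class in the proposed scheme (the root Y02G has no parent).
PARENT = {
    "Y02G10/00": "Y02G", "Y02G20/00": "Y02G",
    "Y02G10/10": "Y02G10/00", "Y02G10/20": "Y02G10/00",
    "Y02G10/22": "Y02G10/20", "Y02G10/24": "Y02G10/20",
    "Y02G20/10": "Y02G20/00", "Y02G20/20": "Y02G20/00",
}

def hierarchical_label(assigned: set[str]) -> list[int]:
    """Return the 9-dimensional binary label with all ancestor classes switched on."""
    full = set()
    for c in assigned:
        while c is not None:
            full.add(c)
            c = PARENT.get(c)
    return [1 if c in full else 0 for c in CLASSES]

# A patent matched only by the Y02G10/22 keywords:
print(hierarchical_label({"Y02G10/22"}))
# [1, 1, 0, 1, 1, 0, 0, 0, 0]
```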
### Labeling process
To assign the labels to the patents in an automatic way, we first define the keywords for each class. The list of keywords is provided in Table 1. We then translate the keywords into regular expressions and search for them in the description of each patent. If a keyword is found at least \(k\) times, the patent is assigned to the class corresponding to the found keyword. This process is represented schematically in Figure 2.
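The exact regular expressions are not reproduced here, but a proximity pattern such as "recycl+ 4d plastic+" could, for instance, be translated along the following lines; this is a simplified sketch that ignores some corner cases of the full keyword syntax of Table 1.

```python
import re

def proximity_pattern(w1: str, n: int, w2: str) -> re.Pattern:
    """Build a regex for 'w1 nd w2': w1 and w2 separated by at most n words, in either order.
    A trailing '+' in a keyword means 'none or more characters'."""
    def stem(w):
        return re.escape(w.rstrip("+")) + (r"\w*" if w.endswith("+") else "")
    gap = r"(?:\W+\w+){0,%d}\W+" % n
    return re.compile(r"\b(?:%s%s%s|%s%s%s)\b" % (stem(w1), gap, stem(w2),
                                                  stem(w2), gap, stem(w1)),
                      flags=re.IGNORECASE)

def count_matches(description: str, pattern: re.Pattern) -> int:
    return len(pattern.findall(description))

# 'recycl+ 4d plastic+' from the Y02G10/00 row of Table 1:
pat = proximity_pattern("recycl+", 4, "plastic+")
text = "The invention relates to the recycling of post-consumer plastic material."
print(count_matches(text, pat) >= 1)   # True -> assign Y02G10/00 (with k = 1)
```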
### Building the training dataset
The labeling process described in Figure 2 is then applied to all the raw data that survived our quality checks and filters (e.g. English files without missing content). In order to obtain a labeled training dataset, we sample all patents which have been labeled as belonging to green plastics (positive samples) and around two times more patents which have not been labeled as belonging to green plastics (negative samples). For the negative samples, we randomly sample patents related to any other field of technology. Specifically, in a similar way as for the patents related to green plastics, we also include patents related to conventional plastics in our negative sample set, so that the classification model can learn to differentiate between conventional plastics and green plastics. The number of patents belonging to green plastics and the number of patents belonging to other fields of technology are shown in Figure 2(a). It can be seen that the constructed dataset is imbalanced, meaning that the samples of one class (in this case, green plastics) are considerably fewer than the samples of other classes. This is done on purpose to mimic the real situation and also to challenge the model to learn better parameters. A sample of the labeled training dataset can be seen in Figure 2(b), where the column TITLE_ABSTR is the input, i.e. a combination of the title and the abstract of one patent file, and the target columns indicate whether the patent belongs to the corresponding classes (1 means yes).
We split the resulting labeled dataset into a training set, a validation set and a test set. The breakdown of the number of patents for each class in each of these sets is shown in Table 2. This table shows the number of positive samples (+) and negative samples (-) for each class. As can be seen, the
Figure 3: a) Number of patents belonging to green plastics and number of patents belonging to other fields of technology. b) Sample from the labeled training dataset. After labeling, the description is dropped and only the title and the abstract are used as input for the classification model.
Figure 2: Schematic representation of the labeling process.
dataset is heavily imbalanced, above all for some classes that are deeper in the class hierarchy, such as Y02G10/24 or Y02G20/20, which have fewer than 100 positive samples.
At this point, it could be possible to try to obtain more positive samples for those classes by enlarging the set of keywords in Table 1. However, expert knowledge would be required to suggest more relevant keywords. Another possibility could be to ignore classes that have fewer than a threshold number of samples (e.g. 100 or 500), since it will be very hard for the machine learning model to learn those classes given the large imbalance. However, we decided to continue with the dataset of Table 2. In the evaluation section, we discuss how we believe that this imbalance could be tackled. Moreover, we also report results for each level of the classification hierarchy and for each class, which should not be considerably affected by the presence of these heavily imbalanced classes during training.
## 6 Classification model
Our goal is to learn a classification model \(\Omega\) with parameters \(\Theta\) such that given a patent \(P\), it returns a 9-dimensional classification result \(y\), where each element in \(y\) corresponds to the probability of the patent \(P\) belonging to the corresponding class in the classification scheme of Figure 0(a). Mathematically,
\[y=\Omega(P;\Theta);\]
### Architecture
We propose two classification models as shown in Figure 4. Both models use a BERT model as a feature extractor and a neural network as a classifier. In the first model, SBNN, the classifier is a conventional neural network and in the second model, SBHNN, the classifier is a hierarchical neural network. Similar to [22], the BERT model is a pre-trained SciBERT model [1]. The input to the model is a sequence of 256 tokens obtained by concatenating the patent's title and abstract, as done in [15; 9; 14; 22], and the output is a 768-dimensional feature vector \(h\) corresponding to the CLS embedding, as done in [14; 22]:
\begin{table}
\begin{tabular}{|c|c c|c c|c c|} \hline
**Class** & \multicolumn{2}{c|}{**Training set**} & \multicolumn{2}{c|}{**Validation set**} & \multicolumn{2}{c|}{**Test set**} \\ & + & - & + & - & + & - \\ \hline Y02G & 26286 & 47160 & 3244 & 5937 & 3259 & 5922 \\ \hline Y02G10/00 & 19106 & 54340 & 2335 & 6846 & 2376 & 6805 \\ \hline Y02G10/10 & 17452 & 55994 & 2150 & 7031 & 2171 & 7010 \\ \hline Y02G10/20 & 494 & 72952 & 63 & 9118 & 61 & 9120 \\ \hline Y02G10/22 & 400 & 73046 & 50 & 9131 & 51 & 9130 \\ \hline Y02G10/24 & 33 & 73413 & 3 & 9178 & 4 & 9177 \\ \hline Y02G20/00 & 7755 & 65691 & 977 & 8204 & 946 & 8235 \\ \hline Y02G20/10 & 2901 & 70545 & 375 & 8806 & 376 & 8805 \\ \hline Y02G20/20 & 10 & 73436 & 0 & 9181 & 1 & 9180 \\ \hline \hline Total & 73446 & \multicolumn{2}{c|}{9181} & \multicolumn{2}{c|}{9181} \\ \hline \end{tabular}
\end{table}
Table 2: Number of patents for each class in the training, validation and test sets. This is obtained with a threshold \(k=1\). ”+” indicates positive samples, i.e. patents belonging to the corresponding class, and ”-” indicates negative samples, i.e. patents not belonging to the corresponding class. Patents counted for a given class are also counted for all the ancestors of that class. For example, if a patent is counted for Y02G10/22, it is also counted for Y02G10/20, Y02G10/00 and Y02G, in line with the labeling process described in Figure 0(b). The last row indicates the total amount of patents in each of the training, validation and test sets.
\[h=SciBERT(w_{1},w_{2},...,w_{N};\Theta_{B});\]
where \(\Theta_{B}\) represents the parameters of SciBERT.
The feature vector \(h\) is subsequently input to a neural network classifier \(NN\) which outputs a classification result:
\[y=NN(h;\Theta_{N});\]
where \(\Theta_{N}\) represents the parameters of the neural network.
The neural network is implemented by cascading fully connected layers (FC), which perform a linear matrix multiplication and add a bias term, with non-linear activation functions, e.g. ReLU or the sigmoid \(\sigma\). Mathematically, a fully connected layer with a non-linear activation function is implemented as follows:
\[t=a(Wx+b);\]
where \(x\) and \(t\) are the input and output, \(W\) is a learnable weight matrix, \(b\) is a learnable bias vector and \(a(z)\) denotes a non-linear function, such as:
\[ReLU(z)=max\{0,z\};\ \ \ \ \ \ \sigma(z)=\frac{1}{1+e^{-z}}\]
In SBNN, the neural network classifier consists of a fully connected layer with ReLU activation followed by another fully connected layer with sigmoid activation. In SBHNN, inspired by [22], the neural network classifier consists of a hierarchical neural network with one classification head per class in which the connections between different classification heads, implemented as element-wise additions, directly reflect the classification scheme of Figure 0(a).
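For concreteness, a condensed sketch of the SBNN variant is given below, assuming the publicly available `allenai/scibert_scivocab_uncased` checkpoint and the HuggingFace `transformers` interface; the width of the intermediate layer and the dropout placement are placeholders, and the hierarchical head wiring of SBHNN is omitted for brevity.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SBNN(nn.Module):
    """SciBERT feature extractor followed by a plain two-layer classifier."""
    def __init__(self, n_classes: int = 9, hidden: int = 256):
        super().__init__()
        self.bert = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")
        self.classifier = nn.Sequential(
            nn.Linear(768, hidden), nn.ReLU(),
            nn.Dropout(0.5),                     # dropout placement is illustrative
            nn.Linear(hidden, n_classes), nn.Sigmoid(),
        )

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        h = out.last_hidden_state[:, 0, :]       # 768-dim [CLS] embedding
        return self.classifier(h)                 # 9 class probabilities

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
batch = tokenizer(["Title. Abstract of the patent ..."], padding="max_length",
                  truncation=True, max_length=256, return_tensors="pt")
model = SBNN()
y = model(batch["input_ids"], batch["attention_mask"])   # shape (1, 9)
```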
### Loss function
The model \(\Omega\) is trained using the labeled training dataset \(X\) to minimize the binary cross-entropy loss, which, for a single patent, is expressed as follows:
Figure 4: a) Architecture of SBNN. b) Architecture of SBHNN. The feature extractor is displayed in orange and the classifier is displayed in green. The blue arrows indicate the hierarchical connections between classification heads. The sizes of the vectors are shown in grey. Best viewed in color.
\[L=-\sum_{i=0}^{C-1}\beta_{i}\left[\gamma_{i}\cdot l_{i}\cdot\log(y_{i})+(1-l_{i})\cdot\log(1-y_{i})\right];\]
where \(C=9\) represents the number of classes, \(\beta\) is a vector of class importance weights (if \(\beta\) = [1, 1, 1, 1, 1, 1, 1, 1, 1], all classes are given the same importance) and \(\gamma\) is a positive sample weight that compensates for the imbalanced training set; in practice it acts as a trade-off between precision and recall (if, for a given class \(c\), \(\gamma_{c}\) = 1, false positives and false negatives are given the same importance).
During training, we update the parameters \(\Theta_{B}\) and \(\Theta_{N}\) to minimize this loss function by mini-batch gradient descent with Adam optimizer [11].
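A compact way to write this weighted loss with PyTorch primitives is sketched below; the \(\beta\) and \(\gamma\) vectors are the ones quoted later in Section 7.2, and the random tensors merely stand in for a real mini-batch.

```python
import torch

def weighted_bce(y, l, beta, gamma, eps: float = 1e-7):
    """Binary cross-entropy of Section 6.2 with class-importance weights beta
    and positive-sample weights gamma; y and l are (batch, 9) tensors."""
    y = y.clamp(eps, 1.0 - eps)
    per_class = gamma * l * torch.log(y) + (1.0 - l) * torch.log(1.0 - y)
    return -(beta * per_class).sum(dim=1).mean()

# Weights quoted in Section 7.2: beta favours classes high in the hierarchy,
# gamma up-weights the rare positive samples.
beta  = torch.tensor([4., 3., 2., 2., 1., 1., 3., 2., 2.])
gamma = torch.tensor([2.] * 9)

y = torch.rand(8, 9)                          # model outputs (after the sigmoid)
l = torch.randint(0, 2, (8, 9)).float()       # hierarchical labels
print(weighted_bce(y, l, beta, gamma))

# The parameters would then be updated with Adam, e.g.
# optimizer = torch.optim.Adam(model.parameters(), lr=2e-6)
```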
## 7 Evaluation
We evaluate the performance of our models on the test set using several evaluation metrics which are commonly used in hierarchical text classification tasks. In the following, we report and analyze the results.
### Evaluation metrics
Prior art [22] used _hierarchical_ precision, _hierarchical_ recall and _hierarchical_ F1-score as proposed by [12]. These metrics are more suitable for hierarchical classification tasks than the conventional precision, recall and F1 score as they give credit to partially correct classifications and discriminate errors by both distance and depth in the classification hierarchy. These are defined as follows:
\[hP=\frac{\sum_{i}|Y_{i}\cap L_{i}|}{\sum_{i}|Y_{i}|};\ \ \ \ \ \ hR=\frac{\sum_{i}|Y_{i}\cap L_{i}|}{\sum_{i}|L_{i}|};\ \ \ \ \ \ hF1=2\cdot\frac{hP\cdot hR}{hP+hR};\]
wherein, for each test instance \(i\), the set \(Y_{i}\) consists of all predicted labels and their respective ancestors, the set \(L_{i}\) consists of all true labels including ancestors, \(|\cdot|\) denotes the cardinality of a set and \(\cap\) denotes the intersection of sets. Hence, if for a given patent the predicted class is Y02G10/20 while the target class is Y02G10/22, the set \(Y_{i}\) = {Y02G10/20, Y02G10/00, Y02G} and the set \(L_{i}\) = {Y02G10/22, Y02G10/20, Y02G10/00, Y02G}. The patent is correctly assigned \(|Y_{i}\cap L_{i}|\) = 3 classes from the extended sets; there are \(|Y_{i}|\) = 3 assigned classes and \(|L_{i}|\) = 4 target classes. Therefore, we get \(hP=\frac{3}{3}\) and \(hR=\frac{3}{4}\).
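For concreteness, the micro-averaged variants of these scores can be computed as in the short sketch below (the macro variants would instead average per-class scores); the truncated parent map is only meant to reproduce the worked example above.

```python
def extend_with_ancestors(labels: set, parent: dict) -> set:
    """Add all ancestor classes to a set of predicted or true classes."""
    full = set()
    for c in labels:
        while c is not None:
            full.add(c)
            c = parent.get(c)
    return full

def hierarchical_scores(pred_sets, true_sets, parent):
    """Micro-averaged hierarchical precision, recall and F1 over all test patents."""
    inter = pred_tot = true_tot = 0
    for Y, L in zip(pred_sets, true_sets):
        Y = extend_with_ancestors(Y, parent)
        L = extend_with_ancestors(L, parent)
        inter += len(Y & L)
        pred_tot += len(Y)
        true_tot += len(L)
    hP, hR = inter / pred_tot, inter / true_tot
    return hP, hR, 2 * hP * hR / (hP + hR)

# The worked example from the text: predicted Y02G10/20, target Y02G10/22.
PARENT = {"Y02G10/00": "Y02G", "Y02G10/20": "Y02G10/00", "Y02G10/22": "Y02G10/20"}
print(hierarchical_scores([{"Y02G10/20"}], [{"Y02G10/22"}], PARENT))
# -> (1.0, 0.75, 0.857...)
```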
As done by [22], we report per-instance \(macro\)-scores, that compute the scores independently per class and then average them, as well as \(micro\)-scores, that aggregate the contributions of each class to compute a global average score. Both are implemented with scikit-learn [20]. Unless otherwise stated, the decision threshold is set to 0.5, meaning that we consider that a patent is assigned to a class if the probability at the classification model output, i.e. after the sigmoid layer, is at least 0.5.
In addition, we compute the Area Under the Precision-Recall Curve (AUPRC) [3], which does not require defining a decision threshold. We also compute the accuracy, even though this is not the best metric when the dataset is imbalanced. We use the implementation from scikit-learn [20] to compute both the AUPRC and the accuracy.
### Implementation details
We build the labeled training dataset using the EP full text data [16] as indicated in Figure 2. We download and store all the data in a Toshiba Canvio Basics USB 3.0 Hard Drive with 1 TB of storage. Subsequently, we label and sample the training dataset on a MacBook Pro with a 2.7 GHz Dual-Core Intel Core i5.
We implement our models in Python using PyTorch [19]. We use the HuggingFace Transformers library [25] for instantiating SciBERT. We use the same hyper-parameters for training both classification models: a learning rate of 2e-6, a batch size of 96 and a dropout of 0.5. The class importance is set to \(\beta\) = [4, 3, 2, 2, 1, 1, 3, 2, 2], hence giving more importance to classes that are higher up in the classification hierarchy. The positive sample weight is set to \(\gamma\) = [2, 2, 2, 2, 2, 2, 2, 2, 2] to compensate for the class imbalance. The parameters of SciBERT are initialized to the pre-trained values and the parameters of the neural network are initialized randomly.
We train the models until the loss and the error on the validation set stop decreasing, in order to avoid overfitting. In Figure 5, we show the evolution of the train and validation loss and error. For SBNN, we chose the model after 6 training epochs; for SBHNN, we chose the model after 5 epochs. The training of a single model takes approximately 1.2 hours on an Nvidia GeForce RTX 3090 with 24GB of GDDR6X memory.
### Experimental results
We perform experiments to verify the validity of our classification models. Since we are dealing with a hierarchical multi-label classification problem with imbalanced data, in order to give a complete overview, we report results based on the whole classification hierarchy, based on each level in the classification hierarchy and based on each class.
**Performance for the whole classification hierarchy.** Table 3 shows the overall results obtained for SBNN and SBHNN. From Table 3 we can see that SBHNN achieves better performance than SBNN in terms of both macro and micro F1 scores. For both models, the micro scores are considerably higher than the macro scores. This is due to the fact that macro scores calculate metrics for each class and compute their unweighted mean, without taking class imbalance into account. On the other hand, micro scores calculate the metrics globally by considering each output independently. Therefore, they are less affected by the classes which have few positive training samples and, hence, provide higher scores.
It is noticeable that the overall results of both models are not very high. Similar figures are commonly reported in the literature [22]. The overall low performance is due to the challenging nature of the hierarchical classification problem. Moreover, in our case, the deeper the class level is, the fewer positive samples we have, and therefore, the more challenging it is to train a well-performing classification model. It should be kept in mind that the calculations of these evaluation metrics are negatively biased by these classes with insufficient data.
**Performance for each level in the classification hierarchy.** Table 4 shows the results per level obtained for SBNN and SBHNN. It is clearly shown that both SBNN and SBHNN perform better, both in precision and recall, the higher the class is in the classification hierarchy. For the first level (Y02G), the precision and recall of both models are higher than 70\(\%\). For the second level (Y02G10/00 and Y02G20/00), they still reach 50-60\(\%\), and for the third level (Y02G10/10, Y02G10/20, Y02G20/10 and Y02G20/20) and the fourth level (Y02G10/22 and Y02G10/24), they are around 30-40\(\%\). This is to be expected as the number of positive training samples in lower levels is considerably lower (see Table 2); therefore, there is not much information for the models to learn from. Moreover, we also gave a higher class importance, i.e. \(\beta\), to classes in higher levels of the classification scheme during the training process, so the models were trained to focus more on the classes which have more positive samples.
Figure 5: Evolution of the train and validation loss and error. Best viewed in color.
We can also see that SBNN actually performs slightly better in level 1 in terms of both macro and micro F1 scores, whereas SBHNN obtains better results in lower levels. This is probably because of the unique architecture of SBHNN, in which there are independent classification heads for each class. The model can thus take advantage of the more complex neural network and learn better from the data.
**Performance for each class.** Table 5 shows the results per class obtained for SBNN and SBHNN. In line with the results obtained in Table 4, classes that are at the top of the classification hierarchy have better scores and, as we get deeper in the hierarchy, both models struggle to maintain the performance. As mentioned in relation to the results of Table 4, this is to be expected due to the heavy class imbalance and to the class importance \(\beta\). Another potential explanation is that the model currently only considers the title and the abstract. It is possible that, based solely on this information, it is challenging, if not impossible, to classify the patent so deep in the hierarchy. We believe that more input information could help with this issue. In the future work section, we discuss an approach that could be used to overcome this limitation, i.e. enriching the input with content from the description of the patents.
Figure 6 shows the hierarchical precision - hierarchical recall curves for each class when modifying the decision threshold (which for the results shown in Tables 3, 4 and 5 was set to 0.5). These curves show how it is possible to trade off precision and recall by modifying the decision threshold.
### Explainability
There is an increasing interest in being able to explain the predictions of machine learning models. For this purpose, we implemented the method of integrated gradients [23], which allows us to find out which of the input words are the most important for the classification model when making a prediction. We used the Captum library [13] for this. In Figure 6(a), we show an example of a patent that does not relate to green plastics and in Figure 6(b), we show an example of a patent that relates to green plastics.
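A condensed sketch of how such token attributions can be obtained with Captum's `LayerIntegratedGradients` is shown below, assuming the SBNN-style `model` and `tokenizer` from the sketch in Section 6.1; the all-[PAD] baseline and the number of integration steps are illustrative choices rather than the exact settings used here.

```python
import torch
from captum.attr import LayerIntegratedGradients

model.eval()                                   # disable dropout for attribution

def forward_for_class(input_ids, attention_mask, class_idx=0):
    # Probability of the patent belonging to the chosen class (here Y02G).
    return model(input_ids, attention_mask)[:, class_idx]

lig = LayerIntegratedGradients(forward_for_class, model.bert.embeddings)

text = "Process for recycling biodegradable plastic packaging"
enc = tokenizer(text, return_tensors="pt", padding="max_length",
                truncation=True, max_length=256)
baseline = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)

attributions = lig.attribute(inputs=enc["input_ids"],
                             baselines=baseline,
                             additional_forward_args=(enc["attention_mask"],),
                             n_steps=50)
# One score per token: positive pushes towards 'green plastics', negative away.
token_scores = attributions.sum(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
for tok, s in zip(tokens, token_scores):
    if tok != "[PAD]":
        print(f"{tok:>15s}  {s.item():+.3f}")
```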
This information is useful for understanding how the classification model is making the predictions. We believe that showing this information could help patent examiners gain trust in the classification model and also quickly decide whether the predictions of the classification model are reasonable or not.
\begin{table}
\begin{tabular}{|c|c|c c c|c c c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{**macro-avg.**} & \multicolumn{4}{c|}{**micro-avg.**} & **AUPRC** & **Accuracy** \\ & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{5-10} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c|}{} \\ \hline & 1 & **0.7173** & 0.7160 & **0.7166** & **0.7412** & **0.7412** & **0.7412** & **0.7076** & **0.7412** \\ & 2 & **0.6276** & 0.4436 & 0.4805 & **0.6198** & 0.5341 & 0.5738 & 0.6325 & **0.6580** \\ & 3 & 0.4612 & 0.2944 & 0.3322 & **0.6113** & 0.5118 & 0.5571 & 0.6058 & **0.6317** \\ & 4 & 0.3587 & 0.2290 & 0.2584 & **0.6113** & 0.5087 & 0.5553 & 0.6026 & **0.6317** \\ \hline & 1 & 0.7096 & **0.7208** & 0.7131 & 0.7294 & 0.7294 & 0.7294 & 0.7057 & 0.7294 \\ & 2 & 0.5546 & **0.4873** & **0.5052** & 0.5890 & **0.5716** & **0.5802** & **0.6342** & 0.6269 \\ & 3 & **0.5172** & **0.3441** & **0.3820** & 0.5880 & **0.5423** & **0.5642** & **0.6108** & 0.6032 \\ & 4 & **0.4856** & **0.2807** & **0.3197** & 0.5882 & **0.5398** & **0.5629** & **0.6087** & 0.6032 \\ \hline \end{tabular}
\end{table}
Table 4: Evaluation metrics for each level in the classification hierarchy. Level 1 corresponds to Y02G; level 2 to Y02G10/00 and Y02G20/00; level 3 to Y02G10/10, Y02G10/20, Y02G20/10 and Y02G20/20; and level 4 to Y02G10/22 and Y02G10/24.
\begin{table}
\begin{tabular}{|l|c c c|c c c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{**macro-avg.**} & \multicolumn{3}{c|}{**micro-avg.**} & \multirow{2}{*}{**AUPRC**} & \multirow{2}{*}{**Accuracy**} \\ & hP & hR & hF1 & hP & hR & hF1 & & \\ \hline SBNN & 0.3587 & 0.2290 & 0.2584 & **0.6113** & 0.5087 & 0.5553 & **0.6317** & 0.6026 \\ \hline SBHNN & **0.4856** & **0.2807** & **0.3197** & 0.5882 & **0.5398** & **0.5629** & 0.6032 & **0.6087** \\ \hline \end{tabular}
\end{table}
Table 3: Evaluation metrics for the whole classification hierarchy.
\begin{table}
\begin{tabular}{|c|c|c c c|c c c|c|c|} \hline & & & \multicolumn{3}{c|}{**macro-avg.**} & \multicolumn{3}{c|}{**micro-avg.**} & **AUPRC** & **Accuracy** \\ Model & Class & hP & hR & hF1 & hP & hR & hF1 & & \\ \hline \multirow{9}{*}{SBNN} & Y02G & **0.7173** & 0.7160 & **0.7166** & **0.7412** & **0.7412** & **0.7412** & **0.7076** & **0.7412** \\ & Y02G10/00 & **0.6145** & 0.5946 & 0.6043 & **0.6186** & 0.6000 & 0.6091 & 0.6710 & **0.6833** \\ & Y02G10/10 & **0.6020** & 0.5661 & **0.5832** & **0.6078** & 0.5747 & 0.5908 & 0.6443 & **0.6596** \\ & Y02G10/20 & 0.4097 & 0.3964 & 0.4029 & **0.6186** & 0.5936 & 0.6058 & 0.6665 & **0.6780** \\ & Y02G10/22 & 0.3073 & 0.2973 & 0.3022 & **0.6186** & 0.5883 & 0.6031 & 0.6627 & **0.6780** \\ & Y02G10/24 & 0.3073 & 0.2973 & 0.3022 & **0.6186** & 0.5932 & 0.6056 & 0.6658 & **0.6780** \\ & Y02G20/00 & **0.6454** & 0.3853 & 0.4330 & **0.6382** & 0.5194 & 0.5727 & 0.6414 & **0.6913** \\ & Y02G20/10 & **0.6865** & 0.3305 & 0.4030 & **0.6422** & 0.4949 & 0.5590 & 0.6229 & **0.6872** \\ & Y02G20/20 & **0.4303** & 0.2569 & 0.2886 & **0.6382** & 0.5193 & 0.5726 & 0.6411 & **0.6913** \\ \hline \multirow{9}{*}{SBHN} & Y02G & 0.7096 & **0.7208** & 0.7131 & 0.7294 & 0.7294 & 0.7294 & 0.7057 & 0.7294 \\ & Y02G10/00 & 0.5958 & **0.6184** & **0.6053** & 0.5979 & **0.6298** & **0.6134** & **0.6710** & 0.6584 \\ & Y02G10/10 & 0.5904 & **0.5789** & 0.5824 & 0.5935 & **0.5936** & **0.5936** & **0.6447** & 0.6365 \\ & Y02G10/20 & **0.6472** & **0.4451** & **0.4615** & 0.5981 & **0.6241** & **0.6108** & **0.6676** & 0.6537 \\ & Y02G10/22 & **0.6729** & **0.3632** & **0.3970** & 0.5983 & **0.6196** & **0.6088** & **0.6645** & 0.6537 \\ & Y02G10/24 & **0.4854** & **0.3338** & **0.3461** & 0.5981 & **0.6237** & **0.6106** & **0.6672** & 0.6537 \\ & Y02G20/00 & 0.5381 & **0.4581** & **0.4747** & 0.5897 & **0.5862** & **0.5880** & **0.6429** & 0.6731 \\ & Y02G20/10 & 0.5677 & **0.4215** & **0.4658** & 0.5915 & **0.5667** & **0.5788** & **0.6288** & 0.6679 \\ & Y02G20/20 & 0.3587 & **0.3054** & **0.3165** & 0.5897 & **0.5861** & **0.5879** & **0.6428** & 0.6731 \\ \hline \end{tabular}
\end{table}
Table 5: Evaluation metrics for each class in the classification hierarchy.
Figure 6: Hierarchical precision - hierarchical recall curves for each class. Best viewed in color.
## 8 EPO CodeFest evaluation criteria
In this section, we highlight how we believe that our solution fulfills the evaluation criteria.
### Completeness and transferability.
We have provided an end-to-end solution that covers all the steps, i.e. from pre-processing of the raw xml data to building a labeled training dataset, to training and evaluating classification models based on state-of-the-art machine learning algorithms. Together with the code repository in GitHub, we also submit slides to help people quickly understand our approach and this paper to present all the relevant technical details.
Our solution and code base are far from being limited to this specific problem of classifying patents relating to green plastics but can easily be applied to different situations. Whenever there is an update of the classification scheme, it is possible to use our code to construct a labeled training dataset and to train and evaluate a classification model for the updated classification scheme. Our code is very flexible and most modifications only need to be done in a configuration file. Moreover, it is also possible to skip the labeling process and to use our code to directly train a classification model with a given labeled training dataset for an already existing classification scheme.
Figure 7: a) Negative sample. b) Positive sample. Red means that the highlighted words decrease the probability of the patent being classified as belonging to green plastics whereas green means that the highlighted words increase the probability of the patent being classified as belonging to green plastics. We can also see how the BERT tokenizer tokenizes the input text. [CLS], [SEP] and [PAD] are special tokens of the BERT tokenizer. ”\(\#\)” means that the rest of the token should be attached to the previous one, without space. These results were obtained with SBHNN. Best viewed in color.
### Effectiveness and efficiency.
The effectiveness of the approach has been thoroughly discussed in Section 7.3 in relation to the results of Tables 3, 4 and 5. The models' performance shows that our solution sets up a strong benchmark for this challenging problem. Possible ways that we believe could lead to still better results are discussed in the future work section.
This approach can be applied whenever new classes are added to the classification scheme. Currently, this is done by humans which is a very time-consuming task and also prone to errors. With our solution, this could be done in a more automatic fashion by (re-)training a classification model for the updated classification scheme. Moreover, during inference, it takes our models less than a second to assign classes to a patent.
### Design and usability.
We have designed an end-to-end flexible pipeline to tackle this challenging problem. Moreover, we allow feedback from domain experts to be seamlessly incorporated (for example, by enriching the keywords associated with each class).
Our code repository is well-structured and commented, which makes it very easy to understand and use. Moreover, we follow industry best coding practices, e.g.: 1) the model results are reproducible and we provide all the pre-trained models ready to use; 2) we use popular open-source Python packages, see the 'requirements.txt' file for all the package dependencies and corresponding versions, e.g. Pandas [17] for raw data pre-processing, PyTorch [19] as the deep learning framework and HuggingFace Transformers [25] for pre-trained BERT models; 3) the code is highly modular, there are no hard-coded variables and all the important parameters related to the project are clearly defined in a configuration file; 4) we provide Jupyter Notebooks to help readers or users test the code and "play" with the pre-trained models; 5) we have this paper to document all the details related to this project and the ReadMe.md file in the code repository to help quickly set up the right Python environment to test the code; and 6) the project can be installed as a Python package and all of our functions can be imported and reused in other projects.
### Creativity and innovation.
All of the research in patent classification focuses on an existing classification scheme. However, in our problem, we would like to classify patents in a classification scheme that does not exist yet. This is a very challenging problem and to the best of our knowledge, there is no prior art disclosing any solution. We made several creative design choices in order to solve this problem. We proposed to assign labels based on keyword matching to obtain a labeled dataset in order to train a machine learning classification model. We also proposed to have hierarchical labels in order to provide more positive training samples for the classification model. The two proposed models have novel structures and loss functions that are able to be trained for the complex multi-label hierarchical classification problem.
We believe that our solution of assigning labels based on keyword matching to obtain a labeled dataset in order to learn a machine learning classification model is an innovative solution to the problem of re-classification of patents upon an update of the classification scheme. This solution can help speed up this laborious task. It could also be possible to use our solution in combination with the current human work. For example, our classification model could classify the patents and also provide an explanation as shown in Section 7.4. A human, e.g. a patent examiner, could quickly review the classes proposed by the classification model together with the explanation, and in many cases, probably directly accept the proposed classes without the need to further review the patent.
## 9 Conclusion
We have provided an approach for the classification of patents related to green plastics based on an automatically obtained labeled training dataset. We have come up with two innovative and effective classification models and reported several evaluation metrics that give a complete overview of the performance of the models. Our models set a strong benchmark for this challenging and new problem. Our solution has great potential to improve the productivity of patent classification. Moreover, we
highlight that this approach is not limited to the classification of green plastics, but it can be easily adapted for other fields whenever the classification hierarchy is updated.
## 10 Future work
Here, we discuss some future work which, we believe, could help to further improve our solution.
The next step would be to discuss the proposed classification scheme and the list of keywords assigned to each class with domain experts. Both the classification scheme and the list of keywords can be updated based on feedback. For example, as pointed out earlier, for classes such as Y02G10/24 or Y02G20/20, there are very few patents. It could be possible to either remove these classes from the classification scheme or to provide more relevant keywords to identify more patents belonging to those classes. Our code is very flexible and both modifications can be implemented almost effortlessly.
An interesting experiment to double-check the performance of our models could be to manually build a test dataset with samples from all classes in the classification hierarchy and to evaluate the performance of the model.
In the future, as incoming patents will be manually labeled in these classes, it would also be possible to periodically update the model with the newly labeled data. It would then be possible to give higher importance to the manually labeled patents in comparison to patents having automatically assigned labels.
So far, we have limited our approach to dealing with patents written in English. In order to be able to also classify patents in French and German, it is possible to re-use this code to build a specific classification model for each language. Alternatively, we saw that, recently, a pre-trained language-agnostic BERT model has been open-sourced [7], so it could be interesting to explore the possibility of using such a model to process all three languages.
Most of the prior-art works [15, 9, 14, 22] use the title and the abstract as input for the classification. However, patent examiners usually classify patents based on the content of the description. Since the descriptions are usually at least a couple of pages long, it is not straightforward to obtain a proper embedding using a transformer model such as BERT. It could be possible to try some of the approaches proposed in [18] so that the content of the description is also taken into account.
|
2305.17545 | Activity-Induced Annealing leads to Ductile-to-Brittle Transition in
Amorphous Solids | Investigating the behavior of amorphous solids under various external loading
conditions continues to be an intriguing area of research with significant
practical implications. In this study, we demonstrate the utilization of
self-motility as a means to anneal glasses and use that as a means to fine-tune
the failure mode of the system under uniaxial tensile deformation. We begin by
highlighting the annealing effects of activity and draw parallels with other
well-known mechanical annealing processes, such as oscillatory shearing (both
uni- and multi-directional). Furthermore, we explore the annealing effects in
the presence of open boundaries, observing enhanced surface relaxations due to
activity. By implementing various activity-induced annealing protocols, we
successfully induce a transition in the failure mode from ductile to brittle.
This is demonstrated via performing tensile tests on the glass samples
resulting from the active-annealing process. The intricate effects of geometry
on the formation of shear bands are also examined. We find that samples having
an aspect ratio greater than one fail via shear band formation, owing to their
formation angle of $45\degree$ from the strain axis. In conclusion, we
introduce a novel method for producing well-annealed glasses in silico and
establish a correspondence between sheared and actively driven glasses. | Rishabh Sharma, Smarajit Karmakar | 2023-05-27T18:26:44Z | http://arxiv.org/abs/2305.17545v1 | # Activity-Induced Annealing leads to Ductile-to-Brittle Transition in Amorphous Solids
###### Abstract
Investigating the behavior of amorphous solids under various external loading conditions continues to be an intriguing area of research with significant practical implications. In this study, we demonstrate the utilization of self-motility as a means to anneal glasses and use that as a means to fine-tune the failure mode of the system under uniaxial tensile deformation. We begin by highlighting the annealing effects of activity and draw parallels with other well-known mechanical annealing processes, such as oscillatory shearing (both uni- and multi-directional). Furthermore, we explore the annealing effects in the presence of open boundaries, observing enhanced surface relaxations due to activity. By implementing various activity-induced annealing protocols, we successfully induce a transition in the failure mode from ductile to brittle. This is demonstrated via performing tensile tests on the glass samples resulting from the active-annealing process. The intricate effects of geometry on the formation of shear bands are also examined. We find that samples having an aspect ratio greater than one fail via shear band formation, owing to their formation angle of \(45^{\circ}\) from the strain axis. In conclusion, we introduce a novel method for producing well-annealed glasses in silico and establish a correspondence between sheared and actively driven glasses.
## I Introduction
Glasses are out-of-equilibrium materials formed when a supercooled liquid is cooled below its (empirically defined) glass transition temperature (\(T_{G}\)), at which point the liquid viscosity exceeds \(10^{12}\) Pa.s [1; 2; 3; 4; 5; 6]. This enormous change in viscosity occurs over a small temperature change, usually just a few tens of kelvins. The central problem of glass transition is understanding how such a large dynamical change can occur in a system without any significant structural change. Additionally, when the temperature is kept below \(T_{G}\), the relaxation time becomes so large that, for all practical purposes, one observes an amorphous solid. The mechanical properties of such solids are a topic of significant research interest due to their importance in various practical applications and in future material design. Because of the inherent non-equilibrium nature of glass formation, the material properties depend not only on their composition but also on the preparation protocols. Owing to the lack of long-range order, glasses do not suffer from defects like grain boundaries, which are present in their crystalline counterparts and can lead to unwanted structural and optical features. Constant efforts to alter or tune the material's mechanical properties as desired have led to the establishment of various annealing methods. Such methods aim at achieving better material properties, with the recent discovery of ultra-stable glasses being a notable development [7].
The mechanical failure mode of glasses can be tuned in various ways. For instance, confining geometry can change how glasses break. In recent experimental and simulation works [8; 9; 10], it has been found that amorphous solids can show very different mechanical failure behavior under uniaxial tensile deformation at bulk and nano-scale. For example, a material that can withstand only 2% strain at bulk before complete brittle-like failure can be deformed up to 200% if the dimension of the sample is at a few hundred nano-meter scales [8], with neck-like failure. The thickness of the necking region can become as small as a few atomic diameters just before the total failure. This drastic change in failure mode from brittle to ductile with changing sample size when going from bulk to nano-scale highlights the importance of surface relaxation in the failure process. In [10], the existence of a critical aspect ratio (ratio of height to width in a two-dimensional sample or ratio of length to cross-sectional linear dimension for a three-dimensional sample) is observed. Below this critical aspect ratio, the material shows neck-like ductile failure, which then crosses over to cavity-dominated brittle-like failure for aspect ratios larger than the critical value.
Similarly, one can modify the interaction range of the constituent particles to transition from brittle (smaller range of inter-particle interaction) to ductile (larger range of inter-particle interaction) failure [11]. Introducing impurities or inclusions having different mechanical properties than the embedding matrix can also lead to changes in failure mode [12]. For example, one could cross over from a heterogeneous shear band-mediated yielding behavior to homogeneous ductile yielding by adding an increasing amount of appropriate inclusions, as is done in various micro-alloying methods. The degree of annealing, or equivalently the cooling rate during preparation, has a huge impact on the resulting glass's mechanical properties. For example, a glass that is prepared through a rather high cooling rate will be poorly annealed. If such a glass is subjected to simple shear, it shows a more ductile yield. In contrast, a glass that is prepared at a slow cooling rate or produced via other methods that can anneal the solids much better will generally be more well-packed and have much larger elastic moduli. Unfortunately, such a solid
will also show catastrophic brittle failure via shear band formation when subjected to deformations [13]. In general, the better-annealed a glass is, the more "brittle" it tends to be.
The pursuit of creating increasingly well-annealed glasses has implications not only for exploring the mechanical properties of these systems but also for getting closer to the proposed "ideal glass." This ideal glass can be considered the most well-packed yet disordered state of matter, whose configurational entropy, much like that of a crystal, is zero. Grasping these concepts is crucial for probing the nature of the glass transition at a deeper level. Recently, significant breakthroughs have been made in developing various experimental techniques, such as Physical Vapour Deposition (PVD)[7], and computational techniques like swap Monte Carlo [14] and in silico vapour deposition [15], to achieve ever-lower energy states in glasses. Each of these methods comes with its advantages and limitations. Thus, an exploration of other generic methods which can be employed for better annealing a wide variety of disordered solids is certainly a compelling avenue of research. In this regard, in computer simulations, people have investigated different forms of mechanical deformations that can lead to the annealing of glasses. Oscillatory shearing is a particularly well-researched method among these techniques[16; 17; 18].
In this paper, we introduce a novel form of mechanical annealing: annealing glasses through local internal perturbations facilitated by active particles. Active particles have motility that derives from either their internal energy reserves or from external sources [19; 20; 21; 22]. Such motility in the context of glasses has been helpful in understanding many biological systems where dense cell packing and ATP-driven transport is the norm [23; 24; 25; 26; 27; 28]. In our observations, we find that the annealing and yielding behavior of glasses subjected to active driving closely resembles that seen in glasses annealed using oscillatory shearing[16]. This helps to strengthen the correspondence between actively driven and sheared glasses which has been observed before [29]. We alluded to such similarities in an earlier work dealing with the formation of cavities in glasses [30]. There, it was found that oscillatory shear cycles can lead to cavitation at a density that is much larger than the density at which one expects to see cavitation failure via only uniform expansion protocols. It was also shown that qualitatively similar results could be obtained if these samples were subjected to local deformation via active particles instead of oscillatory shear. In this current work, we take this correspondence further by focusing on the annealing effects of active dynamics. After establishing the annealing effects, we further demonstrate how this process can be leveraged to tune the failure mode from ductile to brittle under tensile loading conditions. These results suggest a deeper connection between annealing via oscillatory shear and active annealing.
The organization of this paper is as follows: Section II provides details on the simulation and model. Section III describes the protocols employed to generate the initial states, as well as the methods used for activity-induced annealing and tensile testing. In Section IV, we present our findings, and we conclude the paper in Section V with the summary and future directions.
## II Model and Simulation Details
For our MD simulations, we used a well-known model glass former, the binary Kob-Andersen (KA) mixture [31]. It involves two species of particles interacting via the Lennard-Jones potential (eq. 1), with the larger A-type and smaller B-type present in the ratio of 80:20.
\[V_{\alpha\beta}(r_{ij})=4\epsilon_{\alpha\beta}\left[\left(\frac{\sigma_{ \alpha\beta}}{r_{ij}}\right)^{12}-\left(\frac{\sigma_{\alpha\beta}}{r_{ij}} \right)^{6}\right]+u(r_{ij}) \tag{1}\]
The energy and length scales are chosen such that \(\epsilon_{AA}=1\) and \(\sigma_{AA}=1\). The interactions between the other combinations of particles are then given in terms of \(\epsilon_{AA}\) as \(\epsilon_{AB}=1.5\epsilon_{AA}\) and \(\epsilon_{BB}=0.5\epsilon_{AA}\). Similarly for length scales, we have \(\sigma_{AB}=0.8\sigma_{AA}\) and \(\sigma_{BB}=0.88\sigma_{AA}\). The cut-off range is taken to be \(2.5\sigma_{\alpha\beta}\), and the potential is made to go to zero smoothly at this point (by choosing \(u(r_{ij})\) accordingly so as to make the slope continuous at the potential cutoff).
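As an illustration only (the production runs use custom MPI C code, as noted below), one common choice of the correction \(u(r)\) that makes both the value and the slope of the pair potential vanish at the cutoff can be sketched in Python as:

```python
import numpy as np

def kob_andersen_pair(r, eps, sigma, rc_factor=2.5):
    """Lennard-Jones pair energy, shifted so that V and dV/dr vanish at r_c."""
    rc = rc_factor * sigma
    def bare(x):
        s6 = (sigma / x) ** 6
        return 4.0 * eps * (s6 * s6 - s6)
    def dbare(x):
        s6 = (sigma / x) ** 6
        return 4.0 * eps * (-12.0 * s6 * s6 + 6.0 * s6) / x
    # u(r) = -V(rc) - (r - rc) V'(rc): keeps value and slope continuous at rc.
    v = bare(r) - bare(rc) - (r - rc) * dbare(rc)
    return np.where(r < rc, v, 0.0)

# AA interaction in reduced units (eps_AA = sigma_AA = 1):
r = np.linspace(0.95, 2.6, 5)
print(kob_andersen_pair(r, eps=1.0, sigma=1.0))
```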
For imparting activity, we add an additional force \(\vec{f_{0}}\) (eq. 2) to the smaller B-type particles in addition to the potential derived force.
\[\vec{f_{0}}=f_{0}(k_{x}\hat{x}+k_{y}\hat{y}+k_{z}\hat{z}). \tag{2}\]
The force is added along the eight diagonal directions, and the directions are shuffled after a persistence time \(\tau_{p}\) (here taken to be \(\tau_{p}=4\)). This results in run and tumble dynamics, and in literature, it is referred to as the 8-state clock model [32]. Momentum conservation is ensured by maintaining the sum of \(k\)'s to be zero component-wise throughout the simulations. Nose-Hoover thermostat and barostat were used to maintain the desired temperatures and pressures.
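A minimal, momentum-conserving sketch of this run-and-tumble forcing is given below (again in Python purely for illustration); pairing each active particle with a partner that receives the opposite direction is one simple way of keeping the component-wise sums of the \(k\)'s at zero, and the time step used here is a placeholder.

```python
import numpy as np

# The eight diagonal directions of the 8-state clock model.
DIRECTIONS = np.array([[kx, ky, kz] for kx in (-1, 1)
                                    for ky in (-1, 1)
                                    for kz in (-1, 1)], dtype=float)

def draw_active_forces(n_active, f0, rng):
    """Assign f0 * (kx, ky, kz) to each active particle; n_active assumed even.
    The second half receives the opposite of the first half, so the total is zero."""
    half = n_active // 2
    idx = rng.integers(0, 8, size=half)
    forces = np.vstack([DIRECTIONS[idx], -DIRECTIONS[idx]])
    rng.shuffle(forces, axis=0)           # which particle gets which direction
    return f0 * forces

rng = np.random.default_rng(0)
tau_p, dt, f0 = 4.0, 0.005, 1.5           # persistence time, MD time step, amplitude
forces = draw_active_forces(2000, f0, rng)   # the 20% B-type particles for N = 10,000
steps_per_tumble = int(tau_p / dt)
# Inside the MD loop, redraw the directions every steps_per_tumble steps:
# if step % steps_per_tumble == 0: forces = draw_active_forces(2000, f0, rng)
print(forces.sum(axis=0))                 # -> [0. 0. 0.]
```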
The typical system size considered is \(N=10,000\), with additional simulations performed for \(N=32,000\), \(N=64,000\), and \(N=128,000\) particles to study systematic finite size effects. A very low temperature of \(T=0.01\) is maintained throughout activity-driven annealing and during tensile testing. Annealing was done under periodic boundary conditions, and open boundaries were created for tensile testing and surface relaxation studies. All the simulations were conducted using our custom parallel MPI C codes.
## III Annealing and Tensile Testing Protocols
**Initial states:** The states for annealing were prepared by cooling a liquid equilibrated at a high temperature of
\(T=1.0\) and high density \(\rho=N/V=1.2\) to a temperature of \(0.01\). The cooling rates were varied from \(\dot{T}=10^{-1}\) to \(\dot{T}=10^{-6}\). These resulted in states with per-particle inherent structure energies ranging from \(-6.910\) (poorly annealed) to \(-7.035\) (well annealed). An ensemble average over \(16\) independent samples is taken for each case.
**Annealing protocol:** The obtained states, having a range of inherent state energies, are then subjected to the local perturbations via active dynamics. The protocol involved imparting activity to all the B-type particles in the system and evolving such a system for \(10^{5}\) time units (or \(2\times 10^{7}\) MD steps) under isothermal conditions at a low temperature of \(T=0.01\). Thus, the system was left to age in the presence of various magnitudes of active forcing (\(f_{0}\)), and the resulting states were then used to perform tensile testing.
**Energy minimization protocol:** To understand where in the energy landscape the system is during the annealing process, we sampled states at equal intervals from the isothermal aging trajectory and performed energy minimization using the well-known conjugate gradient (CG) method. The final energies plotted in Fig. 2(a) are the average energies of the last \(5\) frames from the tail of the energy-minimized trajectories (Fig. 2(b)).
**Tensile testing protocol:** To perform tensile testing, we utilized the various annealed states and subjected them to \(200,000\) Molecular Dynamics (MD) steps under zero pressure conditions. This step was crucial to prevent any pressure shock when transitioning from periodic to open boundary conditions. Subsequently, we created two walls of width \(2.5\sigma_{AA}\) by freezing the particles' degrees of freedom along the two faces of the containing box (along the x - direction). Open boundary conditions were created along the other two directions. This configuration was run for an additional \(200,000\) MD steps to allow the surface to "settle in" before applying a constant strain rate, denoted by \(\dot{\gamma}\), to the walls in opposing and outward directions. This application induced tension in the system, and we studied the system's response to this tension. The system was pulled till complete failure. The schematic for the process is shown in Fig. 1.
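A schematic version of the constant-strain-rate pulling step (in Python, with illustrative box dimensions and time step) could look as follows, assuming the two frozen walls occupy 2.5\(\sigma_{AA}\)-thick slabs at the low- and high-\(x\) faces of the box:

```python
import numpy as np

def pull_walls(x, left_wall, right_wall, gamma_dot, box_lx, dt):
    """Move the two frozen walls apart at a constant engineering strain rate.
    x: (N, 3) positions; left_wall / right_wall: boolean masks of wall particles."""
    delta = 0.5 * gamma_dot * box_lx * dt   # each wall carries half of the elongation
    x[left_wall, 0]  -= delta
    x[right_wall, 0] += delta
    return x

# Illustrative values; interior particles evolve by MD between wall moves.
N, box_lx, dt, gamma_dot = 10_000, 20.0, 0.005, 5e-5
x = np.random.default_rng(1).uniform(0.0, box_lx, size=(N, 3))
left_wall  = x[:, 0] < 2.5
right_wall = x[:, 0] > box_lx - 2.5
for step in range(1000):
    x = pull_walls(x, left_wall, right_wall, gamma_dot, box_lx, dt)
```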
## IV Results
Fig. 2 encapsulates the outcome of our annealing protocol. We report a trend that is remarkably similar to the one observed in [33], where the annealing is performed using oscillatory shearing and under athermal conditions. Active glasses appear to age faster, enabling them to reach lower points in their potential energy landscape than their inactive counterparts. Up to a certain threshold \(f_{0}\), we observe annealing beyond which the energies again begin to rise and collapse to a single curve. Drawing an analogy with the yielding amplitude of oscillation (\(\gamma_{max}\)) seen in oscillatory shear simulations, we term this the active yielding amplitude (\({f_{0}}^{Y}\)). In our case, we find \({f_{0}}^{Y}\simeq 1.9\). The various curves collapsing to a single curve beyond \({f_{0}}^{Y}\) signifies that the system no longer remembers its original preparation history and thus can no longer be in the aging regime. This rise in saturation energy beyond yielding amplitude is illustrated in Fig. 2(a).
One also observes that the degree of annealing via activity decreases as one goes to lower and lower initial state energies. Thus, poorly annealed glasses respond much better to annealing via activity as compared to very well-annealed glasses. This trend, too, resonates with that reported in oscillatory shearing. One way to understand this is to realize that poorly annealed glasses have more "soft spots" or Shear Transformation Zones (STZs) [34; 35; 36] that are available to be triggered by the microscopic perturbations caused by active forcing. A dearth of such soft spots in well-annealed glasses also explains their low sensitivity to both active and oscillatory annealing.
One crucial difference to note is that under athermal oscillatory annealing, the system below the yielding amplitude always reaches a limit cycle in the form of an absorbing state, and this limiting energy makes for a natural "stopping point." Such an absorbing state transition has also been observed in aging studies of topologically constrained active matter [37] (again, only in the absence of temperature). In active annealing, no such absorbing states are present, partly due to the dynamics being run at a finite, albeit low, temperature. The energy below the yield point decays logarithmically, and thus we take a finite time window instead to get a sense of the limiting energies. The exact values in Fig. 2 will depend on this waiting period (\(t_{w}\)). Such waiting time dependence is a hallmark of aging systems, and active glasses have been shown to display complex aging behaviors [38]. Still, the overall figure's character remains consistent, with poorly annealed glasses exhibiting superior annealing within a given window of time compared to well-annealed ones.
To understand the mechanical responses of these states, we selected the states along the top-most (least annealed) curve in Fig. 2(a) and conducted tensile testing simulations under constant strain rates; the results are summarised in Fig. 3. In Fig. 3(b), we show the stress response with the activity turned off after annealing. As expected, the annealing effects are reflected in the stress-strain curves as well, with the highest stress peak observed for the \(f_{0}=1.9\) case. The \(f_{0}=0\) curve shows no overshoot in stress, a signature of the ductile yielding process. These stress overshoots are preserved even if the pulling simulations are performed in the presence of the activity, as shown in Fig. 3(c). One crucial difference between the stress-strain curves with and without activity during the shearing process is the nature of the steady state reached by the system at large strain. Without any active force, the system reaches a unique steady state for all configurations irrespective of their annealing history, a strong signature of attaining ergodicity after yielding, a well-known feature of passive glasses. In contrast, the steady-state stress (\(\sigma_{\infty}\)) of the system with activity on is very different: \(\sigma_{\infty}\) decreases with increasing activity, although the stress overshoot increases with increasing activity. We discuss this in further detail in subsequent paragraphs. A state much above the yielding amplitude shows nearly liquid-like behavior, indicating fluidization due to the active forces. Thus, somewhat counterintuitively, we see that introducing self-motility below a critical strength can lead the system to favor a brittle response over a ductile one.
Figure 1: Schematic of the tensile testing procedure carried out in the study.
Figure 2: Annealing and yielding in glasses via local perturbations due to activity. Panel (a) shows the effects of active dynamics (carried out for a time window of \(t_{w}\)) on glasses having various inherent state energies. Up to a threshold of \(f_{0}=1.9\) for this model, we see enhanced aging behavior, as seen by the system’s progression towards progressively deeper minima. Beyond this threshold, the various curves collapse to a single curve, denoting a yielded state where the system no longer retains a memory of its initial preparation. A very similar transition from absorbing to yielded state is seen in oscillatory shearing with increasing oscillation amplitude. (b) Annealing of a glass having \(E_{IS}=-6.947\) (corresponding to the green curve in (a)) is shown. Decaying behavior for states up to \(f_{0}=1.9\) can be seen, whereas, for larger forcing amplitudes, we observe a yielded steady state. System size of \(N=10,000\) averaged over \(16\) ensembles.
Figure 3: The states encircled on the highlighted curve in (a) are sampled for tensile testing; the green circles mark the aging (or annealing) states, and red is used for the yielded state. The reason for choosing this curve is to span the largest range of inherent energies reached through active annealing. In (b), we show the stress-strain curves obtained when these states are pulled after turning off the activity. This sequence somewhat resembles a Physical Vapour Deposition (PVD) process, where highly motile particles (responsible for annealing the surface) lose their mobility after being trapped under additional layers of particles. From the stress response of these states, we see better-annealed states showing a greater stress drop, indicating an enhancement in brittleness after an active-dynamics annealing treatment. In (c), we perform the same testing, but with the annealing activity kept on. Here too, the larger stress drops are preserved. We also note that the yielded state is unable to withstand any considerable amount of stress. These results are for a system size of \(N=10,000\) averaged over \(16\) ensembles and strain rate \(\dot{\gamma}=5\times 10^{-5}\).
### Effect of strain rate - faster intrinsic time-scales in active glasses
To study the nature of the steady states reached during tensile extension in glasses with active driving present, we looked at the system's response under varying strain rates. For this, we took the initial state for testing to be the best-annealed state, with per-particle inherent structure energy \(E_{IS}=-7.035\), that we achieved via the slowest cooling employed in our simulations. These states were not further subjected to any additional active aging process. This strategy was adopted to separate out any potential effects of annealing imposed on the steady states, as we saw in Fig. 3(c). Regardless, as seen from the bottom-most curve in Fig. 2(a), an additional annealing treatment for such well-annealed glasses would only have had a minute effect. In Fig. 4, we show the stress-strain curve of the system for different strengths of the active forces at two different strain rates, \(\dot{\gamma}=5\times 10^{-5}\) and \(5\times 10^{-4}\). The insets of the respective panels show how the steady-state stress, \(\sigma_{\infty}\), decreases with increasing active force, \(f_{0}\). The qualitative nature of the results can be understood if we consider that, with increasing activity, the relaxation time of the system decreases. One can effectively map this behavior to an effective temperature description of a passive system [39]. In a passive system, if one maintains the strain rate but increases the bath temperature, then one expects the steady-state flow stress to decrease [40; 41] as \(\sigma_{\infty}\sim T^{-2/3}\) across a wide variety of amorphous solids, including metallic glasses. Thus it seems that these results can be qualitatively understood using an effective temperature description. In contrast, the annealing effect cannot be understood using the same effective temperature description: with increasing effective temperature, one should observe a reduction in the stress overshoot rather than an increase. This suggests that the mechanical response of an active amorphous solid is rather complex and is not readily understandable using the simple concept of an effective temperature. Further studies along this direction will be needed to explore the full complex rheological behavior of active glasses under various external loading conditions.
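The scaling relation quoted above can be checked directly once steady-state stresses have been tabulated against an effective temperature. The snippet below is a minimal sketch of such a check; the `T_eff` and `sigma_inf` arrays are illustrative placeholders rather than data from our simulations.

```python
import numpy as np

# Placeholder arrays: effective temperatures mapped from the activity, and the
# corresponding measured steady-state flow stresses.
T_eff = np.array([0.30, 0.35, 0.40, 0.45, 0.50])
sigma_inf = np.array([1.10, 0.99, 0.91, 0.84, 0.79])

# Fit log(sigma_inf) = a * log(T_eff) + b; the slope a estimates the exponent
# and can be compared against the expected value of -2/3.
slope, intercept = np.polyfit(np.log(T_eff), np.log(sigma_inf), 1)
print(f"fitted exponent: {slope:.2f} (expected ~ -2/3)")
```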
As a side note, we want to highlight the effect of activity on the yield strain itself. We observe that introducing activity during tensile testing shifts the yield point towards a lower value, although the magnitude of the stress drop remains fairly unaffected. This is completely different from the effect of random pinning on the yield strain as reported in [12]. Thus, active particles can be considered as fluidizing or anti-pinning centers that make the material mechanically less rigid during deformation. However, activity helps the system reach mechanically stable states via annealing when it is allowed to relax.
Figure 4: Interplay between intrinsic time scales and the imposed time scale (in the form of strain rate) in well-annealed active glasses. The stress saturates to different pseudo-steady states depending on the activity, showing a Herschel-Bulkley-like signature. In the inset, we show the variation of the steady-state stress \(\sigma_{\infty}\) with different magnitudes of active forcing, \(f_{0}\). System size of \(N=10,000\) averaged over 16 ensembles.
### Effect of geometry and larger system size
To see the catastrophic character of a brittle failure in the stress-strain curve, larger system sizes and lower inherent state energies are required [13]. For this, system sizes of \(N=32,000\), \(64,000\), and \(128,000\) are considered. To study the effect of geometry and of enhanced annealing due to open surfaces, we changed the geometry from a cubical box to a cuboidal rod shape [42]. This is done by changing the aspect ratio (defined as \(AR=L_{x}/L_{y}\) with \(L_{y}=L_{z}\), where x is always the long axis along which the tension is applied) while keeping the surface-to-volume ratio constant. We have considered AR of 1, 2, and 4, respectively, for the three system sizes taken. This gives us an idea of the importance of system size, AR, and absolute surface area during the deformation process. Here, on top of the original protocol, we introduce an additional time window of \(5\times 10^{6}\) MD steps. During this period, we anneal the system in an open geometry while it is still active - that is, after the periodic boundary conditions (PBC) have been removed but before the activity is turned off. Understanding the surface dynamics of glassy systems has far-reaching implications for processes like Physical Vapour Deposition (PVD). PVD exploits the fact that surface particles are orders of magnitude more mobile than bulk particles to create ultra-stable glasses [7]. Thus, the effect of granting particles additional mobility is an intriguing avenue to explore.
Figure 5: Analyzing the effect of Aspect Ratio (AR) and system size on shear band formation: The formation of shear bands is favored by both a higher aspect ratio and a larger system size. The color bar denotes the magnitude of squared displacements. For an aspect ratio less than 1, a shear band at \(45^{\circ}\) is geometrically impossible to accommodate. For aspect ratios \(\leq 1\), hindrance from the walls is significant, and it goes down with larger AR. Thus, for the same system size considered in (a) but with a higher AR, the system now does show a sharp stress drop; see Fig. 7. Similarly, the larger system size used in (b), now with decreased AR, ceases to form shear bands; see Fig. 6.
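For concreteness, the aspect-ratio bookkeeping described above amounts to the following arithmetic. The sketch below uses an assumed number density as a placeholder (it is not the value used in our simulations) and does not impose the constant surface-to-volume constraint.

```python
def box_dimensions(n_particles, aspect_ratio, number_density=1.2):
    """Edge lengths of a cuboidal box with AR = Lx / Ly (and Ly = Lz),
    given a particle count and an assumed number density."""
    volume = n_particles / number_density          # V = Lx * Ly * Lz
    ly = (volume / aspect_ratio) ** (1.0 / 3.0)    # V = AR * Ly**3  =>  Ly = (V / AR)**(1/3)
    return aspect_ratio * ly, ly, ly


for n, ar in [(32_000, 1), (64_000, 2), (128_000, 4)]:
    lx, ly, lz = box_dimensions(n, ar)
    print(f"N={n}, AR={ar}: Lx={lx:.1f}, Ly=Lz={ly:.1f}")
```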
In Fig. 5, we show the appearance of a shear band with increasing aspect ratio and system size. In panel (a), we show that for a system size of \(N=32,000\) particles with \(AR=1\), there is no clear shear band, although, from the overshoot in the stress-strain curve in panel (d), one clearly sees that a well-annealed glass is achieved with \(f_{0}=1.9\). In panel (b), the system size is increased twofold by increasing \(L_{x}\). Once again, one clearly sees a much sharper stress overshoot in a well-annealed sample prepared with active force \(f_{0}=1.9\), as well as a sharper shear band in the middle of the sample at an angle of \(45^{\circ}\). With a further twofold increase in \(L_{x}\), the stress overshoot becomes nearly discontinuous, as shown in panel (f), with a clear shear band seen in panel (c).
To address the question of whether increasing system size or the increasing aspect ratio plays a dominant role in forming shear bands during failure in well-annealed glasses, we considered the following two scenarios. In the first situation, we take a system size where we clearly observed shear bands for an aspect ratio of 2 (panels b and e in Fig. 5) and decreased the aspect ratio to 1 as shown in Fig. 6. For an aspect ratio of 1, even for a large system size of \(N=64,000\), the sharpness of the stress drop is decreased significantly compared to when AR was 2. A lack of a clear shear band in the system accompanies this. This happens because, at this AR, the walls interfere with the formation of the shear band, which happens at a \(45^{\circ}\) angle. In the second scenario, we took a system size of \(N=32,000\), which did not show a shear band for an AR of 1 (panel a and d in Fig. 5), but now increased the aspect ratio to 4, as shown in Fig. 7. In this case, one can see that the stress overshoot becomes nearly discontinuous across the yielding transition, and a clear shear band appears in the sample. This suggests that the geometric shape of the sample, apart from its annealing history, will play a crucial role in determining the eventual failure mode in amorphous solids with open boundaries.
### Morphology of fractured states
The percentage elongation of material before fracture, as well as the cross-section of fracture, both provide valuable insights into the failure mechanism. A longer elongation of necking and a thinner cross-section are reminiscent of a ductile failure. Here we show the morphology of the fractured states for two of the samples considered. Sample (a) was treated with activity, whereas sample (b) was just aged passively for the same duration of time. We compute the fractional relative elongation as
\[\Gamma_{max}=\frac{L_{x}^{f}-L_{x}^{0}}{L_{x}^{0}}, \tag{3}\]
where \(L_{x}^{0}\) is the starting length of the sample in the pulling direction, and \(L_{x}^{f}\) is the length after the sample detaches into two pieces, as shown in Fig. 8. We see that the average relative elongation in our samples (denoted by \(\langle\Gamma_{max}\rangle\)) after being aged with activity is significantly (nearly 22%) smaller than in the passively aged case. Thinner necks are also observed in the passive case. This result suggests that activity induces brittleness in the materials. Note that this behavior is completely different from what one would expect if the effect of activity could simply be understood as an effective temperature: in passive amorphous solids, an increase in temperature leads to more ductile failure. Some of these results are very counter-intuitive, but if one looks at the system in terms of the degree of annealing, then the observation agrees with the fact that better-annealed solids show more brittle failure behavior.
Figure 6: Larger system size but with smaller AR does not form clean shear bands.
Figure 7: Smaller system size but with higher AR, enough to resolve the shear band.
Figure 8: Maximum relative elongation before complete failure. The sample in (a) was treated with activity \(f_{0}=1.9\), while the one in (b) was aged without any activity. The aspect ratio is taken to be \(AR=4\) with system size \(N=128,000\). The strain rate is \(\dot{\gamma}=5\times 10^{-5}\). \(\Gamma_{max}\) is the maximum strain the system can withstand before complete failure. The data is averaged over 3 ensembles each.
## V Conclusion and perspectives
In conclusion, we have shown that active particles can effectively anneal glasses to lower energies, just like the mechanical annealing process under oscillatory shear. The similarities observed with the oscillatory shearing suggest that thinking of activity as a form of local shearing rather than just as an effective temperature might provide fruitful insights. By performing extensive tensile testing, we also showed that the annealing effects of activity could be used to change the mode of failure from a ductile to a brittle type. This finding may seem counterintuitive if the activity is simply viewed as a temperature-like phenomenon, considering that higher temperatures generally favor ductile failure. However, it is crucial to recognize the enhanced aging effect in active glasses, which, over time, can transform a ductile active glass into a brittle one simply due to better annealing. Apart from creating better materials, this work can have consequences for developing techniques that are much better at annealing glasses than the usual ones; for instance, a combination of cyclic shear with internal activity might prove to be even better at annealing than any one of those individually. Some future research directions could be to see the effect of slightly higher temperatures on the annealing process. Is there an optimal combination of temperature and activity for which a system would anneal the fastest, akin to an optimal combination of shear amplitude and temperature reported in [43]? We also plan to explore, in particular, the effect of persistence time in the annealing process. Given the local shear interpretation of active forcing, it seems that higher values of \(\tau_{p}\) would mimic enhanced local random shearing, whereas lower values should have temperature-like effects. Our preliminary findings suggest the same. It would be intriguing to figure out the exact range of \(\tau_{p}\) over which one interpretation of activity should be preferred over the other. Could there also be absorbing states if such a study is done in the absence of temperature? This would open the possibilities for encoding memory in such systems, similar to how memory can be encoded in cyclically sheared glasses [44]. In addition, from an algorithmic perspective, one would like to understand what aspect of activity helps to optimize the search for minima in the complex glassy landscape. Is it the additional search directions available during the persistence time, or is it the stochastic nature of the activity? Answering such questions might help in developing better optimization algorithms in the near future.
## VI Acknowledgment
SK acknowledges fruitful discussions with Juergen Horbach during his visit to The Heinrich Heine University, Dusseldorf, Germany. We acknowledge funding by intramural funds at TIFR Hyderabad from the Department of Atomic Energy (DAE) under Project Identification No. RTI 4007. Core Research Grant CRG/2019/005373 from Science and Engineering Research Board (SERB) is acknowledged for generous funding. Most of the computations are done using the HPC clusters bought using CRG/2019/005373 grant and Swarna Jayanti Fellowship, grants DST/SJF/PSA01/2018-19, and SB/SFJ/2019-20/05 of SK.
|
2310.13595 | The History and Risks of Reinforcement Learning and Human Feedback | Reinforcement learning from human feedback (RLHF) has emerged as a powerful
technique to make large language models (LLMs) easier to use and more
effective. A core piece of the RLHF process is the training and utilization of
a model of human preferences that acts as a reward function for optimization.
This approach, which operates at the intersection of many stakeholders and
academic disciplines, remains poorly understood. RLHF reward models are often
cited as being central to achieving performance, yet very few descriptors of
capabilities, evaluations, training methods, or open-source models exist. Given
this lack of information, further study and transparency is needed for learned
RLHF reward models. In this paper, we illustrate the complex history of
optimizing preferences, and articulate lines of inquiry to understand the
sociotechnical context of reward models. In particular, we highlight the
ontological differences between costs, rewards, and preferences at stake in
RLHF's foundations, related methodological tensions, and possible research
directions to improve general understanding of how reward models function. | Nathan Lambert, Thomas Krendl Gilbert, Tom Zick | 2023-10-20T15:45:16Z | http://arxiv.org/abs/2310.13595v2 | # The History and Risks of Reinforcement Learning and Human Feedback
###### Abstract
Reinforcement learning from human feedback (RLHF) has emerged as a powerful technique to make large language models (LLMs) easier to use and more effective. A core piece of the RLHF process is the training and utilization of a model of human preferences that acts as a reward function for optimization. This approach, which operates at the intersection of many stakeholders and academic disciplines, remains poorly understood. RLHF reward models are often cited as being central to achieving performance, yet very few descriptors of capabilities, evaluations, training methods, or open-source models exist. Given this lack of information, further study and transparency is needed for learned RLHF reward models. In this paper, we illustrate the complex history of optimizing preferences, and articulate lines of inquiry to understand the sociotechnical context of reward models. In particular, we highlight the ontological differences between _costs_, _rewards_, and _preferences_ at stake in RLHF's foundations, related methodological tensions, and possible research directions to improve general understanding of how reward models function.
## 1 Introduction
Learning from human feedback has become incredibly popular due to the success of large language models (LLMs) such as OpenAI's ChatGPT (Schulman, Zoph, Kim, & more, 2022) and Anthropic's Claude (Bai, Jones, et al., 2022), which are heavily dependent on human labeled data. These models make use of reinforcement learning from human feedback (RLHF), a technique designed to integrate human preferences where writing an explicit reward function is otherwise challenging (Christiano et al., 2017). In the context of language models, RLHF proceeds as follows: first, a reward model is independently trained on aggregate pairwise preferences from many crowdworkers to rate any piece of text; second, the language model is optimized with an RL optimizer (Bai, Jones, et al., 2022; Ouyang et al., 2022; Touvron et al., 2023). The final language model is often subject to heavy scrutiny both internally and, more recently, externally through events like DEFCON Red Teaming Village (Bajak, 2023) or coordinated adversarial attacks Zou, Wang, Kolter, and Fredrikson (2023). The same cannot be said for the intermediate reward model. Historically, reward models have not been released as open-source or evaluated rigorously, obscuring from scrutiny the process through which values are actively encoded into the system. This paper illustrates why reward models are central to understanding the long-term impacts of RLHF, drawing from the rich history, discourse, and tension around how to best quantify human values.
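The reward-model training step of this pipeline is typically implemented as a pairwise (Bradley-Terry-style) objective over chosen and rejected completions. The snippet below is a minimal, illustrative sketch rather than any organization's actual implementation; `reward_model` is an assumed callable that returns one scalar score per input sequence.

```python
import torch.nn.functional as F

def pairwise_reward_loss(reward_model, chosen_tokens, rejected_tokens):
    """Encourage the reward model to score the human-preferred completion
    above the rejected one: loss = -log sigmoid(r_chosen - r_rejected)."""
    r_chosen = reward_model(chosen_tokens)      # scalar score per sequence, shape (batch,)
    r_rejected = reward_model(rejected_tokens)  # scalar score per sequence, shape (batch,)
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

The scalar model trained this way is then the quantity that the RL optimizer maximizes in the second stage.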
RLHF is the intellectual culmination of several distinct domains. The optimization stack of RLHF is borrowed from control theory, a domain in which there are ground truths and reward functions can have a clear notion of success. The primary risk of learning human preferences for LLMs comes through the domain shift from control to language. In language, notions of values are more computationally complex or fundamentally vague (Dobbe, Gilbert, & Mintz, 2021) in relation to their control counterparts, but the optimization stack nevertheless remains similar. Reinforcement learning is broadly the field of study of sequential decision making, which is built on a substantial literature comprising cognitive biology, optimal control, behavioral economics, and other fields (Sutton & Barto, 2018). RLHF combines the social challenges of human data with the techniques of RL -- a field with a long history of numerical complexity.
Despite the maturity of the domains it draws on, grounding and investigating risks of RLHF requires the development of new tools and research methods. In particular, vestigial assumptions inherited from earlier technologies can re-surface as blind spots in the modern RLHF paradigm. Tracing the history of RL and RLHF as technologies allows us to identify these assumptions and where they matter in particular systems. This paper attempts to provide an exposition of some of this historical context, and to highlight specific sociotechnical areas of opportunity within the reward model specification, beyond the challenges proposed in recent literature (Casper et al., 2023). We study the histories of quantification of human preferences and reinforcement learning algorithms, from _Port-Royal Logic_ and Bentham to Markov Decision Processes and Bellman, to highlight potential shortcomings of learning models of human preferences. An initial concern that has been raised with the current deployments of LLMs is the limitations of working with aggregate human data, raising questions as to whose values the model is encoding and prioritizing. Moving beyond this, we study how structural optimization and deployment decisions can impact downstream users.
Given the nuance around modeling human preferences, we refer to these artifacts as reward models of preference, or _reward models_ for short, to better match their usage as an optimization target for reinforcement learning algorithms rather than an accurate representation of human values. In order to broaden the scope of study around these reward models, we make the following contributions:
* **Trace the complex intellectual history of RLHF** to illustrate the potential ill-posed assumptions popularized within RLHF. In Sec. 3, we explain the evolution of RL with the history of rational agents and human preferences. In doing so, we distinguish sets of _assumptions_ (explicit premises) and _presumptions_ (implicit premises) made throughout the evolution of RLHF that lead to potential shortcomings of reward models.
* **Propose a series of questions for contemporary RLHF reward models** to increase transparency and opportunities for broader multi-stakeholder engagement in modern LLM development. In Sec. 5, we break these questions down by sections of the machine learning process: data, model, and optimization, and in Sec. 7, we also discuss emerging issues that are not easily classified.
* **Discuss solutions** in Sec. 6 to measure and communicate the values and potential harms of contemporary RLHF reward models. We propose tools that can be used to add rigour to future empirical evaluation work.
## 2 Related Works
### Reinforcement learning from human feedback
RLHF is a set of techniques designed to optimize machine learning models based on human feedback in order to circumvent the need to design a complex reward function. Early work in RLHF focused on soliciting complex behaviors from AI agents in control problems using various environments, feedback methods across trajectories or rankings, and optimizers (Christiano et al., 2017; Wirth et al., 2017).
Recently, developments in RLHF have been centered around its use with LLMs. This branch of study originated with work exploring how technical value alignment may scale with learned reward models (Leike et al., 2018). Quoting Leike et al. (2018):
We claim that the approach described is agnostic to the ethical paradigm, the user's preferences, and the legal or social framework, provided we can supply enough feedback (though the preference payload might influence the amount of feedback required).
The organization that builds _applications_ where RLHF is used bears the burden of specifying the ethics they used and answering questions about whose preferences are included and how they're weighed (Baum, 2020; Prasad, 2018).
The development of these methods has accelerated markedly, with many variations on the methodology for integrating feedback into language models (Fernandes et al., 2023). Initial work on RLHF for LLMs utilized user choices from a batch of 4 completions (Ziegler et al., 2019) to train a reward model across general LLM benchmarks. When comparing recent RLHF work to Ziegler et al. (2019), group preferences were changed to pairwise preferences, and rather than general benchmarks the reward model was focused on the task of summarization (Stiennon et al., 2020; J. Wu et al., 2021). Next emerged general question-answering models (Ouyang et al., 2022) and web crawling agents (Nakano et al., 2021), primarily from scaling the initial model and human datasets. Now, RLHF is used to train general chat models across a variety of tasks (Bai et al., 2022; Schulman et al., 2022; Touvron et al., 2023) and for specific objectives such as harm reduction (Glaese et al., 2022) or information accuracy (Menick et al., 2022), but methods for collecting the feedback data (from both humans and LLMs) are still burdened by disagreement and other technical challenges (Bansal et al., 2023).
### Downstream impacts of optimizing preferences
Research venues have encouraged scientists to grapple with these questions around their work and data enrichment through humans, which are particularly relevant for techniques similar to RLHF, but there has been mixed uptake (Hawkins and Mittelstadt, 2023). RLHF faces many challenges with its integration of human preferences in an aggregate manner, and potential solutions involving personalization of preferences raise further questions of which values or norms are acceptable to encode in a model (Kirk, Vidgen, Rottger, and Hale, 2023). Specifically, the reward models trained for RLHF are known to be over-optimized during the RL stage, where the language generations continue to shift without the reward model indicating a higher score, without clear measurement of how downstream training signals for LLMs relate to preferences being correlated in the data (Gao, Schulman, and Hilton, 2022).
Training models based on human preferences also impacts how users interact with the downstream machine learning systems that refer to RLHF reward models as part of their optimization objective. This was illustrated with the launch and widespread adoption of ChatGPT, raising questions regarding the effects of regular communication with RLHF trained LLMs, such as the downstream impact on users' moral judgements (Krugel, Ostermaier, and Uhl, 2023) or exposure to value judgements and biases (Johnson et al., 2022). Additionally, there are open questions about the stability and robustness of RLHF-trained LLMs, with reports of RLHF models' tone shifting substantially within the course of a single conversation in potentially troubling ways (Nardo, 2023).
The issue of downstream model impacts is not new - for instance there is a vast prior literature on how models interface with society. For example, user facing recommendation models have long prompted inquiry around whether agents should respond to our stated or implied preferences (Milli, Hadfield-Menell, Dragan, and Russell, 2017). In RLHF, these concerns meet the'reward hacking' problem endemic to RL. Specifically, as popular models are being tuned based on user experiences, complex feedback dynamics can emerge via the combination of reward mis-specification with the power of RL optimizers (Gilbert, Dean, Zick, and Lambert, 2022), such as desired capabilities coming and going through repeated training.
## 3 The Origins of Reward Models: Costs vs. Rewards vs. Preferences
In this section, we break down the complex history inspiring the modern use of RLHF. This requires investigation into the intellectual foundations of quantifying human values, reinforcement learning and optimality, as well as behavioral economics as it relates to measuring preferences. The notion of using reinforcement learning to optimize a reward model of preferences combines the history of various once-distanced fields into an intimate optimization built on variegated assumptions about human nature. A high-level timeline illustrating the history of this foundational content is shown in Fig. 1. The detailed presumptions and assumptions we reference are showcased in Fig. 2.
Our goal is to unspool the types of uncertainty that designers have grafted to system architectures at various stages of their intellectual history. Modern problem specifications have repeatedly stepped away from domains where optimal solutions are possible and deployed under-specified models as approximate solutions.
Throughout, we distinguish between a series of _assumptions_ accepted within theoretically-grounded academic literatures, and relevant _presumptions_ which are common methods of practice for particular subject areas. As we shall see, the unresolved tensions between these assumptions and presumptions are responsible for the current state and outstanding questions of RLHF research. This section does not set out to be a survey but rather interrelates core references to illustrate the modus operandi of RLHF and preference modeling.
To begin, all of the following operates on the assumption that human preferences exist in some form, an idea which emerged in early philosophical discussions, such as Aristotle's Topics, Book Three.
**Assumption 1**.: _Human preferences and goals exist._
### Specifying objectives: from logic of utility to reward functions
The optimization of RLHF explicitly relies only on reward models. In order to use rewards as an optimization target, RLHF presupposes the convergence of ideas from preferences, rewards, and costs. Models of preference, reward functions, and cost landscapes all are tools used by different fields to describe a notion of relative goodness of specific actions and/or states in the domain. The history of these three framings dates back to the origins of probability theory and decision theory. In 1662, _The Port Royal Logic_ introduced the notion of decision making quality (Arnauld, 1662):
To judge what one must do to obtain a good or avoid an evil, it is necessary to consider not only the good and evil in itself, but also the probability that it happens or does not happen.
This theory has developed along with modern scientific thinking, starting with Bentham's utilitarian _Hedonic Calculus_, arguing that everything in life could be weighed (Bentham, 1823). The first quantitative application of these ideas emerged in 1931 with Ramsey's _Truth and Probability_(Ramsey, 2016).
**Assumption 2**.: _Any and all preferences and goals can be quantified and measured._
Since these works, quantifying, measuring, and influencing human preferences has been a lively topic in the social and behavioral sciences. These debates have rarely been settled on a theoretical level; rather, different subfields and branches of social science have reached internal consensus on methods and approaches to preference measurement even as they have specialized relative to each other, often developing their own distinct semantics in the process.
A minority of economists posit that preferences, if they do exist, are prohibitively difficult to measure because people have preferences over their own preferences, as well as each other's preferences (Hirschman, 1984). In this view, which is not reflected in the RLHF process, individual preferences are always embedded within larger social relations, such that the accuracy of any preference model is contingent on the definition and context of the task. Some behavioral economists have even argued that preferences don't exist: they may be less an ontological statement of what people actually value than a methodological tool for indirectly capturing psychological predispositions, perceived behavioral norms and ethical duties, commitments to social order, or legal constraints (Hadfield & Weingast, 2014). We address the links of this work to the Von Neumann-Morgenstern (VNM) utility theorem and countering impossibility theorems around quantifying preference in Sec. 3.3.
On the other hand, the reinforcement learning optimization methods used today are conceptualized around optimizing estimates of reward-to-go in a trial (Sutton & Barto, 2018), which combines the notion of reward with multi-step optimization. The term _reward_ emerged from the study of operant conditioning, animal behavior, and the _Law of Effect_(Skinner, 2019; Thorndike, 1927), where a reward is a scale of "how good an action is" (higher means better).
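To make the notion of reward-to-go concrete, the following minimal sketch computes it for a single recorded trajectory using a compounding discount factor \(\gamma\) (discussed in the next paragraph); the numbers are purely illustrative.

```python
def reward_to_go(rewards, gamma=0.99):
    """G_t = r_t + gamma * r_{t+1} + gamma^2 * r_{t+2} + ... for every step t
    of a single trajectory."""
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns


# A sparse-reward episode: only the final step is rewarded.
print(reward_to_go([0.0, 0.0, 1.0], gamma=0.9))  # approximately [0.81, 0.9, 1.0]
```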
Reward-to-go follows the notion of utility, which is a measure of rationality (Briggs, 2014), modified to measure or predict the reward coming in a future time window. In the context of the mathematical tools used for reinforcement learning, utility-to-go was invented in control theory, specifically in the context of analog circuits in 1960 (Widrow & Hoff, 1960). These methods are designed around systems with clear definitions of optimality, or numerical representations of the goals of an agent. Reinforcement learning systems are well known for their development with a discount factor, a compounding multiplicative factor, \(\gamma\in[0,1]\), for re-weighting future rewards. Both the original optimal control systems and early algorithms for reward stand in heavy contrast to reward models that aggregate multimodal preferences. Specifically, RL systems expect rewards to behave in a specific manner, quoting Singh, Lewis, and Barto (2009):
Rewards in an RL system correspond to primary rewards, i.e., rewards that in animals have been hard-wired by the evolutionary process due to their relevance to reproductive success.... Further, RL systems that form value functions,... effectively create conditioned or secondary reward processes whereby predictors of primary rewards act as rewards themselves... The result is that the local landscape of a value function gives direction to the system's preferred behavior: decisions are made to cause transitions to higher-valued states. A close parallel can be drawn between the gradient of a value function and incentive motivation (McClure, Daw, & Montague, 2003).
To summarize, rewards are used in RL systems as a signal to tune behavior towards clearly defined goals. The core thesis is that a learning algorithm's performance is closely coupled with notions of _expected fitness_, which permeates the popular view that RL methods are _agents_ that act in environments. This view is linked to the development of reinforcement learning technology, exemplified by claims of the general usefulness of the reward formulation (Silver, Singh, Precup, & Sutton, 2021), but is in conflict when many individual desires are reduced to a single function.
Figure 1: The timeline of the integration of various subfields into the modern version of RLHF. The direct links are continuous developments of specific technologies, and the arrows indicate motivations and conceptual links.
**Assumption 3**.: _Increasing the score of raw reward measurements corresponds to better behaviors (or value functions learned under invariant reward transformation (Ng, Harada, & Russell, 1999))._
### Implementing optimal utility
Modern reinforcement learning methods depend strongly on the Bellman equation (Bellman, 1957; Howard, 1960) to recursively compute estimates of reward-to-go, derived within closed environments that can be modeled as a Markov Decision Process (MDP) (Sutton & Barto, 2018). These origins of RL are rooted in dynamic programming methods that were developed solely as optimal control techniques (i.e., RL did not yet exist). The MDP formulation provides theoretical guarantees of performance by structuring the environment as one with a non-changing distribution of state-actions.
**Assumption 4**.: _Optimal solutions to reward maximization problems exist._
The term reinforcement, coming from the psychology literature, became intertwined with modern methods afterwards in the 1960s as _reinforcement learning_ (Mendel & McLaren, 1970; Waltz & Fu, 1965). Early work in reinforcement learning utilized supervised learning of reward signals to solve tasks. Work from Harry Klopf reintroduced the notion of trial-and-error learning (Klopf, 1972), which is crucial to the success the field saw from the 1980s onwards.
Modern RL algorithms build within this formulation of RL as a tool to find optimal behaviors through trial-and-error, but under looser conditions. The notion of temporal-difference (TD) learning was developed to aid agents in both the credit assignment and data collection problems by directly updating the policy as new data is collected (Sutton, 1988), rather than updating from a large dataset of cumulative experience, which could be outdated via erroneous past value predictions; the concept was first applied successfully to Backgammon (Tesauro et al., 1995). The method Q-learning, the basis for many modern forms of RL, learns a model via the Bellman equation that dictates how useful every state-action pair is with a TD update (Watkins & Dayan, 1992) (see Footnote 1). Crucially, these notions of provable usefulness through utility have only been demonstrated for domains cast as MDPs or addressed in tasks with a single closed-form reward function, such as the prominent success in games with deep learning (DQN) (Mnih et al., 2013). Deep learning allowed the methods to ingest more data and work in high-dimensionality environments.
Footnote 1: The term “Q” is used in Q-learning to refer to a technical concept, the Q-function, which maps any state-action pair to a scalar estimate of future reward. A value-function maps from states to this same estimate.
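For concreteness, a minimal tabular sketch of the Q-learning update described above is shown below; the `env` interface (reset/step/actions) is an assumed placeholder rather than a specific library's API.

```python
import random
from collections import defaultdict

def q_learning_episode(env, Q, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Run one episode of tabular Q-learning with an epsilon-greedy policy."""
    state = env.reset()
    done = False
    while not done:
        # Explore with probability epsilon, otherwise act greedily w.r.t. Q.
        if random.random() < epsilon:
            action = random.choice(env.actions)
        else:
            action = max(env.actions, key=lambda a: Q[(state, a)])
        next_state, reward, done = env.step(action)
        # Temporal-difference update towards the Bellman target.
        best_next = max(Q[(next_state, a)] for a in env.actions)
        target = reward + (0.0 if done else gamma * best_next)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        state = next_state
    return Q


Q = defaultdict(float)  # unseen state-action pairs default to a value of 0.0
```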
As the methods became more general and successful, the most prominent developments before ChatGPT remained motivated within the context of adaptive control, where reward and cost functions have a finite notion of success (Golnaraghi & Kuo, 2017), e.g., a minimum energy consumption across an episode in a physical system. Prominent examples include further success in games (Silver et al., 2017), controlling complex dynamic systems such as nuclear fusion reactors (Degrave et al., 2022), and controlling rapid robotic systems (Kaufmann et al., 2023). Most reward or cost functions can return an explicit optimal behavior, whereas models of human preferences cannot.
**Presumption 1**.: _Optimal solutions can be achieved with finite data in complex environments._
Given the successes of deep RL, it is worth noting that the mechanistic understanding of how the methods succeed is not well documented. The field is prone to mistakes of statistical analysis as the methods for evaluation grow more complex (Agarwal, Schwarzer, Castro, Courville, & Bellemare, 2021). In addition, there is little mention of the subfield of inverse reinforcement learning (IRL) in the literature of RLHF. IRL is the problem of learning a reward function based on an agent's behavior (Ng, Russell, et al., 2000) and is highly related to learning a reward model. This primarily reflects the engineering path by which a stable approach to performing RLHF emerged, and it motivates further investment in and comparison to IRL methods to scale them to the complexity of open-ended conversations.
### Steering preferences
The context in which reinforcement learning was designed means that rewards and costs are assumed to be stable and determinative. Both rewards and costs are expected to be functions, such that if the agent is in a specific state-action pair, then it will be returned a certain value. As we move into preferences, this is no longer the case, as human preferences constantly drift temporally throughout their experiences. The overloading of the term "value" within these two contexts complicates the literature of RLHF that is built on the numerical value updates in Bellman equations with the very different notion of what is a human value, which often refers to moral or ethical principles, but is not well defined in technical literature. An example of where this tension can be seen is how reward models are attempting to map from the
text on the screen to a scalar signal, but in reality, dynamics not captured in the problem specification influence the true decision (Gilbert, Dean, Zick, & Lambert, 2022; Salha, 2011), such as preference shift when labeling many examples sequentially and assuming they are independent. Therein, modeling preferences is at best compressing a multi-reward environment to a single function representation.
In theory, the Von Neumann-Morgenstern (VNM) utility theorem gives the designer license to construct such functions, because it ties together the foundations of decision theory under uncertainty, preference theory, and abstract utility functions (Von Neumann & Morgenstern, 1947); together, these ideas allow preferences to be modeled in terms of expected value to some individual agent. The MDP formulation used in most RL research has been shown in theory to be modifiable to accommodate the VNM theorem (Pitis, 2019), but this is rarely used in practice. Specifically, the Markovian formulation is limited in its expressivity (Pitis, 2023) and the transition to partially-observed processes, which is needed for language, further challenges the precision of problem specification (Abel et al., 2021).
However, the VNM utility theorem also invokes a number of assumptions about the nature of preferences and the environment where preferences are being measured that are challenged in the context of RLHF. Human-computer interaction (HCI) researchers, for example, have emphasized that any numerical model of preference may not capture all the relevant preferences of a scenario. For example, how choices are displayed visually influences people's preferences (Salha, 2011). This means that representing preferences may be secondary to how that representation is integrated within a tool available for people to use. Work from development economics echoes this notion, showing that theories of revealed preferences may just recapitulate _Hume's guillotine_ (you can't extract an "ought" from an "is"), and in particular the difference between choice (what do I want?) and preference (is X better than Y?) (Sen, 1973).
On a mathematical level, well-known impossibility theorems in social choice theory show that not all fairness criteria can be simultaneously met via a given preference optimization technique (Arrow, 1950; Maskin & Sen, 2014). Theoretical challenges to these theorems exist, for example by assuming that interpersonal comparison of utility is viable (Harsanyi, 1977). That assumption has inspired a rich line of work in AI safety and value alignment inspired by the principal-agent problem in behavioral economics (Hadfield-Menell, Russell, Abbeel, & Dragan, 2016), and may even include multiple principals (Fickinger, Zhuang, Hadfield-Menell, & Russell, 2020). However, the resulting utility functions may come into tension with desiderata for corrigibility, i.e. an AI system's capacity to cooperate with what its creators regard as corrective interventions (Soares, Fallenstein, Armstrong, & Yudkowsky, 2015). Philosophers have also highlighted that preferences change over time, raising fundamental questions about personal experiences, the nature of human decision-making, and distinct contexts (Pettigrew, 2019). These conflicts around preference aggregation across people, places, or diverse situations are central to modern RLHF dataset engineering.
In practice, the VNM utility theorem ignores the possibility that preferences are also uncertain because of the inherently dynamic and indeterminate nature of value--human decisions are shaped by biology, psychology, culture, and agency in ways that influence their preferences, for reasons that do not apply to a perfectly rational agent. As a result, there are a variety of paths through which theoretical assumptions diverge in practice:
* measured preferences may not be transitive or comparable with each other as the environment where they are measured is made more complex;
* proxy measurements may be derived from implicit data (page view time, closing tab, repeating question to language model), without interrogating how the measurements may interact with the domain they're collected in via future training and deployment of the model;
* the number and presentation of input sources may vary the results, e.g. allowing respondents to choose between more than two options, or taking in inputs from the same user at multiple times or in multiple contexts;
* relatively low accuracy across respondents in RLHF training data, which may mask differences in context between users that the preference model can aggregate or optimize without resolving.
Figure 2: The history covered in Sec. 3 that creates the assumptions and presumptions central to the current deployments of RLHF. The assumptions indicate core theoretical foundations which RLHF builds upon, transposes, prioritizes, or defers to another development stage. The presumptions represent ideas and practices required to build the current renditions of the technology.
**Presumption 2**.: _The temporal- and context-shifting of user preferences does not mitigate the effectiveness of reward functions or notions of optimal utility as an optimization target._
## 4 Background
We continue to use _assumptions_ of the literature, grounded in theoretical backing of a subject area, and _presumptions_, which are commonly accepted methods of practice, to identify blind spots and open questions in reward modeling.
|
2304.00310 | On the Feasibility and Robustness of Pointwise Evaluation of Query
Performance Prediction | Despite the retrieval effectiveness of queries being mutually independent of
one another, the evaluation of query performance prediction (QPP) systems has
been carried out by measuring rank correlation over an entire set of queries.
Such a listwise approach has a number of disadvantages, notably that it does
not support the common requirement of assessing QPP for individual queries. In
this paper, we propose a pointwise QPP framework that allows us to evaluate the
quality of a QPP system for individual queries by measuring the deviations
between each prediction versus the corresponding true value, and then
aggregating the results over a set of queries. Our experiments demonstrate that
this new approach leads to smaller variances in QPP evaluations across a range
of different target metrics and retrieval models. | Suchana Datta, Debasis Ganguly, Derek Greene, Mandar Mitra | 2023-04-01T13:18:18Z | http://arxiv.org/abs/2304.00310v1 | # On the Feasibility and Robustness of Pointwise Evaluation of Query Performance Prediction
###### Abstract
Despite the retrieval effectiveness of queries being mutually independent of one another, the evaluation of query performance prediction (QPP) systems has been carried out by measuring rank correlation over an entire set of queries. Such a listwise approach has a number of disadvantages, notably that it does not support the common requirement of assessing QPP for individual queries. In this paper, we propose a pointwise QPP framework that allows us to evaluate the quality of a QPP system for individual queries by measuring the deviations between each prediction versus the corresponding true value, and then aggregating the results over a set of queries. Our experiments demonstrate that this new approach leads to smaller variances in QPP evaluations across a range of different target metrics and retrieval models.
(CEUR-WS.org) 2023: Query Performance Prediction and Its Evaluation in New Tasks, co-located with 45th European Conference on Information Retrieval (ECIR) from the 2nd to the 6th of April 2023 in Dublin, Ireland
## 1 Introduction
Query performance prediction (QPP) methods have been proposed to automatically estimate the retrieval effectiveness for queries without making use of any true relevance information (e.g. [1, 2]). In practice, a QPP method allows us to dynamically adjust the processing steps for a query, depending on its initial performance estimate. Although estimating the performance of individual queries independently is a common requirement in many downstream tasks (e.g., adaptive query processing [3]), the standard QPP evaluation methodology adopted by the IR research community has previously involved a **listwise** approach, rather than a **pointwise** one. This is despite the fact that the latter represents a more appropriate strategy for use in downstream applications. To elaborate, a listwise approach operates on a _set of queries_ \(\mathcal{Q}\) by first converting it into an ordered set as induced by the QPP estimated scores \(\phi(Q)\,\forall Q\in\mathcal{Q}\). It then computes a rank correlation measure, such as Kendall's \(\tau\), between this predicted ordering and the ground-truth ordering of the queries as induced by their average precision (AP) values [4] or by any other IR metric, such as nDCG [5].
A major disadvantage of listwise QPP approaches is that evaluation is conducted in a relative manner, so the performance of one query is measured relative to the others. However, a downstream performance estimate of an individual query also needs to be evaluated independently of the other queries. In contrast, a pointwise approach measures the effectiveness on individual queries, and then, if required, aggregates the results over a complete set. This is analogous to measuring the retrieval effectiveness metric MAP by computing the average precision values for individual queries and then aggregating them. Pointwise evaluation also allows us to carry out a per-query analysis of a method often leading to useful insights. For instance, Buckley [6] found that, by performing an extensive per-topic retrieval analysis, they were able to identify queries where most IR systems fail to retrieve relevant documents. However, a listwise evaluation methodology is not conducive to performing this kind of detailed per-query analysis.
Another drawback of listwise methods is that they can be overly sensitive to the configuration setup used for evaluation. The two most important such configurations are: i) the target retrieval evaluation metric that induces a ground-truth ordering over the set of queries; ii) the retrieval model used to obtain the top-\(k\) set of documents for QPP estimation. Indeed, variations in these configurations can lead to both large standard deviations in the reported rank correlation measures and significant differences in the relative ranks of various QPP systems [7]. To address the limitations of listwise methods, we propose a new QPP evaluation framework, **Aggregated Pointwise Absolute Errors** (**APAE**), which is shown to not only be consistent with the existing listwise approaches, but also to be more robust to changes in QPP experimental setup.
## 2 A Framework for Pointwise QPP Evaluation
**Correlation with listwise ground-truth.** Before describing our new QPP evaluation framework APAE, we begin by introducing the required notation. Formally, a QPP estimate is a function of the form \(\phi(Q,M_{k}(Q))\mapsto\mathbb{R}\), where \(M_{k}(Q)\) is the set of top-\(k\) ranked documents retrieved by an IR model \(M\) for a query \(Q\in\mathcal{Q}\), a benchmark set of queries.
For the purpose of listwise evaluation, for each \(Q\in\mathcal{Q}\), we first compute the value of a target IR evaluation metric, \(\mu(Q)\) that reflects the quality of the retrieved list \(M_{k}(Q)\). The next step uses these \(\mu(Q)\) scores to induce a _ground-truth ranking_ of the set \(\mathcal{Q}\), or in other words, arrange the queries by their decreasing (or increasing) \(\mu(Q)\) values, i.e.,
\[\mathcal{Q}_{\mu}=\{Q_{i}\in\mathcal{Q}:\mu(Q_{i})>\mu(Q_{i+1}),\,\forall i=1,\ldots,|\mathcal{Q}|-1\} \tag{1}\]
Similarly, the evaluation framework also yields a _predicted ranking_ of the queries, where this time the queries are sorted by the QPP estimated scores, i.e.,
\[\mathcal{Q}_{\phi}=\{Q_{i}\in\mathcal{Q}:\phi(Q_{i})>\phi(Q_{i+1}),\,\forall i =1,\ldots,|\mathcal{Q}|-1\} \tag{2}\]
A listwise evaluation framework then computes the rank correlation between these two ordered sets, \(\gamma(\mathcal{Q}_{\mu},\mathcal{Q}_{\phi})\), where \(\gamma:\mathbb{R}^{|\mathcal{Q}|}\times\mathbb{R}^{|\mathcal{Q}|}\mapsto[0,1]\) is a correlation measure, such as Kendall's \(\tau\).
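A minimal sketch of this listwise evaluation is given below; `mu` and `phi` are assumed to be dictionaries mapping each query to its true metric value and its predicted QPP score, respectively. Since Kendall's \(\tau\) depends only on the induced orderings, it can be computed directly from the two score lists.

```python
from scipy.stats import kendalltau

def listwise_qpp_eval(queries, mu, phi):
    """Rank correlation between true metric values mu(Q) and QPP estimates phi(Q)."""
    true_scores = [mu[q] for q in queries]
    pred_scores = [phi[q] for q in queries]
    tau, _ = kendalltau(true_scores, pred_scores)
    return tau
```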
**Individual ground-truth.** In contrast to listwise evaluations, where the ground-truth takes the form of an ordered set of queries, pointwise QPP evaluation involves making \(|\mathcal{Q}|\) _independent comparisons_. Each comparison is made between a query \(Q\)'s predicted QPP score \(\phi(Q)\) and its retrieval effectiveness measure \(\mu(Q)\), i.e.,
\[\eta(\mathcal{Q},\mu,\phi)\stackrel{{\text{def}}}{{=}}\frac{1}{| \mathcal{Q}|}\sum_{Q\in\mathcal{Q}}\eta(\mu(Q),\phi(Q)) \tag{3}\]
Unlike the rank correlation \(\gamma\), here \(\eta\) is a pointwise correlation function of the form \(\eta:\mathbb{R}\times\mathbb{R}\mapsto\mathbb{R}\). It is often convenient to think of \(\eta\) as the inverse of a _distance_ function that measures the extent to which a predicted value deviates from the corresponding true value. In contrast to ground-truth evaluation metrics, most QPP estimates (e.g., NQC, WIG etc.) are not bounded within \([0,1]\). Therefore, to employ a distance measure, each QPP estimate \(\phi(Q)\) must be normalized to the unit interval. Subsequently, \(\eta\) can be defined as \(\eta(\mu(Q),\phi(Q))\stackrel{\text{def}}{=}1-|\mu(Q)-\phi(Q)/\aleph|\), where \(\aleph\) is a normalization constant chosen sufficiently large to keep this quantity positive.
**Selecting an IR metric for pointwise QPP evaluation.** In general, an unsupervised QPP estimator will be agnostic with respect to the target IR metric \(\mu\). For instance, NQC scores can be seen as being approximations of AP@100 values, but can also be interpreted as approximating any other metric, such as nDCG@20 or P@10. Therefore, a question arises around which metric should be used to compute the individual correlations in Equation 3. Of course, the results can differ substantially for different choices of \(\mu\), e.g., AP or nDCG. This is also the case for listwise QPP evaluation, as reported in [7]. To reduce the effect of such variations, we now propose a simple yet effective solution.
**Metric-agnostic pointwise QPP evaluation.** For a set of evaluation functions \(\mu\in\mathcal{M}\) (e.g., \(\mathcal{M}=\{\text{AP@100},\text{nDCG@20},\ldots\}\)), we employ an aggregation function to compute the overall pointwise correlation (Equation 3) of a QPP estimate with respect to each metric. Formally,
\[\eta(Q,\mathcal{M},\phi)=\Sigma_{\mu\in\mathcal{M}}(1-|\mu(Q)-\phi(Q)/\aleph|), \tag{4}\]
where \(\Sigma\) denotes an aggregation function (it does not indicate summation). In particular, we use the most commonly-used such functions as choices for \(\Sigma\): 'minimum', 'maximum', and 'average', i.e., \(\Sigma\in\{\text{avg},\min,\max\}\). Next, we find the average over these values computed for a given set of queries \(\mathcal{Q}\), i.e., we substitute \(\eta(Q,\mathcal{M},\phi)\) from Equation 4 into the summation of Equation 3.
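A minimal sketch of the resulting APAE computation (Equations 3 and 4) is shown below; each element of `metrics` is assumed to be a dictionary mapping queries to the values of one IR metric, `phi` maps queries to (unnormalized) QPP estimates, and `aleph` is the normalization constant described earlier.

```python
from statistics import mean

def apae(queries, metrics, phi, aleph, aggregate=mean):
    """Aggregate 1 - |mu(Q) - phi(Q)/aleph| over the metrics in `metrics` for
    each query (avg, min or max), then average the result over the query set."""
    per_query = [
        aggregate(1.0 - abs(m[q] - phi[q] / aleph) for m in metrics)
        for q in queries
    ]
    return sum(per_query) / len(per_query)

# Example call (names are illustrative):
# apae(queries, [ap100, ndcg20], nqc_scores, aleph=max(nqc_scores.values()), aggregate=min)
```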
## 3 Experiments
\begin{table}
\begin{tabular}{l|l} \hline QPP Methods & AvgIDF [8], Clarity [9], NQC [10], WIG [11], UEF(Clarity), UEF(NQC), UEF(WIG) [2] \\ \hline IR Metrics & AP@100, nDCG@100, P@10, Recall@100 \\ \hline IR Models & LMJM (\(\lambda=0.6\)), LMDir (\(\mu=1000\)), BM25 \((k,b)=(0.7,0.3)\) \\ \hline \end{tabular}
\end{table}
Table 1: QPP configurations (QPP methods, IR metrics, and IR models) used to measure variations.
A QPP experiment context [7] involves three configuration choices: i) the **QPP method** itself that is used to predict the relative performance of queries; ii) the **IR metric** that is used to obtain a ground-truth ordering of the query performances as measured on a set of top-\(k\) (\(k=100\) in our experiments) documents retrieved by iii) a specific **IR model**. Table 1 summarizes the QPP methods, IR metrics, and IR models used in our experiments, along with the relevant hyper-parameter values. The objective of our experiments is to investigate the following two key research questions:
* **RQ1**: Does APAE _agree_ with the standard listwise correlation metrics?
* **RQ2**: How _robust_ is APAE with respect to changes in the QPP experiment context?
An affirmative answer to **RQ1** would indicate that our proposed metric APAE is _consistent_ with existing metrics used for QPP evaluation, while an affirmative answer to **RQ2** would suggest that APAE is preferable to existing methods due to its higher stability with respect to different experimental settings.
We conduct our QPP experiments on the TREC Robust dataset, which consists of \(249\) topics. Following the standard practice for QPP experiments [5, 12], we report results aggregated over a total of 30 randomly chosen equal-sized train-test splits of the data. The training split of each partition was used for tuning the hyper-parameters for the QPP method.
**Agreement between listwise and pointwise evaluation.** Firstly, we investigate the consistency of APAE with respect to three standard listwise QPP evaluation metrics: Pearson's \(r\), Spearman's \(\rho\) and Kendall's \(\tau\); and a pointwise approach, scaled Absolute Rank Error (sARE) [13]. Since sARE is an error measure, we measure correlations of APAE with \(1-\text{sARE}\) measures (which for the sake of simplicity, we refer to as sARE in Table 2). We experiment with three different instances of APAE obtained by substituting the aggregation functions - avg, min and max as \(\Sigma\) in Equation 4, denoted respectively as \(\eta_{\text{avg}}(\mathcal{M})\), \(\eta_{\text{min}}(\mathcal{M})\) and \(\eta_{\text{max}}(\mathcal{M})\).
The results presented in Table 2 answer **RQ1** in the affirmative. Each reported value here corresponds to the rank correlation (Kendall's \(\tau\)) between the relative ranks of the QPP systems ordered by their effectiveness as computed via one of the standard metrics (one of \(r\), \(\rho\), \(\tau\) or sARE) and APAE, i.e., one of \(\eta_{\text{avg}}(\mathcal{M})\), \(\eta_{\text{min}}(\mathcal{M})\) and \(\eta_{\text{max}}(\mathcal{M})\). The high correlation values between the standard listwise and the proposed pointwise metrics show that APAE can be used as a substitute for the standard listwise evaluation. Notably, we see that the average aggregate function yields the best results, and hence for the subsequent experiments we use \(\eta_{\text{avg}}(\mathcal{M})\) as the pointwise evaluation metric.
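As a rough illustration of how the values in Table 2 are obtained, the sketch below orders a handful of QPP systems once by a listwise score and once by an APAE score (both sets of numbers are made-up placeholders, not values from this paper) and reports the Kendall's \(\tau\) between the two orderings using `scipy.stats.kendalltau`.

```python
# Meta-evaluation sketch: correlate the ranking of QPP systems by a listwise
# measure with their ranking by APAE. All scores below are placeholders.
from scipy.stats import kendalltau

listwise_tau = {"NQC": 0.48, "WIG": 0.42, "Clarity": 0.31, "AvgIDF": 0.25}
apae_score = {"NQC": 0.81, "WIG": 0.78, "Clarity": 0.70, "AvgIDF": 0.66}

systems = sorted(listwise_tau)                 # fixed system order
x = [listwise_tau[s] for s in systems]
y = [apae_score[s] for s in systems]
tau, _ = kendalltau(x, y)                      # agreement between the two rankings
print(f"Kendall's tau between system rankings: {tau:.3f}")
```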
| IR Model | APAE variant | \(r\) | \(\rho\) | \(\tau\) | sARE |
| --- | --- | --- | --- | --- | --- |
| BM25 | \(\eta_{\text{avg}}(\mathcal{M})\) | 0.810 | 0.810 | 0.905 | 0.887 |
| BM25 | \(\eta_{\text{min}}(\mathcal{M})\) | 0.778 | 0.778 | 0.794 | 0.813 |
| BM25 | \(\eta_{\text{max}}(\mathcal{M})\) | 0.802 | 0.810 | 0.794 | 0.794 |
| LMDir | \(\eta_{\text{avg}}(\mathcal{M})\) | **0.905** | **0.810** | **0.905** | **0.887** |
| LMDir | \(\eta_{\text{min}}(\mathcal{M})\) | 0.778 | 0.794 | 0.794 | 0.810 |
| LMDir | \(\eta_{\text{max}}(\mathcal{M})\) | 0.769 | 0.782 | 0.794 | 0.796 |
| LMJM | \(\eta_{\text{avg}}(\mathcal{M})\) | 0.810 | 0.810 | 0.810 | 0.846 |
| LMJM | \(\eta_{\text{min}}(\mathcal{M})\) | 0.794 | 0.794 | 0.782 | 0.786 |
| LMJM | \(\eta_{\text{max}}(\mathcal{M})\) | 0.794 | 0.769 | 0.810 | 0.846 |

Table 2: The correlation of our proposed pointwise evaluation metric APAE with the standard listwise metrics - Pearson's \(r\), Spearman's \(\rho\), Kendall's \(\tau\) and sARE. The rank correlations between each pair of QPP system ranks (evaluated with a listwise measure and a pointwise measure) were computed with Kendall's \(\tau\). The high values indicate that the pointwise measurement can effectively _substitute_ a standard list-based measure, since they lead to a fairly similar relative ordering between the effectiveness of different QPP methods.
**Variances in relative effectiveness of QPP methods.** To investigate **RQ2**, we consider the relative stability of QPP system ranks for variations in QPP contexts (i.e., different IR models and target metrics), comparing both listwise and pointwise approaches (see Table 3). To clarify with an example, if working with three QPP methods, say AvgIDF, NQC, WIG, we observe that \(\tau(\text{NQC})>\tau(\text{WIG})>\tau(\text{AvgIDF})\) for LMDir as measured relative to AP@100. We expect
| Metric | Reference model | LMJM (0.6) | BM25 (0.7,0.3) | BM25 (0.3,0.7) | LMDir (500) | LMDir (1000) |
| --- | --- | --- | --- | --- | --- | --- |
| AP@100 | LMJM (0.3) | 0.826 | 0.904 | 0.819 | 0.714 | 0.895 |
| nDCG@100 | LMJM (0.3) | 0.780 | 0.694 | 0.695 | 0.759 | 0.759 |
| R@100 | LMJM (0.3) | 0.824 | 0.769 | 0.782 | 0.904 | 0.904 |
| AP@100 | LMJM (0.6) | | 0.703 | 0.712 | 0.904 | 0.823 |
| nDCG@100 | LMJM (0.6) | | 0.781 | 0.827 | 0.811 | 0.811 |
| R@100 | LMJM (0.6) | | 0.813 | 0.725 | 0.731 | **0.675** |
| AP@100 | BM25 (0.7,0.3) | | | 0.903 | 0.785 | 0.785 |
| nDCG@100 | BM25 (0.7,0.3) | | | 0.897 | 0.786 | 0.786 |
| R@100 | BM25 (0.7,0.3) | | | 0.812 | 0.752 | 0.779 |
| AP@100 | BM25 (0.3,0.7) | | | | 0.887 | 0.882 |
| nDCG@100 | BM25 (0.3,0.7) | | | | 0.901 | 0.895 |
| R@100 | BM25 (0.3,0.7) | | | | 0.889 | 0.901 |
| AP@100 | LMDir (500) | | | | | 0.901 |
| nDCG@100 | LMDir (500) | | | | | 0.893 |
| R@100 | LMDir (500) | | | | | 0.903 |

Table 3: Stability of the proposed pointwise QPP metric APAE with respect to listwise approach, across different pairs of IR metrics and IR models. Red cells indicate the lowest value in each group, while the lowest values along each column are bold-faced.
to observe a similar ordering for a different choice of the IR model and target IR metric, say BM25 with nDCG@100. As in our previous experiments, here we measure the rank correlations between a total of seven QPP systems (see Table 1) via Kendall's \(\tau\).
## 4 Concluding Remarks
Unlike the standard listwise QPP evaluation mechanism of measuring an overall rank correlation with respect to a reference ranking of the queries (in terms of retrieval effectiveness), we have proposed a pointwise evaluation method that computes the relative difference between a normalized QPP score and a true IR evaluation measure (e.g., AP@100 or nDCG@20). Our experiments demonstrated that the proposed metric exhibits a high correlation with standard listwise approaches and is more robust to changes in QPP experimental setup than listwise evaluation measures. Using this metric, it should thus be possible to evaluate the effectiveness of different QPP methods on downstream tasks on a per-query basis.
**Acknowledgement.** The first and the third authors were supported by the Science Foundation Ireland (SFI) grant number SFI/12/RC/2289_P2.
|
2306.17015 | Evidence for additional third-order transitions in the two-dimensional
Ising model | We employ the microcanonical inflection-point analysis method, developed for
the systematic identification and classification of phase transitions in
systems of any size, to study the two-dimensional Ising model at various
lattice sizes and in the thermodynamic limit. Exact results for the density of
states, which were obtained by exact algorithmic computation, provide evidence
for higher-order transitions in addition to the well-studied second-order
ferromagnetic-paramagnetic phase transition. An independent third-order phase
transition is identified in the ferromagnetic phase, whereas another
third-order transition resides in the paramagnetic phase. The latter is a
dependent transition, i.e., it is inevitably associated with the critical
transition, but it remains separate from the critical point in the
thermodynamic limit. For a deeper insight into the nature of these additional
transitions, a detailed analysis of spin clusters is performed. | Kedkanok Sitarachu, Michael Bachmann | 2023-06-29T15:08:27Z | http://arxiv.org/abs/2306.17015v1 | # Evidence for Additional Third-Order Transitions
###### Abstract
We employ the microcanonical inflection-point analysis method, developed for the systematic identification and classification of phase transitions in systems of any size, to study the two-dimensional Ising model at various lattice sizes and in the thermodynamic limit. Exact results for the density of states, which were obtained by exact algorithmic computation, provide evidence for higher-order transitions in addition to the well-studied second-order ferromagnetic-paramagnetic phase transition. An independent third-order phase transition is identified in the ferromagnetic phase, whereas another third-order transition resides in the paramagnetic phase. The latter is a dependent transition, i.e., it is inevitably associated with the critical transition, but it remains separate from the critical point in the thermodynamic limit. For a deeper insight into the nature of these additional transitions, a detailed analysis of spin clusters is performed.
## I Introduction
The (Lenz-)Ising model was introduced about a century ago for studies of the impacts of attractive local spin-spin interaction upon macroscopic cooperative ordering across the entire system [1; 2]. As it turned out, the one-dimensional spin chain does not exhibit signs of a thermodynamic phase transition. It took almost two decades to solve the two-dimensional problem and to reveal the prominent second-order phase transition that separates the paramagnetic and the ferromagnetic phase [3; 4]. In the following decades, the simplicity and versatility of the model, an increased interest in understanding the origins of phase transitions, and the ever growing available computer power made the Ising model one of the most widely employed generic models for studies of complexity.
Traditional theory dictates that phase transitions can only occur in the thermodynamic limit, which is where energetic and configurational response parameters tend to exhibit nonanalyticities at the transition point. From a modern point of view, this strict definition was mostly a reference to the mathematical tractability of complex problems. For the same reason, most studies of phase transitions were performed by employing canonical statistical analysis techniques. However, this approach is known to lead to problems in interpreting signals in response functions for systems of finite size. With fields like nano- and biosciences moving into the focus of statistical analysis, where cooperative system behavior is governed or at least strongly influenced by finite-size effects, the theory of phase transitions has to be extended and statistical analysis techniques appropriately adapted.
The significant evolution of computational resources throughout the last decades now allows algorithmic access to problems where a mathematical approach is not manageable. As desirable as a rigorous treatment is, computational methods offer additional options for estimating or calculating quantities that are virtually inaccessible mathematically. One of the most interesting such quantities is the number (or density) \(g(E)\) of microstates with system energy \(E\). The logarithm of the density of states is commonly interpreted as the microcanonical entropy [5]
\[S(E)=k_{\mathrm{B}}\ln g(E). \tag{1}\]
The generalized microcanonical inflection-point analysis method was introduced for the study of systems of any size [6]. It rests on the principle of minimal sensitivity [7; 8] in the interplay between the configurational entropy \(S(E)\) and the system energy \(E\). In the microcanonical theory of phase transitions, these are considered the central quantities that govern effects competing with each other for dominance in the respective phases [5; 9]. In consequence, their balance ensures a stable equilibrium state. In our method, the entropy and its derivatives with respect to energy are systematically analyzed to identify and classify transition signals uniquely [6]. The idea is similar to Ehrenfest's approach to identifying and classifying phase transitions by means of nonanalyticities in derivatives of thermodynamic potentials [10]. However, the Ehrenfest scheme cannot be systematically extended to accommodate finite systems as nonanalyticities can only occur in the (hypothetical) thermodynamic limit.
We recently employed our method to analyze the phase behavior of various Ising systems [6; 11; 12]. As expected, the inflection-point analysis did not reveal transition signatures for the one-dimensional Ising chain. However, Ising strips and the two-dimensional (2D) Ising model on the square lattice exhibit a variety of transition signals. Particularly interesting are the higher-order transitions we found for the 2D Ising model in addition to the well-studied critical transition. According to our classification scheme, the critical transition is a second-order _independent_ transition, whereas an additional _dependent_ third-order transition was identified in the paramagnetic phase
that is inevitably linked to the critical transition. It can be interpreted as the precursor of the critical transition in the disordered phase. Another independent transition is located in the ordered phase. In this paper, we provide evidence that these two additional transitions remain separate from the critical transition in the thermodynamic limit and thus can be considered phase transitions in the more general context provided by the microcanonical theory. By performing a detailed analysis of spin clusters, we also shed light on the character of these additional transitions.
It is worth noting that the microcanonical inflection-point analysis method has not only been successfully employed for spin systems, but also in studies of macromolecular systems [6; 13; 14]. It has proven useful as a foundation for a better understanding of general geometric properties of phase transitions [15; 16] as well.
The paper is organized as follows: The Ising model, computational techniques, and the microcanonical analysis method are briefly reviewed in Section II. Results obtained by microcanonical inflection-point analysis are presented in Section III. Properties of the additional transitions identified by means of spin-cluster analyses are discussed in Section IV. The summary of the major results in Section V concludes the paper.
## II Microcanonical statistical analysis and cluster simulations of the 2D Ising model
In the following, we briefly review the Ising model, the microcanonical inflection-point analysis method, and the simulation methodology used for the cluster analysis.
### Ising model
In the two-dimensional Ising model [1; 2] with periodic boundary conditions and absent external magnetic field, the energy of the spin configuration \(\mathbf{X}=(s_{1},s_{2},\ldots,s_{N})\) with \(N=L\times L\) spins on a square lattice with edge lengths \(L\) can simply be written as
\[E(\mathbf{X})=-J\sum_{\langle i,j\rangle}s_{i}s_{j}. \tag{2}\]
Possible values of the spin orientation are \(s_{i,j}=\pm 1\). The symbol \(\langle i,j\rangle\) indicates that only interactions of the spins \(s_{i}\) and \(s_{j}\) are considered, if they are nearest neighbors on the lattice. The energy scale is fixed by the positive-valued coupling constant \(J>0\) (ferromagnetic coupling).
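As a minimal illustration (with \(J=1\)), the energy (2) with periodic boundary conditions can be evaluated by summing each nearest-neighbour bond exactly once, e.g. over the right and downward neighbours; the short NumPy sketch below is our own illustration of this bookkeeping.

```python
import numpy as np

def ising_energy(spins, J=1.0):
    """Energy of Eq. (2) on a periodic square lattice; each bond counted once."""
    right = np.roll(spins, -1, axis=1)   # right neighbour (wraps around)
    down = np.roll(spins, -1, axis=0)    # lower neighbour (wraps around)
    return -J * np.sum(spins * (right + down))

spins = np.ones((8, 8), dtype=int)       # fully aligned configuration
print(ising_energy(spins))               # ground-state energy -2*J*N = -128
```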
### Microcanonical inflection-point analysis method
The microcanonical inflection-point analysis method, which utilizes the principle of least sensitivity [7; 8], was introduced to systematically identify and classify transition signals in systems of any size [6]. Like in canonical statistical analysis, the general assumption is that the interplay of entropy and energy governs the transition behavior.
In our method, least-sensitive inflection points of \(S(E)\), as defined in Eq. (1), and its derivatives are used to identify phase transitions. We denote the derivatives as follows: \(\beta(E)=dS(E)/dE\), \(\gamma(E)=d^{2}S(E)/dE^{2}\), and \(\delta(E)=d^{3}S(E)/dE^{3}\). Derivatives of higher order were not considered in this study.
As it turns out, it is useful to distinguish two types of transitions. _Independent_ transitions are analogs of the conventional transitions and their occurrence does not depend on other cooperative processes in the system. This is in contrast to _dependent_ transitions, which are inevitably associated with an independent transition. These transitions only occur at a higher energy (i.e., usually in the less-ordered phase), and they are of higher order than the corresponding independent transition. Therefore, dependent transitions can be considered precursors of a major independent transition. This may have noticeable consequences for applications: If a system currently in the disordered phase is adiabatically cooled down and a dependent transition signal is detected, a major phase transition is imminent upon further cooling.
_Independent_ transitions are classified as of odd order \((2n-1)\), where \(n\) is a positive integer, if the inflection point at transition energy \(E_{\rm tr}\) satisfies the condition
\[\frac{d^{(2n-1)}S(E)}{dE^{(2n-1)}}\bigg{|}_{E=E_{\rm tr}}>0, \tag{3}\]
whereas for even-order \((2n)\) independent transitions
\[\frac{d^{2n}S(E)}{dE^{2n}}\bigg{|}_{E=E_{\rm tr}}<0 \tag{4}\]
holds. Inflection points are associated with even-order \((2n)\)_dependent_ transitions, if
\[\frac{d^{2n}S(E)}{dE^{2n}}\bigg{|}_{E=E_{\rm tr}}>0, \tag{5}\]
and odd-order \((2n+1)\) dependent transitions are characterized by
\[\frac{d^{(2n+1)}S(E)}{dE^{(2n+1)}}\bigg{|}_{E=E_{\rm tr}}<0. \tag{6}\]
For finite 2D Ising systems, it is convenient to use the exact algorithmic evaluation schemes introduced in Refs. [17; 18] to determine the density of states. The latter method also allows for an extrapolation toward the thermodynamic limit, which will eventually permit us to decide whether or not transitions identified by means of the inflection-point method will survive in this limit. The derivatives of the microcanonical entropy are then obtained by numerical differentiation [12].
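As a rough numerical sketch of this workflow (our own illustration), one can form \(S(E)\) and its first three derivatives by finite differences and flag the extremum of \(\gamma\) as the candidate least-sensitive inflection point of \(\beta\); the entropy used below is only a smooth synthetic stand-in, since the exact density-of-states data is what the actual analysis relies on.

```python
import numpy as np

# Synthetic, smooth stand-in for S(E) = ln g(E) on a discrete energy grid (k_B = 1).
N = 100 * 100                                   # number of spins
e = np.linspace(-1.99, -0.01, 400)              # energy per spin
E = e * N
x = (e + 2.0) / 4.0
S = -N * (x * np.log(x) + (1.0 - x) * np.log(1.0 - x))

beta = np.gradient(S, E)                        # dS/dE
gamma = np.gradient(beta, E)                    # d^2S/dE^2
delta = np.gradient(gamma, E)                   # d^3S/dE^3, inspected for 3rd-order signals

# A least-sensitive inflection point of beta is a local extremum of gamma; with
# exact 2D Ising data this locates the critical signal near e_c ~ -1.414.
i = int(np.argmax(gamma))
print("candidate second-order signal at e =", e[i])
```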
### Wolff cluster algorithm
For the study of cluster properties of the Ising system, we employed the Wolff single-cluster algorithm [19]. Instead of performing single spin-flip Monte Carlo updates, in this method an entire cluster of spins is updated in a single step. This is most efficient near the critical point and in the subcritical ferromagnetic region, where the majority cluster coexists with smaller minority clusters.
In this simple yet powerful Monte Carlo method, one spin in the system is selected randomly. Then, nearest-neighbor spins with the same orientation are identified and added to the (stochastic) Wolff cluster with probability \(p=1-\exp(-2\beta J)\), where \(\beta=1/k_{\rm B}T\) is the inverse thermal energy at temperature \(T\) (the Boltzmann constant \(k_{\rm B}\) was set to unity in the simulations and in the subsequent analysis). The process of adding spins to the Wolff cluster is repeated until all spins belonging to the same geometric cluster have been tested and the construction of the Wolff cluster is complete. Eventually, all spins in this cluster are flipped. For the identification of a geometric cluster, we used the standard labeling technique introduced by Hoshen and Kopelman [20].
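A minimal sketch of the single-cluster update described above (our own illustration on a periodic \(L\times L\) lattice, with \(J=1\) and \(k_{\rm B}=1\)) is given below; the stack-based growth is equivalent to the recursive construction of the Wolff cluster.

```python
import numpy as np

def wolff_update(spins, beta, J=1.0, rng=None):
    """One Wolff single-cluster update on a periodic L x L lattice (in place)."""
    rng = np.random.default_rng() if rng is None else rng
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta * J)        # bond-activation probability
    i, j = rng.integers(0, L, size=2)            # random seed spin
    seed = spins[i, j]
    in_cluster = np.zeros_like(spins, dtype=bool)
    in_cluster[i, j] = True
    stack = [(i, j)]
    while stack:                                  # grow the stochastic cluster
        x, y = stack.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = (x + dx) % L, (y + dy) % L
            if (not in_cluster[nx, ny] and spins[nx, ny] == seed
                    and rng.random() < p_add):
                in_cluster[nx, ny] = True
                stack.append((nx, ny))
    spins[in_cluster] *= -1                       # flip the whole Wolff cluster
    return int(in_cluster.sum())                  # cluster size (useful diagnostic)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(500):                              # thermalize near T_c
    wolff_update(spins, beta=1.0 / 2.269, rng=rng)
print("magnetization per spin:", spins.mean())
```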
## III Transition signals from microcanonical analysis
Exact algorithmic methods [17; 18] were employed to determine the densities of states of the 2D Ising model with periodic boundary conditions for lattice sizes with up to \(320\times 320\) spins. The method by Haggkvist et al. [18] also allows one to find the density of states in the thermodynamic limit (\(L\to\infty\)), which is key to judging whether or not the additional third-order transitions predicted previously [11; 6; 12] survive in this limit. Based on the exact data obtained from these algorithms, the microcanonical inflection-point analysis method was then used to identify transitions in the curves of the microcanonical entropy and its derivatives.
The microcanonical results are shown in Fig. 1. The quantities are properly rescaled to account for obvious system size dependence and plotted as functions of the energy per spin, \(e=E/L^{2}\). Rescaled entropy and \(\beta\) curves in Fig. 1(a) and 1(b), respectively, do not exhibit much system size dependence on the scales plotted. However, whereas there is no inflection point in the entropy, the \(\beta\) curves do possess a unique least-sensitive inflection point, which indicates the well-studied critical transition separating the ferromagnetic from the paramagnetic phase. According to our microcanonical classification scheme, it satisfies the criteria of an _independent_ second-order phase transition. In the thermodynamic limit, the critical transition energy per spin is \(e_{\rm c}\approx-1.414\) and the critical temperature coincides with Onsager's result: \(T_{\rm c}=2/\ln(1+\sqrt{2})\equiv 1/\beta(e_{\rm c})\approx 2.269\), as expected. Towards the thermodynamic limit (\(L\to\infty\)), the slope converges to zero, as can clearly be seen in the next derivative \(\gamma(E)\), shown in Fig. 1(c). Interestingly, the smooth peak visible for finite systems turns into a cusp in the thermodynamic limit. Consequently, the nondifferentiability of \(\gamma\) at the critical transition energy leads to a discontinuity in the next-higher derivative \(\delta\) [Fig. 1(d)].
In addition to the critical transition, the microcanonical inflection-point analysis method identifies two additional transitions of higher order.
Figure 1: (a) Microcanonical entropy per spin \(S(e)/L^{2}\) and its derivatives (b) \(\beta(e)\), (c) \(\gamma(e)L^{2}\), and (d) \(\delta(e)L^{4}\) for various system sizes, plotted as functions of the energy per spin \(e=E/L^{2}\). The dashed vertical lines mark the transition energies per spin associated with the three transitions found in the 2D Ising model. For reference, the critical energy per spin is \(e_{\rm c}\approx-1.414\).
An _independent_ third-order transition (fourth order for \(L\leq 64\)) is identified in the ferromagnetic phase. The corresponding least-sensitive inflection point in \(\gamma\) [Fig. 1(c)] leads to a pronounced positive-valued local minimum in \(\delta(e)\). In the thermodynamic limit, the transition energy is \(e_{\rm ind}\approx-1.502\), which corresponds to the transition temperature \(T_{\rm ind}\approx 2.229\).
Equally interesting is the occurrence of the _dependent_ third-order transition in the paramagnetic phase. As it is inevitably coupled to the critical transition, it can be imagined as a precursor of this major transition in the disordered phase. The least-sensitive inflection point in \(\gamma(e)\), which converges to the transition energy \(e_{\rm dep}=-1.053\) (corresponding to the transition temperature \(T_{\rm dep}=2.567\)) in the thermodynamic limit, is characterized by a negative-valued peak in \(\delta\). The inset in Fig. 1(d) shows that this peak is also present in the thermodynamic limit.
Whereas these additional transitions do not exhibit nonanalytic features in the way the critical transition does, the distinct signals indicating their existence survive in the thermodynamic limit and do not converge toward the critical point. This is a remarkable result as the subphases between them and the critical point create an "atmosphere" surrounding the critical transition. The dependent transition may potentially provide additional clues as to the loss of identity in the system when it approaches the critical point upon cooling. However, it is important to emphasize that the third-order transition in the ferromagnetic phase is independent of the critical transition and thus does not necessarily help to better understand the approach toward the critical transition upon adiabatic heating. It does not serve as a precursor of it in the way the dependent transition in the paramagnetic phase does.
Figure 2 contains the results for the transition temperatures obtained by microcanonical analysis for various lattice sizes and in the thermodynamic limit (dashed lines). It is important to note that the additional third-order transitions neither disappear nor converge toward the critical point in the thermodynamic limit. The transition temperatures remain well-separated from the critical temperature, but the microcanonical transition features do not develop into non-analyticities. Hence, these transitions are not phase transitions in the conventional Ehrenfest scheme. However, it should be reiterated that significant changes in system behavior in modern scientific problems and industrial applications - for many of which the thermodynamic limit is a nonsensical simplification - are not signaled by catastrophic changes in observables and data, but are rather subtle. Processes like folding and aggregation transitions of macromolecules, weather phenomena, swarm formation, and even synchronization in computer networks and social behavior occur on mesoscopic rather than macroscopic length scales. In fact, the early detection of sublying patterns leading to a catastrophic event may be more important and revealing than a thorough study of the major transition itself.
## IV Analysis of spin clusters
We now discuss the results obtained by Wolff spin-cluster simulations and cluster analysis to shed more light on the system behavior associated with the additional transitions in the Ising model identified by microcanonical inflection-point analysis.
### Third-order dependent transition in the paramagnetic phase
In order to gain more insight into the nature of the additional third-order transitions identified for the 2D Ising model, cluster simulations were performed and cluster sizes analyzed by means of canonical statistical analyses of suitable order parameters. A typical example of a spin configuration on the square lattice with 1500\(\times\)1500 with all clusters colored differently is shown in Fig. 3.
The first quantity we take a closer look at is the average cluster size, \(\langle A\rangle\). We define \(A\) as the average size of clusters containing more than a single spin in a given
Figure 2: Transition temperatures \(T_{\rm tr}\) obtained by microcanonical inflection-point analysis (MIPA) and cluster properties plotted as a function of \(L\). Symbols mark the transition temperatures at finite system size (solid lines are only guides to the eye). Horizontal dashed lines are located at the transition temperatures in the thermodynamic limit (\(L\rightarrow\infty\)) found by microcanonical analysis. For reference, the critical temperature is \(k_{\rm B}T_{c}/J=2/\ln(1+\sqrt{2})\approx 2.269\). The small uncertainties in the microcanonical results originate from the numerical error in locating the transition signals due to the necessity of using discrete differences methods for calculating derivatives.
spin configuration \(\mathbf{X}\):
\[A=\frac{1}{n^{\prime}}\sum_{l^{\prime}}C_{l^{\prime}}, \tag{7}\]
where \(l^{\prime}\) labels the clusters with more than one spin, \(C_{l^{\prime}}\) is the number of spins in cluster \(l^{\prime}\), and \(n^{\prime}\) is the total number of clusters with more than one spin in \(\mathbf{X}\). The statistical average is then obtained as
\[\langle A\rangle=\frac{1}{Z}\sum_{\mathbf{X}}A(\mathbf{X})e^{-E(\mathbf{X})/k_ {\mathrm{B}}T}, \tag{8}\]
where \(T\) is the canonical temperature and \(Z=\sum_{\mathbf{X}}\exp[-E(\mathbf{X})/k_{\mathrm{B}}T]\) is the canonical partition function.
As mentioned, spin configurations at different temperatures were obtained in Wolff cluster simulations [19]. At each temperature, up to \(10^{8}\) spin configurations were generated. Spin clusters were labeled by means of the Hoshen-Kopelman method [20] and the average cluster size \(A\) in a given configuration was determined. The canonical average of this quantity and its derivative with respect to the temperature are plotted as functions of the temperature for various lattice sizes in Fig. 4.
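A possible way to compute \(A\) from a single configuration is sketched below; `scipy.ndimage.label` stands in for the Hoshen-Kopelman labeling, and, as a simplification, clusters are not wrapped across the periodic boundaries here. Averaging the returned value over configurations generated at fixed temperature yields the canonical estimate of \(\langle A\rangle\) in Eq. (8).

```python
import numpy as np
from scipy import ndimage

def mean_multispin_cluster_size(spins):
    """A from Eq. (7): mean size of geometric clusters with more than one spin."""
    sizes = []
    for orientation in (+1, -1):
        labels, _ = ndimage.label(spins == orientation)   # 4-connected components
        counts = np.bincount(labels.ravel())[1:]          # drop the background label
        sizes.extend(counts[counts > 1])                  # keep multi-spin clusters only
    return float(np.mean(sizes)) if sizes else 0.0

spins = np.random.default_rng(1).choice([-1, 1], size=(64, 64))
print(mean_multispin_cluster_size(spins))
```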
Figure 4(a) shows that at low temperatures, in the ferromagnetic phase, the average cluster size decreases with increasing temperatures. Near the critical temperature, \(\langle A\rangle\) exhibits a backbending pattern, which becomes more pronounced for larger lattices. For temperatures \(T>T_{\mathrm{c}}\), i.e., in the paramagnetic phase, the average cluster size decreases again. The temperature derivative of the average cluster size, \(d\langle A\rangle/dT\), is shown in Fig. 4(b). It is a measure for the rate of change of the average cluster size with respect to the temperature. The curves for the different system sizes all show a prominent peak associated with the inflection point in the backbending pattern in Fig. 4(a). The peak location converges to the critical point, as expected. As a thermodynamic response quantity, it eventually becomes nonanalytic at the critical point in the thermodynamic limit.
More interesting is the inflection point of \(\langle A\rangle\) in the paramagnetic phase close to the dependent third-order transition identified by microcanonical analysis. It does not disappear even for the largest lattices simulated (1500\(\times\)1500). The curves of the derivative \(d\langle A\rangle/dT\) exhibit a local minimum and there is no indication for it to flatten out in the thermodynamic limit. Its close proximity to the transition temperature of the dependent third-order transition \(T_{\mathrm{dep}}\approx 2.567\) suggests that this feature is related to the transition.
The decrease of the average cluster size with increasing temperature is expected in the paramagnetic phase. However, it is noteworthy that this decrease accelerates for temperatures \(T<T_{\mathrm{dep}}\), before slowing down for \(T>T_{\mathrm{dep}}\). This is an unexpected system behavior; the average cluster size could simply drop monotonically in the paramagnetic phase (in which case the third-order dependent
Figure 3: Clusters identified in a typical spin configuration on the 1500\(\times\)1500 lattice in the paramagnetic phase at \(T=2.605\), which is just above the dependent-transition point \(T_{\mathrm{dep}}\approx 2.567\).
Figure 4: (a) Average cluster size \(\langle A\rangle\) and (b) derivative \(d\langle A\rangle/dT\) as functions of temperature \(T\) for the two-dimensional Ising model at various system sizes. The inset enlarges the area surrounding the dependent third-order transition in the paramagnetic phase. Note that cluster simulations for system sizes 800\(\times\)800 and 1500\(\times\)1500 were only performed for temperatures \(T\geq 2.46\). The black dashed line indicates the location of the dependent third-order transition in the thermodynamic limit as obtained from microcanonical analysis, \(T_{\mathrm{dep}}\approx 2.567\). The other dashed lines locate the critical second-order and the subcritical independent third-order transition, respectively.
transition would not exist). Although it seems to be a minor effect, this change of monotonicity is, in fact, an important signature of the catastrophic critical transition, because, as we have shown in the microcanonical analysis, these transitions are inevitably associated with each other. This means that the third-order dependent transition is a precursor of the critical transition in the paramagnetic phase, and - as our results from the cluster analysis show - is due to the change of the rate by which clusters decay in the disordered phase.
The estimates for the peak temperatures in \(d\langle A\rangle/dT\) in the paramagnetic phase have already been included in Fig. 2. They clearly converge to the third-order dependent transition temperature obtained by microcanonical analysis in the thermodynamic limit. Even for the finite lattices, the respective microcanonical estimate and the estimate from the cluster analysis are very close to each other, suggesting that the third-order transition signaled by microcanonical analysis is indeed due to the enhanced fluctuations about the average cluster size in this temperature region.
### Third-order independent transition in the ferromagnetic phase
For the study of properties of the third-order independent transition in the ordered (ferromagnetic) phase, we look at signs of emerging disorder and entropic variability, which is dependent on the formation of minority clusters in this phase, where the ferromagnetic states are always dominated by a majority cluster. The simplest of these is obviously what we call the "single-spin cluster", i.e., an isolated single spin surrounded by nearest-neighbor spins with opposite orientation. Figure 5 shows plots of the statistical average of the number of isolated spins \(\langle n_{1}\rangle\) as a function of temperature for two different
Figure 5: Average number of isolated spins per spin, \(\langle n_{1}\rangle\), as a function of temperature for two lattice sizes. Vertical dashed lines are located at the transition temperatures of the 2D Ising model obtained by microcanonical analysis.
Figure 6: Representative Ising configurations on the 200\(\times\)200 lattice at (a) \(T=2.10\), (b) \(T=2.23\), and (c) \(T=2.28\). In white areas, spins point up and in grey areas down. Isolated spins, independent of their orientation, are colored blue. The numbers of isolated spins divided by the total number of spins, \(n_{1}\), are: (a) 0.0169, (b) 0.0197, and (c) 0.0179.
lattice sizes. These results were also obtained in Wolff cluster simulations. Dashed vertical lines mark the transition points found by microcanonical analysis.
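Counting isolated spins in a configuration is a purely local operation; the sketch below (our own illustration, with periodic boundaries) flags a site as isolated when all four of its neighbours have the opposite orientation.

```python
import numpy as np

def isolated_spin_fraction(spins):
    """n1: fraction of spins whose four periodic neighbours all point opposite."""
    opposite = sum(np.roll(spins, shift, axis) != spins
                   for axis in (0, 1) for shift in (1, -1))
    return float(np.mean(opposite == 4))

spins = np.random.default_rng(2).choice([-1, 1], size=(200, 200))
print(isolated_spin_fraction(spins))    # ~1/16 for completely random spins
```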
Most noteworthy is the peak near the third-order independent transition temperature \(T_{\rm ind}\approx 2.229\) in the ferromagnetic phase and the subsequent drop toward the critical point. As expected, the number of isolated spins increases again with temperature in the paramagnetic phase.
The drop in the number of isolated spins in the ferromagnetic phase just below the critical point can be attributed to the dissolution of the majority cluster. Isolated spins serve as "bond breakers." Their increased numbers and the subsequent recombination into clusters of smaller size with more and more rugged fractal boundaries occur near the third-order transition temperature. These cluster structures are not present in the pure ferromagnetic phase below \(T_{\rm ind}\). In fact, clusters of intermediate size do not exist at all.
For example, an analysis of cluster sizes for the \(500\times 500\) lattice revealed that near \(T_{\rm ind}\) clusters of sizes in the range \(10\%-70\%\) of the system size are completely absent. The population of intermediate-size clusters rapidly increases toward the critical temperature, though. The isolated spins help seed the formation of these clusters. Representative configurations on the 200\(\times\)200 Ising lattice are shown in Fig. 6 for temperatures (a) below \(T_{\rm ind}\), (b) near \(T_{\rm ind}\), and (c) close to \(T_{\rm c}\). Note that the transition at \(T_{\rm ind}\approx 2.229\) is an independent transition, i.e., it is not associated with the critical transition.
In Fig. 2, we have already included the transition temperatures of this transition for various lattice sizes. Even for the largest system simulated in this phase, 1024\(\times\)1024 spins, the peak temperature read off from \(\langle n_{1}\rangle\) is very close to the microcanonical estimate at this system size. Most importantly, the transition temperature estimates for 1024\(\times\)1024 and even smaller lattice sizes, are located well within the narrow uncertainty region of the microcanonical estimate for the transition temperature in the thermodynamic limit. This increases the confidence that the peaking in the average number of isolated spins in the ferromagnetic phase is a major feature of the system behavior in the vicinity of this third-order transition. As expected, the finite-size transition temperatures of this additional transition do not converge toward the critical temperature, but remain separate from the critical point even in the thermodynamic limit.
## V Summary
The purpose of this study was twofold: First, it was necessary to verify that the recently found additional phase transitions in the two-dimensional Ising model flanking the critical transition remain present and well-separated from the critical transition in the thermodynamic limit. This goal could be achieved by microcanonical inflection-point analysis of the microcanonical entropy and the relevant derivatives [6] in the thermodynamic limit. This was made possible by means of the exact enumeration method for the density of states of the Ising model introduced by Haggkvist et al. [18].
The second objective was to find evidence of the third-order transitions in the way the Ising system forms clusters in both the paramagnetic and the ferromagnetic phase. For this purpose, extensive Wolff single-cluster simulations [19] for lattice systems with up to 1500\(\times\)1500 spins were performed and suitably introduced order parameters measured.
It turned out that the fluctuations of the average cluster size (excluding isolated single spins) become extremal at about the temperature of the third-order dependent transition in the paramagnetic phase. This suggests that a collective pre-ordering of spins occurs in this temperature region in the disordered phase as a precursor of the critical transition.
In the ferromagnetic phase, the average number of isolated spins peaks at the independent third-order transition temperature that was identified by microcanonical analysis. Here, the increased number of such "seeds" of disorder in the ferromagnetic phase enables the formation of critical clusters once the critical point is approached.
These results are encouraging and may initiate a search for higher-order transitions in other systems as well. Our analysis also shows that the study of sublying transitions in ordered and disordered phases can lead to a better understanding of the system-inherent reasons leading to major phase transitions. Dependent transitions, which are inevitably coupled to a major transition, are precursors of imminent global ordering processes in the disordered phase. The understanding of such precursor transitions may aid predicting significant ordering effects such as cooperativity and synchronization in complex systems before they actually happen.
###### Acknowledgements.
We thank the Georgia Advanced Computing Resource Center (GACRC) at the University of Georgia for providing computational resources.
|
2305.11134 | Generalized convolution quadrature based on the trapezoidal rule | We present a novel generalized convolution quadrature method that accurately
approximates convolution integrals. During the late 1980s, Lubich introduced
convolution quadrature techniques, which have now emerged as a prevalent
methodology in this field. However, these techniques were limited to constant
time stepping, and only in the last decade generalized convolution quadrature
based on the implicit Euler and Runge-Kutta methods have been developed,
allowing for variable time stepping. In this paper, we introduce and analyze a
new generalized convolution quadrature method based on the trapezoidal rule.
Crucial for the analysis is the connection to a new modified divided difference
formula that we establish. Numerical experiments demonstrate the effectiveness
of our method in achieving highly accurate and reliable results. | Lehel Banjai, Matteo Ferrari | 2023-05-18T17:29:18Z | http://arxiv.org/abs/2305.11134v1 | # Generalized convolution quadrature based on the trapezoidal rule
###### Abstract
We present a novel generalized convolution quadrature method that accurately approximates convolution integrals. During the late 1980s, Lubich introduced convolution quadrature techniques, which have now emerged as a prevalent methodology in this field. However, these techniques were limited to constant time stepping, and only in the last decade generalized convolution quadrature based on the implicit Euler and Runge-Kutta methods have been developed, allowing for variable time stepping. In this paper, we introduce and analyze a new generalized convolution quadrature method based on the trapezoidal rule. Crucial for the analysis is the connection to a new modified divided difference formula that we establish. Numerical experiments demonstrate the effectiveness of our method in achieving highly accurate and reliable results.
convolution quadrature, non-uniform time stepping, hyperbolic kernels, trapezoidal rule
## 1 Introduction
Convolution operators are widely used in applications that involve linear time-invariant non-homogeneous evolution equations, including wave and heat propagation problems, and occur in integral equations, such as Volterra, and Wiener-Hopf equations. In this paper, we present a numerical method for computing or solving linear convolution equations of the form
\[\int_{0}^{t}\kappa(t-\tau)g(\tau)\mathrm{d}\tau=\phi(t),\quad t\geq 0, \tag{1}\]
where \(\kappa\) is a fixed kernel operator and \(g\) (or \(\phi\)) is a given function. In many applications, the Laplace transform \(\mathcal{K}\) of the convolution kernel \(\kappa\) is known or easier to evaluate than \(\kappa\). The Convolution Quadrature (CQ) method involves expressing \(\kappa\) as the inverse Laplace transform of a transfer operator \(\mathcal{K}\), formulating the problem as an integro-differential equation in the Laplace domain, and approximating the differential equation using a time-stepping method such as linear multistep [16, 17, 18, 19] or Runge-Kutta [20, 3, 4, 2] methods. The resulting discrete convolution equation can then be solved numerically.
The original CQ method is strongly restricted to fixed time step integration. However, in recent works [12, 14, 15] the generalized Convolution Quadrature (gCQ) has been introduced with variable time stepping, enabling adaptive resolution of non-smooth temporal behaviours. Moreover, utilizing non-uniform time stepping schemes can facilitate progress towards adaptive time stepping for parabolic and hyperbolic evolution equations. The first approach was limited to first-order implicit Euler scheme [12, 14], and was later extended to Runge-Kutta methods in [15]. Applications of gCQ have been demonstrated in various fields, including acoustics with absorbing boundary conditions [21], uncoupled quasistatic thermoelasticity in [11], and approximation of fractional integrals and associated fractional diffusion equations [10]. In [12], gCQ was introduced and formulated via high order divided differences of the transfer operator
\(\mathcal{K}\), which was appropriate for the stability and error analysis, but less suited for efficient algorithmic realization. However, in [14], an efficient algorithmic formulation of gCQ was presented. It is based on the approximation of divided differences by quadrature in the complex plane, following the approach proposed in [13]. This new formulation allows for faster and more efficient computation of gCQ.
The original analysis by Lubich [16] excluded CQ based on the trapezoidal rule method for technical reasons. However, it was known that the trapezoidal-based method outperforms the first-order backward Euler method and BDF2, which is too dispersive. In the appendix of [1] an initial analysis was developed for the CQ based on the trapezoidal rule, which was further refined in [8]. The goal of this paper is to introduce and analyze the trapezoidal gCQ. This method results in much faster convergence rates and improved long time behaviour compared to the implicit Euler method.
The paper is organized as follows: in Section 2 we provide a brief overview of one-sided convolution operators and introduce the class of convolution kernels that we consider in this paper. Section 3 presents the trapezoidal gCQ which is a method for discretizing convolution operators using variable time stepping. In Section 4, we analyze the stability and convergence of the method and derive a Leibniz formula for a new divided differences rule which is related to the gCQ weights. Section 5 presents an algorithm for the practical realization of the trapezoidal gCQ. The algorithm is based on a contour integral representation of the numerical solution and quadrature in the complex plane. We conclude with numerical experiments to demonstrate that the trapezoidal gCQ converges with optimal convergence rates for problems where the regularity of the solution is not uniformly distributed in the time interval, while other CQ-type methods converge suboptimally. Additionally, we present numerical examples for gCQ based on BDF2, although we have not yet developed a theoretical analysis for this case.
## 2 Convolution quadrature for hyperbolic symbols
We consider the class of convolution operators as described in [16, Section 2.1] (see also [5, Section 2]).
Let \(X\) and \(Y\) denote two normed vector spaces, and let \(\mathcal{B}(X,Y)\) be the space of continuous, linear mappings from \(X\) to \(Y\). As a norm in \(\mathcal{B}(X,Y)\) we consider the operator norm
\[\|\mathcal{K}\|_{\mathcal{B}(X,Y)}:=\sup_{g\in X\setminus\{0\}}\frac{\| \mathcal{K}g\|_{Y}}{\|g\|_{X}}.\]
Let us also define the spaces \(\mathbb{C}_{+}:=\{s\in\mathbb{C}:\operatorname{Re}s>0\}\), and \(\mathbb{C}_{\sigma_{0}}:=\{s\in\mathbb{C}:\operatorname{Re}s>\sigma_{0}\}\) for some \(\sigma_{0}>0\).
We are interested in the one-sided convolution
\[\int_{0}^{t}\kappa(t-\tau)g(\tau)\mathrm{d}\tau,\quad t\geq 0, \tag{2}\]
of causal (\(f(t)=0,t<0\)) distributions \(\kappa\) and \(g\). The kernel operator \(\kappa\) is the inverse Laplace transform of some transfer operator \(\mathcal{K}:\mathbb{C}_{+}\to\mathcal{B}(X,Y)\), which is assumed to be an _hyperbolic symbol_.
**Definition 1** (Hyperbolic Symbol).: For given normed vector spaces \(X,Y\) and \(\mu\in\mathbb{R}\), the space of _hyperbolic symbols_\(\mathcal{A}(\mu,\mathcal{B}(X,Y))\) is the space of functions \(\mathcal{K}:\mathbb{C}_{+}\to\mathcal{B}(X,Y)\) analytic in \(\mathbb{C}_{+}\) and satisfying
\[\|\mathcal{K}(s)\|_{\mathcal{B}(X,Y)}\leq M|s|^{\mu},\quad s\in\mathbb{C}_{ \sigma_{0}}, \tag{3}\]
for some \(\sigma_{0}>0\) and \(M>0\).
If \(\mu<-1\), the time-domain operator \(\kappa:=\mathcal{L}^{-1}\{\mathcal{K}\}\) is well-defined by the Bromwich integral
\[\kappa(t):=\mathcal{L}^{-1}\{\mathcal{K}\}(t)=\frac{1}{2\pi\mathrm{i}}\int_{ \sigma+\mathrm{i}\mathbb{R}}e^{st}\mathcal{K}(s)\mathrm{d}s \tag{4}\]
for \(\sigma>\sigma_{0}\) and \(\sigma_{0}\) as in (3).
If \(\mu\geq-1\), we let the integer \(\rho:=\lfloor\mu\rfloor+1\) and \(\mathcal{K}_{\rho}(s):=s^{-\rho}\mathcal{K}(s)\). Let \(\kappa_{\rho}:=\mathcal{L}^{-1}\{\mathcal{K}_{\rho}\}\), where again the inverse Laplace transform is defined by the Bromwich integral (4). We see that \(\mathcal{L}\{\kappa\}=\mathcal{K}\) where \(\kappa:=\partial_{t}^{\rho}\kappa_{\rho}\), and \(\partial_{t}^{\rho}\) is the causal distributional derivative (see e.g. [5]). We are now able to define the convolution for \(\mu\geq-1\) by
\[\mathcal{K}(\partial_{t})g(t):=\frac{\partial^{\rho}}{\partial t^{\rho}}\int_ {0}^{t}\kappa(t-\tau)g(\tau)\mathrm{d}\tau=\int_{0}^{t}\kappa_{\rho}(t-\tau)g ^{(\rho)}(\tau)\mathrm{d}\tau,\quad t\geq 0, \tag{5}\]
for causal functions \(g\in C^{\rho-1}(\mathbb{R})\) satisfying \(g^{(j)}(0)=0\), \(j=0,\ldots,\rho-1\), and \(g^{(\rho)}\) locally integrable. If \(g\) is only defined on a finite interval \([0,T]\), we can extend it by the Taylor polynomial
\[g(t):=\sum_{j=0}^{\rho}\frac{1}{j!}g^{(j)}(T)(t-T)^{j},\quad t>T\]
and define \(\mathcal{K}(\partial_{t})g\) as above.
The motivation behind the operational notation \(\mathcal{K}(\partial_{t})g\), can be seen when considering the case \(\mathcal{K}(s)=s\), where the above definition implies that \(\mathcal{K}(\partial_{t})g=\partial_{t}g\). Furthermore, the composition rule \(\mathcal{K}_{2}\mathcal{K}_{1}(\partial_{t})g=\mathcal{K}_{2}(\partial_{t}) \mathcal{K}_{1}(\partial_{t})g\) holds for hyperbolic symbols \(\mathcal{K}_{1}\) and \(\mathcal{K}_{2}\).
Convolution quadrature (CQ) is a discretization of one-sided convolutions \(\mathcal{K}(\partial_{t})g\) for hyperbolic symbols based on particular ODE-solvers. Even though there are various choices of high-order CQ based on Runge-Kutta methods in the literature (see e.g. [4, 2]), we focus here on CQ based on A-stable linear multistep methods (see [16, 17]), which are thus restricted by Dahlquist's barrier to second order.
Given a fixed time-step \(\Delta>0\), the CQ is defined by the discrete convolution
\[\mathcal{K}\left(\partial_{t}^{\Delta}\right)g(t_{n}):=\sum_{j=1}^{n}\omega_{ n-j}(\mathcal{K}_{\rho})g^{(\rho)}(t_{j}) \tag{6}\]
where \(t_{j}:=j\Delta\). The convolution weights \(\omega_{j}(\mathcal{K}_{\rho})\) are expressed by the contour integral representation
\[\omega_{j}(\mathcal{K}_{\rho}):=\frac{1}{2\pi\mathrm{i}}\oint_{\mathcal{D}} \mathcal{K}_{\rho}\left(\frac{\delta(s)}{\Delta}\right)s^{-j-1}\mathrm{d}s, \tag{7}\]
where \(\delta(\zeta)\) is a generating function of an A-stable linear multistep method, and \(\mathcal{D}\) is a proper complex contour. A standard choice is to take \(\mathcal{D}\) to be a circle of radius \(0<\lambda<1\), which leads to the approximation via the compound trapezoidal rule
\[\omega_{j}(\mathcal{K}_{\rho})\approx\frac{\lambda^{-j}}{L+1}\sum_{\ell=0}^{L}\mathcal{K}_{\rho}\left(\frac{\delta\big{(}\lambda e^{-\frac{2\pi\mathrm{i}\ell}{L+1}}\big{)}}{\Delta}\right)e^{\frac{2\pi\mathrm{i}\ell j}{L+1}},\]
efficiently computable for all \(j=0,\ldots,L\) simultaneously via the Fast Fourier Transform.
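A minimal sketch of this FFT-based evaluation of the weights is given below; the radius \(\lambda\) is chosen to balance aliasing against round-off amplification, and the check against \(\mathcal{K}(s)=1/s\) (plain integration, where \(\omega_{0}=\Delta/2\) and \(\omega_{j}=\Delta\) otherwise) is our own sanity test rather than an example from the paper.

```python
import numpy as np

def cq_weights_trapezoidal(K, dt, L, lam=None):
    """First L+1 CQ weights omega_j of the trapezoidal rule via the FFT formula above."""
    if lam is None:
        lam = np.finfo(float).eps ** (1.0 / (2 * (L + 1)))  # balance aliasing vs round-off
    delta = lambda z: 2.0 * (1.0 - z) / (1.0 + z)           # trapezoidal generating function
    ell = np.arange(L + 1)
    zeta = lam * np.exp(-2j * np.pi * ell / (L + 1))        # nodes on the circle of radius lam
    F = K(delta(zeta) / dt)                                  # samples of the transfer function
    return np.fft.ifft(F) * lam ** (-ell)                    # omega_j, j = 0, ..., L

# Sanity check with K(s) = 1/s: omega_0 = dt/2 and omega_j = dt for j >= 1.
w = cq_weights_trapezoidal(lambda s: 1.0 / s, dt=0.1, L=8)
print(np.round(w.real, 6))
```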
The CQ method as described above and its standard analysis heavily depend on the use of constant time stepping. However, in the next section, we will present a potential extension of this method to non-uniform time stepping schemes.
## 3 Generalized convolution quadrature based on the trapezoidal rule
In order to expand upon the gCQ based on the backward Euler scheme outlined in [12], we introduce the gCQ derived from the trapezoidal rule.
By applying the inverse Laplace transform to \(\kappa_{\rho}\) via the Bromwich representation (4), we can write (5) as
\[\mathcal{K}(\partial_{t})g(t)=\int_{0}^{t}\left(\frac{1}{2\pi\mathrm{i}}\int_ {\sigma+\mathrm{i}\mathbb{R}}e^{s(t-\tau)}\mathcal{K}_{\rho}(s)\mathrm{d}s \right)g^{(\rho)}(\tau)\mathrm{d}\tau,\quad t\geq 0\]
and interchanging the order of integration, we readily obtain
\[\mathcal{K}(\partial_{t})g(t)=\frac{1}{2\pi\mathrm{i}}\int_{\sigma+\mathrm{i} \mathbb{R}}\mathcal{K}_{\rho}(s)u(t;s)\mathrm{d}s,\quad t\geq 0, \tag{8}\]
where
\[u(t;s):=\int_{0}^{t}e^{s(t-\tau)}g^{(\rho)}(\tau)\mathrm{d}\tau.\]
Note that \(u(t;s)\) is the unique causal solution of following the simple initial value problem
\[\begin{cases}\partial_{t}u(t;s)=su(t;s)+g^{(\rho)}(t),\\ u(0;s)=0.\end{cases} \tag{9}\]
In the case of uniform CQ (6), the key point now is to consider the values of \(\mathcal{K}(\partial_{t})g\) at a finite number of equidistant abscissas \(t_{n}\) and to replace in (8) the functions \(u(t_{n};s)\) by an approximation of them, which we obtain by applying to (9) a linear multistep ODE solver with proper stability properties. We aim, instead, to discretize (9) with the trapezoidal rule associated with a non-uniform time mesh.
Given \(0=t_{0}<t_{1}<\ldots<t_{N}=T\) with non-uniform time-steps \(\Delta_{n}:=t_{n}-t_{n-1},n=1,\ldots,N\), the trapezoidal rule when used to approximate the solution of the initial value problem (9), results in the following difference equation:
\[u_{n}(s)=u_{n-1}(s)+\frac{1}{2}\Delta_{n}\left(su_{n-1}(s)+g^{(\rho)}(t_{n-1}) +su_{n}(s)+g^{(\rho)}(t_{n})\right)\]
where \(u_{n}(s)\approx u(t_{n};s)\), for \(n=1,\ldots,N\), and \(u_{0}(s)=0\). Solving for \(u_{n}(s)\) leads to
\[u(t_{n};s)\approx u_{n}(s)=u_{n-1}(s)\frac{2+\Delta_{n}s}{2-\Delta_{n}s}+\big{(} g^{(\rho)}(t_{n-1})+g^{(\rho)}(t_{n})\big{)}\frac{\Delta_{n}}{2-\Delta_{n}s}, \quad n=1,\ldots,N. \tag{10}\]
The recursion can be iteratively solved to obtain the following expression
\[u_{n}(s)=\sum_{j=1}^{n}g^{(\rho)}(t_{j})D_{j}^{n}\prod_{k=j+2}^{n}\big{(}2 \Delta_{k}^{-1}+s\big{)}\prod_{k=j}^{n}\big{(}2\Delta_{k}^{-1}-s\big{)}^{-1} \tag{11}\]
where the coefficients \(D_{j}^{n}\) are defined as follows
\[D_{j}^{n}:=\begin{cases}2\left(\Delta_{j}^{-1}+\Delta_{j+1}^{-1}\right)&j<n, \\ 1&j=n.\end{cases} \tag{12}\]
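The equivalence of the recursion (10) and the closed form (11)-(12) is easy to check numerically; the sketch below (our own illustration) does so for random non-uniform steps at a single complex point \(s\), assuming \(g^{(\rho)}(t_{0})=0\) so that the \(t_{0}\) term absent from (11) does not contribute.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 6
dts = rng.uniform(0.05, 0.3, size=N)                 # non-uniform steps Delta_1, ..., Delta_N
g = np.concatenate(([0.0], rng.standard_normal(N)))  # g^(rho)(t_0) = 0, then values at t_1..t_N
s = 0.7 + 2.3j                                       # an arbitrary Laplace-domain point

# Recursion (10)
u = 0.0 + 0.0j
for n in range(1, N + 1):
    dn = dts[n - 1]
    u = u * (2 + dn * s) / (2 - dn * s) + (g[n - 1] + g[n]) * dn / (2 - dn * s)

# Closed form (11) with the coefficients (12)
u_closed = 0.0 + 0.0j
for j in range(1, N + 1):
    D = 1.0 if j == N else 2.0 * (1.0 / dts[j - 1] + 1.0 / dts[j])
    num = np.prod([2.0 / dts[k - 1] + s for k in range(j + 2, N + 1)])
    den = np.prod([2.0 / dts[k - 1] - s for k in range(j, N + 1)])
    u_closed += g[j] * D * num / den

print(abs(u - u_closed))    # agreement up to round-off
```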
By considering (8) at the time point \(t_{n}\) and substituting \(u(t_{n};s)\) by the approximation \(u_{n}(s)\) in (11), we obtain the non-uniform approximation of the convolution \(\mathcal{K}(\partial_{t})g\)
\[\mathcal{K}\left(\partial_{t}^{\{\Delta_{j}\}}\right)g(t_{n}) :=\frac{1}{2\pi\mathrm{i}}\int_{\sigma+\mathrm{i}\mathbb{R}} \mathcal{K}_{\rho}(s)u_{n}(s)\mathrm{d}s\] \[=\sum_{j=1}^{n}g^{(\rho)}(t_{j})D_{j}^{n}\frac{1}{2\pi\mathrm{i} }\int_{\sigma+\mathrm{i}\mathbb{R}}\mathcal{K}_{\rho}(s)\prod_{k=j+2}^{n} \big{(}2\Delta_{k}^{-1}+s\big{)}\prod_{k=j}^{n}\big{(}2\Delta_{k}^{-1}-s\big{)} ^{-1}\,\mathrm{d}s.\]
We simplify the latter expression by writing
\[\mathcal{K}\left(\partial_{t}^{\{\Delta_{j}\}}\right)g(t_{n})=\sum_{j=1}^{n}w_ {n,j}(\mathcal{K}_{\rho})g^{(\rho)}(t_{j}) \tag{13}\]
where we have defined the weights as
\[w_{n,j}(\mathcal{K}_{\rho}):=D_{j}^{n}\frac{1}{2\pi\mathrm{i}}\oint_{\mathcal{ C}}\mathcal{K}_{\rho}(s)G_{j}^{n}(s)\mathrm{d}s,\qquad\text{with}\quad G_{j}^{n}(s):= \prod_{k=j+2}^{n}\big{(}2\Delta_{k}^{-1}+s\big{)}\prod_{k=j}^{n}\big{(}2\Delta _{k}^{-1}-s\big{)}^{-1} \tag{14}\]
and \(\mathcal{C}\) is a negatively oriented contour contained in the right half complex plane surrounding all the \(N\) poles \(2\Delta_{k}^{-1}\).
**Remark 1**.: Integrating over the contour \(\mathcal{C}\) and integrating along the line \(\sigma+\mathrm{i}\mathbb{R}\) both produce the same result. However, by choosing a suitable contour \(\mathcal{C}\), more efficient quadrature techniques can be employed to calculate the weights. This approach has been extensively demonstrated and substantiated in [14], resulting in improved computational performance and accuracy. We will revisit these quadrature rules in Section 5 for further clarification.
Our initial step is to establish that the trapezoidal-based gCQ, analogous to the method outlined in [5, Remark 2.29] for backward Euler-based gCQ, simplifies to the standard CQ when time-steps are uniform.
**Proposition 1**.: _Let \(\mathcal{K}\in\mathcal{A}(\mu,\mathcal{B}(X,Y))\) for some \(\mu\in\mathbb{R}\) and let \(\rho=\lfloor\mu\rfloor+1\). Let \(0<t_{0}<t_{1}<\ldots<t_{N}=T\) be the discrete times with uniform time steps \(\Delta=t_{j+1}-t_{j}\). Then, the trapezoidal based gCQ (13) coincides with the standard trapezoidal based CQ (6), i.e., we have_
\[w_{n,j}(\mathcal{K}_{\rho})=\omega_{n-j}(\mathcal{K}_{\rho}),\quad\text{ for all}\ \ \ 0<j\leq n\leq N.\]
Proof.: The generating function of the trapezoidal rule is \(\delta(\zeta)=2\frac{1-\zeta}{1+\zeta}\). In the uniform case, the weights are expressed as given in (7) by
\[\omega_{n-j}(\mathcal{K}_{\rho})=\frac{1}{2\pi\mathrm{i}}\oint_{\mathcal{D}} \mathcal{K}_{\rho}\left(\frac{2}{\Delta}\frac{1-s}{1+s}\right)s^{-(n-j)-1} \mathrm{d}s, \tag{15}\]
where \(\mathcal{D}\) a circle of fixed radius \(0<\lambda<1\). When we have equal time-steps \(\Delta=\Delta_{1}=\ldots=\Delta_{N}\), the gCQ weights defined in (14) can be written as
\[w_{n,j}(\mathcal{K}_{\rho}) =D_{j}^{n}\frac{1}{2\pi\mathrm{i}}\oint_{\mathcal{C}}\mathcal{K}_ {\rho}(s)G_{j}^{n}(s)\mathrm{d}s \tag{16}\] \[=D_{j}^{n}\frac{1}{2\pi\mathrm{i}}\oint_{\mathcal{C}}\mathcal{K}_ {\rho}(s)\prod_{k=j+2}^{n}\big{(}2\Delta^{-1}+s\big{)}\prod_{k=j}^{n}\big{(}2 \Delta^{-1}-s\big{)}^{-1}\,\mathrm{d}s\] \[=D_{j}^{n}\frac{1}{2\pi\mathrm{i}}\oint_{\mathcal{C}}\mathcal{K}_ {\rho}(s)\left(2\Delta^{-1}+s\right)^{\max\{0,n-j-1\}}\big{(}2\Delta^{-1}-s \big{)}^{-(n-j+1)}\,\mathrm{d}s.\]
Here, \(\mathcal{C}\) is a complex contour located in the right half-plane and encircling the singularity \(s=2\Delta^{-1}\).
Referring to (12), we can distinguish between the cases when \(j=n\) and when \(j<n\). In the former case, we deduce
\[w_{n,n}(\mathcal{K}_{\rho})=\frac{1}{2\pi\mathrm{i}}\oint_{\mathcal{C}} \mathcal{K}_{\rho}(s)\left(2\Delta^{-1}-s\right)^{-1}\mathrm{d}s,\]
while in the latter case, we obtain
\[w_{n,j}(\mathcal{K}_{\rho})=4\Delta^{-1}\frac{1}{2\pi\mathrm{i}}\oint_{ \mathcal{C}}\mathcal{K}_{\rho}(s)\left(2\Delta^{-1}+s\right)^{n-j-1}\left(2 \Delta^{-1}-s\right)^{-(n-j+1)}\mathrm{d}s. \tag{17}\]
We see that \(w_{n,n}(\mathcal{K}_{\rho})=\omega_{0}(\mathcal{K}_{\rho})=f(0)\) where \(f(\zeta)=\mathcal{K}_{\rho}\left(\frac{2}{\Delta}\frac{1-\zeta}{1+\zeta}\right)\) by Cauchy's integral theorem.
For \(j<n\), we can use the Moebius map \(\phi(z):=\frac{2}{\Delta}\frac{(1-z)}{(1+z)}\) to make the change of variables in (17). Specifically, we set \(s=\frac{2}{\Delta}\frac{(1-\zeta)}{(1+\zeta)}\), from which we can deduce that
\[\mathrm{d}s=-\frac{4}{\Delta}\frac{1}{(1+\zeta)^{2}}\mathrm{d}\zeta,\quad \left(2\Delta^{-1}+s\right)=\frac{4}{\Delta}\frac{1}{1+\zeta}\quad\text{ and }\quad\left(2\Delta^{-1}-s\right)=\frac{2}{\Delta}\frac{\zeta^{2}}{1+\zeta}.\]
Using these substitutions, we can write
\[w_{n,j}(\mathcal{K}_{\rho})=\frac{1}{2\pi\mathrm{i}}\oint_{\phi^{-1}(\mathcal{ C})}\mathcal{K}_{\rho}\left(\frac{2}{\Delta}\frac{1-\zeta}{1+\zeta}\right) \zeta^{-(n-j)-1}\mathrm{d}\zeta. \tag{18}\]
Consider choosing \(\mathcal{C}\) in (16) to be the circle centered at \(\left(\frac{2}{\Delta}\frac{(1+\lambda^{2})}{(1-\lambda^{2})},0\right)\) with radius \(\frac{2}{\Delta}\sqrt{\frac{(1+\lambda^{2})^{2}}{(1-\lambda^{2})^{2}}-1}\) for a fixed \(0<\lambda<1\). This circle includes the point \(\left(2\Delta^{-1},0\right)\) and is in the right half-plane of the complex plane. Furthermore, we can observe that \(\phi^{-1}(\mathcal{C})=\mathcal{D}\) is exactly the circle of radius \(\lambda\) centered at \((0,0)\). Comparing (18) and (15), we can conclude.
### Divided differences formula and invertibility of gCQ based on the trapezoidal rule
The definition of the weights in (14) can be connected to Newton divided differences. This property was first noticed in [12] for BDF1 based gCQ, and relies on the following formula (see [6, Equation (51)]): given a set of points \(\{x_{0},\ldots x_{n}\}\) and a complex analytic function \(f\), then
\[\frac{1}{2\pi\mathrm{i}}\oint_{\mathcal{C}}f(s)\prod_{k=0}^{n}\left(s-x_{k} \right)^{-1}\mathrm{d}s=[x_{0},\ldots,x_{n}]f \tag{19}\]
where \(\mathcal{C}\) is a complex contour including the poles \(\{x_{0},\ldots,x_{n}\}\). Here, the divided difference \([x_{m},\ldots,x_{j}]f\), for \(0\leq m\leq j\leq n\), is defined in the classical way, iteratively by
\[[x_{m},\ldots,x_{j}]f:=\begin{cases}\frac{[x_{m},\ldots,x_{j-1}]f-[x_{m+1}, \ldots,x_{j}]f}{x_{m}-x_{j}}&m<j,\\ f(x_{j})&m=j.\end{cases}\]
To apply a formula similar to (19) in our situation, we define a _modified_ divided difference formula
\[\left\langle x_{m},\ldots,x_{j}\right\rangle f:=\begin{cases}(x_{m}+x_{m+1}) \left[x_{m},\ldots,x_{j}\right]\left(f\prod_{k=m+2}^{j}\left(x_{k}+\cdot \right)\right)&m<j,\\ \left[x_{j}\right]f&m=j.\end{cases} \tag{20}\]
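For concreteness, both divided-difference constructions can be written in a few lines of Python. The recursive sketch below is only illustrative (the function names are ours, and the recursion is neither optimized nor numerically robust for clustered nodes); it also checks the fact, used later, that the modified divided difference annihilates the constant function whenever more than one node is involved.

```python
import numpy as np

def dd(xs, f):
    """Classical divided difference [x_0, ..., x_m]f, computed recursively."""
    if len(xs) == 1:
        return f(xs[0])
    return (dd(xs[:-1], f) - dd(xs[1:], f)) / (xs[0] - xs[-1])

def mdd(xs, f):
    """Modified divided difference <x_0, ..., x_m>f of (20)."""
    if len(xs) == 1:
        return f(xs[0])
    tail = xs[2:]
    fq = lambda z: f(z) * np.prod([xk + z for xk in tail])  # f * prod_{k>=2}(x_k + .)
    return (xs[0] + xs[1]) * dd(xs, fq)

xs = [1.0, 2.5, 4.0, 7.0]
print(mdd(xs[:1], lambda s: 1.0))   # 1.0
print(mdd(xs, lambda s: 1.0))       # ~0.0 (up to round-off), since <x_0,...,x_3> annihilates constants
```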
We use formula (19) to see that the gCQ based on the trapezoidal rule can be written in a different form. First, we use definition (12) of \(D_{j}^{n}\) to see that
\[\mathcal{K}\left(\partial_{t}^{\{\Delta_{j}\}}\right)g(t_{n}) =\sum_{j=1}^{n-1}g^{(\rho)}(t_{j})(-1)^{n-j+1}\left(2\Delta_{j}^ {-1}+2\Delta_{j+1}^{-1}\right)\frac{1}{2\pi\mathrm{i}}\oint_{\mathcal{C}} \mathcal{K}_{\rho}(s)\prod_{k=j+2}^{n}\left(s+2\Delta_{k}^{-1}\right)\prod_{k= j}^{n}\left(s-2\Delta_{k}^{-1}\right)^{-1}\mathrm{d}s\] \[\quad-g^{(\rho)}(t_{n})\frac{1}{2\pi\mathrm{i}}\oint_{\mathcal{C }}\mathcal{K}_{\rho}(s)\left(s-2\Delta_{n}^{-1}\right)^{-1}\mathrm{d}s.\]
Finally, by means of formula (19), we obtain
\[\mathcal{K}\left(\partial_{t}^{\{\Delta_{j}\}}\right)g(t_{n}) =\sum_{j=1}^{n-1}g^{(\rho)}(t_{j})(-1)^{n-j+1}\left(2\Delta_{j}^{-1 }+2\Delta_{j+1}^{-1}\right)\left[2\Delta_{j}^{-1},\ldots,2\Delta_{n}^{-1} \right]\left(\mathcal{K}_{\rho}\prod_{k=j+2}^{n}(2\Delta_{k}^{-1}+\cdot)\right)\] \[\quad-g^{(\rho)}(t_{n})\left[2\Delta_{n}^{-1}\right]\mathcal{K}_{\rho}\] \[=\sum_{j=1}^{n}g^{(\rho)}(t_{j})(-1)^{n-j+1}\langle 2\Delta_{j}^{-1 },\ldots,2\Delta_{n}^{-1}\rangle\mathcal{K}_{\rho}.\]
We introduce simplified notations for the subsequent results of this subsection. Specifically, we define for \(0\leq m\leq j\leq n\), the polynomials \(\mathrm{P}_{m}^{j}\in\mathbb{P}^{j-m+1}(\mathbb{C})\), and the operators \(\mathbf{D}_{m}^{j}\), \(\mathbf{G}_{m}^{j}\) as follows
\[\mathrm{P}_{m}^{j}(z):=\prod_{k=m}^{j}(x_{k}+z),\quad\mathbf{D}_{m}^{j}(f):=[ x_{m},\ldots,x_{j}]f,\quad\mathbf{G}_{m}^{j}(f):=\langle x_{m},\ldots,x_{j} \rangle f. \tag{21}\]
Our aim is to show that a Leibniz rule analogous to the standard one holds also for the modified divided difference (20). We state the following technical lemma.
**Lemma 1**.: _Given a set of points \(\{x_{0},\ldots,x_{n}\}\subset\mathbb{C}\), then for all \(n\geq 2\) and \(2\leq\ell\leq n-2\) it holds_
\[\begin{cases}\sum_{k=\ell}^{n-2}\mathbf{D}_{\ell}^{k}(\mathrm{P}_{2}^{k}) \mathbf{D}_{k}^{k}(\mathrm{P}_{k+1}^{k+1})\mathbf{D}_{k}^{n-1}(\mathrm{P}_{k+ 2}^{n})+\mathbf{D}_{\ell}^{n-1}(\mathrm{P}_{2}^{n-1})\mathbf{D}_{n-1}^{n-1}( \mathrm{P}_{n}^{n})=\mathbf{D}_{\ell}^{n-1}(\mathrm{P}_{2}^{n})\\ \sum_{k=\ell}^{j}\mathbf{D}_{\ell}^{k}(\mathrm{P}_{2}^{k})\mathbf{D}_{k}^{k}( \mathrm{P}_{k+1}^{k+1})\mathbf{D}_{k}^{j}(\mathrm{P}_{k+2}^{n})=\mathbf{D}_{ \ell}^{j}(\mathrm{P}_{2}^{n})\end{cases}j=\ell,\ldots,n-2 \tag{22}\]
_where \(\mathrm{P}_{h}^{j}\) and \(\mathbf{D}_{h}^{j}\) are defined in (21)._
Proof.: These properties can be derived from the Leibniz product rule, which is applicable to a set of \(N\) functions
\[\begin{split}\mathbf{D}_{\ell_{1}}^{\ell_{2}}(\varphi_{1}\varphi_ {2}\cdots\varphi_{N})&=\sum_{\ell_{1}=\alpha_{0}\leq\alpha_{1}\leq \cdots\leq\alpha_{N}=\ell_{2}}\mathbf{D}_{\ell_{1}}^{\alpha_{1}}(\varphi_{1}) \mathbf{D}_{\alpha_{1}}^{\alpha_{2}}(\varphi_{2})\cdots\mathbf{D}_{\alpha_{N- 1}}^{\ell_{2}}(\varphi_{N})\\ &=\sum_{\ell_{1}=\alpha_{0}\leq\alpha_{1}\leq\cdots\leq\alpha_{N}= \ell_{2}}\prod_{\beta=0}^{N-1}\mathbf{D}_{\alpha_{\beta}}^{\alpha_{\beta+1}}( \varphi_{\beta+1})\end{split} \tag{23}\]
the sum being over integers \(\alpha_{1},\ldots,\alpha_{N-1}\) such that \(\ell_{1}\leq\alpha_{1}\leq\cdots\leq\alpha_{N-1}\leq\ell_{2}\). Specifically, we use the multiplicative property \(\mathrm{P}_{h}^{j}=\prod_{q=h}^{j}\mathrm{P}_{q}^{q}\) to factor both sides of (22), and then apply (23) to each term.
We are now able to state and prove the Leibniz rule for the modified divided difference (20). This will be the main tool to prove an inversion formula for the trapezoidal gCQ.
**Proposition 2**.: _Given a set of \(n+1\) distinct points \(\{x_{0},\ldots,x_{n}\}\subset\mathbb{C}\) and two functions \(f,g\) such that \(f(x_{i}),g(x_{i})\) are well-defined, then the following multiplicative rule holds_
\[\langle x_{0},\ldots,x_{n}\rangle(fg)=\sum_{k=0}^{n}\langle x_{0},\ldots,x_{k} \rangle f\langle x_{k},\ldots,x_{n}\rangle g. \tag{24}\]
Proof.: From definitions (20) and (21) we observe that
\[\mathbf{G}_{j}^{j}(f)=\langle x_{j}\rangle f=[x_{j}]f=\mathbf{D}_{j}^{j}(f) \quad\text{ and }\quad\mathbf{G}_{j-1}^{j}(f)=\langle x_{j-1},x_{j}\rangle f=(x_{j-1}+x_{j}) \mathbf{D}_{j-1}^{j}(f).\]
Moreover, we observe that \(\mathbf{D}_{j-1}^{j-1}(\mathrm{P}_{j}^{j})=[x_{j-1}](x_{j}+\cdot)=(x_{j-1}+x_{j})\), from which we deduce
\[\mathbf{G}_{j-1}^{j}(f)=\mathbf{D}_{j-1}^{j-1}(\mathrm{P}_{j}^{j})\mathbf{D}_{ j-1}^{j}(f).\]
For \(0\leq m\leq j-2\leq n\), we similarly obtain
\[\mathbf{G}_{m}^{j}(f)=\langle x_{m},\ldots,x_{j}\rangle f=(x_{m}+x_{m+1})[x_{m}, \ldots,x_{j}]\left(f\prod_{k=m+2}^{j}(x_{k}+\cdot)\right)=\mathbf{D}_{m}^{m} \big{(}\mathrm{P}_{m+1}^{m+1}\big{)}\mathbf{D}_{m}^{j}\big{(}f\,\mathrm{P}_{m+ 2}^{j}\big{)}.\]
The standard Leibniz rule for divided differences can be written in the form
\[\mathbf{D}_{0}^{n}(fg)=\sum_{\ell=0}^{n}\mathbf{D}_{0}^{\ell}(f)\mathbf{D}_{\ell }^{n}(g). \tag{25}\]
Our goal is to demonstrate that the equation
\[\mathbf{G}_{0}^{n}(fg)=\sum_{\ell=0}^{n}\mathbf{G}_{0}^{\ell}(f)\mathbf{G}_{ \ell}^{n}(g) \tag{26}\]
holds true for all non-negative integers \(n\).
\(\bullet\)**Case \(n=0\)**
The statement (26) is clear for \(n=0\)
\[\mathbf{G}_{0}^{0}(fg)=\mathbf{D}_{0}^{0}(fg)=f(x_{0})g(x_{0})=\mathbf{D}_{0} ^{0}(f)\mathbf{D}_{0}^{0}(g).\]
\(\bullet\)**Case \(n=1\)**
We can readily determine the case where \(n=1\) by applying the conventional Leibniz rule for divided differences, as expressed in equation (25)
\[\mathbf{G}_{0}^{1}(fg)=\mathbf{D}_{0}^{0}(\mathrm{P}_{1}^{1})\mathbf{D}_{0}^ {1}(fg)=\mathbf{D}_{0}^{0}(\mathrm{P}_{1}^{1})\left(\mathbf{D}_{0}^{0}(f) \mathbf{D}_{0}^{1}(g)+\mathbf{D}_{0}^{1}(f)\mathbf{D}_{1}^{1}(g)\right)= \mathbf{G}_{0}^{0}(f)\mathbf{G}_{0}^{1}(g)+\mathbf{G}_{0}^{1}(f)\mathbf{G}_{1 }^{1}(g).\]
\(\bullet\)**Case \(n\geq 2\)**
Now suppose \(n\geq 2\). We split the proof into various subcases and sub-parts, varying the indices \(\ell\) and \(j\) in the ranges \(0\leq\ell\leq n\) and \(\ell\leq j\leq n\).
By recalling the definitions (21) and the Leibniz rule for divided differences (25), for the left-hand side of (26), we can obtain
\[\mathbf{G}_{0}^{n}(fg)=\mathbf{D}_{0}^{0}(\mathrm{P}_{1}^{1})\mathbf{D}_{0}^ {n}\left(fg\,\mathrm{P}_{2}^{n}\right)=\mathbf{D}_{0}^{0}(\mathrm{P}_{1}^{1}) \sum_{\ell=0}^{n}\mathbf{D}_{0}^{\ell}(f)\mathbf{D}_{\ell}^{n}(g\,\mathrm{P}_ {2}^{n}).\]
We split each term in the sum of the right hand side in (26) by using again (21)
\[\sum_{\ell=0}^{n}\mathbf{G}_{0}^{\ell}(f)\mathbf{G}_{\ell}^{n}(g) =\mathbf{D}_{0}^{0}(f)\mathbf{D}_{0}^{0}(\mathrm{P}_{1}^{1}) \mathbf{D}_{0}^{n}(g\,\mathrm{P}_{2}^{n})+\mathbf{D}_{0}^{0}(\mathrm{P}_{1}^{1 })\mathbf{D}_{0}^{1}(f)\mathbf{D}_{1}^{1}(\mathrm{P}_{2}^{2})\mathbf{D}_{1}^{ n}(g\,\mathrm{P}_{3}^{n})\] \[\quad+\sum_{\ell=2}^{n-2}\mathbf{D}_{0}^{0}(\mathrm{P}_{1}^{1}) \mathbf{D}_{0}^{\ell}(f\,\mathrm{P}_{2}^{\ell})\mathbf{D}_{\ell}^{\ell}\big{(} \mathrm{P}_{\ell+1}^{\ell+1}\big{)}\mathbf{D}_{\ell}^{n}(g\,\mathrm{P}_{\ell+2 }^{n})\] \[\quad+\mathbf{D}_{0}^{0}(\mathrm{P}_{1}^{1})\mathbf{D}_{0}^{n-1}(f \,\mathrm{P}_{2}^{n-1})\mathbf{D}_{n-1}^{n-1}\left(\mathrm{P}_{n}^{n}\right) \mathbf{D}_{n-1}^{n}\left(g\right)+\mathbf{D}_{0}^{0}(\mathrm{P}_{1}^{1}) \mathbf{D}_{0}^{n}(f\,\mathrm{P}_{2}^{n})\mathbf{D}_{n}^{n}(g)\] \[=\mathbf{D}_{0}^{0}(\mathrm{P}_{1}^{1})\sum_{\ell=0}^{n}\mathbf{L }_{\ell}^{n}(f,g),\]
where we have defined the terms \(\mathbf{L}_{\ell}^{n}(f,g)\) as
\[\begin{cases}\mathbf{L}_{0}^{n}(f,g):=\mathbf{D}_{0}^{0}(f)\mathbf{D}_{0}^{n}(g\,\mathrm{P}_{2}^{n}),\\ \mathbf{L}_{1}^{n}(f,g):=\mathbf{D}_{0}^{1}(f)\mathbf{D}_{1}^{1}\big(\mathrm{P}_{2}^{2}\big)\mathbf{D}_{1}^{n}(g\,\mathrm{P}_{3}^{n}),\\ \mathbf{L}_{\ell}^{n}(f,g):=\mathbf{D}_{0}^{\ell}(f\,\mathrm{P}_{2}^{\ell})\mathbf{D}_{\ell}^{\ell}\big(\mathrm{P}_{\ell+1}^{\ell+1}\big)\mathbf{D}_{\ell}^{n}(g\,\mathrm{P}_{\ell+2}^{n}),\qquad\ell=2,\ldots,n-2,\\ \mathbf{L}_{n-1}^{n}(f,g):=\mathbf{D}_{0}^{n-1}(f\,\mathrm{P}_{2}^{n-1})\mathbf{D}_{n-1}^{n-1}(\mathrm{P}_{n}^{n})\mathbf{D}_{n-1}^{n}(g),\\ \mathbf{L}_{n}^{n}(f,g):=\mathbf{D}_{0}^{n}(f\,\mathrm{P}_{2}^{n})\mathbf{D}_{n}^{n}(g).\end{cases}\]
We rewrite each term \(\mathbf{L}_{\ell}^{n}(f,g)\), for \(2\leq\ell\leq n\), using the Leibniz rule for \(\mathbf{D}_{0}^{\ell}(f\,\mathrm{P}_{2}^{\ell})\)
\[\begin{cases}\mathbf{L}_{0}^{n}(f,g)=\mathbf{D}_{0}^{0}(f)\mathbf{D}_{0}^{n}(g\,\mathrm{P}_{2}^{n}),\\ \mathbf{L}_{1}^{n}(f,g)=\mathbf{D}_{0}^{1}(f)\mathbf{D}_{1}^{1}(\mathrm{P}_{2}^{2})\mathbf{D}_{1}^{n}(g\,\mathrm{P}_{3}^{n}),\\ \mathbf{L}_{\ell}^{n}(f,g)=\left(\sum_{h=0}^{\ell}\mathbf{D}_{0}^{h}(f)\mathbf{D}_{h}^{\ell}(\mathrm{P}_{2}^{\ell})\right)\mathbf{D}_{\ell}^{\ell}(\mathrm{P}_{\ell+1}^{\ell+1})\mathbf{D}_{\ell}^{n}\left(g\,\mathrm{P}_{\ell+2}^{n}\right),\quad\ell=2,\ldots,n-2,\\ \mathbf{L}_{n-1}^{n}(f,g)=\left(\sum_{h=0}^{n-1}\mathbf{D}_{0}^{h}(f)\mathbf{D}_{h}^{n-1}\left(\mathrm{P}_{2}^{n-1}\right)\right)\mathbf{D}_{n-1}^{n-1}\big(\mathrm{P}_{n}^{n}\big)\mathbf{D}_{n-1}^{n}\left(g\right),\\ \mathbf{L}_{n}^{n}(f,g)=\left(\sum_{h=0}^{n}\mathbf{D}_{0}^{h}(f)\mathbf{D}_{h}^{n}(\mathrm{P}_{2}^{n})\right)\mathbf{D}_{n}^{n}(g).\end{cases} \tag{27}\]
We want to show that for each \(\ell=0,\ldots,n\), the sum of terms multiplying \(\mathbf{D}_{0}^{\ell}(f)\) in (27) is exactly \(\mathbf{D}_{\ell}^{n}(g\,\mathrm{P}_{2}^{n})\).
\(\bullet\)**Case \(n\geq 2\), Subcase \(\ell=0\)**
When \(\ell=0\), the only term in (27) that does not vanish is \(\mathbf{D}_{0}^{n}(g\,\mathrm{P}_{2}^{n})\), which arises from \(\mathbf{L}_{0}^{n}(f,g)\). For \(\ell=2,\ldots,n\), the term \(\mathbf{D}_{0}^{0}(f)\) multiplies \(\mathbf{D}_{0}^{\ell}(\mathrm{P}_{2}^{\ell})\), which is zero since a divided difference with \(\ell+1\) terms is always zero when applied to a polynomial of degree \(\ell-1\).
\(\bullet\)**Case \(n\geq 2\), Subcase \(\ell=n\)**
The only term for \(\ell=n\) is
\[\mathbf{D}_{n}^{n}(g)\mathbf{D}_{n}^{n}(\mathrm{P}_{2}^{n})=\mathbf{D}_{n}^{n }(g\,\mathrm{P}_{2}^{n}).\]
\(\bullet\)**Case \(n\geq 2\), Subcase \(\ell=n-1\)**
The terms for \(\ell=n-1\) are
\[\mathbf{D}_{n-1}^{n-1}\left(\mathrm{P}_{2}^{n-1}\right)\mathbf{D} _{n-1}^{n-1}\left(\mathrm{P}_{n}^{n}\right)\mathbf{D}_{n-1}^{n}(g)+\mathbf{D}_ {n-1}^{n}\left(\mathrm{P}_{2}^{n}\right)\mathbf{D}_{n}^{n}(g) =\mathbf{D}_{n-1}^{n}\left(\mathrm{P}_{2}^{n}\right)\mathbf{D}_{n }^{n}(g)+\mathbf{D}_{n-1}^{n-1}\left(\mathrm{P}_{2}^{n}\right)\mathbf{D}_{n-1}^ {n}(g)\] \[=\mathbf{D}_{n-1}^{n}(g\,\mathrm{P}_{2}^{n}).\]
\(\bullet\)**Case \(n\geq 2\), Subcase \(2\leq\ell\leq n-2\)**
We will now consider the case for general \(\ell\), where \(2\leq\ell\leq n-2\). Within each \(\mathbf{L}_{k}^{n}(f,g)\) for \(k\) ranging from \(\ell\) to \(n\), there is a term that involves \(\mathbf{D}_{0}^{\ell}(f)\). Hence, the sum that we are evaluating is expressed as follows:
\[\begin{split}&\sum_{k=\ell}^{n-2}\mathbf{D}_{\ell}^{k}(\mathrm{P}_{2}^{k})\mathbf{D}_{k}^{k}(\mathrm{P}_{k+1}^{k+1})\mathbf{D}_{k}^{n}(g\,\mathrm{P}_{k+2}^{n})+\mathbf{D}_{\ell}^{n-1}\left(\mathrm{P}_{2}^{n-1}\right)\mathbf{D}_{n-1}^{n-1}(\mathrm{P}_{n}^{n})\mathbf{D}_{n-1}^{n}(g)+\mathbf{D}_{\ell}^{n}\left(\mathrm{P}_{2}^{n}\right)\mathbf{D}_{n}^{n}\left(g\right)\\ &=\sum_{k=\ell}^{n-2}\mathbf{D}_{\ell}^{k}(\mathrm{P}_{2}^{k})\mathbf{D}_{k}^{k}(\mathrm{P}_{k+1}^{k+1})\sum_{j=k}^{n}\mathbf{D}_{k}^{j}(\mathrm{P}_{k+2}^{n})\mathbf{D}_{j}^{n}(g)+\mathbf{D}_{\ell}^{n-1}(\mathrm{P}_{2}^{n-1})\mathbf{D}_{n-1}^{n-1}(\mathrm{P}_{n}^{n})\mathbf{D}_{n-1}^{n}(g)+\mathbf{D}_{\ell}^{n}(\mathrm{P}_{2}^{n})\mathbf{D}_{n}^{n}\left(g\right)\\ &=\sum_{j=\ell}^{n}\mathbf{D}_{j}^{n}(g)\sum_{k=\ell}^{\min\{j,n-2\}}\mathbf{D}_{\ell}^{k}(\mathrm{P}_{2}^{k})\mathbf{D}_{k}^{k}(\mathrm{P}_{k+1}^{k+1})\mathbf{D}_{k}^{j}(\mathrm{P}_{k+2}^{n})+\mathbf{D}_{n-1}^{n}(g)\mathbf{D}_{\ell}^{n-1}(\mathrm{P}_{2}^{n-1})\mathbf{D}_{n-1}^{n-1}(\mathrm{P}_{n}^{n})+\mathbf{D}_{n}^{n}(g)\mathbf{D}_{\ell}^{n}(\mathrm{P}_{2}^{n})\end{split} \tag{28}\]
where we used the standard Leibniz rule for each \(\mathbf{D}_{k}^{n}(g\,\mathrm{P}_{k+2}^{n})\). We use again (25) for \(\mathbf{D}_{\ell}^{n}(g\,\mathrm{P}_{2}^{n})\)
\[\mathbf{D}_{\ell}^{n}(g\,\mathrm{P}_{2}^{n})=\sum_{j=\ell}^{n}\mathbf{D}_{j}^{n}(g )\mathbf{D}_{\ell}^{j}\left(\mathrm{P}_{2}^{n}\right). \tag{29}\]
Now we compare the terms multiplying each \(\mathbf{D}_{j}^{n}(g)\), for \(j=\ell,\ldots,n\), in (28) and (29).
\(\bullet\)**Case \(n\geq 2\), Subcase \(2\leq\ell\leq n-2\), Sub-part \(j=n\)**
The terms for \(j=n\) coincide on both sides and equal \(\mathbf{D}_{\ell}^{n}(\mathrm{P}_{2}^{n})\), because the first term in (28) vanishes. Indeed, \(\mathbf{D}_{k}^{n}(\mathrm{P}_{k+2}^{n})=0\) for all \(k=\ell,\ldots,n-2\), since a divided difference on \(n-k+1\) points annihilates a polynomial of degree \(n-k-1\).
\(\bullet\)**Case \(n\geq 2\), Subcase \(2\leq\ell\leq n-2\), Sub-part \(j=n-1\)**
The term for \(j=n-1\) is \(\mathbf{D}_{\ell}^{n-1}(\mathrm{P}_{2}^{n})\) in (29), while in (28) it is
\[\sum_{k=\ell}^{n-2}\mathbf{D}_{\ell}^{k}(\mathrm{P}_{2}^{k})\mathbf{D}_{k}^{k}( \mathrm{P}_{k+1}^{k+1})\mathbf{D}_{k}^{n-1}(\mathrm{P}_{k+2}^{n})+\mathbf{D}_{ \ell}^{n-1}(\mathrm{P}_{2}^{n-1})\mathbf{D}_{n-1}^{n-1}(\mathrm{P}_{n}^{n}).\]
The two terms are the same thanks to Lemma 1.
\(\bullet\)**Case \(n\geq 2\), Subcase \(2\leq\ell\leq n-2\), Sub-part \(\ell\leq j\leq n-2\)**
Similarly, for \(\ell\leq j\leq n-2\), the term in (29) is \(\mathbf{D}_{\ell}^{j}(\mathrm{P}_{2}^{n})\), while in (28) it is
\[\sum_{k=\ell}^{j}\mathbf{D}_{\ell}^{k}(\mathrm{P}_{2}^{k})\mathbf{D}_{k}^{k}( \mathrm{P}_{k+1}^{k+1})\mathbf{D}_{k}^{j}(\mathrm{P}_{k+2}^{n}).\]
The two terms are the same thanks to Lemma 1.
\(\bullet\)**Case \(n\geq 2\), Subcase \(\ell=1\)**
The case \(\ell=1\) is similar, with the exception of a new first summand. Precisely, from \(\mathbf{L}_{k}^{n}(f,g)\) with \(k=1,\ldots,n\) we obtain the sum
\[\mathbf{D}_{1}^{1}\left(\mathrm{P}_{2}^{2}\right)\mathbf{D}_{1}^{n}(g\, \mathrm{P}_{3}^{n})+\sum_{k=2}^{n-2}\mathbf{D}_{1}^{k}(\mathrm{P}_{2}^{k}) \mathbf{D}_{k}^{k}(\mathrm{P}_{k+1}^{k+1})\mathbf{D}_{k}^{n}(g\,\mathrm{P}_{k +2}^{n})+\mathbf{D}_{1}^{n-1}\left(\mathrm{P}_{2}^{n-1}\right)\mathbf{D}_{n-1 }^{n-1}(\mathrm{P}_{n}^{n})\mathbf{D}_{n-1}^{n}(g)+\mathbf{D}_{1}^{n}\left( \mathrm{P}_{2}^{n}\right)\mathbf{D}_{n}^{n}\left(g\right).\]
To conclude one can simply proceed as in the subcase \(2\leq\ell\leq n-2\), by considering also the new case \(j=1\).
Importantly, we can show that the inversion rule still holds for the gCQ based on the trapezoidal rule.
**Proposition 3** (Inversion formula).: _Let \(\{x_{0},\ldots,x_{n}\}\subset\mathbb{C}\), let \(\mathcal{K}:\mathbb{C}_{+}\to\mathcal{B}(X,Y)\) be an operator such that \(\mathcal{K}^{-1}(x_{i})\), \(i=0,\ldots,n\), are well defined, and let \(\{g_{j}\}_{j=0}^{n},\{\phi_{j}\}_{j=0}^{n}\subset\mathbb{C}\). Then, the relation_
\[\phi_{n}=\sum_{j=0}^{n}(-1)^{n-j+1}g_{j}\left\langle x_{j},\ldots,x_{n}\right\rangle \mathcal{K} \tag{30}\]
_can be inverted and it holds that_
\[g_{n}=\sum_{\ell=0}^{n}(-1)^{n-\ell+1}\phi_{\ell}\left\langle x_{\ell},\ldots,x_{n}\right\rangle\mathcal{K}^{-1}. \tag{31}\]
Proof.: We proceed similarly to [12, Lemma 3.1]. We denote by \(\widetilde{g}_{n}\) the right-hand side of (31) after replacing \(\phi_{\ell}\) by its definition (30). Our aim is to show that \(\widetilde{g}_{n}=g_{n}\). Using the Leibniz rule (24) for the modified divided difference we can write
\[\widetilde{g}_{n} =\sum_{j=0}^{n}(-1)^{n-j+1}\left\langle x_{j},\ldots,x_{n}\right\rangle \mathcal{K}^{-1}\sum_{\ell=0}^{j}(-1)^{j-\ell+1}g_{\ell}\left\langle x_{\ell}, \ldots,x_{j}\right\rangle\mathcal{K}\] \[=\sum_{\ell=0}^{n}(-1)^{n-\ell}g_{\ell}\sum_{j=\ell}^{n}\left\langle x _{j},\ldots,x_{n}\right\rangle\mathcal{K}^{-1}\left\langle x_{\ell},\ldots,x_{ j}\right\rangle\mathcal{K}\] \[=\sum_{\ell=0}^{n}(-1)^{n-\ell}g_{\ell}\left\langle x_{\ell}, \ldots,x_{n}\right\rangle\mathds{1},\]
where \(\mathds{1}\) stands for the constant function \(\mathds{1}\equiv 1\). It remains to show that \(\left\langle x_{\ell},\ldots,x_{n}\right\rangle\mathds{1}=\delta_{\ell}^{n}\).
It is clear that \(\left\langle x_{n}\right\rangle\mathds{1}=\left[x_{n}\right]\mathds{1}=1\). To conclude we show that \(\left\langle x_{\ell},\ldots,x_{n}\right\rangle\mathds{1}=0\), for \(\ell<n\), but this easily follows from definition (20)
\[\left\langle x_{\ell},\ldots,x_{n}\right\rangle\mathds{1}=\left(x_{\ell}+x_{\ell+1}\right)\left[x_{\ell},\ldots,x_{n}\right]\left(\prod_{k=\ell+2}^{n}(x_{k}+\cdot)\right)=0,\]
since a divided difference with \(n-\ell+1\) terms applied to a polynomial of degree \(n-\ell-1\) is zero.
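As a sanity check, the inversion formula can be tested numerically for a scalar symbol. The sketch below uses arbitrary nodes, data, and the kernel \(\mathcal{K}(s)=1/(1+s)\) (all choices of ours); the helper functions repeat those of the earlier sketch so that the block is self-contained. It applies (30) for every index and then (31), recovering the original data up to round-off.

```python
import numpy as np

def dd(xs, f):  # classical divided difference [x_0,...,x_m]f
    if len(xs) == 1:
        return f(xs[0])
    return (dd(xs[:-1], f) - dd(xs[1:], f)) / (xs[0] - xs[-1])

def mdd(xs, f):  # modified divided difference <x_0,...,x_m>f of (20)
    if len(xs) == 1:
        return f(xs[0])
    fq = lambda z: f(z) * np.prod([xk + z for xk in xs[2:]])
    return (xs[0] + xs[1]) * dd(xs, fq)

K    = lambda s: 1.0 / (1.0 + s)      # illustrative scalar symbol
Kinv = lambda s: 1.0 + s              # its inverse

x = [3.0, 5.0, 8.0, 13.0, 21.0]       # distinct nodes (e.g. x_j = 2/Delta_j)
g = [0.7, -1.3, 0.2, 2.1, 0.9]        # arbitrary data
n = len(x) - 1

# Forward map (30), applied for every index m: phi_m = sum_j (-1)^{m-j+1} g_j <x_j,...,x_m> K
phi = [sum((-1) ** (m - j + 1) * g[j] * mdd(x[j:m + 1], K) for j in range(m + 1))
       for m in range(n + 1)]

# Backward map (31): recover the data from phi via <...> K^{-1}
g_rec = [sum((-1) ** (m - l + 1) * phi[l] * mdd(x[l:m + 1], Kinv) for l in range(m + 1))
         for m in range(n + 1)]

print(np.max(np.abs(np.array(g_rec) - np.array(g))))   # small (round-off level)
```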
## 4 Convergence analysis
Our convergence analysis follows the approach outlined in [5, Section 2.7]. We show that applying the gCQ method is analogous to employing a specific composite midpoint rule coupled with an appropriate approximation of the integrand. It is important to note that our analysis assumes a smooth kernel satisfying (3) with \(\mu<-3\). While this may be a limitation of the theory, it is not clear whether this condition is truly necessary or merely an artifact of our proof technique. To this end, the experiments in Section 5 examine whether the condition is indeed required in practice; further investigation in this direction could yield valuable insights for practical applications.
We recall that the weights for an hyperbolic symbol \(\mathcal{K}\in\mathcal{A}(\mu,\mathcal{B}(X,Y))\) of the gCQ based on the trapezoidal formula are defined by
\[w_{n,j}(\mathcal{K})=D_{j}^{n}\frac{1}{2\pi\mathrm{i}}\oint_{\mathcal{C}} \mathcal{K}(s)G_{j}^{n}(s)\mathrm{d}s\]
and the convolution can be rewritten as
\[\mathcal{K}\left(\partial_{t}^{\{\Delta_{j}\}}\right)g(t_{n})=\sum_{j=1}^{n}w _{n,j}(\mathcal{K}_{\rho})g^{(\rho)}(t_{j})\]
for \(\rho\geq\lfloor\mu\rfloor+1.\) If \(\mu<-1\), we can choose \(\rho=0\), and the kernel \(\kappa\) is simply the Laplace inverse of \(\mathcal{K}\), precisely
\[\kappa(t)=\frac{1}{2\pi\mathrm{i}}\int_{\sigma+\mathrm{i}\mathbb{R}}\mathcal{ K}(s)e^{st}\mathrm{d}s. \tag{32}\]
We can now provide a precise estimate for symbols satisfying \(\mu<-3\).
**Proposition 4**.: _Let \(\mathcal{K}\in\mathcal{A}(\mu,\mathcal{B}(X,Y))\) be a hyperbolic symbol with \(\mu<-3\). Given \(N+1\) time steps \(0=t_{0}<t_{1}<\ldots<t_{N}=T\), then_
\[\left\|w_{n,j}(\mathcal{K})-\frac{t_{j+1}-t_{j-1}}{2}\kappa\left(t_{n}-\frac{t_{j-1}+t_{j+1}}{2}\right)\right\|_{\mathcal{B}(X,Y)}\lesssim(\Delta_{j}+\Delta_{j+1})\Delta_{\max}^{2}\]
_for all \(n=1,\ldots,N\) and \(j=1,\ldots,n-1\), where the implicit constant may depend on \(T,\mu\), but not on \(\{t_{j}\}\)._
Proof.: Let us fix \(n\). We observe that, using (12) and (14), for values of \(j\) ranging from \(1\) to \(n-1\) it is possible to express
\[D_{j}^{n}G_{j}^{n}(s)=\frac{\Delta_{j}+\Delta_{j+1}}{2}\prod_{k=j+2}^{n}\left( 1+\frac{\Delta_{k}}{2}s\right)\prod_{k=j}^{n}\left(1-\frac{\Delta_{k}}{2}s \right)^{-1}. \tag{33}\]
Recalling the definition of the weights (14) (with the integration now over a complex line \(\sigma+\mathrm{i}\mathbb{R}\)) combined with (33), and (32), we start by writing
\[\left\|w_{n,j}(\mathcal{K})-\frac{t_{j+1}-t_{j-1}}{2}\kappa\left( t_{n}-\frac{t_{j-1}+t_{j+1}}{2}\right)\right\|_{\mathcal{B}(X,Y)}\] \[\qquad=\frac{\Delta_{j}+\Delta_{j+1}}{4\pi}\left\|\int_{\sigma+ \mathrm{i}\mathbb{R}}\mathcal{K}(s)\left(\prod_{k=j+2}^{n}\left(1+\frac{ \Delta_{k}}{2}s\right)\prod_{k=j}^{n}\left(1-\frac{\Delta_{k}}{2}s\right)^{-1} -e^{s\left(t_{n}-\frac{t_{j-1}+t_{j+1}}{2}\right)}\right)\mathrm{d}s\right\|_ {\mathcal{B}(X,Y)}\] \[\qquad=\frac{\Delta_{j}+\Delta_{j+1}}{4\pi}\left\|\int_{\sigma+ \mathrm{i}\mathbb{R}}\mathcal{K}(s)\mathcal{I}_{j}^{n}(s)\mathrm{d}s\right\|_{ \mathcal{B}(X,Y)}\]
where
\[\mathcal{I}_{j}^{n}(s):=\prod_{k=j+2}^{n}\left(1+\frac{\Delta_{k}}{2}s\right) \prod_{k=j}^{n}\left(1-\frac{\Delta_{k}}{2}s\right)^{-1}-e^{s\left(t_{n}-\frac {t_{j-1}+t_{j+1}}{2}\right)}. \tag{34}\]
Notice that we also used that \(t_{j+1}-t_{j-1}=\Delta_{j}+\Delta_{j+1}\).
We perform a simple manipulation on the first term
\[\prod_{k=j+2}^{n}\left(1+\frac{\Delta_{k}}{2}s\right)\prod_{k=j}^{n}\left(1- \frac{\Delta_{k}}{2}s\right)^{-1}=\left(1-\frac{\Delta_{j}}{2}s\right)^{-1} \left(1-\frac{\Delta_{j+1}}{2}s\right)^{-1}\prod_{k=j+2}^{n}\frac{\left(1+\frac {\Delta_{k}}{2}s\right)}{(1-\frac{\Delta_{k}}{2}s)}. \tag{35}\]
Then, we observe that \(t_{n}-\frac{t_{j-1}+t_{j+1}}{2}=\sum_{k=j+2}^{n}\Delta_{k}+\frac{\Delta_{j}+ \Delta_{j+1}}{2}\), from which we deduce
\[e^{s\left(t_{n}-\frac{t_{j-1}+t_{j+1}}{2}\right)}=e^{s\frac{\Delta_{j}}{2}}\,e^{s\frac{\Delta_{j+1}}{2}}\prod_{k=j+2}^{n}e^{s\Delta_{k}}. \tag{36}\]
Our current objective is to compute the quantities (35) and (36) for small arguments and then compare them.
If \(|s\Delta_{\max}|\) is sufficiently small, we can expand (36) in Taylor series and readily obtain
\[\begin{split} e^{s\frac{\Delta_{j}}{2}}\,e^{s\frac{\Delta_{j+1}}{2} }\prod_{k=j+2}^{n}e^{s\Delta_{k}}&=1+s\left(\frac{\Delta_{j}+ \Delta_{j+1}}{2}+\sum_{k=j+2}^{n}\Delta_{k}\right)\\ &\qquad+\frac{1}{2}s^{2}\left(\frac{\Delta_{j}+\Delta_{j+1}}{2}+ \sum_{k=j+2}^{n}\Delta_{k}\right)^{2}+\mathcal{O}(s^{3}\Delta_{\max}^{3}).\end{split} \tag{37}\]
Similarly, for (35) we deduce when \(|s\Delta_{\max}|\to 0\)
\[\begin{split}&\left(1-\frac{\Delta_{j}}{2}s\right)^{-1}\left(1- \frac{\Delta_{j+1}}{2}s\right)^{-1}\prod_{k=j+2}^{n}\frac{\left(1+\frac{ \Delta_{k}}{2}s\right)}{\left(1-\frac{\Delta_{k}}{2}s\right)}\\ &\quad=\left(1+\frac{\Delta_{j}}{2}s+\frac{\Delta_{j}^{2}}{4}s^{ 2}\right)\left(1+\frac{\Delta_{j+1}}{2}s+\frac{\Delta_{j+1}^{2}}{4}s^{2}\right) \prod_{k=j+2}^{n}\left(1+\Delta_{k}s+\frac{\Delta_{k}^{2}}{2}s^{2}\right)+ \mathcal{O}(s^{3}\Delta_{\max}^{3})\\ &\quad=1+s\left(\frac{\Delta_{j}+\Delta_{j+1}}{2}+\sum_{k=j+2}^{n }\Delta_{k}\right)\\ &\qquad\qquad+\frac{1}{2}s^{2}\left(\frac{\Delta_{j}^{2}+\Delta_ {j+1}^{2}}{2}+\sum_{k=j+2}^{n}\Delta_{k}^{2}+\frac{\Delta_{j}\Delta_{j+1}}{2} +(\Delta_{j}+\Delta_{j+1})\sum_{k=j+2}^{n}\Delta_{k}+2\sum_{\begin{subarray}{ c}k_{1}\neq k_{2}\\ k_{1},k_{2}\neq j+2\end{subarray}}^{n}\Delta_{k_{1}}\Delta_{k_{2}}\right)\\ &\qquad+\mathcal{O}(s^{3}\Delta_{\max}^{3}).\end{split} \tag{38}\]
Combining (35), (36), (37) and (38) we deduce, for \(|s\Delta_{\max}|\) small enough,
\[\mathcal{I}_{j}^{n}(s)=-\frac{1}{2}s^{2}\left(\frac{\Delta_{j}^{2}+\Delta_{j+ 1}^{2}}{4}\right)+\mathcal{O}(s^{3}\Delta_{\max}^{3}).\]
Consequently, we can split the integral for \(|s\Delta_{\max}|<c\) with \(c\) small enough and \(|s\Delta_{\max}|>c\)
\[\begin{split}&\left\|w_{n,j}(\mathcal{K})-\frac{t_{j+1}-t_{j-1}}{2}\kappa\left(t_{n}-\frac{t_{j-1}+t_{j+1}}{2}\right)\right\|_{\mathcal{B}(X,Y)}\\ &\quad=\frac{\Delta_{j}+\Delta_{j+1}}{4\pi}\left\|\int_{\sigma+\mathrm{i}\mathbb{R}}\mathcal{K}(s)\mathcal{I}_{j}^{n}(s)\mathrm{d}s\right\|_{\mathcal{B}(X,Y)}\\ &\quad\lesssim(\Delta_{j}+\Delta_{j+1})\left(\left\|\int_{\sigma+\mathrm{i}\mathbb{R},|s\Delta_{\max}|<c}\mathcal{K}(s)\mathcal{I}_{j}^{n}(s)\mathrm{d}s\right\|_{\mathcal{B}(X,Y)}+\left\|\int_{\sigma+\mathrm{i}\mathbb{R},|s\Delta_{\max}|>c}\mathcal{K}(s)\mathcal{I}_{j}^{n}(s)\mathrm{d}s\right\|_{\mathcal{B}(X,Y)}\right)\\ &\quad\lesssim(\Delta_{j}+\Delta_{j+1})\left((\Delta_{j}^{2}+\Delta_{j+1}^{2})\int_{\sigma+\mathrm{i}\mathbb{R},|s\Delta_{\max}|<c}|s|^{\mu+2}\mathrm{d}s+\int_{\sigma+\mathrm{i}\mathbb{R},|s\Delta_{\max}|>c}|s|^{\mu}\left|\mathcal{I}_{j}^{n}(s)\right|\mathrm{d}s\right)\end{split} \tag{39}\]
where in the last we used (3). We deduce now two auxiliary results to bound \(|\mathcal{I}_{j}^{n}(s)|\) for \(|s\Delta_{\max}|>c\).
For \(\mathrm{Re}\,s\in[0,\nicefrac{{1}}{{2}})\), we readily verify that
\[\left|\frac{1+s}{1-s}\right|\leq\frac{1+\mathrm{Re}\,s}{1-\mathrm{Re}\,s}=1+ \frac{2\,\mathrm{Re}\,s}{1-\mathrm{Re}\,s}\leq e^{\frac{2\mathrm{Re}\,s}{1- \mathrm{Re}\,s}}\leq e^{4\,\mathrm{Re}\,s}\]
and
\[|1-s|^{-1}\leq\frac{1}{1-\mathrm{Re}\,s}=1+\frac{\mathrm{Re}\,s}{1-\mathrm{Re} \,s}\leq e^{\frac{\mathrm{Re}\,s}{1-\mathrm{Re}\,s}}\leq e^{2\,\mathrm{Re}\,s}.\]
From the latter, recalling definition (34), we obtain
\[\begin{split}\left|\mathcal{I}_{j}^{n}(s)\right|&=\left| 1-\frac{\Delta_{j}}{2}s\right|^{-1}\left|1-\frac{\Delta_{j+1}}{2}s\right|^{-1} \prod_{k=j+2}^{n}\left|\frac{1+\frac{\Delta_{k}}{2}s}{1-\frac{\Delta_{k}}{2}s }\right|\\ &\leq e^{(\Delta_{j}+\Delta_{j+1})\operatorname{Re}s}e^{2(\sum_{ k=j+2}^{n}\Delta_{k})\operatorname{Re}s}\\ &\leq e^{2(t_{n}-t_{j})\operatorname{Re}s}\leq e^{2T\operatorname {Re}s}.\end{split} \tag{40}\]
Finally, combining (39) and (40), we obtain
\[\left\|w_{n,j}(\mathcal{K})-\frac{t_{j+1}-t_{j-1}}{2}\kappa\left(t_{n}-\frac{t _{j-1}+t_{j+1}}{2}\right)\right\|_{\mathcal{B}(X,Y)}\lesssim(\Delta_{j}+ \Delta_{j+1})\Delta_{\max}^{2}\int_{\sigma+\mathrm{i}\mathbb{R}}|s|^{\mu+2} \mathrm{d}s\]
which is bounded for \(\mu<-3\).
**Remark 2** (Why do we obtain an extra order of convergence compared with BDF1?).: In the BDF1 setting, see [12] for details, the convolution \(K(\partial_{t})g\) is approximated at the time step \(t_{n}\) by
\[\mathcal{K}\left(\partial_{t}\{^{\Delta_{j}},\text{BDF1}\}\right)g(t_{n}):= \sum_{j=1}^{n}g(t_{j})\Delta_{j}\frac{1}{2\pi\mathrm{i}}\oint_{\mathcal{C}} \mathcal{K}(s)B_{j}^{n}(s)\mathrm{d}s=:\sum_{j=1}^{n}g(t_{j})\omega_{n,j}^{ \text{BDF1}}(\mathcal{K})\]
where the complex contour \(\mathcal{C}\) includes the complex poles \(\Delta_{k}^{-1}\) and \(B_{j}^{n}(s):=\prod_{k=j}^{n}\left(1-\Delta_{k}s\right)^{-1}\).
For \(|s\Delta_{\max}|\) small enough, we can proceed as in the proof of Proposition 4 and readily obtain
\[e^{s(t_{n}-t_{j-1})}-B_{j}^{n}(s)=s^{2}\sum_{\begin{subarray}{c}k_{1},k_{2}=j \\ k_{1}\neq k_{2}\end{subarray}}^{n}\Delta_{k_{1}}\Delta_{k_{2}}+\mathcal{O}(s^{ 3}\Delta_{\max}^{3}). \tag{41}\]
From this we deduce (see [5, Proposition 2.31] for details)
\[\left\|w_{n,j}^{\text{BDF1}}(\mathcal{K})-\Delta_{j}\kappa\left(t_{n}-t_{j-1} \right)\right\|_{\mathcal{B}(X,Y)}\lesssim\Delta_{j}\Delta_{\max}\]
for hyperbolic symbols satisfying \(\mathcal{K}\in\mathcal{A}(\mu,\mathcal{B}(X,Y))\) with \(\mu<-3\).
In (41), we can only place an upper limit on the coefficient of \(s^{2}\), which is \(\Delta_{\max}^{2}N\lesssim\Delta_{\max}\). This is in contrast to the bound achieved by the trapezoidal rule, where we obtain one extra order of convergence. However, we still require the same level of smoothness for \(\mathcal{K}\).
In order to establish our main result, it is essential to develop a novel quadrature formula that can accurately evaluate integrals with integrands that vanish at the limits of the integration interval. This formula plays a pivotal role in our analysis since we will utilize it to compare the gCQ discretization with a similar quadrature formula in our main proof.
**Lemma 2**.: _Let \(t_{0}<t_{1}<\ldots<t_{n}\) and set \(\Delta_{j}=t_{j}-t_{j-1}\). Given \(f\in C^{2}([t_{0},t_{n}],Y)\) such that_
\[f(t_{0})=f(t_{n})=0,\]
_we define the integration rule_
\[\mathcal{Q}^{\{\Delta_{j}\}}(f):=\sum_{j=1}^{n-1}\left(\frac{t_{j+1}-t_{j-1}}{ 2}\right)f\left(\frac{t_{j+1}+t_{j-1}}{2}\right).\]
_Then, the following holds_
\[\begin{split}\left\|\int_{t_{0}}^{t_{n}}f(t)\mathrm{d}t-\mathcal{ Q}^{\{\Delta_{j}\}}(f)\right\|_{Y}&\leq\Delta_{1}^{3}\max_{t\in[t_{0},t_{1} ]}\|f^{(2)}(t)\|_{Y}+\sum_{j=1}^{n-1}\left(\Delta_{j}+\Delta_{j+1}\right)^{3} \max_{t\in[t_{j-1},t_{j+1}]}\bigl{\|}f^{(2)}(t)\bigr{\|}_{Y}\\ &\quad+\Delta_{n}^{3}\max_{t\in[t_{n-1},t_{n}]}\|f^{(2)}(t)\|_{Y} +\Delta_{1}^{2}\|f^{(1)}(t_{0})\|_{Y}+\Delta_{n}^{2}\|f^{(1)}(t_{n})\|_{Y}. \end{split} \tag{42}\]
Proof.: To prove inequality (42), we utilize the key idea of viewing the new quadrature rule \(\mathcal{Q}^{\{\Delta_{j}\}}\) as a combination of two composite midpoint rules. Specifically, one associated to the grid \(\{t_{0},t_{2},\ldots,t_{n}\}\) and a second one to
\(\{t_{1},t_{3},\ldots,t_{n-1}\}\). We recall that the local midpoint rule is a numerical method for approximating integrals, where the integrand is evaluated at the midpoint of the integration interval. The local quadrature error of this method is given by:
\[\left\|\int_{a}^{b}f(t)\mathrm{d}t-(b-a)f\left(\frac{a+b}{2}\right)\right\|_{Y} \leq\frac{(b-a)^{3}}{24}\max_{t\in[a,b]}\|f^{(2)}(t)\|_{Y}, \tag{43}\]
where \(f:[a,b]\to Y\) is the integrand function, and \([a,b]\subset\mathbb{R}\) the integration interval. Let \(n\) be even; if \(n\) is odd the proof is similar. Using (43) with the grid \(\{t_{0},t_{2},\ldots,t_{n}\}\), we observe that
\[\left\|\int_{t_{0}}^{t_{n}}f(t)\mathrm{d}t\,-\sum_{k=0}^{\frac{n}{2}-1}\left(t _{2k+2}-t_{2k}\right)f\left(\frac{t_{2k+2}+t_{2k}}{2}\right)\right\|_{Y} \leq\sum_{k=0}^{\frac{n}{2}-1}\frac{\left(t_{2k+2}-t_{2k}\right)^{3}}{24}\max _{t\in[t_{2k},t_{2k+2}]}\|f^{(2)}(t)\|_{Y}, \tag{44}\]
and similarly
\[\left\|\int_{t_{1}}^{t_{n-1}}f(t)\mathrm{d}t-\sum_{k=1}^{\frac{n} {2}-2}\left(t_{2k+1}-t_{2k-1}\right)f\left(\frac{t_{2k+1}+t_{2k-1}}{2}\right) \right\|_{Y}\\ \leq\sum_{k=1}^{\frac{n}{2}-2}\frac{\left(t_{2k+1}-t_{2k-1}\right) ^{3}}{24}\max_{t\in[t_{2k-1},t_{2k+1}]}\|f^{(2)}(t)\|_{Y}. \tag{45}\]
We conclude by using the trapezoidal rule on the remaining intervals \([t_{0},t_{1}]\) and \([t_{n-1},t_{n}]\) (with a local error bound similar to (43), but with constant \(\nicefrac{{1}}{{12}}\))
\[\left\|2\int_{t_{0}}^{t_{n}}f(t)\mathrm{d}t\right\|_{Y} \leq\left\|\int_{t_{0}}^{t_{n}}f(t)\mathrm{d}t\right\|_{Y}+\left\| \int_{t_{1}}^{t_{n-1}}f(t)\mathrm{d}t\right\|_{Y}+\left\|\int_{t_{0}}^{t_{1}} f(t)\mathrm{d}t\right\|_{Y}+\left\|\int_{t_{n-1}}^{t_{n}}f(t)\mathrm{d}t \right\|_{Y} \tag{46}\] \[\leq\left\|\int_{t_{0}}^{t_{n}}f(t)\mathrm{d}t\right\|_{Y}+\left\| \int_{t_{1}}^{t_{n-1}}f(t)\mathrm{d}t\right\|_{Y}+\frac{\Delta_{1}}{2}\left\|f (t_{1})\right\|_{Y}+\frac{\Delta_{1}^{3}}{12}\max_{t\in[t_{0},t_{1}]}\|f^{(2 )}(t)\|_{Y}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{ \Delta_{n}}{2}\|f(t_{n-1})\|_{Y}+\frac{\Delta_{n}^{3}}{12}\max_{t\in[t_{n-1},t_ {n}]}\|f^{(2)}(t)\|_{Y}\] \[\leq\left\|\int_{t_{0}}^{t_{n}}f(t)\mathrm{d}t\right\|_{Y}+\left\| \int_{t_{1}}^{t_{n-1}}f(t)\mathrm{d}t\right\|_{Y}+\Delta_{1}^{2}\left\|f^{(1)} (t_{0})\right\|_{Y}+\Delta_{1}^{3}\max_{t\in[t_{0},t_{1}]}\|f^{(2)}(t)\|_{Y}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\Delta_{ n}^{2}\|f^{(1)}(t_{n})\|_{Y}+\Delta_{n}^{3}\max_{t\in[t_{n-1},t_{n}]}\|f^{(2)}(t)\|_{Y}\]
since \(f(t_{1})=f^{\prime}(t_{0})\Delta_{1}+\frac{f^{\prime\prime}(\xi_{1})\Delta_{1} ^{2}}{2}\) and \(f(t_{n-1})=-f^{\prime}(t_{n})\Delta_{n}+\frac{f^{\prime\prime}(\xi_{n})\Delta_{n }^{2}}{2}\) for \(\xi_{1}\in[t_{0},t_{1}]\) and \(\xi_{n}\in[t_{n-1},t_{n}]\).
We conclude combining (44), (45) and (46), recalling that for \(n\) even
\[2\mathcal{Q}^{\{\Delta_{j}\}}(f)=\sum_{k=0}^{\frac{n}{2}-1}\left(t_{2k+2}-t_{2 k}\right)f\left(\frac{t_{2k+2}+t_{2k}}{2}\right)+\sum_{k=1}^{\frac{n}{2}-2} \left(t_{2k+1}-t_{2k-1}\right)f\left(\frac{t_{2k+1}+t_{2k-1}}{2}\right).\]
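For illustration, the rule \(\mathcal{Q}^{\{\Delta_{j}\}}\) is easy to test numerically. The snippet below (test integrand and grids are our own choices) applies it on quadratically graded grids to a function that vanishes, together with its derivative, at both endpoints, so the error decreases as the grid is refined.

```python
import numpy as np

def Q(tgrid, f):
    """The rule Q^{Delta_j} of Lemma 2 on the grid t_0 < ... < t_n."""
    t = np.asarray(tgrid)
    mids = 0.5 * (t[2:] + t[:-2])          # (t_{j+1} + t_{j-1}) / 2, j = 1, ..., n-1
    lens = 0.5 * (t[2:] - t[:-2])          # (t_{j+1} - t_{j-1}) / 2
    return np.sum(lens * f(mids))

f = lambda x: np.sin(np.pi * x) ** 2       # vanishes (with its derivative) at 0 and 1
exact = 0.5                                # integral of sin^2(pi x) over [0, 1]
for N in (16, 32, 64):
    t = (np.arange(N + 1) / N) ** 2        # quadratically graded grid on [0, 1]
    print(N, abs(Q(t, f) - exact))
```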
**Theorem 1**.: _Let \(\mathcal{K}\in\mathcal{A}(\mu,\mathcal{B}(X,Y))\), \(\mu\in\mathbb{R}\) and \(\rho>\max\{-1,\mu+3\}\). Consider a causal function \(g\in C^{\rho-1}(\mathbb{R})\) satisfying \(g^{(j)}(0)=0\), \(j=0,\ldots,\rho-1\), with \(g^{(\rho)}\) locally integrable. Then, it holds_
\[\left\|\mathcal{K}(\partial_{t})g(t_{n})-\mathcal{K}_{\rho}\left( \partial_{t}^{\{\Delta_{j}\}}\right)g^{(\rho)}(t_{n})\right\|_{Y}\\ \leq\Delta_{1}^{3}\max_{t\in[0,t_{1}]}\|g^{(\rho+2)}(t)\|_{X}+\sum _{j=1}^{n-1}(\Delta_{j}+\Delta_{j+1})^{3}\max_{t\in[t_{j-1},t_{j+1}]}\|g^{(\rho+2 )}(t)\|_{X}\\ +\Delta_{n}^{3}\max_{t\in[t_{n-1},t_{n}]}\|g^{(\rho+2)}(t)\|_{X}+ \Delta_{\max}^{2}\sum_{j=1}^{n-1}(\Delta_{j}+\Delta_{j+1})\|g^{(\rho)}(t_{j})\|_ {X}\\ +\Delta_{1}^{2}\|g^{(\rho+1)}(0)\|_{X}+\sum_{j=1}^{n-1}(\Delta_{j}+ \Delta_{j+1})|\Delta_{j+1}-\Delta_{j}|\max_{t\in[t_{j-1},t_{j+1}]}\|g^{(\rho+1 )}(t)\|_{X}.\]
_In particular, if \(|\Delta_{j+1}-\Delta_{j}|\leq\Delta_{\max}^{2}\), then we have_
\[\begin{split}&\left\|\mathcal{K}(\partial_{t})g(t_{n})-\mathcal{K}_{ \rho}\left(\partial_{t}^{\{\Delta_{j}\}}\right)g^{(\rho)}(t_{n})\right\|_{Y} \\ &\leq\Delta_{\max}^{2}\Biggl{[}\sum_{j=1}^{n-1}(\Delta_{j}+\Delta _{j+1})\left(\max_{t\in[t_{j-1},t_{j+1}]}\|g^{(\rho+2)}(t)\|_{X}+\max_{t\in[t_ {j-1},t_{j+1}]}\|g^{(\rho+1)}(t)\|_{X}+\|g^{(\rho)}(t_{j})\|_{X}\right)\\ &\qquad\qquad+\Delta_{1}\max_{t\in[0,t_{1}]}\|g^{(\rho+2)}(t)\|_ {X}+\Delta_{n}\max_{t\in[t_{n-1},t_{n}]}\|g^{(\rho+2)}(t)\|_{X}+\|g^{(\rho+1)} (0)\|_{X}\Biggr{]}.\end{split}\]
Proof.: By definition (5), we can write
\[\mathcal{K}(\partial_{t})g(t_{n})=\int_{0}^{t_{n}}\kappa_{\rho}(t_{n}-\tau)g^ {(\rho)}(\tau)\mathrm{d}\tau. \tag{47}\]
Let us define \(\widetilde{I}_{n}\) to be a first approximation of this integral
\[\widetilde{I}_{n}:=\sum_{j=1}^{n-1}\left(\frac{t_{j+1}-t_{j-1}}{2}\right) \kappa_{\rho}\left(t_{n}-\frac{t_{j+1}+t_{j-1}}{2}\right)g^{(\rho)}(t_{j}).\]
We express the error in two terms \(\mathcal{K}(\partial_{t})g(t_{n})-\mathcal{K}_{\rho}\left(\partial_{t}^{\{ \Delta_{j}\}}\right)g^{(\rho)}(t_{n})=E_{1}+E_{2}\) where
\[E_{1}:=\mathcal{K}(\partial_{t})g(t_{n})-\widetilde{I}_{n},\qquad E_{2}:= \widetilde{I}_{n}-\mathcal{K}_{\rho}\left(\partial_{t}^{\{\Delta_{j}\}} \right)g^{(\rho)}(t_{n}).\]
To bound \(E_{1}\) we notice that
\[\begin{split}\left\|E_{1}\right\|_{Y}&=\left\| \mathcal{K}(\partial_{t})g(t_{n})-\sum_{j=1}^{n-1}\left(\frac{t_{j+1}-t_{j-1}} {2}\right)\kappa_{\rho}\left(t_{n}-\frac{t_{j+1}+t_{j-1}}{2}\right)g^{(\rho)}( t_{j})\right\|_{Y}\\ &\leq\left\|\mathcal{K}(\partial_{t})g(t_{n})-\sum_{j=1}^{n-1} \left(\frac{t_{j+1}-t_{j-1}}{2}\right)\kappa_{\rho}\left(t_{n}-\frac{t_{j+1}+t _{j-1}}{2}\right)g^{(\rho)}\left(\frac{t_{j+1}+t_{j-1}}{2}\right)\right\|_{Y} \\ &\qquad+\sum_{j=1}^{n-1}\left(\frac{t_{j+1}-t_{j-1}}{2}\right) \left\|\kappa_{\rho}\left(t_{n}-\frac{t_{j+1}+t_{j-1}}{2}\right)\right\|_{ \mathcal{B}(X,Y)}\left\|g^{(\rho)}(t_{j})-g^{(\rho)}\left(\frac{t_{j+1}+t_{j-1 }}{2}\right)\right\|_{X}\\ &=:I_{1}+I_{2}.\end{split} \tag{48}\]
Recalling (47) and applying Lemma 2, we can estimate
\[\begin{split} I_{1}&=\left\|\int_{0}^{t_{n}}\kappa_{ \rho}(t_{n}-\tau)g^{(\rho)}(\tau)\mathrm{d}\tau-\mathcal{Q}^{\{\Delta_{j}\}} \left(\kappa_{\rho}(t_{n}-.)g^{(\rho)}\right)\right\|_{Y}\\ &\leq\Delta_{1}^{3}\max_{t\in[0,t_{1}]}\|g^{\rho+2}(t)\|_{X}+\sum_{ j=1}^{n-1}(\Delta_{j}+\Delta_{j+1})^{3}\max_{t\in[t_{j-1},t_{j+1}]}\|g^{(\rho+2)}(t) \|_{X}+\Delta_{n}^{3}\max_{t\in[t_{n-1},t_{n}]}\|g^{(\rho+2)}(t)\|_{X}\\ &\quad+\Delta_{1}^{2}\|g^{(\rho+1)}(0)\|_{X},\end{split} \tag{49}\]
where we also used the boundedness of \(\|\kappa_{\rho}(\cdot)\|_{\mathcal{B}(X,Y)}\) and \(\kappa_{\rho}(0)=\partial_{t}\kappa_{\rho}(0)=0\).
On the other hand, we observe that
\[\left\|g^{(\rho)}(t_{j})-g^{(\rho)}\left(\frac{t_{j+1}+t_{j-1}}{2}\right) \right\|_{X}\lesssim\left|\frac{t_{j+1}+t_{j-1}-2t_{j}}{2}\right|\max_{t\in[t _{j-1},t_{j+1}]}\|g^{(\rho+1)}(t)\|_{X},\]
from which we conclude
\[\begin{split} I_{2}&\lesssim\sum_{j=1}^{n-1}\left( \frac{t_{j+1}-t_{j-1}}{2}\right)\left|\frac{t_{j+1}+t_{j-1}-2t_{j}}{2}\right| \max_{t\in[t_{j-1},t_{j+1}]}\|g^{(\rho+1)}(t)\|_{X}\\ &\lesssim\sum_{j=1}^{n-1}(\Delta_{j}+\Delta_{j+1})|\Delta_{j+1}- \Delta_{j}|\max_{t\in[t_{j-1},t_{j+1}]}\|g^{(\rho+1)}(t)\|_{X}.\end{split} \tag{50}\]
To bound \(E_{2}\) we simply use Proposition 4
\[\begin{split}\|E_{2}\|_{Y}&\leq\sum_{j=1}^{n-1}\left\|w_{n,j}(\mathcal{K}_{\rho})-\frac{t_{j+1}-t_{j-1}}{2}\kappa_{\rho}\left(t_{n}-\frac{t_{j-1}+t_{j+1}}{2}\right)\right\|_{\mathcal{B}(X,Y)}\|g^{(\rho)}(t_{j})\|_{X}\\ &\lesssim\Delta_{\max}^{2}\sum_{j=1}^{n-1}(\Delta_{j}+\Delta_{j+1})\|g^{(\rho)}(t_{j})\|_{X}.\end{split} \tag{51}\]
Combining estimates (48), (49), (50) and (51) we conclude.
**Remark 3**.: In practice, a common choice of variable grid is a graded mesh of the form \(\{t_{j}=(\nicefrac{{j}}{{N}})^{\alpha},\ j=0,\ldots,N\}\), for some \(\alpha\geq 1\) and \(N\in\mathbb{N}\). For this particular time-stepping scheme, we can verify that
\[|\Delta_{j+1}-\Delta_{j}|\leq\Delta_{\max}^{2} \tag{52}\]
for all \(j=1,\ldots,N-1\). Indeed, we reach the maximum for \(j=N-1\), thus obtaining
\[\begin{split}\max_{j\in\{1,\ldots,N-1\}}|\Delta_{j+1}-\Delta_{j}| &=\left(1-\frac{(N-1)^{\alpha}}{N^{\alpha}}\right)-\left(\frac{( N-1)^{\alpha}}{N^{\alpha}}-\frac{(N-2)^{\alpha}}{N^{\alpha}}\right)\\ &=1-2\frac{(N-1)^{\alpha}}{N^{\alpha}}+\frac{(N-2)^{\alpha}}{N^{ \alpha}}\end{split} \tag{53}\]
and similarly, since \(\Delta_{\max}=\Delta_{N}\),
\[\Delta_{\max}^{2}=\left(1-\frac{(N-1)^{\alpha}}{N^{\alpha}}\right)^{2}=1-2 \frac{(N-1)^{\alpha}}{N^{\alpha}}+\frac{(N-1)^{2\alpha}}{N^{2\alpha}}. \tag{54}\]
Combining (53) and (54), we conclude that (52) is equivalent to
\[\frac{(N-2)^{\alpha}}{N^{\alpha}}\leq\frac{(N-1)^{2\alpha}}{N^{2\alpha}},\]
but this is clearly true for all \(N\in\mathbb{N}\) and for all \(\alpha\geq 1\).
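As a quick numerical sanity check (an ad-hoc snippet; the grading exponents and grid sizes below are our own choices), condition (52) can also be verified directly for the graded meshes used in the experiments of the next section:

```python
import numpy as np

for alpha in (1, 2, 3):
    for N in (8, 16, 32, 64):
        t = (np.arange(N + 1) / N) ** alpha       # graded mesh of Remark 3
        d = np.diff(t)                            # step sizes Delta_j
        assert np.max(np.abs(np.diff(d))) <= np.max(d) ** 2 + 1e-15
print("condition (52) holds for all tested grids")
```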
## 5 Numerical results and algorithms
This section outlines the numerical algorithms used to compute the forward and backward gCQ based on the trapezoidal rule. Additionally, we introduce the gCQ based on BDF2 with non-uniform time steps, along with the corresponding algorithms. We also provide a reminder of the quadrature rules proposed in [13] and explain how we have adapted them to our specific context. To illustrate the effectiveness of our proposed methods, we include a one-dimensional numerical example.
### gCQ based on BDF2 with non-uniform steps
Here, we also present a gCQ method based on BDF2 on variable grids. Proceeding as in (8), we need to discretize the initial value problem (9) via a non-uniform BDF2 scheme (see e.g. [9, Section 5]).
Given \(0=t_{0}<t_{1}<\ldots<t_{N}=T\) with time-steps \(\Delta_{n}=t_{n}-t_{n-1},n=1,\ldots,N\), and setting \(\Delta_{0}=\Delta_{1}\), the approximation at \(t_{n}\) of the solution of (9), for \(n=1,\ldots,N\) is
\[u_{n}(s)=u_{n-1}(s)\frac{(\Delta_{n-1}+\Delta_{n})^{2}}{\Delta_{n-1}(\Delta_{n -1}+2\Delta_{n})}-u_{n-2}(s)\frac{\Delta_{n}^{2}}{\Delta_{n-1}(\Delta_{n-1}+2 \Delta_{n})}+\big{(}su_{n}(s)+g^{(\rho)}(t_{n})\big{)}\frac{\Delta_{n}(\Delta_ {n-1}+\Delta_{n})}{\Delta_{n-1}+2\Delta_{n}}\]
with \(u_{0}(s)\equiv u_{-1}(s)\equiv 0\), from which we simplify
\[u_{n}(s)=u_{n-1}(s)\frac{B_{n}}{1-A_{n}s}-u_{n-2}(s)\frac{C_{n}}{1-A_{n}s}+g^{ (\rho)}(t_{n})\frac{A_{n}}{1-A_{n}s} \tag{55}\]
with the defined coefficients
\[A_{n}:=\frac{\Delta_{n}(\Delta_{n-1}+\Delta_{n})}{\Delta_{n-1}+2\Delta_{n}}, \quad B_{n}:=\frac{(\Delta_{n-1}+\Delta_{n})^{2}}{\Delta_{n-1}(\Delta_{n-1}+2 \Delta_{n})},\quad C_{n}:=\frac{\Delta_{n}^{2}}{\Delta_{n-1}(\Delta_{n-1}+2 \Delta_{n})}. \tag{56}\]
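As a quick sanity check of (56) (a small ad-hoc snippet, not part of the algorithms described below): for equal steps the coefficients reduce to the classical BDF2 values \(A_{n}=2\Delta/3\), \(B_{n}=4/3\), \(C_{n}=1/3\).

```python
def bdf2_coeffs(d_prev, d_cur):
    """Coefficients A_n, B_n, C_n of (56) for steps Delta_{n-1} = d_prev and Delta_n = d_cur."""
    denom = d_prev + 2 * d_cur
    A = d_cur * (d_prev + d_cur) / denom
    B = (d_prev + d_cur) ** 2 / (d_prev * denom)
    C = d_cur ** 2 / (d_prev * denom)
    return A, B, C

print(bdf2_coeffs(0.1, 0.1))    # (2*0.1/3, 4/3, 1/3): the uniform BDF2 coefficients
print(bdf2_coeffs(0.1, 0.15))   # a genuinely non-uniform pair of steps
```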
Deriving a closed-form solution similar to equation (11) for this recursive approach is a challenging task. Hence, we postpone the theoretical analysis of the BDF2 gCQ to future research. In this paper, we focus on providing a concise description of Algorithms 2 and 4 below, which facilitate the computation of the forward and backward gCQ based on BDF2. Additionally, we conduct a numerical experiment in the next subsection to highlight the effectiveness of BDF2 as well. In addition to these algorithms, we have also synthesized similar algorithms for the gCQ based on the trapezoidal rule, both for the forward and the backward gCQ (see Algorithms 1 and 3). It is worth noting that the forward scheme is used to compute a convolution like (1) when \(g\) is known, while the backward scheme is used when \(\phi\) is known. Similar algorithms for the BDF1 gCQ can be found in [13, Section 4], and for Runge-Kutta gCQ in [15, Section 6] and [11, Section 3].
### Quadrature aspects
The idea is to compute step by step
\[\mathcal{K}\left(\partial_{t}^{\{\Delta_{j}\}}\right)g(t_{n})=\frac{1}{2\pi \mathrm{i}}\oint_{\mathcal{C}}\mathcal{K}_{\rho}(s)u_{n}(s)\mathrm{d}s \tag{57}\]
where \(u_{n}\) is defined in (10) and the complex integral is performed via suitable quadrature rules.
In order to utilize BDF1 with gCQ effectively, it is necessary to solve a quadrature problem. This issue has been successfully addressed in [13], with experimental results provided in [14]. The circle contour is the optimal choice in this case, and it is parameterized using Jacobi elliptic functions to fully exploit the analyticity domain of the integrand in (57). It is worth noting that the poles of the integrand are located on a segment of the positive real axis.
We will briefly review the construction and the simple modification made in our case. As per the theoretical analysis presented in [13, 14], we adopt \(N_{Q}=N\log_{2}^{2}(N)\) quadrature points, where \(N\) represents the number of time steps. The details and results of the aforementioned papers are summarized below. The integration points in the complex plane are
\[s_{\ell}:=\gamma(\sigma_{\ell}),\quad w_{\ell}:=\frac{4K(k^{2})}{2\pi\mathrm{i }N_{Q}}\gamma^{\prime}(\sigma_{\ell}),\quad\ell=1,\ldots,N_{Q}\]
where the parameters \(k\) and \(\sigma_{\ell}\), depending on
\[q:=\frac{M}{\Delta_{\min}},\qquad M:=R\max\left\{\Delta_{\max}^{-2},\Delta_{ \min}^{-1}\right\},\quad\Delta_{\min}:=\min\{\Delta_{j}\}, \tag{58}\]
are defined as
\[k:=\frac{q-\sqrt{2q-1}}{q+\sqrt{2q-1}},\quad\sigma_{\ell}:=-K(k^{2})+\left( \ell-\frac{1}{2}\right)\frac{4K(k^{2})}{N_{Q}}\]
for \(\ell=1,\ldots,N_{Q}\). The parameter \(R\) is a constant depending on the underlying ODE solver, precisely
\[R:=\begin{cases}1&\text{BDF1}\\ 1.5&\text{BDF2}\\ 2&\text{trapezoidal rule}.\end{cases}\]
Finally, \(K(k)\) is the complete elliptic integral of the first kind
\[K(k):=\int_{0}^{1}\frac{1}{\sqrt{(1-x^{2})(1-k^{2}x^{2})}}\mathrm{d}x,\quad K ^{\prime}(k)=K(1-k),\]
and the function \(\gamma\) is the parametrization of a circle centered at \(M\) with radius \(M\) (see [13, Lemma 15])
\[\gamma(\sigma):=\frac{M}{q-1}\left(\sqrt{2q-1}\,\frac{k^{-1}+\mathrm{sn}(\sigma|k^{2})}{k^{-1}-\mathrm{sn}(\sigma|k^{2})}-1\right),\quad\gamma^{\prime}(\sigma)=\frac{M\sqrt{2q-1}}{q-1}\,\frac{2\,\mathrm{cn}(\sigma|k^{2})\,\mathrm{dn}(\sigma|k^{2})}{k\left(k^{-1}-\mathrm{sn}(\sigma|k^{2})\right)^{2}}\]
where \(\mathrm{sn}\), \(\mathrm{cn}\) and \(\mathrm{dn}\) are the Jacobi elliptic functions, whose evaluation has been performed in MATLAB by means of Driscoll's Schwarz-Christoffel Toolbox [7].
**Remark 4**.: The only deviation from the nodes and weights proposed in [13] is the introduction of a scaling factor, \(R\), in (58). This parameter ensures that the complex poles of the integrands are suitably distanced from the contour of the circle with radius \(M\) and center \(M\) used for integration. For BDF1, the poles are \(\{\Delta_{j}^{-1}\}\), and [13] demonstrated that \(R=1\) is sufficient. In the case of BDF2, the poles are \(\{A_{j}^{-1}\}\) as defined in (56). To extend the ideas put forth in [13], we set \(R=1.5\) for this case. In fact, as \(\Delta_{\max}\) approaches 0, we have \(A_{j}^{-1}\to\frac{3}{2}\Delta_{j}^{-1}\). Finally, for the trapezoidal rule, the poles are \(\{2\Delta_{j}^{-1}\}\), and we have selected \(R=2\).
### Numerical results
We consider the following one-dimensional example, already used in [13, 15]: find \(g\) such that \(\mathcal{K}(\partial_{t})g=\phi\) with
\[\mathcal{K}(s):=\frac{1-e^{-2s}}{2s},\quad\text{and}\quad\phi(t):=t^{\nicefrac{{ 5}}{{2}}}e^{-t}. \tag{59}\]
The exact solution to this problem is given by
\[g(t):=2\sum_{k=0}^{\lfloor\frac{t}{2}\rfloor}\phi^{\prime}(t-2k)\]
We approximate \(g(t)\) for \(t\in[0,1]\) by applying Algorithms 3 and 4. Note that \(\mathcal{K}^{-1}\) satisfies (3) with \(\mu=1\). The right-hand side \(\phi\) satisfies \(\phi^{(j)}(0)=0\) for \(j=0,1,2\) but is not three times differentiable at \(t=0\). This lack of regularity suggests using a time grid that is algebraically graded towards the origin. We choose a graded mesh with points
\[t_{j}=\left(\frac{j}{N}\right)^{\alpha},\quad j=0,\ldots N \tag{60}\]
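For reference, the data of (59), the exact solution, and the mesh (60) can be set up as follows (a small helper sketch; on \([0,1]\) the sum in the exact solution reduces to the single term \(2\phi^{\prime}(t)\)):

```python
import numpy as np

phi  = lambda t: t ** 2.5 * np.exp(-t)                     # right-hand side of (59)
dphi = lambda t: (2.5 * t ** 1.5 - t ** 2.5) * np.exp(-t)  # phi'
g_exact = lambda t: 2.0 * dphi(t)                          # exact solution on [0, 1]

N, alpha = 64, 2
t = (np.arange(N + 1) / N) ** alpha                        # quadratically graded mesh (60)
```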
Figure 1 shows that the convergence rate is \(\mathcal{O}(\Delta^{2})\) for the quadratically graded mesh (\(\alpha=2\)) and about \(\mathcal{O}(\Delta^{1.5})\) for the uniform mesh (\(\alpha=1\)). For this example, we have \(\mu=1\), so the minimal integer satisfying \(\rho>\mu+3=4\) is \(\rho=5\). Note, however, that we have used \(\rho=0\) in our computations. It remains an open problem whether there exist examples where a larger value of \(\rho\) is necessary for variable steps than for uniform steps, or whether our theory yields a suboptimal estimate in terms of this parameter.
## 6 Conclusion
We present an improved approach for solving one-sided convolution equations: the gCQ method with variable time stepping based on the trapezoidal rule, which we develop and analyze in this paper. This method builds on the original
Figure 1: Error with respect to the number of steps for the data in (59) for different grids (60), obtained with gCQ based on the trapezoidal rule (left) and on BDF2 (right).
Figure 2: Absolute error in the approximation with the trapezoidal rule for the data in (59) with \(N=64\) time steps: with uniform time steps, i.e. \(\alpha=1\), in the left, and with quadratically graded time steps, i.e. \(\alpha=2\), in the right.
**Algorithm 1**_Forward_ gCQ with contour quadrature based on the trapezoidal rule

**Initialization** Generate \(\mathcal{K}_{\rho}(s_{\ell})\) for all contour quadrature nodes \(s_{\ell}\), \(\ell=1,\ldots,N_{Q}\). Compute \(\phi_{1}\) from
\[\phi_{1}=\mathcal{K}_{\rho}\left(\frac{2}{\Delta_{1}}\right)g^{(\rho)}(t_{1}).\]
Set \(u_{0}(s)\equiv 0\).
**for \(n=2,\ldots,N\)do**
**1. Trapezoidal step:** apply a step of the trapezoidal rule and compute
\[u_{n-1}(s_{\ell})=u_{n-2}(s_{\ell})\frac{2+\Delta_{n-1}s_{\ell}}{2-\Delta_{n-1 }s_{\ell}}+\left(g^{(\rho)}(t_{n-2})+g^{(\rho)}(t_{n-1})\right)\frac{\Delta_{n -1}}{2-\Delta_{n-1}s_{\ell}},\]
for all contour quadrature nodes \(\ell=1,\ldots,N_{Q}\).
**2. Compute \(\phi_{n}\):** if \(\Delta_{n}\) is a new time step, then, generate \(\mathcal{K}_{\rho}\left(\frac{2}{\Delta_{n}}\right)\); otherwise this operator was already generated in a previous step. Compute \(\phi_{n}\) from
\[\phi_{n}=\sum_{\ell=1}^{N_{Q}}w_{\ell}\mathcal{K}_{\rho}(s_{\ell})u_{n-1}(s_{ \ell})\frac{2+\Delta_{n}s_{\ell}}{2-\Delta_{n}s_{\ell}}+\mathcal{K}_{\rho} \left(\frac{2}{\Delta_{n}}\right)\left(g^{(\rho)}(t_{n-1})+g^{(\rho)}(t_{n}) \right).\]
**end for**
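To make the structure of the forward scheme concrete, the following self-contained sketch propagates the trapezoidal recursion at contour nodes and evaluates (57) directly. It replaces the elliptic-function contour of [13] by a plain circle with equally spaced nodes, folds the newest step into \(u_{n}\) instead of treating it separately as in Algorithm 1, and uses an illustrative kernel, data and grid (not those of the experiments below); it is therefore a simplified stand-in for Algorithm 1 rather than a faithful implementation.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative data: K(s) = 1/(1+s)^2 (so kappa(t) = t*exp(-t) and, since mu < -1, rho = 0),
# and causal data g(t) = t^3.
K     = lambda s: 1.0 / (1.0 + s) ** 2
g     = lambda t: t ** 3
kappa = lambda t: t * np.exp(-t)

# Non-uniform grid on [0, 1] with two alternating step sizes.
steps = np.tile([0.05, 0.075], 8)                 # 16 steps, sum = 1.0
t = np.concatenate(([0.0], np.cumsum(steps)))
N = len(steps)

# Circular contour enclosing all poles 2/Delta_k of the recursion; s = -1 (pole of K) stays outside.
poles = 2.0 / steps
c = 0.5 * (poles.min() + poles.max())             # center of the circle
r = 0.7 * poles.max()                             # radius
NQ = 128
theta = 2.0 * np.pi * (np.arange(NQ) + 0.5) / NQ
s = c + r * np.exp(1j * theta)                    # contour nodes
w = r * np.exp(1j * theta) / NQ                   # so that (1/(2*pi*i)) * contour integral ~ sum(w * f(s))

# Trapezoidal recursion for u_n(s) and gCQ values phi_n = (1/(2*pi*i)) * contour integral of K(s) u_n(s).
u = np.zeros(NQ, dtype=complex)
phi = np.zeros(N + 1)
for n in range(1, N + 1):
    dn = steps[n - 1]
    u = (u * (2.0 + dn * s) + dn * (g(t[n - 1]) + g(t[n]))) / (2.0 - dn * s)
    phi[n] = np.sum(w * K(s) * u).real

# Reference values of the exact convolution, computed by adaptive quadrature.
ref = np.array([quad(lambda tau, tn=tn: kappa(tn - tau) * g(tau), 0.0, tn)[0] for tn in t])
print("max |gCQ - exact| =", np.max(np.abs(phi - ref)))
```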
**Algorithm 2**_Forward_ gCQ with contour quadrature based on BDF2
**Initialization** Generate \(\mathcal{K}_{\rho}(s_{\ell})\) for all contour quadrature nodes \(s_{\ell}\), \(\ell=1,\ldots,N_{Q}\). Compute \(\phi_{1}\) from
\[\phi_{1}=\mathcal{K}_{\rho}\left(\frac{1}{A_{1}}\right)g^{(\rho)}(t_{1}).\]
Set \(u_{0}(s)\equiv u_{-1}(s)\equiv 0\).
**for \(n=2,\ldots,N\)do**
**1. BDF2 step:** apply a step of the BDF2 and compute
\[u_{n-1}(s_{\ell})=u_{n-2}(s_{\ell})\frac{B_{n-1}}{1-A_{n-1}s_{\ell}}-u_{n-3}( s_{\ell})\frac{C_{n-1}}{1-A_{n-1}s_{\ell}}+g^{(\rho)}(t_{n-1})\frac{A_{n-1}}{1-A_{n- 1}s_{\ell}}\]
with coefficients (56), for all contour quadrature nodes \(\ell=1,\ldots,N_{Q}\).
**2. Compute \(\phi_{n}\):** if \(A_{n}\) is different from the previous coefficients, then generate \(\mathcal{K}_{\rho}\left(\frac{1}{A_{n}}\right)\); otherwise this operator was already generated in a previous step. Compute \(\phi_{n}\) from
\[\phi_{n}=\sum_{\ell=1}^{N_{Q}}w_{\ell}\mathcal{K}_{\rho}(s_{\ell})\left(u_{n-1 }(s_{\ell})\frac{B_{n}}{1-A_{n}s_{\ell}}-u_{n-2}(s_{\ell})\frac{C_{n}}{1-A_{n} s_{\ell}}\right)+\mathcal{K}_{\rho}\left(\frac{1}{A_{n}}\right)g^{(\rho)}(t_{n}).\]
**end for**
**Algorithm 3**_Backward_ gCQ with contour quadrature based on the trapezoidal rule
**Initialization** Generate \(\mathcal{K}_{-\rho}(s_{\ell})\) for all contour quadrature nodes \(s_{\ell}\), \(\ell=1,\ldots,N_{Q}\). Compute \(g_{1}\) from
\[\mathcal{K}_{-\rho}\left(\frac{2}{\Delta_{1}}\right)g_{1}=\phi^{(\rho)}(t_{1}).\]
Set \(u_{0}(s)\equiv 0\).
**for**\(n=2,\ldots,N\)**do**
**1. Trapezoidal step:** apply a step of the trapezoidal rule and compute
\[u_{n-1}(s_{\ell})=u_{n-2}(s_{\ell})\frac{2+\Delta_{n-1}s_{\ell}}{2-\Delta_{n- 1}s_{\ell}}+\left(g_{n-2}+g_{n-1}\right)\frac{\Delta_{n-1}}{2-\Delta_{n-1}s_{ \ell}},\]
for all contour quadrature nodes \(\ell=1,\ldots,N_{Q}\).
**2. Generate linear system:** if \(\Delta_{n}\) is a new time step, then, generate \(\mathcal{K}_{-\rho}\left(\frac{2}{\Delta_{n}}\right)\); otherwise this operator was already generated in a previous step. Update the right-hand side
\[r_{n}:=\phi^{(\rho)}(t_{n})-\sum_{\ell=1}^{N_{Q}}w_{\ell}\mathcal{K}_{-\rho}(s_{\ell})u_{n-1}(s_{\ell})\frac{2+\Delta_{n}s_{\ell}}{2-\Delta_{n}s_{\ell}}-\mathcal{K}_{-\rho}\left(\frac{2}{\Delta_{n}}\right)g_{n-1}.\]
**3. Linear Solve:** solve the linear system
\[\mathcal{K}_{-\rho}\left(\frac{2}{\Delta_{n}}\right)g_{n}=r_{n}.\]
**end for**
**Algorithm 4**_Backward_ gCQ with contour quadrature based on BDF2
**Initialization** Generate \(\mathcal{K}_{-\rho}(s_{\ell})\) for all contour quadrature nodes \(s_{\ell}\), \(\ell=1,\ldots,N_{Q}\). Compute \(g_{1}\) from
\[\mathcal{K}_{-\rho}\left(\frac{1}{A_{1}}\right)g_{1}=\phi^{(\rho)}(t_{1}).\]
Set \(u_{0}(s)\equiv u_{-1}(s)\equiv 0\).
**for**\(n=2,\ldots,N\)**do**
**1. BDF2 step:** apply a step of the BDF2 scheme and compute
\[u_{n-1}(s_{\ell})= u_{n-2}(s_{\ell})\frac{B_{n-1}}{1-A_{n-1}s_{\ell}}-u_{n-3}(s_{ \ell})\frac{C_{n-1}}{1-A_{n-1}s_{\ell}}+g_{n-1}\frac{A_{n-1}}{1-A_{n-1}s_{ \ell}}\]
with coefficients (56), for all contour quadrature nodes \(\ell=1,\ldots,N_{Q}\).
**2. Generate linear system:** if \(A_{n}\) is different from the previous coefficients, then generate \(\mathcal{K}_{-\rho}\left(\frac{1}{A_{n}}\right)\); otherwise, this operator was already generated in a previous step. Update the right-hand side
\[r_{n}:=\phi^{(\rho)}(t_{n})-\sum_{\ell=1}^{N_{Q}}w_{\ell}\mathcal{K}_{-\rho}(s _{\ell})\left(u_{n-1}(s_{\ell})\frac{B_{n}}{1-A_{n}s_{\ell}}-u_{n-2}(s_{\ell}) \frac{C_{n}}{1-A_{n}s_{\ell}}\right).\]
**3. Linear Solve:** solve the linear system
\[\mathcal{K}_{-\rho}\left(\frac{1}{A_{n}}\right)g_{n}=r_{n}.\]
**end for**
CQ method, which transforms the continuous equation to the Laplace domain and characterizes the transformed solution as an ODE. In contrast to the CQ method, we introduce variable time stepping for the solution of the ODE, resulting in the gCQ method with improved accuracy and efficiency. Specifically, we utilize the trapezoidal rule for the time stepping in the gCQ method.
To analyze the gCQ method, we develop a theory based on a new quadrature formula and a pointwise error estimate for the weights. The gCQ method is also implemented in a stable algorithmic version based on both the trapezoidal and the BDF2 rules, and we report the results of numerical experiments illustrating the advantages of variable time stepping for non-smooth data. It is worth noting that a quadrature on an appropriate contour in the integral formula (57) is used for stable computation, as in [13]. However, the fast FFT-based algorithms available for the uniform time-step CQ do not apply here. A study of the stability and convergence of the BDF2 method will be conducted in future work.
## Acknowledgments
The second author is a member of the GNCS group (_Gruppo Nazionale per il Calcolo Scientifico_) of INdAM (_Istituto Nazionale di Alta Matematica "F. Severi"_). The second author was partially supported by the MIUR grant _Dipartimenti di Eccellenza 2018-2022_ (E11G18000350001) of the Italian Ministry for University and Research.
|
2310.02498 | Single-shot Non-destructive Quantum Sensing for Gaseous Samples with
Hundreds of Chiral Molecules | Chiral discrimination that is efficient to tiny amounts of chiral substances,
especially at the single-molecule level, is highly demanded. Here, we propose a
single-shot nondestructive quantum sensing method addressing such an issue. Our
scheme consists of two steps. In the first step, the two enantiomers are
prepared in different rotational states via microwave enantio-specific state
transfer. Then, the chiral discrimination is transferred to quantum hypothesis
testing. In the second step, we for the first time introduce a non-destructive
quantum-state detection technique assisted with a microwave resonator to chiral
discrimination, through which the molecular chirality is determined by the sign
of the output signals. Using a typical chiral molecule, 1,2-propanediol, and an
experimentally feasible model based on spherical Fabry-P\'{e}rot cavity, we
show that the molecular chirality of slowly moving enantiopure gaseous samples
with $10^2 - 10^3$ molecules can be highly credibly distinguished in a
single-shot detection. By further trapping chiral molecules, it is promising to
achieve chiral discrimination at the single molecule level by using our
approach. | Chong Ye, Yifan Sun, Yong Li, Xiangdong Zhang | 2023-10-04T00:16:04Z | http://arxiv.org/abs/2310.02498v1 | # Single-shot Non-destructive Quantum Sensing for Gaseous Samples with Hundreds of Chiral Molecules
###### Abstract
Chiral discrimination that is efficient to tiny amounts of chiral substances, especially at the single-molecule level, is highly demanded. Here, we propose a single-shot non-destructive quantum sensing method addressing such an issue. Our scheme consists of two steps. In the first step, the two enantiomers are prepared in different rotational states via microwave enantio-specific state transfer. Then, the chiral discrimination is transferred to quantum hypothesis testing. In the second step, we for the first time introduce a non-destructive quantum-state detection technique assisted with a microwave resonator to chiral discrimination, through which the molecular chirality is determined by the sign of the output signals. Using a typical chiral molecule, 1,2-propanediol, and an experimentally feasible model based on spherical Fabry-Perot cavity, we show that the molecular chirality of slowly moving enantiopure gaseous samples with \(10^{2}\sim 10^{3}\) molecules can be highly credibly distinguished in a single-shot detection. By further
trapping chiral molecules, it is promising to achieve chiral discrimination at the single-molecule level by using our approach.
Chiral molecules, also known as enantiomers, are mirror images of each other but are not super-imposable by translations and rotations. Their significance is emphasized by the enantioselectivity in the activity of biological molecules and broad classes of chemical reactions as well as the homochirality in life. Chiral discrimination, distinguishing two enantiomers of opposite chiralities, is a vitally important problem since the first discovery of molecular chirality [1]. According to Curie's dissymmetry principle, two enantiomers can be distinguished by interacting with another symmetry-broken (chiral) item, including light, molecule, and media. In traditional chiroptical methods, such as optical rotary dispersion, circular dichroism, and Raman optical activity, circularly polarized light was used for chiral discrimination. These methods relied on relatively weak optical magnetic interactions, such that highly concentrated samples with a large number of chiral molecules are needed. In addition, circularly polarized fields were also proposed to obtain the enantio-specific state transfer [2] and the laser-induced asymmetric synthesis [3].
For biomedical and pharmaceutical applications, ultra-sensitive chiral discrimination methods that are effective for tiny amounts of chiral substances are highly demanded. The achievement of chiral discrimination at the single-molecule level, such as chiral scanning tunneling microscopy on surface-bound molecules [4, 5], can substantially promote the related areas. Recently, chiral discrimination based on a purely electric-dipole effect was achieved in cold gaseous samples by using the microwave three-wave mixing technique [6, 7, 8, 9, 10, 11, 12, 13], where two microwave fields are applied to generate a third field. Because the transition electric dipoles change sign between the two enantiomers, the phase of the third field differs by \(\pi\) for the two enantiomers. This opens up a new avenue for developing ultra-sensitive chiroptical methods. An enantiomeric excess resolution of 2% was reported by using \(10^{6}\) repeated measurements in cold gaseous samples of \(10^{11}\) effective working molecules [7].
inner state can be transferred to different-energy inner states by applying three microwave fields. By combining it with state-selective free-induction decay (FID) detection, an enantiomeric excess resolution of 0.05% was realized [14]. Further, several sensitive state-selective detection techniques, including laser-induced fluorescence (LIF), resonance-enhanced multiphoton ionization (REMPI), resonance-enhanced multiphoton dissociation (REMPD), and buffer-gas infrared absorption spectroscopy (BGIRAS), were suggested to enhance the resolution [14]. Very recently, the LIF scheme was experimentally demonstrated [16].
However, the commonly used state-selective detection methods have severe drawbacks for tiny amounts of chiral substances due to their destructive nature. For LIF, REMPI, and REMPD, there are many competing processes due to the complex level structure of chiral molecules. As a result, the desired signals are usually not sufficiently generated before the initial molecular states are destroyed. BGIRAS obtains the frequency information by detecting signals at different locations, such that a single absorption event does not provide enough information about the spectrum. Moreover, the chiral molecules may be destroyed or disturbed after destructive detection, so that chemically and biologically important chiral molecules cease to be usable.
In this Letter, we theoretically propose an ultra-sensitive single-shot non-destructive quantum sensor for chiral discrimination of gaseous samples with hundreds of chiral molecules. It consists of two steps as shown in Fig. 1(a). We first prepare chiral molecules of opposite chirality in different rotational states by microwave enantio-specific state transfer, thereby transferring the chiral discrimination to quantum hypothesis testing. Then, we introduce, for the first time, the dispersive detection technique assisted by a microwave resonator, where a balanced homodyne measurement is used and the molecular chirality is distinguished by the sign of the output signals. Our method is ultra-sensitive, without the requirement of repeated measurements, and non-destructive, offering advantages over the three-wave mixing technique [6, 7, 8, 9, 10, 11, 12, 13]. To illustrate these advantages, we provide numerical simulations by using a typical chiral molecule, 1,2-propanediol, and an experimentally feasible model based on
a spherical Fabry-Perot cavity. The numerical results show that the molecular chirality of slowly moving enantiopure samples with \(10^{2}\sim 10^{3}\) molecules can be highly credibly distinguished in a single-shot detection. It is promising to achieve chiral discrimination at the single-molecule level by further trapping chiral molecules.
_Cavity-assisted chiral quantum sensor._ As shown in Fig. 1(a), we assume that chiral molecules with unknown chirality are initially populated in the state \(|1\rangle\). In step I, we realize the enantio-specific state transfer of chiral molecules by applying three pulses with well-designed polarization and frequencies [17] [see the left upper corner of Fig. 1(a)], such that the working model of step I is described by the cyclic three-level model [see Fig. 1(b)]. The working Hamiltonian of step I in the interaction picture is
\[\hat{H}_{Q}=\sum_{j>i=1}^{3}\Omega_{ij}^{Q}(t)|i\rangle\langle j|+h.c., \tag{1}\]
where subscript \(Q=(L,R)\) denotes the molecular chirality and the enantioselective coupling strengths \(\Omega_{ij}^{Q}\) are
\[\Omega_{21}^{Q}=\Omega_{21},\ \ \Omega_{32}^{Q}=\Omega_{32},\ \ \Omega_{31}^{L}=- \Omega_{31}^{R}=\Omega_{31}e^{\mathrm{i}\phi}. \tag{2}\]
For simplicity and without loss of generality, \(\Omega_{ji}\) are assumed to be positive. The cyclic three-level model offers the possibility of generating enantioselectivity in the purely electric dipole effect, and has thus been intensely discussed in chiral discrimination and enantiopurification [18, 19, 20, 21, 22, 23, 24, 25]. In our treatment, we have also neglected the predicted tiny energy differences between the two enantiomers due to parity-violating interactions [26].
We apply three non-overlapping resonant pulses [see the left panel of Fig. 1(a)], yielding the following transfer sequence [16]: \(|2\rangle\stackrel{\pi/2}{\longleftarrow}|1\rangle\stackrel{\pi}{\longrightarrow}|3\rangle\stackrel{\pi/2}{\longrightarrow}|2\rangle\). The overall phase is adjusted to \(\phi=-\pi/2\) by tuning the phases of the applied fields. The final states of the molecules are enantioselective [see the middle panel of Fig. 1(a)]. Then, the distinguishing of molecular chirality is transferred to testing the two most distinguishable
quantum hypotheses [27]
\[\text{Hypothesis }H_{L}:|\Psi\rangle_{H_{L}}=|3\rangle,\] \[\text{Hypothesis }H_{R}:|\Psi\rangle_{H_{R}}=|2\rangle. \tag{3}\]
For details about the working states of the cyclic three-level model, the origin of its enantioselectivity in pure electric dipole-interaction physics, and the evolution of the chiral molecule, see Sec. 1.1 in Supporting Information.
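As a quick consistency check of this step, the minimal Python sketch below (ours, not the authors' code; the pulse-area convention \(\Omega t=\pi/4\) for a "\(\pi/2\) pulse" and \(\Omega t=\pi/2\) for a "\(\pi\) pulse", together with idealized square resonant pulses, are our assumptions) propagates molecules prepared in \(|1\rangle\) through the three pulses of the cyclic three-level model of Eqs. (1)-(2) with \(\phi=-\pi/2\), and confirms that the final populations realize the two hypotheses of Eq. (3).

```python
import numpy as np
from scipy.linalg import expm

e1, e2, e3 = np.eye(3)          # basis states |1>, |2>, |3>

def pulse(H, area):
    # propagator of a resonant square pulse: exp(-i H t) with Omega*t = area (Omega = 1 here)
    return expm(-1j * H * area)

def final_populations(chirality, phi=-np.pi / 2):
    sgn = +1 if chirality == "L" else -1            # Omega_31 flips sign with chirality, Eq. (2)
    O21 = np.outer(e2, e1)                          # |2><1|
    O32 = np.outer(e3, e2)                          # |3><2|
    O31 = sgn * np.exp(1j * phi) * np.outer(e3, e1)

    H12 = O21 + O21.conj().T
    H13 = O31 + O31.conj().T
    H23 = O32 + O32.conj().T

    psi = e1.astype(complex)                        # molecules start in |1>
    psi = pulse(H12, np.pi / 4) @ psi               # "pi/2 pulse" on 1 <-> 2
    psi = pulse(H13, np.pi / 2) @ psi               # "pi pulse"   on 1 <-> 3
    psi = pulse(H23, np.pi / 4) @ psi               # "pi/2 pulse" on 2 <-> 3
    return np.abs(psi) ** 2                         # populations of |1>, |2>, |3>

print("L:", np.round(final_populations("L"), 3))    # ~[0, 0, 1] -> hypothesis H_L = |3>
print("R:", np.round(final_populations("R"), 3))    # ~[0, 1, 0] -> hypothesis H_R = |2>
```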
Figure 1: (a) Scheme of the chiral quantum sensor. In step I (the left panel), the tricolor synthetic light composed of three microwave pulses with mutually orthogonal polarization directions (the left upper corner) is applied through the horn antennas to generate the enantioselective states. The two enantioselective states serve as two quantum hypotheses, which are determined by monitoring the enantioselective responses of the cavity mode via homodyne detection in step II (the right panel). The microscopic light-molecule interaction models for the two steps are shown in (b) and (c).

In step II, we determine the two hypotheses in Eq. (3) with the help of a driven microwave cavity as shown in the right panel of Fig. 1(a). We constantly and resonantly drive the cavity mode \(\hat{c}\) with a classical \(\omega_{0}\)-field generated by the microwave generator labeled 'Probe' in Fig. 1(a). The geometry of the cavity is well designed, such that the cavity mode \(\hat{c}\) near-resonantly couples with the rotational transition \(|2\rangle\leftrightarrow|3\rangle\) as shown in Fig. 1(c). The hybrid system with \(N_{m}\) molecules can be described by the driven Tavis-Cummings model (\(\hbar=1\))
\[\hat{\mathcal{H}}=\sum_{l=1}^{N_{m}}\Delta_{m}\hat{\sigma}_{l}^{\dagger}\hat{ \sigma}_{l}+\sum_{l=1}^{N_{m}}\bar{g}(\hat{c}\hat{\sigma}_{l}^{\dagger}+\hat{c }^{\dagger}\hat{\sigma}_{l})+\mathrm{i}\eta(\hat{c}^{\dagger}-\hat{c}). \tag{4}\]
The operators for the \(l\)-th molecule are defined as \(\hat{\sigma}_{l}=|2\rangle_{ll}\langle 3|\) and \(\hat{\sigma}_{l}^{z}=2\hat{\sigma}_{l}^{\dagger}\hat{\sigma}_{l}-1\). The detunings are \(\Delta_{c}=\omega_{c}-\omega_{0}=0\) and \(\Delta_{m}=\omega_{32}-\omega_{0}\). Then, the cavity-molecule detuning is \(\Delta=\Delta_{m}-\Delta_{c}=\Delta_{m}\). The cavity pumping rate is \(\eta\). We consider the case that the size of the sample is much smaller than the width of the spatial profile of the cavity field, such that the coupling strength is assumed identical for different molecules in the sample and \(\bar{g}\) is used in Eq. (4) (for more details, see Sec. 2 of Supporting Information).
We assume that the free space decay, the collective decay, and the dipole-dipole interaction between molecules are negligible, yielding the equations of motion for \(\sigma\equiv\sum_{l=1}^{N_{m}}\langle\hat{\sigma}_{l}\rangle/N_{m}\) and \(\sigma^{z}\equiv\sum_{l=1}^{N_{m}}\langle\hat{\sigma}_{l}^{z}\rangle/N_{m}\) according to first-order mean-field theory in the cumulant expansion [28] (for more details, see Sec. 2.1 of Supporting Information)
\[\dot{c}=-\kappa c-\mathrm{i}\bar{g}N_{m}\sigma+\eta,\] \[\dot{\sigma}=-\mathrm{i}\Delta_{m}\sigma+\mathrm{i}\bar{g}c\sigma ^{z},\] \[\dot{\sigma}^{z}=2\mathrm{i}\bar{g}(c^{*}\sigma-c\sigma^{*}). \tag{5}\]
The cavity decay rate is \(\kappa\). When the molecular sample moves sufficiently slowly, the state-selective responses of the cavity mode are \(c(t)\simeq\eta\exp{[i\varphi_{Q}(t)]}/\kappa\) with hypothesis-selective
phase shift
\[\varphi_{L}(t)=-\varphi_{R}(t)=-\frac{\bar{g}^{2}(t)N_{m}}{\kappa\Delta_{m}}. \tag{6}\]
Here, we have chosen the initial state of cavity mode to be \(c(0)=\eta/\kappa\). We note that \(\bar{g}\) becomes time-dependent due to the movement of the sample. For the deduction of Eq. (6) in the dispersive limit, see Sec. 3.3 of Supporting Information.
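For illustration, the sketch below (our own, not the authors' code) integrates the mean-field equations (5) with a fixed-step Runge-Kutta scheme, using the 1,2-propanediol parameters quoted later in the text. Treating those values as angular frequencies, the choice of a dispersive pumping level \(\lambda=0.01\), and the initial conditions \(\sigma(0)=0\), \(\sigma^{z}(0)=\pm 1\) for the two hypotheses are our assumptions; in the dispersive regime the phases of \(c(t)\) obtained for the two hypotheses should have opposite signs, cf. Eq. (6).

```python
import numpy as np

# parameters quoted in the text for 1,2-propanediol (assumed here to be angular frequencies)
g0    = 2 * np.pi * 3.68          # peak vacuum coupling (rad/s)
kappa = 2 * np.pi * 121.7         # cavity decay rate (rad/s)
Dm    = 2 * np.pi * 822.7         # molecule-probe detuning (rad/s)
w0, v, Nm = 11.59e-3, 1.0, 1000   # mode waist (m), sample velocity (m/s), molecule number
tau = w0 / v
Y0  = -4.0 * w0                   # initial sample position, so that Y0/v = -4*tau

lam = 0.01                        # dispersive regime (assumption): N0 = lam * N_cr
N0  = lam * 4 * abs(Dm) / g0 ** 2
eta = kappa * np.sqrt(N0)         # pumping rate, so that c(0) = eta/kappa carries N0 photons

def gbar(t):
    # time-dependent coupling seen by the moving sample
    return 0.5 * g0 * np.exp(-((Y0 + v * t) / w0) ** 2)

def rhs(t, y):
    c, s, sz = y
    g = gbar(t)
    return np.array([-kappa * c - 1j * g * Nm * s + eta,
                     -1j * Dm * s + 1j * g * c * sz,
                     2j * g * (np.conj(c) * s - c * np.conj(s))])

def integrate(sz0, T=8 * tau, dt=1e-5):
    # fixed-step RK4; initial state c(0) = eta/kappa, sigma(0) = 0, sigma^z(0) = sz0
    y = np.array([eta / kappa, 0.0, sz0], dtype=complex)
    ts = np.arange(0.0, T, dt)
    cs = np.empty(ts.size, dtype=complex)
    for i, t in enumerate(ts):
        k1 = rhs(t, y)
        k2 = rhs(t + dt / 2, y + dt * k1 / 2)
        k3 = rhs(t + dt / 2, y + dt * k2 / 2)
        k4 = rhs(t + dt, y + dt * k3)
        y = y + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        cs[i] = y[0]
    return ts, cs

# hypothesis H_L: molecules prepared in |3> (sigma^z = +1); H_R: molecules in |2> (sigma^z = -1)
for label, sz0 in [("H_L", +1.0), ("H_R", -1.0)]:
    ts, cs = integrate(sz0)
    i_peak = np.argmin(np.abs(ts - 4 * tau))   # sample at the centre of the cavity mode
    print(label, "phase of c at peak coupling (rad):", np.angle(cs[i_peak]))
```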
To monitor the state-selective responses of the cavity mode, homodyne detection is established by applying a strong local field as sketched in the upper right panel of Fig. 1(a). The local field is generated by the microwave generator labeled 'LO', and is mixed with the output of the cavity by a \(50:50\) beam splitter. Two microwave photon counters or a digital receiver are used to monitor the two output ports of the beam splitter; the difference of their signals provides our final signal.
Then, a single-shot detection, collecting signals from \(t_{0}\) to \(t_{f}\), yields a Gaussian random variable \(n_{Q}\) with standard deviation \(\delta=\sqrt{N_{\rm lo}(t_{f}-t_{0})}\) [29] and the enantioselective mean value
\[\bar{n}_{Q}\simeq\int_{t_{0}}^{t_{f}}\sqrt{2\kappa}\Re[c_{\rm lo}^{*}c(t^{ \prime})]dt^{\prime}\equiv\int_{t_{0}}^{t_{f}}\mathcal{N}(t^{\prime})|c_{\rm lo }|dt^{\prime}. \tag{7}\]
Here, we have defined \(\mathcal{N}\equiv\sqrt{2\kappa}\Re(c_{\rm lo}^{*}c)/|c_{\rm lo}|\). When the phase of the local field is \(\varphi_{\rm lo}=\pm\pi/2\), the mean values \(\bar{n}_{Q}\) change signs with hypotheses, i.e., the hypotheses can be decided by the sign of the counting difference between the two detectors depicted in the upper right corner of Fig. 1(a). For more details about the detection scheme, see Sec. 3 of Supporting Information.
_Numerical simulations for 1,2-propanediol._ We take a typical chiral molecule, 1,2-propanediol, as an example to illustrate our scheme. The working states are chosen as [17] \(|1\rangle=|0_{0,0,0}\rangle\), \(|2\rangle=|1_{0,1,0}\rangle\), and \(|3\rangle=(|1_{1,0,1}\rangle+|1_{1,0,-1}\rangle)/\sqrt{2}\), where the rotational eigenstates are given in \(|J_{K_{a},K_{c},M}\rangle\) notation. The angular frequencies of the three microwave fields are [17] \(\omega_{21}=2\pi\times 6.43106\,\)GHz, \(\omega_{31}=2\pi\times 12.21215\,\)GHz, and \(\omega_{32}=2\pi\times 5.781096\,\)GHz. We choose the TEM\({}_{000}\) mode of the well-designed spherical Fabry-Perot cavity as the working cavity mode. The cavity decay rate is about \(\kappa=2\pi\times 121.7\,\)Hz. The cavity-molecule coupling is \(\bar{g}(t)=g_{0}\exp[-(\bar{Y}_{0}+\mbox{v}t)^{2}/w_{0}^{2}]/2\) with the waist of the mode \(w_{0}\simeq 11.59\,\)mm and \(g_{0}=2\pi\times 3.68\,\)Hz. \(\bar{Y}_{0}\) is the initial position of the sample. v is the forward velocity of the sample. The detuning is chosen as \(\Delta_{m}=2\pi\times 822.7\,\)Hz. More details about the molecular and cavity parameters, as well as the design of our working cavity, are shown in Sec. 1.2 of Supporting Information.

Figure 2: Numerical simulations for 1,2-propanediol. (a-d) give the time evolutions of the quantities of interest, \(\mathcal{N}\) (in the unit of \(\sqrt{\rm Hz}\)) and \(\sigma^{z}\). (a) and (b) are in the dispersive case with \(\lambda\equiv N_{0}/N_{\rm cr}=0.01\). (c) and (d) are in the non-dispersive case with \(\lambda=100\). Here, \(N_{0}=\eta^{2}/\kappa^{2}\) is the photon number corresponding to the initial state of the cavity mode with \(c(0)=\eta/\kappa\) and \(N_{\rm cr}\) is the critical photon number of the dispersive region. The particle number and the forward velocity of the samples are \(N_{m}=1000\) and \(\rm v=1\,m/s\). The time unit is \(\tau\equiv w_{0}/v\). Here, it is about \(11.59\,\rm ms\). (e) and (f) give the signal-to-noise ratio (SNR) of single-shot measurement as a function of \(N_{m}\) for different \(\lambda\) and \(\rm v\). The dashed horizontal lines indicate \(\rm SNR=3\) with the corresponding error probability of \(P_{\rm err}\simeq 0.001\), whose intersections with the functions of \(N_{m}\) give the critical values of \(N_{m}\) (i.e., \(N_{m}^{\rm cr}\)) for highly credible distinguishing of molecular chirality. (g) shows \(N_{m}^{\rm cr}\) as a function of \(\lambda\) for different \(\rm v\). (h) shows \(N_{m}^{\rm cr}\) on the \(\lambda-\rm v\) plane for slowly moving enantiopure samples.
The initial photon number is \(N_{0}=\eta^{2}/\kappa^{2}\). By adjusting the pumping rate \(\eta\), we can tune the hybrid system from the dispersive region with \(\lambda\equiv N_{0}/N_{\rm cr}\ll 1\) to the non-dispersive region with \(\lambda>1\), where \(N_{\rm cr}=4|\Delta_{m}|/g_{0}^{2}\) is the critical photon number of the dispersive region. In Fig. 2, we show the numerical results of \(\cal N\) and \(\sigma^{z}\) for enantiopure samples of left- (blue dashed lines) and right-handed (red solid lines) molecules obtained by numerically solving Eq. (5). The corresponding particle number and forward velocity are set as \(N_{m}=1000\) and \(\mbox{v}=1\,\)m/s.
The numerical simulations show that \(\cal N\) changes sign between the two hypotheses in both the typical dispersive region with \(\lambda=0.01\) [see Fig. 2(a)] and the typical non-dispersive region with \(\lambda=100\) [see Fig. 2(c)]. This clearly demonstrates the ability of our scheme to distinguish molecular chirality. The time evolutions of the population difference (\(\sigma^{z}\)) in the two cases are given in Fig. 2(b) and Fig. 2(d), which clearly show the non-destructive feature of our scheme. In the dispersive case [see Fig. 2(b)], \(\sigma^{z}\) is almost unchanged in the whole process. In the non-dispersive case, \(\sigma^{z}\) is changed when the sample is inside the cavity [see \(2<t/\tau<6\) in Fig. 2(d)]. The time unit is \(\tau\equiv w_{0}/\mbox{v}\). Yet, it almost returns to the initial value [see \(t/\tau>6\) in Fig. 2(d)]. Therefore, the molecular state remains unchanged after the detection in both cases, i.e., the detection is non-destructive.
To evaluate the efficiency of our scheme, we turn to the single-shot signal-to-noise ratio SNR \(\equiv|\bar{n}_{L}-\bar{n}_{R}|/(2\delta)\), which gives the error probability as \(P_{\rm err}={\rm Erfc}({\rm SNR}/\sqrt{2})/2\). Here, \({\rm Erfc}(x)\) is the complementary error function. When SNR \(\geq 3\), the error probability is about 0.001, i.e., highly credible chiral discrimination via single-shot measurement can be obtained. In our scheme, the signals are collected from \(t_{0}=-\bar{Y}_{0}/\mathrm{v}-M_{Y}\tau\) to \(t_{f}=-\bar{Y}_{0}/\mathrm{v}+M_{Y}\tau\) with \(\bar{Y}_{0}/\mathrm{v}=-4\tau\). In the dispersive region, the signal-to-noise ratio for a molecular sample moving sufficiently slowly is
\[\mathrm{SNR}\simeq\frac{\sqrt{\kappa N_{0}w_{0}\pi}}{4\sqrt{2\mathrm{v}M_{Y}} }\mathrm{Erf}(\sqrt{2}M_{Y})\frac{g_{0}^{2}N_{m}}{\kappa|\Delta_{m}|}. \tag{8}\]
The signal-to-noise ratio is optimized with respect to \(M_{Y}\) at \(M_{Y}\simeq 0.7\). Therefore, our signals are collected from \(t_{0}\simeq 3.3\tau\) to \(t_{f}\simeq 4.7\tau\). For the deduction of Eq. (8) in the dispersive limit, see Sec. 3.4 of Supporting Information.
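Two elementary numerical checks of these statements (our own sketch, using only the definition of \(P_{\rm err}\) and the \(M_{Y}\)-dependence of Eq. (8)):

```python
import numpy as np
from scipy.special import erf, erfc
from scipy.optimize import minimize_scalar

# error probability of the sign decision at a given single-shot SNR
p_err = lambda snr: 0.5 * erfc(snr / np.sqrt(2))
print("P_err at SNR = 3:", p_err(3.0))            # ~1.3e-3

# the M_Y-dependent factor of Eq. (8) is Erf(sqrt(2) M_Y) / sqrt(M_Y);
# maximizing it gives the half-width of the optimal collection window
res = minimize_scalar(lambda M: -erf(np.sqrt(2) * M) / np.sqrt(M),
                      bounds=(0.05, 3.0), method="bounded")
print("optimal M_Y:", round(res.x, 2))            # ~0.7
```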
We are interested in how the signal-to-noise ratio varies with \(N_{m}\), \(\lambda\), and \(\mathrm{v}\). To this end, we numerically solve the equations of motion (5), then use the numerical results to obtain the enantioselective mean values \(\bar{n}_{Q}\) according to Eq. (7), and finally compute the signal-to-noise ratio according to its definition. The SNR is a linear function of \(N_{m}\) for different \(\lambda\) and \(\mathrm{v}\) [see Fig. 2(e) and Fig. 2(f)]. The dashed horizontal lines therein indicate \(\mathrm{SNR}=3\), whose intersections with the lines give the critical values of \(N_{m}\) (i.e., \(N_{m}^{\mathrm{cr}}\)) for highly credible distinguishing of molecular chirality. We can see that for \(\mathrm{v}=1\,\mathrm{m/s}\), the molecular chirality of samples with \(N_{m}\geq N_{m}^{\mathrm{cr}}\simeq 552\) can be distinguished with high credibility in the typical dispersive case with \(\lambda=0.01\). As \(\lambda\) increases, our scheme becomes more efficient; see the typical non-dispersive case (\(\lambda=100\)) with \(N_{m}^{\mathrm{cr}}\simeq 95\). In Fig. 2(g), we show \(N_{m}^{\mathrm{cr}}\) as a function of \(\lambda\) at different \(\mathrm{v}\). In each case, \(N_{m}^{\mathrm{cr}}\) approaches a lower limit as \(\lambda\) increases. Moreover, we show \(N_{m}^{\mathrm{cr}}\) in the region of interest on the \(\lambda-\mathrm{v}\) plane [see Fig. 2(h)], which indicates that our scheme can highly credibly distinguish the molecular chirality of slowly moving enantiopure samples with particle numbers of the order of \(10^{2}\sim 10^{3}\).
_Feasibility._ In our numerical simulations, we have assumed that the free space decay, the collective decay, and the dipole-dipole interaction between molecules are negligible. In our example of 1,2-propanediol, the free space decay rates for the three transitions are
\(\Gamma_{0}(2\to 1)=2\pi\times 1.8\times 10^{-10}\,\mathrm{Hz}\), \(\Gamma_{0}(3\to 1)=2\pi\times 3.64\times 10^{-11}\,\mathrm{Hz}\), and \(\Gamma_{0}(3\to 2)=2\pi\times 8.06\times 10^{-11}\,\mathrm{Hz}\). They are many orders of magnitude smaller than the other parameters (\(g_{0}\), \(\kappa\), and \(\Delta_{m}\)). The particle numbers of the samples under consideration are small (\(N_{m}\leq 3000\)). The typical size of the samples under consideration is about \(0.1w_{0}\simeq 1\,\mathrm{mm}\). Under these parameters, it is reasonable to assume that the free space decay and the molecular dipole-dipole interaction are negligible. More details about this claim are shown in Sec. 2.2 of Supporting Information. Beyond this, we have also verified that the collective decay is negligible in our setting by comparing the results of Eq. (5) with the results of the second-order mean-field theory in cumulant expansion that includes collective decay (see Sec. 2.3 of Supporting Information). Thus, for our purposes, the dynamics of the system can be well described by the equations of motion (5).
The two core techniques of our proposal have already been experimentally demonstrated. The first core technique, i.e., the microwave enantio-specific state transfer by manipulating purely rotational transitions, has been achieved with 1,2-propanediol [14], menthone [15], carvone [15], and 1-indanol [16]. The second core technique, i.e., the dispersive-detection technique with the help of microwave resonators, has been well established in the state-selective detection of single atoms [30, 31, 32, 33]. Its working band overlaps with the typical rotational transition frequencies of chiral molecules, and thus it is promising to use such a technique to detect the molecular rotational states. The preparation of slowly moving samples that initially occupy the ground rotational state is envisioned to be handled with sub-millikelvin cooling and slowing techniques for polyatomic molecules [34, 35, 36, 37, 38, 39, 40]. Chiral molecules initially populating one rotational state of the cyclic three-level model can also be efficiently prepared at cold temperatures (\(\sim 1\,\mathrm{K}\)) by depleting the original thermal populations in the excited states of the working model [16]. The forward velocity of the samples can be controlled by velocity selection via the Doppler effect [41].
_Conclusion._ We have proposed a quantum sensor for detecting molecular chirality by combining the microwave enantio-specific state transfer and the dispersive-detection technique with the help of a microwave resonator. It is promising to distinguish slowly moving (\(1\sim 10\,\)m/s) enantiopure samples of opposite chirality with particle numbers of the order of \(10^{2}\sim 10^{3}\) in a single-shot decision. In our scheme, chiral molecules return to their enantioselective final states of the enantio-specific state transfer after the whole detection process, and thus are not destroyed or disturbed after detection. We note that chiral discrimination at the single-molecule level can be obtained in our scheme by further trapping chiral molecules in the cavity for more than \(100\,\)s (for more details, see Sec. 4 of Supporting Information). It is experimentally feasible to trap polyatomic molecules in an electrostatic trap with a typical geometry of \(3\,\)mm for up to a minute [36]. For chiral molecules, promising trapping techniques are under development [39].
Beyond this, we have for the first time established the connection between chiral discrimination and quantum hypothesis testing. To our knowledge, there are other state-selective techniques suitable for quantum hypothesis testing, based on physical phenomena including electromagnetically induced transparency [42], optical birefringence [43], Förster resonant energy transfer [44], and frequency modulation spectroscopy [45]. In this sense, our work will not only stimulate experimentalists to use our scheme for chiral discrimination but also trigger the application of other promising non-destructive detection techniques in this prospective research field.
The authors acknowledge support from the National Natural Science Foundation of China (12105011, 91850205, 12074030, and 11904022).
Additional details on the physical implementation of our scheme, the mean-field theory of our driven Tavis-Cummings model, the balanced homodyne detection in the single-shot decision of molecular chirality, and the chiral discrimination of a trapped single molecule are provided in the Supporting Information.
|
2304.09452 | Support and distribution inference from noisy data | We consider noisy observations of a distribution with unknown support. In the
deconvolution model, it has been proved recently [19] that, under very mild
assumptions, it is possible to solve the deconvolution problem without knowing
the noise distribution and with no sample of the noise. We first give general
settings where the theory applies and provide classes of supports that can be
recovered in this context. We then exhibit classes of distributions over which
we prove adaptive minimax rates (up to a log log factor) for the estimation of
the support in Hausdorff distance. Moreover, for the class of distributions
with compact support, we provide estimators of the unknown (in general
singular) distribution and prove maximum rates in Wasserstein distance. We also
prove an almost matching lower bound on the associated minimax risk. | Jérémie Capitao-Miniconi, Elisabeth Gassiat, Luc Lehéricy | 2023-04-19T06:46:44Z | http://arxiv.org/abs/2304.09452v2 | # Support and distribution inference from noisy data
###### Abstract
We consider noisy observations of a distribution with unknown support. In the deconvolution model, it has been proved recently [19] that, under very mild assumptions, it is possible to solve the deconvolution problem without knowing the noise distribution and with no sample of the noise. We first give general settings where the theory applies and provide classes of supports that can be recovered in this context. We then exhibit classes of distributions over which we prove adaptive minimax rates (up to a \(\log\log\) factor) for the estimation of the support in Hausdorff distance. Moreover, for the class of distributions with compact support, we provide estimators of the unknown (in general singular) distribution and prove maximum rates in Wasserstein distance. We also prove an almost matching lower bound on the associated minimax risk.
## 1 Introduction
### Context and aim
It is a common observation that high dimensional data has a low intrinsic dimension. The computational geometry point of view gave rise to a number of interesting algorithms (see [6] and references therein) for the reconstruction of a non linear shape from a point cloud, and in the statistical community, past years have seen increasing interest for manifold estimation. The case of non noisy data, that is when the observations are sampled on the unknown manifold, is by now relatively well understood. When the loss is measured using the Hausdorff distance, minimax rates for manifold estimation are known and have been proved recently. The rates depend on the intrinsic dimension of the manifold and differ when the manifold has a boundary or does not have a boundary, due to the particular way points accumulate near boundaries (see [1] for the most recent results, together with an overview of the subject and references).
When considering the estimation of a distribution with unknown non linear low dimensional support, one has to choose a loss function. The Wasserstein distance makes it possible to compare distributions that can be mutually singular, and is thus useful to compare distributions having possibly different supports. Moreover, approximating an unknown probability distribution \(\mu\) by a good estimator \(\hat{\mu}\) with respect to the Wasserstein metric makes it possible to infer the topology of the support of \(\mu\), see [10]. When using non noisy data, one can look at [16] and [27] for the most recent results and for an overview of the references. However, despite these fruitful developments, geometric inference from noisy data remains a widely open theoretical and practical problem.
In this paper, we are interested in the estimation of possibly low dimensional supports, and of distributions supported on such supports, when the observations are corrupted with _unknown_ noise. We aim at giving a new contribution on the type of noise which can affect the data without preventing the construction of consistent estimators of the support and of the law of the noisy signal.
### Previous works: estimation of the support with noisy data
Some of the geometric ideas that have been developed to handle non noisy data can be applied, or adapted, to handle noisy data and build estimators with controlled risk. These works
generally consider a noise that is normal to the unknown manifold, in which case the amplitude of the noise has to be bounded by the reach of the manifold (the reach is some regularity parameter of a manifold, see [17] for a precise definition). The upper bound on the risk contains a term depending on the amplitude of noise. Thus, the upper bound on the estimation risk is meaningful only when the bound on the noise is small, and the estimator is consistent when the noise tends to \(0\) with the amount of data tending to infinity. See [2], [1], [14], [18], see also [28] in which the noise can be non orthogonal to the manifold. In [3], the noise is not normal to the manifold but the data is uniformly sampled on a tubular neighborhood of the unknown manifold, which allows to take advantage of the fact that the manifold lies in the middle of the observations. The magnitude of the noise also has to be upper bounded by the reach. When the noise is not assumed very small, results are known in the specific setting of clutter noise, see [21], that is the situation where a proportion of data is uniformly sampled from a known compact set, and the remaining data is noiseless. The authors propose a clever idea to remove noise by comparing the way the empirical data concentrate near any regular shape, and they find a consistent estimator with upper bounded risk.
When we accept to consider noise with known distribution, a popular model for noisy data is the deconvolution model, in which the low dimensional data are corrupted with independent additive noise. In such models, all estimation procedures are roughly based on the fact that it is possible to get an estimator of the characteristic function of the non noisy data by dividing an estimator of the characteristic function of the noisy data by that (known) of the noise. In the deconvolution setting, the authors of [21] consider data corrupted with Gaussian noise, and propose as estimator of the manifold an upper level set of an estimator of a kernel smoothing density of the unknown distribution. With the Hausdorff loss, the authors prove that their estimator achieves a maximum risk (over some class of distributions) upper bounded by \((\sqrt{\log n})^{-1+\delta}\) for any positive \(\delta\), and prove a lower bound of order \((\log n)^{-1+\delta}\) for the minimax risk. Taking an upper level set of an estimated density had been earlier proposed to estimate a support based on non noisy data in [11]. In the context of full dimensional convex support and with additive Gaussian noise, [7] proposes an estimation procedure using convexity ideas. The authors prove an upper bound of order \(\log\log n/\sqrt{\log n}\) and a lower bound of order \((\log n)^{-2/\tau}\) for the minimax Hausdorff risk, for any \(\tau\in(0,1)\). Earlier work with known noise and with full dimensional support is [25], where the author first builds an estimator of the unknown density using deconvolution ideas, then samples from this estimated density and takes a union of balls centered on the sampled points, such as in [15].
### Previous works: estimation of the distribution with noisy data
The case of unknown but small noise (orthogonal to the unknown manifold) is handled in [16], where the author proposes a kernel estimator and proves that it is minimax. The rate depends on the upper bound on the noise. Non parametric Bayesian methods have been explored in [5] for observations on a tubular neighborhood of the unknown manifold, that is again for bounded noise.
In the deconvolution problem, with known Gaussian noise, the authors of [13] prove matching upper and lower bounds for the minimax risk of the estimation of the unknown distribution using the Wasserstein distance. Results for other known noises, but limited to one dimensional observations, can be found in [12].
### Contribution and main results
In this work, we consider the deconvolution problem _with totally unknown noise_. It has been proved recently [19] that, under very mild assumptions, it is possible to solve the deconvolution problem without knowing the noise distribution and with no sample of the noise. In [19], the authors consider the density estimation problem. Here, we are faced with the more general situation where the underlying non noisy data may have a distribution with a lower dimensional support than the ambient space, thus having no density with respect to Lebesgue measure.
Our main contributions are as follows.
* We first give general settings where the identifiability theory of [19] applies. We exhibit simple geometric properties of a support so that, whatever the distribution on such an (unknown) support (provided it does not have too heavy tails), the deconvolution problem can be solved without any knowledge regarding the noise, see Theorem 2. We also prove that these geometric properties almost always hold, in some sense developed in Section 2.4.
* We then exhibit classes of distributions over which we prove adaptive minimax rates (up to a \(\log\log\) factor) for the estimation of the support in Hausdorff distance, see Theorem 4, Theorem 6 and Theorem 5. Specifically, the minimax risk for the Hausdorff distance is upper bounded by \((\log\log n)^{L}/(\log n)^{\kappa}\) for some \(L\), where \(\kappa\in(1/2,1]\) is a parameter depending on the tail of the distribution of the signal (\(\kappa=1\) corresponds to compactly supported distributions, and \(\kappa=1/2\) to sub-Gaussian distributions), while the minimax risk is lower bounded by \(1/(\log n)^{\kappa}\) if \(\kappa\in(1/2,1)\) and \(1/(\log n)^{1-\delta}\) if \(\kappa=1\), \(\delta\) being any (small) positive number. Adaptation is with respect to \(\kappa\).
* We finally consider the estimation of the unknown (in general singular) distribution of the hidden non noisy data itself when it has a compact support. We prove almost matching upper and lower bounds of order \(1/(\log n)\) for the estimation risk of the distribution in Wasserstein distance, see Theorem 7 and Theorem 8.
Although we exhibit estimators, let us insist on the fact that our goal is mainly theoretical. We do not pretend to propose easy to compute estimation procedures, but to give precise answers about minimax adaptive rates for support and distribution estimation with noisy data in a very general deconvolution setting, where the noise is unknown and can have any distribution.
### Organisation of the paper
Section 2 is devoted to the identifiability question. We first recall in Section 2.2 the identifiability result proved in [19]. We then exhibit in Section 2.3 geometric conditions under which this identifiability result applies, and the genericity of such conditions is considered in Section 2.4.
We focus on support estimation in Section 3. We first refine an estimation result for the characteristic function of the signal in Section 3.1, which is the basic step of all the estimation procedures we propose. In Section 3.2, we propose an estimator of the support as an upper-level set of an estimated density following ideas of [21], the main difference being the smoothing kernel we choose. Indeed, with this kernel, no prior knowledge on the intrinsic dimension is needed to build the estimator. The upper bound on the risk depends on the tail of the distribution of the signal, and adaptive estimation using Lepski's method is detailed in Section 3.4. We prove in Section 3.3 an almost matching lower bound.
Section 4 is devoted to the estimation of the distribution when it is compactly supported. Lower bounds are proved using the usual two-point method. Here, the points used for the lower bounds in [21] and in [13], [12] cannot be used because of our tail assumption on the signal. Detailed proofs are given in Section 6.
### Notations
The Euclidean norm (in any dimension) will be denoted \(\|\cdot\|_{2}\), and the operator norm of a linear operator will be denoted \(\|\cdot\|_{op}\). If \(A\) is a subset of \(\mathbb{R}^{D}\), we write \(\operatorname{Diam}(A)\) its diameter \(\sup\{\|x-y\|_{2}\mid x,y\in A\}\), and for any \(x\in\mathbb{R}^{D}\), \(d(x,A)=\inf\{\|x-y\|_{2}\mid y\in A\}\). For any \(\eta>0\), \(A_{\eta}\) will denote the \(\eta\)-offset of \(A\), that is the set of all points \(x\) in \(\mathbb{R}^{D}\) such that \(d(x,A)\leqslant\eta\). For any dimension \(d\), any \(x\in\mathbb{R}^{d}\) and \(r>0\), \(B(x,r)\) will denote the Euclidean open ball centered on \(x\) of radius \(r\) and \(\bar{B}(x,r)\) the closure of \(B(x,r)\) in \(\mathbb{R}^{d}\). For \(k,l\in\{1,\ldots,D\}\) with \(k\leqslant l\), write \(\pi^{(k:l)}\) the projection \(\pi^{(k:l)}:(x_{1},\ldots,x_{D})\in\mathbb{R}^{D}\mapsto(x_{k},\ldots,x_{l})\in\mathbb{R}^{l-k+1}\) and \(\pi^{(k)}=\pi^{(k:k)}\).
We shall denote \(d_{H}(A_{1},A_{2})\) the Hausdorff distance between \(A_{1}\) and \(A_{2}\) subsets of \(\mathbb{R}^{D}\). It is defined as
\[d_{H}(A_{1},A_{2})=\sup_{x\in A_{1}\cup A_{2}}|d(x,A_{1})-d(x,A_{2})|.\]
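For finite point clouds this definition coincides with the usual max-min form of the Hausdorff distance, which can be evaluated directly; the short sketch below (ours, for illustration only) does so for two concentric circles, where the distance equals the difference of the radii.

```python
import numpy as np

def hausdorff(A, B):
    # Hausdorff distance between finite point clouds A (n x D) and B (m x D)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
print(hausdorff(circle, 1.2 * circle))   # two concentric circles: distance ~0.2
```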
For any \(r>0\), we write \(B_{r}=(-r,r)\) and for any measurable function \(f\) on \(B_{r}^{D}\), we write \(\|f\|_{\infty,r}\) the essential supremum of \(f\) over \(B_{r}^{D}\) and
\[\|f\|_{2,r}=\Bigg{(}\int_{B_{r}^{D}}|f(u)|^{2}du\Bigg{)}^{1/2}.\]
When \(f\) is an integrable function from \(\mathbb{R}^{D}\) to \(\mathbb{R}\), we denote by \(\mathcal{F}[f]\) (resp. \(\mathcal{F}^{-1}[f]\)) the (resp. inverse) Fourier transform of \(f\) defined, for all \(y\in\mathbb{R}^{D}\), by
\[\mathcal{F}[f](y)=\int e^{it^{\top}y}f(t)dt\ \ \text{and}\ \ \mathcal{F}^{-1}[f](y)=( \frac{1}{2\pi})^{D}\int e^{-it^{\top}y}f(t)dt.\]
For any \(p\in[1,+\infty)\) and any two probability measures \(\mu\) and \(\nu\) on \(\mathbb{R}^{D}\), we write \(W_{p}(\mu,\nu)\) the Wasserstein distance of order \(p\) between \(\mu\) and \(\nu\), that is
\[W_{p}(\mu,\nu)=\inf_{\pi\in\Pi(\mu,\nu)}\left(\int_{\mathbb{R}^{D}\times \mathbb{R}^{D}}\|x-y\|_{2}^{p}d\pi(x,y)\right)^{1/p},\]
where \(\Pi(\mu,\nu)\) is the set of probability measures on \(\mathbb{R}^{D}\times\mathbb{R}^{D}\) that have marginals \(\mu\) and \(\nu\).
## 2 The identifiability Theorem and general applications
In this section, we first recall the general identifiability Theorem proved in [19]. We then provide geometrical conditions on the support of the signal that suffice to obtain identifiability of the model (2), whatever the distribution of the signal may be. We also show that the conditions on the signal distribution of the identifiability theorem hold generically.
### Setting
We consider independent and identically distributed observations \(Y_{i}\), \(i=1,\ldots,n\) coming from the model
\[Y=X+\varepsilon, \tag{1}\]
in which the signal \(X\) and the noise \(\varepsilon\) are independent random variables. We assume that the observation has dimension at least two, and that its coordinates can be partitioned in such a way that the corresponding blocks of noise variables are independently distributed, that is
\[Y=\begin{pmatrix}Y^{(1)}\\ Y^{(2)}\end{pmatrix}=\begin{pmatrix}X^{(1)}\\ X^{(2)}\end{pmatrix}+\begin{pmatrix}\varepsilon^{(1)}\\ \varepsilon^{(2)}\end{pmatrix}=X+\varepsilon \tag{2}\]
in which \(Y^{(1)},X^{(1)},\varepsilon^{(1)}\in\mathbb{R}^{d_{1}}\) and \(Y^{(2)},X^{(2)},\varepsilon^{(2)}\in\mathbb{R}^{d_{2}}\), for \(d_{1},d_{2}\geqslant 1\) with \(d_{1}+d_{2}=D\), and we assume that the noise components \(\varepsilon^{(1)}\) and \(\varepsilon^{(2)}\) are independent random variables. We write \(G\) the distribution of \(X\) and \(\mathcal{M}_{G}\) its support. For \(i\in\{1,2\}\), we write \(\mathbb{Q}^{(i)}\) the distribution of \(\varepsilon^{(i)}\), so that \(\mathbb{Q}=\mathbb{Q}^{(1)}\otimes\mathbb{Q}^{(2)}\) is the distribution of \(\varepsilon\).
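As a concrete toy instance of model (2) with \(d_{1}=d_{2}=1\) (our illustrative choice, not one used in the paper), one may take a signal supported on the unit circle, so that \(G\) is singular with respect to the Lebesgue measure, and corrupt it with independent noise blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# signal X supported on the unit circle of R^2: its distribution G has no density in R^2
angles = rng.uniform(0, 2 * np.pi, n)
X = np.c_[np.cos(angles), np.sin(angles)]

# noise with independent blocks: eps^(1) Laplace, eps^(2) centred Gaussian (illustrative laws)
eps = np.c_[rng.laplace(0, 0.2, n), rng.normal(0, 0.3, n)]

Y = X + eps   # only Y is observed; neither X nor the noise distribution is known
```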
We shall not make any more assumption on the distribution of the noise \(\varepsilon\), and we shall not assume that its distribution is known. Indeed in [19], it is proved that under very mild conditions on the distribution of the signal \(X\), model (2) is fully identifiable, that is one can recover \(G\), and thus its support, and \(\mathbb{Q}\) from \(G\ast\mathbb{Q}\).
### Identifiability Theorem
Let us introduce the assumptions on the distribution of the signal we shall use. The first one is about the tail of \(G\). Let \(\rho\) be a positive real number.
**A(\(\rho\))**: There exist \(a,b>0\) such that for all \(\lambda\in\mathbb{R}^{D}\), \(\mathbb{E}\left[\exp\left(\lambda^{\top}X\right)\right]\leqslant a\exp\left(b\|\lambda\|_{2}^{\rho}\right)\).
**Proposition 1**.:
* _A random variable_ \(X\) _satisfies A(1) if and only if its support is compact._
* _A random variable_ \(X\) _satisfies A(_\(\rho\)_) for_ \(\rho>1\) _if and only if there exist constants_ \(c,d>0\) _such that for any_ \(t\geqslant 0\)_,_ \[\mathbb{P}(\|X\|\geqslant t)\leqslant c\exp(-dt^{\rho/(\rho-1)}).\]
The proof of Proposition 1 is detailed in Section 6.1.
Under A(\(\rho\)), the characteristic function of the signal can be extended into the multivariate analytic function
\[\Phi_{X}:\mathbb{C}^{d_{1}}\times\mathbb{C}^{d_{2}} \longrightarrow \mathbb{C}\] \[(z_{1},z_{2}) \longmapsto \mathbb{E}\left[\exp\left(iz_{1}^{\top}X^{(1)}+iz_{2}^{\top}X^{(2 )}\right)\right].\]
The second assumption is a mild dependence assumption (see the discussion after Theorem 2.1 in [19]).
**(Adep)**: For any \(z_{0}\in\mathbb{C}^{d_{1}}\), \(z\mapsto\Phi_{X}(z_{0},z)\) is not the null function and for any \(z_{0}\in\mathbb{C}^{d_{2}}\), \(z\mapsto\Phi_{X}(z,z_{0})\) is not the null function.
Obviously, if no centering constraint is put on the signal or on the noise, it is possible to translate the signal by a fixed vector \(m\in\mathbb{R}^{D}\) and the noise by \(-m\) without changing the observation. The model can thus be identifiable only up to translation.
**Theorem 1** (from [19]).: _If the distribution of the signal satisfies A(\(\rho\)) and (Adep), then the distribution of the signal and the distribution of the noise can be recovered from the distribution of the observations up to translation._
The proof of this theorem is based on recovering \(\Phi_{X}\). The arguments show that knowing the characteristic function of the observations in a neighborhood of the origin allows to recover \(\Phi_{X}\) in a neighborhood of the origin, and then over the whole multidimensional complex plane. Similarly, our estimators for the distribution of the signal or its support will start with the estimation of \(\Phi_{X}\), which is detailed in Section 3.1.
The end of the section is devoted to some geometric understanding of assumption (Adep). We first provide simple but useful properties.
**Proposition 2**.: _The following holds._
* _Let_ \(U\) _and_ \(V\) _be independent random variables satisfying A(_\(\rho\)_). Then_ \(U\) _and_ \(V\) _satisfy (Adep) if and only if_ \(U+V\) _satisfies (Adep)._
* _Let_ \(U=\begin{pmatrix}U^{(1)}\\ U^{(2)}\end{pmatrix}\) _be a random variable such that_ \(U^{(1)}\in\mathbb{R}^{d_{1}}\) _and_ \(U^{(2)}\in\mathbb{R}^{d_{2}}\)_. Let_ \(A\in GL_{d_{1}}(\mathbb{C})\)_,_ \(B\in GL_{d_{2}}(\mathbb{C})\)_,_ \(m_{1}\in\mathbb{C}^{d_{1}}\) _and_ \(m_{2}\in\mathbb{C}^{d_{2}}\)_. Define_ \(V=\begin{pmatrix}A&0\\ 0&B\end{pmatrix}\begin{pmatrix}U^{(1)}\\ U^{(2)}\end{pmatrix}+\begin{pmatrix}m_{1}\\ m_{2}\end{pmatrix}\)_. Then_ \(U\) _satisfies A(_\(\rho\)_) if and only if_ \(V\) _satisfies A(_\(\rho\)_). Moreover,_ \(U\) _satisfies A(_\(\rho\)_) and (Adep) if and only if_ \(V\) _satisfies A(_\(\rho\)_) and (Adep)._
* _Let_ \(U^{(1)}\) _and_ \(U^{(2)}\) _be two independent random variables in_ \(\mathbb{R}^{d_{1}}\) _and_ \(\mathbb{R}^{d_{2}}\) _respectively that satisfy A(_\(\rho\)_) for some_ \(\rho\geqslant 1\)_, then_ \(U=\begin{pmatrix}U^{(1)}\\ U^{(2)}\end{pmatrix}\) _satisfies (Adep) if and only if_ \(U^{(1)}\) _and_ \(U^{(2)}\) _are Gaussian or Dirac random variables._
The proof of Proposition 2 is detailed in section 6.2.
Point (i) of Proposition 2 makes it possible to transfer a proof of (Adep) for a support with full dimension \(D\) to a support with dimension \(d<D\). Indeed, if \(U\) is a random variable with support of dimension \(d<D\), by introducing an independent random variable \(V\) with support of full dimension \(D\), proving that \(U+V\) (whose support has full dimension) satisfies (Adep) ensures that \(U\) satisfies (Adep) as well. For instance, Theorem 2 below shows that a
random variable whose support is the centered Euclidean ball with radius \(\eta>0\) satisfies A(1) and (Adep). Thus geometric conditions such as those proposed in Section 2.3 can be transposed from one dimension to another.
Point (ii) shows that the fact that A(\(\rho\)) and (Adep) hold is not modified by linear transformations of each component of the signal.
Finally, Point (iii) shows that to verify (Adep), outside of trivial cases, the two signal components cannot be independent. Even further, combined with Point (i), this shows that it is not possible to write the signal as the sum of two independent signals where one of them has independent components: such independent sub-signals with independent components must be part of the noise.
### Sufficient geometrical conditions for (Adep) to hold
In [9], the authors prove that (Adep) holds for random variables supported on a sphere. In such a context, they prove that the radius of the sphere can be estimated at almost parametric rate. Here we give much more general conditions on the support of a random variable that are sufficient for (Adep) to hold.
We define the following assumptions (H1) and (H2).
1. For any \(\Delta>0\), there exists \(A_{\Delta}\subset\mathbb{R}^{d_{2}}\) and \(B_{\Delta}\subset\mathbb{R}^{d_{1}}\) such that \(\mathbb{P}(X^{(2)}\in A_{\Delta})>0\), \(\lim_{\Delta\to 0}\operatorname{Diam}(B_{\Delta})=0\) and \(\mathbb{P}(X^{(1)}\in B_{\Delta}\,|\,X^{(2)}\in A_{\Delta})=1\).
2. For any \(\Delta>0\), there exists \(A_{\Delta}\subset\mathbb{R}^{d_{1}}\) and \(B_{\Delta}\subset\mathbb{R}^{d_{2}}\) such that \(\mathbb{P}(X^{(1)}\in A_{\Delta})>0\), \(\lim_{\Delta\to 0}\operatorname{Diam}(B_{\Delta})=0\) and \(\mathbb{P}(X^{(2)}\in B_{\Delta}\,|\,X^{(1)}\in A_{\Delta})=1\).
It is showed in Theorem 2 that these assumptions are sufficient to ensure identifiability provided that A(\(\rho\)) is satisfied.
**Theorem 2**.: _Assume that the distribution of \(X\) satisfies A(\(\rho\)), (H1) and (H2). Then \(X\) satisfies A(\(\rho\)) and (Adep)._
The proof of Theorem 2 is detailed in Section 6.3.
One can interpret the assumptions (H1) and (H2) geometrically as shown in Figure 1. In essence, it means that there exists a slice (along the first \(d_{1}\), resp. last \(d_{2}\), coordinates, with base \(A_{\Delta}\)) such that the random variable belongs to this slice with positive probability and such that on this slice, the support of the distribution is contained in an orthogonal slice (along the last \(d_{2}\), resp. first \(d_{1}\), coordinates) of diameter smaller than \(\Delta\).
Figure 1: Left : Assumption (H1). Right : Assumption (H2).
A reformulation of (H1) and (H2) based on the support of the signal is as follows. Let
\[\mathcal{A}_{1}(\Delta,\varepsilon)=\{\mathcal{M}\subset\mathbb{R}^ {D}\ |\ \text{There exists }x=(x_{1},x_{2})\in\mathcal{M}\\ \text{such that }\ \operatorname{Diam}\left(\pi^{(1:d_{1})}\bigg{[} \mathcal{M}\cap(\mathbb{R}^{d_{1}}\times\bar{B}(x_{2},\varepsilon))\bigg{]} \right)<\Delta\}\]
and
\[\mathcal{A}_{2}(\Delta,\varepsilon)=\{\mathcal{M}\subset\mathbb{R }^{D}\ |\ \text{There exists }x=(x_{1},x_{2})\in\mathcal{M}\\ \text{such that }\operatorname{Diam}\left(\pi^{(d_{1}+1:D)}\bigg{[} \mathcal{M}\cap(\bar{B}(x_{1},\varepsilon)\times\mathbb{R}^{d_{2}})\bigg{]} \right)<\Delta\}.\]
The proof of the following proposition is straightforward.
**Proposition 3**.: _Let \(\mathcal{M}\in(\cap_{\Delta>0}\cup_{\varepsilon>0}\mathcal{A}_{1}(\Delta, \varepsilon))\cap(\cap_{\Delta>0}\cup_{\varepsilon>0}\mathcal{A}_{2}(\Delta, \varepsilon))\). Then any random variable with support \(\mathcal{M}\) satisfies (H1) and (H2)._
We now propose sets of compact subsets of \(\mathbb{R}^{D}\) for which Proposition 3 holds. Define the sets \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) as
\[\mathcal{B}_{1}=\{\mathcal{M}\subset\mathbb{R}^{D}\ \text{compact}\,|\,\exists x _{1}\in\mathbb{R}^{d_{1}},\text{Card}((\{x_{1}\}\times\mathbb{R}^{d_{2}})\cap \mathcal{M})=1\},\]
and
\[\mathcal{B}_{2}=\{\mathcal{M}\subset\mathbb{R}^{D}\ \text{compact}\,|\,\exists x _{2}\in\mathbb{R}^{d_{2}},\text{Card}((\mathbb{R}^{d_{1}}\times\{x_{2}\})\cap \mathcal{M})=1\}.\]
**Proposition 4**.: _Let \(\mathcal{M}\) be a subset of \(\mathbb{R}^{D}\) such that \(\mathcal{M}\in\mathcal{B}_{1}\cap\mathcal{B}_{2}\). If \(X\) is a random variable with support \(\mathcal{M}\), then \(X\) satisfies A(1), (H1) and (H2)._
The proof of Proposition 4 is detailed in Section 6.4.
For instance, any closed Euclidean ball, and more generally any strictly convex compact set in \(\mathbb{R}^{D}\), is in \(\mathcal{B}_{1}\cap\mathcal{B}_{2}\). To see this, consider the points of the set with maximal first (resp. last) coordinate: they are unique by strict convexity, which ensures that the set is in \(\mathcal{B}_{1}\) (resp. \(\mathcal{B}_{2}\)). The same holds for the boundary of any strictly convex compact set.
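For a finite point cloud approximating a candidate support, membership in \(\mathcal{A}_{1}(\Delta,\varepsilon)\cap\mathcal{A}_{2}(\Delta,\varepsilon)\) can be probed directly from the definitions. The sketch below (ours; a finite-sample proxy with \(d_{1}=d_{2}=1\)) confirms that a circle passes the test while the boundary of an axis-aligned square does not.

```python
import numpy as np

def in_A1(points, delta, eps):
    # is there a point whose horizontal slab of half-width eps has
    # first-coordinate diameter smaller than delta?
    for x in points:
        slab = points[np.abs(points[:, 1] - x[1]) <= eps]
        if slab[:, 0].max() - slab[:, 0].min() < delta:
            return True
    return False

def in_A2(points, delta, eps):
    return in_A1(points[:, ::-1], delta, eps)     # swap the two coordinates

t = np.linspace(0, 1, 2000, endpoint=False)
circle = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]             # strictly convex boundary
square = np.concatenate([np.c_[t, 0 * t], np.c_[1 + 0 * t, t],
                         np.c_[1 - t, 1 + 0 * t], np.c_[0 * t, 1 - t]])   # boundary of a square

for name, M in [("circle", circle), ("square boundary", square)]:
    print(name, in_A1(M, 0.3, 0.005) and in_A2(M, 0.3, 0.005))   # circle: True, square: False
```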
### Genericity
The main purpose of this subsection is to show that hypotheses (H1) and (H2) are verified generically.
First, we show that while the set of supports satisfying Proposition 3 is dense in the set of closed sets of \(\mathbb{R}^{D}\), its complement is also dense.
**Proposition 5**.: _The set \((\cap_{\Delta>0}\cup_{\varepsilon>0}\mathcal{A}_{1}(\Delta,\varepsilon)) \bigcap(\cap_{\Delta>0}\cup_{\varepsilon>0}\mathcal{A}_{2}(\Delta,\varepsilon))\) and its complement are dense in the set of closed subsets of \(\mathbb{R}^{D}\) endowed with the Hausdorff distance._
The proof of Proposition 5 is detailed in Section 6.5.
This result shows that any support \(\mathcal{M}\) can be altered by a small perturbation to produce both supports that satisfy (H1) and (H2) and supports that satisfy neither. _A fortiori_, the same is true for (Adep), as on one hand (H1), (H2) and A(\(\rho\)) ensure (Adep) by Theorem 2 and on the other hand a small perturbation of the signal is enough to no longer satisfy (Adep) by Point (i) of Proposition 2.
Therefore, we need a stronger notion than topological density to assess the genericity of (H1) and (H2). Similarly to how "almost everywhere" (with respect to the Lebesgue measure) is a strong indication of genericity in \(\mathbb{R}^{D}\), we construct a random and small perturbation of \(\mathbb{R}^{D}\) such that any compact set is almost surely transformed into a compact set in \(\mathcal{B}_{1}\cap\mathcal{B}_{2}\).
More precisely, for any \(\varepsilon>0\), we define a (random) continuous bijection \(f:\mathbb{R}^{D}\longrightarrow\mathbb{R}^{D}\) such that almost surely, \(|f(x)-x|\leqslant\varepsilon\) for all \(x\in\mathbb{R}^{D}\), and such that if \(\mathcal{M}\) is compact, then \(f(\mathcal{M})\) is in \(\mathcal{B}_{1}\cap\mathcal{B}_{2}\) almost surely. This random bijection does not depend on which support
\(\mathcal{M}\) is considered, and can for instance be seen as a modeling of the imperfections of "realistic" supports, or as a way to introduce a Bayesian prior on the support. In that sense, compact supports are almost surely in \(\mathcal{B}_{1}\cap\mathcal{B}_{2}\), and thus compactly supported random variables almost surely satisfy (Adep).
There is no canonical way to define a random perturbation of \(\mathbb{R}^{D}\). Our approach is to tile the space with simplices, then add a small perturbation to each vertex of the tiling, keeping the transformation linear inside each simplex.
Simplicial tiling of \(\mathbb{R}^{D}\). Let us recall a few definitions about simplicial complexes. For any \(k\in\{0,\ldots,D\}\), a \(k\)-simplex of \(\mathbb{R}^{D}\) is the convex hull of \((k+1)\) affinely independent points of \(\mathbb{R}^{D}\). A simplicial complex \(\mathcal{P}\) is a set of simplices such that every face of a simplex from \(\mathcal{P}\) is also in \(\mathcal{P}\), and the non-empty intersection of any two simplices \(F_{1},F_{2}\in\mathcal{P}\) is a face of both \(F_{1}\) and \(F_{2}\). \(\mathcal{P}\) is a homogeneous simplicial \(D\)-complex if each simplex of dimension less than \(D\) of \(\mathcal{P}\) is the face of a \(D\)-simplex of \(\mathcal{P}\). For any simplex \(F\), we write \(\operatorname{relint}(F)\) its relative interior. Finally, a homogeneous simplicial \(D\)-complex \(\mathcal{P}\) is called a simplicial tiling of \(A\subset\mathbb{R}^{D}\) if the relative interiors of its simplices form a partition of \(A\). Note that the facets of \(\mathcal{P}\), that is, its \(D\)-simplices, do not necessarily form a partition of \(A\): two facets can have a non-empty intersection when they share a face.
First, consider a finite simplicial tiling of the hypercube \([0,1]^{D}\), and extend it to \(\mathbb{R}^{D}\) by mirroring it along the hyperplanes orthogonal to the canonical axes crossing them at integer coordinates. Formally, for any \(k=(k_{1},\ldots,k_{D})\in\mathbb{Z}^{D}\), the hypercube \(\prod_{i=1}^{D}[k_{i},k_{i}+1]\) contains the tiling of \([0,1]^{D}\), mirrored along axis \(i\) if and only if \(k_{i}\) is odd. The faces of the hypercubes defined in this way match, as each pair of hypercubes sharing a face are mirrors of each other with respect to that face. Thus, the resulting tiling \(\mathcal{P}\) is a simplicial tiling of \(\mathbb{R}^{D}\).
Let \((x_{n})_{n\in\mathbb{N}}\) be the sequence of vertices of the simplicial tiling \(\mathcal{P}\) (i.e. its \(0\)-simplices). We identify each simplex \(F\in\mathcal{P}\) with the set of its \(0\)-dimensional faces \(\{x_{i}\}_{i\in I}\), and write \(F_{I}\) in that case. Note that the set \(I\) is unique for any given simplex \(F\) and characterizes \(F\).
Perturbation of the tiling. Fix a small \(r>0\). Let \((\varepsilon_{n})_{n\in\mathbb{N}}\) be a sequence of i.i.d. uniform variables on \([-r,r]^{D}\), and define \(\mathcal{P}^{\varepsilon}\) as the simplicial complex given by
\[\mathcal{P}^{\varepsilon}=\{\{x_{i}+\varepsilon_{i}\}_{i\in I}:\{x_{i}\}_{i \in I}\in\mathcal{P}\}.\]
Note that since the original tiling of \([0,1]^{D}\) was finite, there exists \(r_{0}>0\) such that for any \((\varepsilon_{n})_{n\in\mathbb{N}}\in([-r_{0},r_{0}]^{D})^{\mathbb{N}}\), the vertices of any simplex in \(\mathcal{P}\) are still affinely independent after being moved according to \(\varepsilon\) and any two simplices \(F,F^{\prime}\in\mathcal{P}\) sharing a face \(F^{\prime\prime}\) (resp. with no intersection) are transformed into two simplices of \(\mathcal{P}^{\varepsilon}\) that share exactly the transformation of \(F^{\prime\prime}\) (resp. with no intersection), so that \(\mathcal{P}^{\varepsilon}\) is indeed a simplicial complex. Finally, \(\mathcal{P}^{\varepsilon}\) still covers \(\mathbb{R}^{D}\) (as seen when moving each vertex in \([-1,2]^{D}\) one after the other along a continuous path, showing that no hole is created in the covering of \([0,1]^{D}\) at any point in time), so for any \(r\in(0,r_{0}]\), \(\mathcal{P}^{\varepsilon}\) is almost surely a simplicial tiling of \(\mathbb{R}^{D}\).
Since the relative interiors of the simplices of \(\mathcal{P}\) define a partition of \(\mathbb{R}^{D}\), for each \(z\in\mathbb{R}^{D}\), there exists exactly one face \(F_{I}\in\mathcal{P}\) such that \(z\in\operatorname{relint}(F_{I})\). Writing \(z=\sum_{i\in I}\alpha_{i}x_{i}\) (for \(\alpha\in(0,1]^{|I|}\) such that \(\sum_{i\in I}\alpha_{i}=1\)), we define the image of \(z\) by the perturbation as \(f^{\varepsilon}(z)=\sum_{i\in I}\alpha_{i}(x_{i}+\varepsilon_{i})\). In other words, each simplex is deformed according to the linear transformation given by the perturbation of its vertices.
The mapping \(f^{\varepsilon}\) is a (random) bijective and continuous transformation of \(\mathbb{R}^{D}\) that is "small", in the sense that almost surely, \(\sup_{z\in\mathbb{R}^{D}}\|z-f^{\varepsilon}(z)\|\leqslant r\).
Note that the transformation \(f^{\varepsilon}\) can be made with arbitrarily small granularity: the same approach works when considering tilings of \([0,\delta]^{D}\) for any \(\delta>0\) instead of \([0,1]^{D}\) (up to changing \(r\)). We may also iterate several random independent transformations \(f^{\varepsilon^{(1)}}\circ\cdots\circ f^{\varepsilon^{(m)}}\) for \(m\geqslant 1\), and the transformation of \(\mathcal{M}\) will still almost surely belong to \(\mathcal{B}_{1}\cap\mathcal{B}_{2}\).
**Theorem 3**.: _Let \(r\in(0,r_{0}]\) with \(r_{0}\) as above, \(\varepsilon=(\varepsilon_{n})_{n\in\mathbb{N}}\) be a sequence of i.i.d. uniform r.v. on \([-r,r]^{D}\), \(\delta>0\), and \(f^{\varepsilon}\) be the bijective transformation of \(\mathbb{R}^{D}\) defined above._
_Then for any (random) continuous mapping \(G:\mathbb{R}^{D}\to\mathbb{R}^{D}\) that is independent of \(\varepsilon\), the mapping \(F:z\longmapsto\delta f^{\varepsilon}\big{(}\frac{G(z)}{\delta}\big{)}\) satisfies: for any compact set \(\mathcal{M}\subset\mathbb{R}^{D}\), \(F(\mathcal{M})\in\mathcal{B}_{1}\cap\mathcal{B}_{2}\) a.s.._
The proof of Theorem 3 is detailed in Section 6.6.
This shows that for any compact set \(\mathcal{M}\in\mathbb{R}^{D}\), a small change into the set \(F(\mathcal{M})\) where \(F\) is a transformation of \(\mathbb{R}^{D}\) of the type described in the Theorem almost surely results in a set in \(\mathcal{B}_{1}\cap\mathcal{B}_{2}\).
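A simplified two-dimensional instance of this construction can be sketched as follows (our own code, not the paper's exact tiling: each unit square is split along its main diagonal rather than mirrored, which also yields matching faces; \(r\) must be small enough to keep the map injective).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_offsets(nx, ny, r):
    # i.i.d. uniform perturbations of the vertices of an nx-by-ny unit grid on [0,nx]x[0,ny]
    return rng.uniform(-r, r, size=(nx + 1, ny + 1, 2))

def f_eps(z, offsets):
    # piecewise-affine perturbation: each unit square [i,i+1]x[j,j+1] is split along its
    # main diagonal into two triangles, and the map is affine on each triangle, sending
    # vertex (a, b) to (a, b) + offsets[a, b]; shared edges guarantee continuity
    x, y = z
    i, j = int(np.floor(x)), int(np.floor(y))
    u, v = x - i, y - j
    if u + v <= 1.0:        # triangle (i,j), (i+1,j), (i,j+1)
        lam, verts = (1 - u - v, u, v), ((i, j), (i + 1, j), (i, j + 1))
    else:                   # triangle (i+1,j), (i,j+1), (i+1,j+1)
        lam, verts = (1 - v, 1 - u, u + v - 1), ((i + 1, j), (i, j + 1), (i + 1, j + 1))
    return sum(l * (np.array(p, float) + offsets[p]) for l, p in zip(lam, verts))

# example: perturb a point cloud on a circle of radius 1 centred at (1.5, 1.5)
offsets = make_offsets(3, 3, r=0.05)
theta = np.linspace(0, 2 * np.pi, 500, endpoint=False)
M = np.c_[1.5 + np.cos(theta), 1.5 + np.sin(theta)]
M_pert = np.array([f_eps(z, offsets) for z in M])
print(np.abs(M_pert - M).max())   # every point moves by at most r in each coordinate
```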
## 3 Estimation of the support
As explained after Theorem 1, the estimation of the characteristic function of the signal will be the first step to derive efficient estimators. In Section 3.1, we describe the estimator of the characteristic function used in all our procedures, and we give its properties. In Section 3.2, we provide an estimator of the support of the signal when \(\rho\) is known, and prove an upper bound for the maximum risk in Hausdorff distance. In Section 3.3, we prove a lower bound which shows that our estimator is minimax up to some power of \(\log\log n\) for all \(\rho\in(1,2)\) and up to any small power of \(\log n\) for \(\rho=1\). Section 3.4 is devoted to the construction of an adaptive estimator of the support for unknown \(\rho\).
### Estimation of the characteristic function
We shall need sets of multivariate analytic functions for which A(\(\rho\)) and (Adep) hold. For any \(S>0\), let \(\Upsilon_{\rho,S}\) be the subset of multivariate analytic functions from \(\mathbb{C}^{D}\) to \(\mathbb{C}\) defined as follows.
\[\Upsilon_{\rho,S}=\left\{\phi\text{ analytic s.t. }\forall z\in\mathbb{R}^{D},\ \overline{\phi(z)}=\phi(-z),\ \phi(0)=1\text{ and }\forall i\in\mathbb{N}^{D}\setminus\{0\},\ \left|\frac{\partial^{i}\phi(0)}{\prod_{a=1}^{D}i_{a}!}\right|\leqslant\frac{S^{\|i\|_{1}}}{\|i\|_{1}^{\|i\|_{1}/\rho}}\right\}\]
where \(\|i\|_{1}=\sum_{a=1}^{D}i_{a}\). If the distribution of \(X\) satisfies A(\(\rho\)), then there exists \(S\) such that \(\Phi_{X}\in\Upsilon_{\rho,S}\), and the converse also holds, see Lemma 3.1 in [19].
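For instance, in dimension \(D=1\), the uniform distribution on \([-1,1]\) satisfies A(1), and its characteristic function \(\Phi_{X}(t)=\frac{\sin t}{t}=\sum_{k\geqslant 0}(-1)^{k}\frac{t^{2k}}{(2k+1)!}\) belongs to \(\Upsilon_{1,e}\): the odd-order coefficients vanish, while for \(m=2k\), using \(m!\geqslant(m/e)^{m}\),

\[\left|\frac{\Phi_{X}^{(m)}(0)}{m!}\right|=\frac{1}{(m+1)!}\leqslant\frac{1}{m!}\leqslant\frac{e^{m}}{m^{m}}.\]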
Let \(\Phi_{\varepsilon^{(i)}}\) be the characteristic function of \(\varepsilon^{(i)}\), \(i=1,2\), and define for all \(\phi\in\Upsilon_{\rho,S}\) and any \(\nu>0\),
\[M(\phi;\nu|\Phi_{X})=\int_{B_{\nu}^{d_{1}}\times B_{\nu}^{d_{2}}}|\phi(t_{1},t _{2})\Phi_{X}(t_{1},0)\Phi_{X}(0,t_{2})-\Phi_{X}(t_{1},t_{2})\phi(t_{1},0)\phi( 0,t_{2})|^{2}|\Phi_{\varepsilon^{(1)}}(t_{1})\Phi_{\varepsilon^{(2)}}(t_{2})| ^{2}dt_{1}dt_{2}.\]
It follows from the proof of Theorem 1, see [19], that for any \(\nu>0\), if \(\phi\in\Upsilon_{\rho,S}\) satisfies (Adep), then \(M(\phi;\nu|\Phi_{X})=0\) if and only if \(\phi=\Phi_{X}\) (up to translation). The estimator of the characteristic function of the signal can then be defined as a minimizer of an empirical estimator \(M_{n}\) of \(M\). Fix some \(\nu_{\mathrm{est}}>0\), and define \(M_{n}\) for any \(\phi\) as follows
\[M_{n}(\phi)=\int_{B_{\nu_{\mathrm{est}}}^{d_{1}}\times B_{\nu_{\mathrm{est}}}^{d_{2}}}|\phi(t_{1},t_{2})\tilde{\phi}_{n}(t_{1},0)\tilde{\phi}_{n}(0,t_{2})-\tilde{\phi}_{n}(t_{1},t_{2})\phi(t_{1},0)\phi(0,t_{2})|^{2}dt_{1}dt_{2},\]
where for all \((t_{1},t_{2})\in\mathbb{R}^{d_{1}}\times\mathbb{R}^{d_{2}}\),
\[\tilde{\phi}_{n}(t_{1},t_{2})=\frac{1}{n}\sum_{\ell=1}^{n}\exp\left\{it_{1}^{ \top}Y_{\ell}^{(1)}+it_{2}^{\top}Y_{\ell}^{(2)}\right\}.\]
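As an illustration, here is a minimal sketch of the computation of \(\tilde{\phi}_{n}\) on simulated data; the circle-shaped signal, the Gaussian noise and the evaluation frequency are arbitrary toy choices, not assumptions of the model.

```python
import numpy as np

def empirical_cf(Y1, Y2, t1, t2):
    """Empirical characteristic function of (Y^(1), Y^(2)) at the frequency (t1, t2)."""
    phases = Y1 @ t1 + Y2 @ t2
    return np.mean(np.exp(1j * phases))

rng = np.random.default_rng(1)
n = 5000
theta = rng.uniform(0.0, 2.0 * np.pi, n)
X = np.c_[np.cos(theta), np.sin(theta)]          # toy signal supported on the unit circle (d1 = d2 = 1)
Y = X + rng.normal(scale=0.3, size=(n, 2))       # toy noise, independent across the two blocks
print(empirical_cf(Y[:, :1], Y[:, 1:], np.array([1.0]), np.array([0.5])))
```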
Define now, for all \(m\in\mathbb{N}\), the set \(\mathbb{C}_{m}[X_{1},\ldots,X_{D}]\) of multivariate polynomials in \(D\) variables with total degree at most \(m\) and coefficients in \(\mathbb{C}\), simply written \(\mathbb{C}_{m}[X]\) in the following. If \(\phi\) is an analytic function defined in a neighborhood of \(0\) in \(\mathbb{C}^{D}\) written as \(\phi:x\mapsto\sum_{(i_{1},\ldots,i_{D})\in\mathbb{N}^{D}}c_{i}\prod_{a=1}^{D}x_{a}^{i_{a}}\), define its truncation on \(\mathbb{C}_{m}[X]\) as
\[T_{m}\phi:x\mapsto\sum_{(i_{1},\ldots,i_{D})\in\mathbb{N}^{D}\ :\ \|i\|_{1}\leqslant m}c_{i}\prod_{a=1}^{D}x_{a}^{i_{a}}.\]
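For instance, truncating the characteristic function \(\Phi_{X}(t)=\sin t/t\) of the example above at total degree \(m=4\) gives \(T_{4}\Phi_{X}(t)=1-\frac{t^{2}}{6}+\frac{t^{4}}{120}\).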
Let \(\mathcal{H}\) be a subset of functions \(\mathbb{C}^{D}\to\mathbb{C}^{D}\) such that all elements of \(\mathcal{H}\) satisfy (Adep) and such that the set of the restrictions to \([-\nu_{\mathrm{est}},\nu_{\mathrm{est}}]^{D}\) of functions in \(\mathcal{H}\) is closed in \(L^{2}([-\nu_{\mathrm{est}},\nu_{\mathrm{est}}]^{D})\). We are now ready to define our estimator of \(\Phi_{X}\):
For any integer \(m\) and any \(\rho\geqslant 1\), let \(\widehat{\Phi}_{n,m,\rho}\) be a measurable minimizer, up to an additive \(1/n\), of the functional \(\phi\mapsto M_{n}(T_{m}\phi)\) over \(\Upsilon_{\rho,S}\cap\mathcal{H}\).
For good choices of \(m\), \(\widehat{\Phi}_{n,m,\rho}\) is a consistent estimator of \(\Phi_{X}\) in \(L^{2}([-\nu,\nu]^{d})\) at almost parametric rate. The constants will depend on the signal through \(\rho\) and \(S\), and on the noise through its second moment and the following quantity:
\[c_{\nu}=\inf\{|\Phi_{\varepsilon^{(1)}}(t)|,\;t\in[-\nu,\nu]^{d_{1}}\}\wedge\inf\{|\Phi_{\varepsilon^{(2)}}(t)|,\;t\in[-\nu,\nu]^{d_{2}}\}. \tag{3}\]
Note that for any noise distribution, for small enough \(\nu\), \(c_{\nu}\) is a positive real number. For any \(\nu>0\), \(c(\nu)>0\), \(E>0\), define \(\mathcal{Q}^{(D)}(\nu,c(\nu),E)\) the set of distributions \(\mathbb{Q}=\otimes_{j=1}^{D}\mathbb{Q}_{j}\) on \(\mathbb{R}^{D}\) such that \(c_{\nu}\geqslant c(\nu)\) and \(\int_{\mathbb{R}^{D}}\|x\|^{2}d\mathbb{Q}(x)\leqslant E\).
**Proposition 6** (Variant of Proposition 1 in [9]).: _For all \(\rho_{0}<2\), \(\nu\in(0,\nu_{\text{est}}]\), \(S,c(\nu),E,C>0\) and \(\delta,\delta^{\prime},\delta^{\prime\prime}\in(0,1)\) with \(\delta^{\prime}>\delta\), there exist positive constants \(c\) and \(n_{0}\) such that the following holds: let \(\rho\in[1,\rho_{0}]\), for all \(\Phi_{X}\in\Upsilon_{\rho,S}\cap\mathcal{H}\) and \(\mathbb{Q}\in\mathcal{Q}^{(D)}(\nu,c(\nu),E)\), for all \(n\geqslant n_{0}\) and \(x\in[1,n^{1-\delta^{\prime}}]\), with probability at least \(1-2e^{-x}\),_
\[\sup_{\rho^{\prime}\in[\rho,\rho_{0}],\;m\in[2\rho^{\prime}\frac{\log n}{\log\log n},C\frac{\log n}{\log\log n}]}\int_{B_{\nu}^{d_{1}}\times B_{\nu}^{d_{2}}}|\widehat{\Phi}_{n,m,\rho^{\prime}}(t)-\Phi_{X}(t)|^{2}dt\leqslant c\left(\frac{x}{n^{1-\delta}}\right)^{1-\delta^{\prime\prime}}.\]
_Moreover, the same result holds when replacing \(\widehat{\Phi}_{n,m,\rho^{\prime}}\) by \(T_{m}\widehat{\Phi}_{n,m,\rho^{\prime}}\)._
Note that the constants \(c\) and \(n_{0}\) do not depend on the distribution of \(X\) or \(\varepsilon\). The proof of Proposition 6 is based on results in [9] and [19] and is detailed in Section 6.7. For the sake of simplicity, we denote by \(\widehat{\Phi}_{n,\rho}\) the estimator \(\widehat{\Phi}_{n,m,\rho}\) in which \(m=\lceil 4\frac{\log n}{\log\log n}\rceil\). Note that this is a valid choice of \(m\) for any \(\rho\in[1,2)\).
### Estimation of the support: upper bound
We are now ready to provide an estimator of the support of the signal. The idea is the following. Define \(\bar{g}\) as the probability density obtained by convolving \(G\), the unknown distribution of the signal \(X\), with a kernel \(\psi_{A,h}\) defined later, in which \(h\) is a bandwidth parameter. Then, multiplying the estimator of the characteristic function of the signal by the Fourier transform of the kernel gives a good estimator \(\widehat{g}_{n}\) of \(\bar{g}\). When \(h\) tends to \(0\), \(\bar{g}\) becomes larger on \(\mathcal{M}\) and tends to \(0\) outside of it. Thus, by letting \(h\) tend to \(0\) with \(n\) and choosing an appropriate threshold \(\lambda_{n}\), the set of points \(y\) for which \(\widehat{g}_{n}(y)\geqslant\lambda_{n}\) should be a good estimator of \(\mathcal{M}\). Figure 2 illustrates this idea.
We now define the class over which we will prove an upper bound for the maximum risk in Hausdorff distance.
For any compact set \(\mathcal{K}\) of \(\mathbb{R}^{D}\), and for any positive constants \(a\), \(d\) and \(r_{0}\), we define \(St_{\mathcal{K}}(a,d,r_{0})\) as the set of positive measures \(G\) such that for all \(x\in\mathcal{M}_{G}\cap\mathcal{K}\) and all \(r\leqslant r_{0}\), \(G(B(x,r))\geqslant ar^{d}\). The distributions in \(St_{\mathcal{K}}(a,d,r_{0})\) are called \((a,d)\)-standard. This type of assumption is commonly used for inferring topological information, see for instance [6].
**Remark 1**.:
* _If a measure_ \(\mu\) _(for instance the_ \(d\)_-dimensional Hausdorff measure on a manifold) is_ \((a,d^{\prime})\)_-standard for some positive constants_ \(a\) _and_ \(d^{\prime}\)_, and if_ \(G\) _admits a density_ \(g\) _with respect to_ \(\mu\) _such that_ \(g\) _is lower bounded by_ \(c>0\)_, then_ \(G\) _is_ \((ac,d^{\prime})\)_-standard._
* _We do not make any assumptions on the reach of the support of_ \(G\) _(see_ _[_17_]__) since it is not necessary here, although it provides a convenient way to check the_ \((a,d)\)_-standard assumption: if_ \(\mathcal{M}\) _is a Riemannian manifold of dimension_ \(d\) _with reach_\((\mathcal{M})\geqslant\tau_{\min}>0\)_, then the_ \(d\)_-dimensional Hausdorff measure restricted to_ \(\mathcal{M}\) _is_ \((a,d)\)_-standard for some_ \(a>0\) _(see Lemma_ 32 _of_ _[_21_]__)._
As in [19], it will be convenient to use \(\kappa=1/\rho\). We denote \(\mathcal{L}(\kappa,S,\mathcal{H})\) the set of distributions \(G\) such that, if \(X\) is a random variable with distribution \(G\), then \(\Phi_{X}\in\mathcal{H}\cap\Upsilon_{1/\kappa,S}\).
When \(\kappa<1\) and \(G\in\mathcal{L}(\kappa,S,\mathcal{H})\), the support of \(G\) is not compact. Since we allow the support to be a non-compact set, we define a truncated loss function as in [21]. For any \(\mathcal{K}\) compact subset of \(\mathbb{R}^{D}\) and for any \(S_{1}\), \(S_{2}\) subsets of \(\mathbb{R}^{D}\), the truncated loss function is
\[H_{\mathcal{K}}(S_{1},S_{2})=d_{H}(S_{1}\cap\mathcal{K},S_{2}\cap\mathcal{K}).\]
We now introduce the kernel we shall use for our construction. For any \(A>0\), define, for all \(y\in\mathbb{R}\),
\[u_{A}(y)=\exp\left\{-\frac{1}{(1-2y)^{A}}-\frac{1}{(1+2y)^{A}}\right\}\mathbb{1}_{[-\frac{1}{2},\frac{1}{2}]}(y)\]
and
\[\tilde{\psi}_{A}(y)=I(A)\ \mathcal{F}^{-1}[u_{A}*u_{A}](y)\ \text{with}\ I(A)= \frac{1}{\int\mathcal{F}^{-1}[u_{A}*u_{A}](x)dx}.\]
We shall extend \(\tilde{\psi}_{A}\) to \(\mathbb{R}^{D}\) as an isotropic function. For \(y\in\mathbb{R}^{D}\), we write
\[\psi_{A}(y)=I(A)\ \mathcal{F}^{-1}[u_{A}*u_{A}](\|y\|_{2})\ \text{with}\ I(A)= \frac{1}{\int\mathcal{F}^{-1}[u_{A}*u_{A}](\|x\|_{2})dx}.\]
For \(h>0\) and \(x\in\mathbb{R}^{D}\), we write \(\psi_{A,h}(x)=h^{-D}\psi_{A}(x/h)\), hence \(\mathcal{F}[\psi_{A,h}](t)=\mathcal{F}[\psi_{A}](th)\). The following properties of \(\psi_{A}\) and \(\mathcal{F}[\psi_{A}]\) hold
* (I) The support of \(\mathcal{F}[\psi_{A}]\) is the unit ball \(\{y\in\mathbb{R}^{D}\,:\,\|y\|_{2}\leqslant 1\}\).
* (II) \(\psi_{A}>0\) and \(\mathcal{F}[\psi_{A}]\geqslant 0\).
* (III) There exist constants \(c_{A}>0\) and \(d_{A}>0\) such that for all \(x\in\{y\in\mathbb{R}^{D}\,:\,\|y\|_{2}\leqslant c_{A}\}\), \(\psi_{A}(x)\geqslant d_{A}\).
* (IV) \(\psi_{A}\) and \(\psi_{A,h}\) are probability densities on \(\mathbb{R}^{D}\).
* (V) (Lemma in [30]) For all \(A>0\), there exists \(\beta_{A}>0\) such that \[\lim_{\|t\|_{2}\to\infty}\exp\left\{\beta_{A}\|t\|_{2}^{\frac{A}{A+1}}\right\}\psi_{A}(t)=0\] (4)
* (VI) It holds \[\|\psi_{A,h}\|_{2}=\frac{I(A)}{h^{D/2}}\|u_{A}*u_{A}\|_{2}.\] (5)
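These properties can be checked numerically. Below is a minimal one-dimensional tabulation of \(\psi_{A}\) (with \(D=1\) and \(A=1\)); the grids and the quadrature rule are illustrative choices only.

```python
import numpy as np

A = 1.0

def u_A(y):
    """u_A on (-1/2, 1/2); it vanishes outside (values at the boundary underflow to 0)."""
    out = np.zeros_like(y)
    inside = np.abs(y) < 0.5
    yi = y[inside]
    out[inside] = np.exp(-1.0 / (1 - 2 * yi) ** A - 1.0 / (1 + 2 * yi) ** A)
    return out

t = np.linspace(-1.0, 1.0, 2001)
dt = t[1] - t[0]
conv = np.convolve(u_A(t), u_A(t)) * dt                 # u_A * u_A, supported in [-1, 1]
tc = np.linspace(-2.0, 2.0, conv.size)                  # frequency grid carrying the convolution
x = np.linspace(-20.0, 20.0, 401)
dx = x[1] - x[0]
# real inverse Fourier transform of the even function u_A * u_A, then I(A) normalisation
psi = np.array([np.sum(conv * np.cos(tc * xi)) * dt for xi in x]) / (2 * np.pi)
psi /= np.sum(psi) * dx
print("min of psi_A:", psi.min(), " total mass:", np.sum(psi) * dx)
```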
Fix \(A>0\) and define the convoluted density of the signal, \(\bar{g}\) by
\[\forall y\in\mathbb{R}^{D},\ \bar{g}(y)=(\frac{1}{2\pi})^{D}\int e^{-it^{\top}y} \mathcal{F}[\psi_{A}](ht)\ \Phi_{X}(t)dt,\]
which may be rewritten using usual Fourier calculus, for all \(y\in\mathbb{R}^{D}\), as
\[\bar{g}(y)=(\psi_{A,h}*G)(y)=\frac{1}{h^{D}}\int_{\mathbb{R}^{D}}\psi_{A}( \frac{\|y-u\|_{2}}{h})dG(u).\]
The density \(\bar{g}\) is a kernel smoothing of the distribution \(G\). The bandwidth parameter \(h\) will be chosen appropriately in Theorem 4 below.
We now construct an estimator of \(\bar{g}\) by truncating \(\widehat{\Phi}_{n,1/\kappa}\) depending on \(\kappa\). Adaptation with respect to \(\kappa\) is handled in Section 3.4. For some integer \(m_{\kappa}>0\) to be chosen later, let
\[\forall y\in\mathbb{R}^{D},\ \widehat{g}_{n,\kappa}(y)=\left(\frac{1}{2\pi} \right)^{D}\int e^{-it^{\top}y}\mathcal{F}[\psi_{A}](ht)\ T_{m_{\kappa}} \widehat{\Phi}_{n,1/\kappa}(t)dt.\]
Since for all \(t\in\mathbb{R}^{D}\), \(T_{m_{\kappa}}\widehat{\Phi}_{n,1/\kappa}(-t)=T_{m_{\kappa}}\widehat{\Phi}_{n,1/\kappa}(t)\), the function \(\widehat{g}_{n,\kappa}\) is real valued. Finally, define an estimator of the support of the signal as the upper level set
\[\widehat{\mathcal{M}}_{\kappa}=\left\{y\in\mathbb{R}^{D}\ |\ \widehat{g}_{n, \kappa}(y)>\lambda_{n,\kappa}\right\},\]
for some \(\lambda_{n,\kappa}\). The main theorem of this section gives an upper bound of the maximum risk.
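Before stating it, here is a minimal one-dimensional numerical sketch of the construction of \(\widehat{\mathcal{M}}_{\kappa}\). For illustration only, \(\mathcal{F}[\psi_{A}]\) is replaced by a cosine taper sharing the compact support of property (I), and the exact characteristic function of a Uniform\([-1,1]\) signal stands in for the estimated and truncated \(T_{m_{\kappa}}\widehat{\Phi}_{n,1/\kappa}\) of Section 3.1; the bandwidth and threshold are arbitrary.

```python
import numpy as np

def level_set_support(phi_vals, t_grid, F_psi, h, y_grid, lam):
    """Level-set estimate {y : g_hat(y) > lam}, with
    g_hat(y) = (1 / (2 pi)) * int exp(-i t y) F_psi(h t) phi(t) dt   (D = 1 sketch)."""
    dt = t_grid[1] - t_grid[0]
    w = F_psi(h * t_grid) * phi_vals
    g_hat = np.array([np.sum(np.exp(-1j * t_grid * y) * w).real for y in y_grid]) * dt / (2 * np.pi)
    return y_grid[g_hat > lam]

F_psi = lambda t: np.where(np.abs(t) <= 1.0, np.cos(np.pi * t / 2.0) ** 2, 0.0)   # stand-in taper
t_grid = np.linspace(-40.0, 40.0, 4001)
phi_vals = np.sinc(t_grid / np.pi)          # sin(t)/t: characteristic function of Uniform[-1, 1]
y_grid = np.linspace(-3.0, 3.0, 601)
est = level_set_support(phi_vals, t_grid, F_psi, h=0.05, y_grid=y_grid, lam=0.25)
print(est.min(), est.max())                 # close to the endpoints of the true support [-1, 1]
```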
**Theorem 4**.: _Let \(\kappa\in(1/2,1]\), \(a>0\), \(d\leqslant D\), \(r_{0}>0\). For \(c_{h}\geqslant\exp\left(2D+2\right)\) and \(\ell\in(0,1)\), define \(m_{\kappa}\) and \(h\) as_
\[m_{\kappa}=\left\lfloor\frac{1}{4\kappa}\frac{\log(n)}{\log\log(n)}\right\rfloor,\quad h=c_{h}Sm_{\kappa}^{-\kappa}\]
_and \(\lambda_{n,\kappa}\) depending whether \(d<D\) or \(d=D\) as_
* _if_ \(d<D\)_,_ \[\lambda_{n,\kappa}=\left(\frac{1}{h}\right)^{\ell},\]
* _if_ \(d=D\)_,_ \[\lambda_{n,\kappa}=\frac{1}{4}ac_{A}^{D}d_{A}.\]
_Then for any \(\kappa_{0}\in(1/2,1]\), \(\nu\in(0,\nu_{est}]\), \(c(\nu)>0\), \(E>0\), \(S>0\), there exist \(n_{0}\) and \(C>0\) such that for all \(n\geqslant n_{0}\),_
\[\sup_{\kappa\in[\kappa_{0},1]}\sup_{\begin{subarray}{c}G\in St_{\mathcal{K}}(a,d,r_{0})\cap\mathcal{L}(\kappa,S,\mathcal{H})\\ \mathbb{Q}\in\mathcal{Q}^{(D)}(\nu,c(\nu),E)\end{subarray}}\mathbb{E}_{(G*\mathbb{Q})^{\otimes n}}[H_{\mathcal{K}}(\mathcal{M}_{G},\widehat{\mathcal{M}}_{\kappa})]\leqslant C\frac{(\log\log n)^{\kappa+\frac{A+1}{A}}}{(\log n)^{\kappa}}.\]
**Remark 2**.:
* _We prove in the next section a nearly matching lower bound. Thus, the minimax rate of convergence of the support in truncated Hausdorff distance depends on_ \(\kappa\)_, that is on the way the distribution of the signal behaves at infinity. This rate deteriorates when the distribution of the signal has heavier tails. Indeed, since the distribution of the noise is unknown, taking into account distant observation points to build the estimator of the support becomes more difficult._
* _When_ \(d<D\)_, thanks to the use of the kernel_ \(\psi_{A}\)_, our estimator does not require the knowledge of_ \(d\)_, which has to be compared with the estimator in_ _[_21_]_ _where prior knowledge of_ \(d\) _is needed._
* _In_ _[_21_]__, the upper bound on the rate is of order_ \(1/\sqrt{\log n}\)_. Here we get a bound of order_ \(1/(\log n)^{\kappa}\)_, depending on the tail of the distribution of the signal. We do not need to know the distribution of the noise, contrary to_ _[_21_]_ _where the distribution of the noise is used in the construction of the estimator, as is usual in the classical deconvolution literature._
* _It may be seen from the proof of Theorem_ 4 _that the choice_ \(\lambda_{n,\kappa}=\frac{1}{4}ac_{A}^{D}d_{A}\) _is valid for any_ \(d\)_. However, this requires the knowledge of_ \(a\)_._
* _Note that there are two truncation steps: the first one in the construction of_ \(\hat{\Phi}_{n,1/\kappa}\) _(chosen at the end of Section_ 3_) and the second one in the definition of_ \(\widehat{g}_{n,\kappa}\)_. This second truncation is necessary to control the error of_ \(\hat{\Phi}_{n,1/\kappa}\) _on_ \(B_{1/h}^{D}\) _(see Lemma_ 3_, compared to the error on_ \(B_{\nu}^{D}\) _in Proposition_ 6_), and the degree_ \(m_{\kappa}\) _in the second truncation is always smaller than the degree_ \(m\) _used in the construction of_ \(\hat{\Phi}_{n,1/\kappa}\)_._
The proof of Theorem 4 is detailed in Section 6.11. As in [21], the idea is to lower bound \(\bar{g}\) on the support \(\mathcal{M}_{G}\) when the bandwidth parameter \(h\) becomes small, and to upper bound it at every point farther than a small distance (depending on \(h\)) from that support, see Lemmas 1 and 2 below.
**Lemma 1**.: _Assume \(G\in St_{\mathcal{K}}(a,d,r_{0})\), then for any \(h\leqslant r_{0}/c_{A}\),_
\[\inf_{y\in\mathcal{M}_{G}\cap\mathcal{K}}\bar{g}(y)\geqslant ac_{A}^{d}d_{A} \left(\frac{1}{h}\right)^{D-d},\]
_where \(c_{A}\) and \(d_{A}\) are defined in property (III) of \(\psi_{A}\)._
The proof of Lemma 1 is detailed in Section 6.8.
**Lemma 2**.: _For any \(C_{1}>0\), there exists \(h_{0}>0\) depending only on \(C_{1}\), \(D\) and \(A\) such that for any \(h\leqslant h_{0}\),_
\[\sup\left\{\bar{g}(y)\ |\ y\in\mathcal{K},\ d(y,\mathcal{M}_{G})>h\left[\frac{D}{ \beta_{A}}\log\left(\frac{1}{h}\right)\right]^{\frac{A+1}{A}}\right\}\leqslant C _{1}.\]
The proof of Lemma 2 is detailed in Section 6.9.
The last ingredient is to control the difference between the convoluted density and its estimator, defined as \(\Gamma_{n,\kappa}=\|\widehat{g}_{n,\kappa}-\bar{g}\|_{\infty}=\sup_{y\in \mathbb{R}^{D}}|\widehat{g}_{n,\kappa}(y)-\bar{g}(y)|\). We first relate it to \(\|T_{m_{\kappa}}\widehat{\Phi}_{n,1/\kappa}-\Phi_{X}\|_{2,1/h}\).
**Lemma 3**.: _Let \(h>0\) and \(m>0\). For any \(A>0\),_
\[\Gamma_{n,\kappa}\leqslant I(A)\frac{\|u_{A}*u_{A}\|_{2}}{h^{D/2}}\|T_{m_{ \kappa}}\widehat{\Phi}_{n,1/\kappa}-\Phi_{X}\|_{2,1/h}.\]
The proof of Lemma 3 is detailed in Section 6.10. The parameters \(m_{\kappa}\) and \(h\) are chosen so that \(\Gamma_{n,\kappa}\) tends to \(0\) with high probability, and the threshold \(\lambda_{n,\kappa}\) is chosen using Lemmas 1 and 2.
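Informally, these lemmas are combined as follows (the precise high-probability argument is in Section 6.11). On the event where \(\Gamma_{n,\kappa}\) is small enough, Lemma 1 gives, for every \(y\in\mathcal{M}_{G}\cap\mathcal{K}\),

\[\widehat{g}_{n,\kappa}(y)\geqslant\bar{g}(y)-\Gamma_{n,\kappa}\geqslant ac_{A}^{d}d_{A}\left(\frac{1}{h}\right)^{D-d}-\Gamma_{n,\kappa}>\lambda_{n,\kappa},\]

so that \(\mathcal{M}_{G}\cap\mathcal{K}\subset\widehat{\mathcal{M}}_{\kappa}\), while Lemma 2 gives, for every \(y\in\mathcal{K}\) with \(d(y,\mathcal{M}_{G})>h\left[\frac{D}{\beta_{A}}\log\left(\frac{1}{h}\right)\right]^{\frac{A+1}{A}}\),

\[\widehat{g}_{n,\kappa}(y)\leqslant C_{1}+\Gamma_{n,\kappa}<\lambda_{n,\kappa},\]

so that such points are excluded from \(\widehat{\mathcal{M}}_{\kappa}\). Hence \(H_{\mathcal{K}}(\mathcal{M}_{G},\widehat{\mathcal{M}}_{\kappa})\) is at most of order \(h\left[\log\left(\frac{1}{h}\right)\right]^{\frac{A+1}{A}}\), which, for \(h\asymp m_{\kappa}^{-\kappa}\asymp\left(\frac{\log\log n}{\log n}\right)^{\kappa}\), gives the rate of Theorem 4.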
### Lower bound
The aim of this subsection is to prove a lower bound for the minimax risk of the estimation of \(\mathcal{M}_{G}\) using the distance \(H_{\mathcal{K}}\) as loss function. The proof of Theorem 5 is based on Le Cam's two-points method, see [31], one of the most widespread techniques to derive lower bounds. Note that we cannot use the lower bound proved in [21] since the two distributions they use for the signal \(X\) in their two-points proof have Gaussian tails, for which \(\kappa=1/2\).
**Theorem 5**.: _For any \(\kappa\in(1/2,1]\), there exist \(S_{\kappa}>0\), \(a_{\kappa}>0\) and a set \(\mathcal{H}_{\kappa}^{\star}\) of complex functions satisfying (Adep) such that the set of the restrictions of its elements to \([-\nu,\nu]^{D}\) is closed in \(L_{2}([-\nu,\nu]^{D})\) for any \(\nu>0\), and such that for all \(S\geqslant S_{\kappa}\), \(a\leqslant a_{\kappa}\), \(d\geqslant 1\), \(0<r_{0}<1\), \(E>0\) and \(\nu\in(0,\nu_{est}]\) such that \(c(\nu)>0\), there exist \(C>0\) and \(n_{0}\) such that for all \(n\geqslant n_{0}\), the minimax risk_
\[\inf_{\widehat{\mathcal{M}}_{n}}\sup_{\begin{subarray}{c}G\in St_{\mathcal{K}}(a,d,r_{0})\cap\mathcal{L}(\kappa,S,\mathcal{H}_{\kappa}^{\star})\\ \mathbb{Q}\in\mathcal{Q}^{(D)}(\nu,c(\nu),E)\end{subarray}}\mathbb{E}_{(G*\mathbb{Q})^{\otimes n}}[H_{\mathcal{K}}(\mathcal{M}_{G},\widehat{\mathcal{M}}_{n})],\]
_where the infimum is taken over all possible estimators \(\widehat{\mathcal{M}}_{n}\) of the support, is lower bounded by \(C\) times the rate of Theorem 4, up to a power of \(\log\log n\) when \(\kappa<1\) and up to an arbitrarily small power of \(\log n\) when \(\kappa=1\)._

The proof of Theorem 5 is detailed in Section 6.17. The two distributions used in the two-points argument are built from the density provided by the following lemma.

**Lemma 4**.:
_For any \(\delta\in(0,1)\), there exist constants \(A,B>0\) and a continuous compactly supported density function \(f_{1}:[-1,1]\to\mathbb{R}\), positive everywhere, such that_
\[|\mathcal{F}[f_{1}](u)|\leqslant A\exp(-B|u|^{\delta})\quad\text{and}\quad| \mathcal{F}[f_{1}]^{\prime}(u)|\leqslant A\exp(-B|u|^{\delta}).\]
The proof of Lemma 4 is detailed in Section 6.12.
Then, inspired by [21], for all \(\gamma\in(0,1]\), define \(\tilde{g}_{\gamma}:\mathbb{R}\to\mathbb{R}\) and \(g_{\gamma}:\mathbb{R}\to\mathbb{R}^{D-d}\) for all \(x\in\mathbb{R}\) as
\[\tilde{g}_{\gamma}(x)=\cos\left(\frac{x}{\gamma}\right)\quad\text{and}\quad g _{\gamma}(x)=(\tilde{g}_{\gamma}(x),0,\ldots,0)^{\top}\,.\]
Let \(M_{0}(\gamma)=\{(u,\gamma g_{\gamma}(u)):u\in\mathbb{R}\}\), \(M_{1}(\gamma)=\{(u,-\gamma g_{\gamma}(u)):u\in\mathbb{R}\}\), and for \(\alpha\neq 0\), define the matrix \(A_{\alpha}\in\mathbb{R}^{D\times D}\) by
\[A_{\alpha}=\begin{pmatrix}\alpha&0&0&\ldots&0\\ \alpha&\alpha/2&0&\ldots&0\\ 0&0&\alpha&\ldots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\ldots&\alpha\end{pmatrix}.\]
For any \(\kappa\in(1/2,1]\), let \(U(\kappa)\) be the random variable in \(\mathbb{R}\) having density \(f_{\kappa}\) defined in Lemma 4 and let \(S_{0}(\kappa)=(U(\kappa),\gamma g_{\gamma}(U(\kappa)))\), \(S_{1}(\kappa)=(U(\kappa),-\gamma g_{\gamma}(U(\kappa)))\). For \(i\in\{0,1\}\), we shall denote \(T_{i}(\kappa)\) the distribution of \(S_{i}(\kappa)\). Finally we define \(X_{i}(\kappa)=A_{\alpha}S_{i}(\kappa)\), \(i=0,1\) and \(G_{i}(\kappa)\) the distribution of \(X_{i}(\kappa)\).
To obtain the lower bound, the parameter \(\gamma\) will be chosen as large as possible while making sure that the joint distributions of the observations have total variation distance smaller than some \(C<1\).
Let us comment on dimensionality. The distributions used here are distributions with support of dimension \(1\). This is not an issue since the \(d\) in the definition of \(St_{\mathcal{K}}(a,d,r_{0})\) is an upper bound on the dimension of the support. We could also have used supports with dimension \(d\) by adding to \(X_{i}(\kappa)\) an independent uniform distribution on a ball of a linear space of dimension \(d\).
**Lemma 5**.: _For any \(i\in\{0,1\}\) and \(\kappa\in(1/2,1]\), \(X_{i}(\kappa)\) satisfies \(A(1/\kappa)\)._
The proof of Lemma 5 is detailed in Section 6.13.
**Lemma 6**.: _Let \(\alpha>0\). There exists \(a_{0}>0\) such that for \(i\in\{0,1\}\), for any \(d\geqslant 1\), \(r_{0}<1\) and \(a\leqslant a_{0}\), \(G_{i}(\kappa)\in St_{\mathcal{K}}(a,d,r_{0})\)._
The proof of Lemma 6 is detailed in Section 6.14.
The support of \(G_{i}(\kappa)\) is \(A_{\alpha}M_{i}(\gamma)\), and the following lemma follows easily from the fact that for \(i\in\{0,1\}\), \(A_{\alpha}M_{i}(\gamma)=\gamma A_{\alpha}M_{i}(1)\).
**Lemma 7**.: _For any \(\gamma>0\), and \(\alpha>0\),_
\[H_{\mathcal{K}}(A_{\alpha}M_{0}(\gamma),A_{\alpha}M_{1}(\gamma))=\gamma H_{ \mathcal{K}}(A_{\alpha}M_{0}(1),A_{\alpha}M_{1}(1)). \tag{8}\]
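As a quick numerical check of this scaling (for illustration only, in dimension \(D=2\), without the matrix \(A_{\alpha}\), and with arbitrary values of \(\gamma\)):

```python
import numpy as np

def hausdorff(P, Q):
    """Hausdorff distance between two finite point clouds (rows are points)."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

u = np.linspace(-1.0, 1.0, 2001)
for gamma in (0.1, 0.05, 0.025):
    M0 = np.c_[u,  gamma * np.cos(u / gamma)]      # discretisation of M_0(gamma) restricted to K
    M1 = np.c_[u, -gamma * np.cos(u / gamma)]      # discretisation of M_1(gamma)
    print(gamma, hausdorff(M0, M1) / gamma)        # the ratio stays of order 1, as suggested by (8)
```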
We finally exhibit a set of complex functions \(\mathcal{H}_{\kappa}^{\star}\) such that all of its elements satisfy (Adep) and such that the set of the restrictions of its elements to \([-\nu,\nu]^{D}\) is closed in \(L_{2}([-\nu,\nu]^{D})\) for any \(\nu>0\).
We first define a class of such sets of complex functions, then choose \(\mathcal{H}_{\kappa}^{\star}\) as one in that class. Let \((A_{\Delta}^{(1)})_{\Delta>0}\) and \((B_{\Delta}^{(1)})_{\Delta>0}\) be families of subsets of \(\mathbb{R}\) and \((A_{\Delta}^{(2)})_{\Delta>0}\) and \((B_{\Delta}^{(2)})_{\Delta>0}\) be families of subsets of \(\mathbb{R}^{D-1}\) such that
1. For all \(\Delta>0\), \(\text{Diam}(B_{\Delta}^{(2)})\leqslant\Delta\) and \(\text{Diam}(B_{\Delta}^{(1)})\leqslant\Delta\),
2. The topological frontiers \(\partial A_{\Delta}^{(1)}\) (in \(\mathbb{R}\)) and \(\partial A_{\Delta}^{(2)}\) (in \(\mathbb{R}^{D-1}\)) are negligible with respect to the corresponding Lebesgue measures.
For \(\kappa\in(1/2,1]\), \(S>0\), \(M>0\) and \((c_{\Delta})_{\Delta>0}\) a family of positive constants, let
\[\mathcal{H}(\kappa,S,M,(c_{\Delta},A^{(1)}_{\Delta},B^{(1)}_{\Delta},A^{(2)}_{ \Delta},B^{(2)}_{\Delta})_{\Delta>0})\]
be the set of functions \(\phi:\mathbb{C}^{D}\to\mathbb{C}\) in \(\Upsilon_{1/\kappa,S}\) for which there exists a random variable \(X\) satisfying \(A(1/\kappa)\) with \(\phi=\Phi_{X}\) and such that the following holds. Write \(X^{(1)}\) for the first coordinate of \(X\) and \(X^{(2)}\) for the vector of the last \(D-1\) coordinates of \(X\).
1. For all \(\Delta>0\), \(\mathbb{P}[X^{(1)}\in A^{(1)}_{\Delta}]\geqslant c_{\Delta}\) and \(\mathbb{P}[X^{(2)}\in B^{(2)}_{\Delta}|X^{(1)}\in A^{(1)}_{\Delta}]=1\).
2. For all \(\Delta>0\), \(\mathbb{P}[X^{(2)}\in A^{(2)}_{\Delta}]\geqslant c_{\Delta}\) and \(\mathbb{P}[X^{(1)}\in B^{(1)}_{\Delta}|X^{(2)}\in A^{(2)}_{\Delta}]=1\).
3. All the coordinates of \(X^{(2)}\) are null except the first one, and \(X^{(1)}\) and the first coordinate of \(X^{(2)}\) admit a continuous density with respect to Lebesgue measure which is upper bounded by \(M\).
**Lemma 8**.: _For any \(\nu>0\), the set \(\mathcal{H}(\kappa,S,M,(c_{\Delta},A^{(1)}_{\Delta},B^{(1)}_{\Delta},A^{(2)}_{\Delta},B^{(2)}_{\Delta})_{\Delta>0})\) is closed in \(L_{2}([-\nu,\nu]^{D})\). Moreover, all its elements satisfy (Adep)._
Note that \(\Phi_{X}\in\mathcal{H}(\kappa,S,M,(c_{\Delta},A^{(1)}_{\Delta},B^{(1)}_{\Delta},A^{(2)}_{\Delta},B^{(2)}_{\Delta})_{\Delta>0})\) implies that \(X\) satisfies (H1) and (H2), so that the second part of the Lemma is a consequence of Theorem 2. The remainder of the proof of Lemma 8 is detailed in Section 6.15.
**Lemma 9**.: _For any \(\kappa\in(1/2,1]\), there exist \(S_{\kappa}\), \(M>0\), \((c_{\Delta})_{\Delta>0}\) a sequence of positive constants, and \((A^{(1)}_{\Delta},B^{(1)}_{\Delta},A^{(2)}_{\Delta},B^{(2)}_{\Delta})_{\Delta>0}\) a sequence of sets such that for \(i\in\{0,1\}\), \(\Phi_{X_{i}(\kappa)}\in\mathcal{H}(\kappa,S_{\kappa},M,(c_{\Delta},A^{(1)}_{ \Delta},B^{(1)}_{\Delta},A^{(2)}_{\Delta},B^{(2)}_{\Delta})_{\Delta>0})\)._
The proof of Lemma 9 is detailed in Section 6.16.
### Adaptation to unknown \(\kappa\)
We now propose a data-driven model selection procedure to select \(\kappa\) such that the resulting estimator has the right rate of convergence. As usual, the idea is to perform a bias-variance trade off. Although we have an upper bound for the variance term, the bias is not easily accessible. We will use Goldenshluger and Lepski's method, see [22]. The variance bound is given as follows:
\[\sigma_{n}(\kappa)=c_{\sigma}\frac{(\log\log n)^{\kappa+\frac{A+1}{A}}}{(\log n)^{\kappa}}.\]
Fix some \(\kappa_{0}>1/2\). The bias proxy is defined as
\[B_{n}(\kappa)=0\vee\sup_{\kappa^{\prime}\in[\kappa_{0},\kappa]}\left(H_{ \mathcal{K}}(\widehat{\mathcal{M}}_{\kappa},\widehat{\mathcal{M}}_{\kappa^{ \prime}})-\sigma_{n}(\kappa^{\prime})\right).\]
The estimator of \(\kappa\) is now given by
\[\widehat{\kappa}_{n}\in\text{arg min}\left\{B_{n}(\kappa)+\sigma_{n}(\kappa), \;\kappa\in[\kappa_{0},1]\right\},\]
and the estimator of the support of the signal is \(\widehat{\mathcal{M}}_{\widehat{\kappa}_{n}}\). The following theorem states that this estimator is rate adaptive.
**Theorem 6**.: _For any \(\kappa_{0}\in(1/2,1]\), \(\nu\in(0,\nu_{est}]\), \(c(\nu)>0\), \(E>0\), \(S>0\), \(a>0\), \(r_{0}>0\), \(d\leqslant D\), there exists \(c_{\sigma}>0\) such that_
\[\limsup_{n\to+\infty}\sup_{\kappa\in[\kappa_{0},1]}\sup_{\begin{subarray}{c}G\in St_{\mathcal{K}}(a,d,r_{0})\cap\mathcal{L}(\kappa,S,\mathcal{H})\\ \mathbb{Q}\in\mathcal{Q}^{(D)}(\nu,c(\nu),E)\end{subarray}}\frac{(\log n)^{\kappa}}{(\log\log n)^{\kappa+\frac{A+1}{A}}}\mathbb{E}_{(G*\mathbb{Q})^{\otimes n}}[H_{\mathcal{K}}(\mathcal{M}_{G},\widehat{\mathcal{M}}_{\widehat{\kappa}_{n}})]<+\infty.\]
The proof of Theorem 6 is detailed in Section 6.18.
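For concreteness, here is a minimal sketch of the selection rule above, assuming the candidate supports have been discretised into finite point clouds over a grid of values of \(\kappa\) and that \(\sigma_{n}(\kappa)\) has been evaluated for some choice of \(c_{\sigma}\); the toy numbers below are arbitrary.

```python
import numpy as np

def hausdorff(P, Q):
    """Hausdorff distance between two finite point clouds (rows are points)."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def select_kappa(supports, kappas, sigma):
    """Goldenshluger-Lepski selection of kappa: supports[k] estimates the support for kappas[k]
    (kappas sorted increasingly), sigma[k] is the variance proxy sigma_n(kappas[k])."""
    best_k, best_crit = 0, np.inf
    for k in range(len(kappas)):
        # bias proxy: compare the estimate at kappas[k] with the estimates at smaller kappa'
        bias = max([hausdorff(supports[k], supports[j]) - sigma[j] for j in range(k + 1)] + [0.0])
        crit = bias + sigma[k]
        if crit < best_crit:
            best_k, best_crit = k, crit
    return kappas[best_k]

kappas = [0.6, 0.8, 1.0]
sigma = [0.30, 0.20, 0.15]
supports = [np.array([[0.0], [1.0]]), np.array([[0.0], [1.05]]), np.array([[0.0], [1.4]])]
print(select_kappa(supports, kappas, sigma))
```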
## 4 Estimation of the distribution of the signal
In this section, we assume that the support \(\mathcal{M}_{G}\) of \(G\) is a compact subset of \(\mathbb{R}^{D}\). To estimate \(G\), we shall consider the probability density \(\bar{g}\) defined in Section 3.2 and define the probability distribution \(P_{\psi_{A,h_{n}}}\) on \(\mathbb{R}^{D}\) such that, for any Borel set \(\mathcal{O}\) of \(\mathbb{R}^{D}\),
\[P_{\psi_{A,h_{n}}}(\mathcal{O})=\int_{\mathcal{O}}\bar{g}(y)dy.\]
This probability distribution can be considered as a convoluted approximation of \(G\) with kernel \(\psi_{A}\) and smoothing parameter \(h_{n}\). We then estimate \(P_{\psi_{A,h_{n}}}\) using the estimator of \(\bar{g}\) defined in Section 3.2 for \(\kappa=1\), \(\widehat{g}_{n}:=\widehat{g}_{n,1}\). Since \(\widehat{g}_{n}\) can take negative values, we use \(\widehat{g}_{n}^{+}=\max\left\{0,\widehat{g}_{n}\right\}\) and renormalize it to get a probability distribution. We shall also estimate \(P_{\psi_{A,h_{n}}}\) with a probability distribution having support on a (small) enlargement of the estimated support \(\widehat{\mathcal{M}}\) restricted to the closed Euclidean ball \(\bar{B}(0,R_{n})\), for some radius \(R_{n}\) that grows to infinity with \(n\). Thus we fix some \(\eta>0\) and define \(\widehat{P}_{n,\eta}\) such that, for any Borel set \(\mathcal{O}\) of \(\mathbb{R}^{D}\),
\[\widehat{P}_{n,\eta}(\mathcal{O})=\frac{1}{\int_{(\widehat{\mathcal{M}}\cap \bar{B}(0,R_{n}))_{\eta}}\widehat{g}_{n}^{+}(y)dy}\int_{\mathcal{O}\cap( \widehat{\mathcal{M}}\cap\bar{B}(0,R_{n}))_{\eta}}\widehat{g}_{n}^{+}(y)dy=c _{n}\int_{\mathcal{O}\cap(\widehat{\mathcal{M}}\cap\bar{B}(0,R_{n}))_{\eta}} \widehat{g}_{n}^{+}(y)dy.\]
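A minimal sketch of this renormalisation step, in dimension one on a regular grid, assuming that \(\widehat{g}_{n}\) has already been evaluated on the grid and that the estimated support (already intersected with \(\bar{B}(0,R_{n})\)) is given as a non-empty boolean mask; these inputs are assumptions of the sketch, not part of the construction above.

```python
import numpy as np

def distribution_estimate(g_hat_vals, y_grid, support_mask, eta):
    """Density of P_hat_{n, eta} on the grid: positive part of g_hat restricted to the
    eta-enlargement of the estimated support, renormalised to total mass one (D = 1 sketch)."""
    dy = y_grid[1] - y_grid[0]
    support_pts = y_grid[support_mask]                                  # assumes a non-empty mask
    dist = np.abs(y_grid[:, None] - support_pts[None, :]).min(axis=1)   # distance to the estimated support
    g_plus = np.maximum(g_hat_vals, 0.0) * (dist <= eta)
    return g_plus / (np.sum(g_plus) * dy)
```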
### Upper bound for the Wasserstein risk
The aim of this subsection is to give an upper bound of the Wasserstein maximum risk for the estimation of \(G\).
**Theorem 7**.: _For all \(\nu\in(0,\nu_{est}]\), \(c(\nu)>0\), \(E>0\), \(S>0\), \(\eta>0\), \(a>0\), \(r_{0}>0\), \(d\leqslant D\), define \(m_{n}\), \(h_{n}\) and \(\lambda_{n}\) as in Theorem 4 for \(\kappa=1\). Assume that \(\lim_{n\to+\infty}R_{n}=+\infty\) and that there exists \(\delta\in(0,\frac{1}{2})\) such that \(R_{n}\leqslant\exp(n^{1/2-\delta})\). Then there exist \(n_{0}\) and \(C>0\) such that for all \(n\geqslant n_{0}\),_
\[\sup_{\begin{subarray}{c}G\in St_{\mathcal{K}}(a,d,r_{0})\cap\mathcal{L}(1,S,\mathcal{H})\\ \mathbb{Q}\in\mathcal{Q}^{(D)}(\nu,c(\nu),E)\end{subarray}}\mathbb{E}_{(G*\mathbb{Q})^{\otimes n}}[W_{2}(G,\widehat{P}_{n,\eta})]\leqslant C\frac{\log\log(n)}{\log(n)}.\]
The proof of Theorem 7 is detailed in Section 6.19. Note that the magnitude of \(\eta\) does not appear to be crucial when looking at the proof, at least in an asymptotic perspective.
**Remark 4**.:
* _The lower bound in Theorem_ 8 _almost matches the upper bound for the maximum risk of our estimator in Theorem_ 7_. Thus our work identifies the main factor in the minimax rate for the estimation of the distribution in Wasserstein loss._
* _Comparison with earlier results in the deconvolution setting_ _[_13_]_ _or_ _[_12_]_ _with known noise is not easy since the classes of signals they consider are quite different from the ones we consider._
### Lower bound for the Wasserstein risk
The aim of this subsection is to establish a lower bound for the minimax Wasserstein risk of order \(p\) for any \(p\geqslant 1\). Again, we can not use previous lower bounds proved in [13] or [12] since they use in the two-points method signals with distributions having too heavy tails.
**Theorem 8**.: _For any \(p\geqslant 1\) and \(\delta>0\), there exist \(S_{1}>0\), \(a_{1}>0\) and \(\mathcal{H}_{1}^{*}\) a set of complex functions satisfying (Adep) such that the set of the restrictions of its elements to \([-\nu,\nu]^{D}\) is closed in \(L_{2}([-\nu,\nu]^{D})\) for any \(\nu>0\), and such that for all \(S\geqslant S_{1}\), \(a\leqslant a_{1}\), \(d\geqslant 1\), \(0<r_{0}<1\), \(E>0\) and \(\nu\in(0,\nu_{est}]\) such that \(c(\nu)>0\), there exists \(C>0\) depending only on \(a\), \(D\), \(S\), \(E\) and \(\nu\), and there exists \(n_{0}\), such that for all \(n\geqslant n_{0}\),_
\[\inf_{\widehat{P}_{n}}\sup_{\begin{subarray}{c}G\in St_{\mathcal{K}}(a,d,r_{0})\cap\mathcal{L}(1,S,\mathcal{H}_{1}^{*})\\ \mathbb{Q}\in\mathcal{Q}^{(D)}(\nu,c(\nu),E)\end{subarray}}\mathbb{E}_{(G*\mathbb{Q})^{\otimes n}}[W_{p}(G,\widehat{P}_{n})]\geqslant C\frac{1}{\log(n)^{1+\delta}},\]
_where the infimum is taken over all possible estimators \(\widehat{P}_{n}\) of \(G\)._
As for Theorem 5, we use Le Cam's two-points method with the same two distributions \(G_{0}(1)\) and \(G_{1}(1)\). The proof essentially consists in showing that there exists a constant \(C>0\) independent of \(\gamma\) such that \(W_{p}(G_{0}(1),G_{1}(1))\geqslant CH_{\mathcal{K}}(M_{0}(\gamma),M_{1}(\gamma))\), that is \(W_{p}(G_{0}(1),G_{1}(1))\geqslant C\gamma\) for a constant \(C>0\). Once such an inequality is established, the lower bound follows from taking \(\gamma\) as for Theorem 5.
The rest of the proof is detailed in Section 6.20.
## 5 Acknowledgments
Jeremie Capitao-Miniconi would like to acknowledge support from the UDOPIA-ANR-20-THIA-0013. Elisabeth Gassiat would like to acknowledge the Institut Universitaire de France and the ANR ASCAI : ANR-21-CE23-0035-02.
## 6 Proofs
### Proof of Proposition 1
Case \(\rho=1\). It is clear that any compactly supported distribution satisfies A(1). Conversely, if \(\mathbb{E}[e^{\langle\lambda,X\rangle}]\leqslant a\exp(b\|\lambda\|)\), then for any \(\mu>0\), we get, for any \(b^{\prime}>b\), if we denote \((e_{j})_{1\leqslant j\leqslant D}\) the canonical basis of \(\mathbb{R}^{D}\),
\[\mathbb{P}(\|X\|\geqslant Db^{\prime}) \leqslant\sum_{j=1}^{D}\mathbb{P}(|X_{j}|\geqslant b^{\prime})\] \[=\sum_{j=1}^{D}\left\{\mathbb{P}(X_{j}\geqslant b^{\prime})+ \mathbb{P}(X_{j}\leqslant-b^{\prime})\right\}\] \[=\sum_{j=1}^{D}\left\{\mathbb{P}(\langle\mu e_{j},X\rangle \geqslant\mu b^{\prime})+\mathbb{P}(-\langle\mu e_{j},X\rangle\geqslant\mu b ^{\prime})\right\}\] \[\leqslant\sum_{j=1}^{D}\left\{\frac{\mathbb{E}\left[\exp(\langle \mu e_{j},X\rangle)\right]}{\exp(b^{\prime}\mu)}+\frac{\mathbb{E}\left[\exp(- \langle\mu e_{j},X\rangle)\right]}{\exp(b^{\prime}\mu)}\right\}\quad\text{by Markov inequality}\] \[\leqslant 2D\frac{a\exp(b\mu)}{\exp(b^{\prime}\mu)}\underset{\mu \rightarrow+\infty}{\longrightarrow}0,\]
and hence \(\|X\|\leqslant Db\) almost surely.
Case \(\rho>1\). Assume that for any \(\lambda\in\mathbb{R}^{D}\), \(\mathbb{E}[e^{\langle\lambda,X\rangle}]\leqslant a\exp(b\|\lambda\|^{\rho})\) for some \(a,b>0\). Then by using the same directional method as for \(\rho=1\), we get that for any \(\mu,t\geqslant 0\),
\[\mathbb{P}(\|X\|\geqslant t) \leqslant 2Da\exp(b\mu^{\rho}-\mu t)\] \[=2Da\exp\left(-\left(\frac{1}{b\rho}\right)^{\frac{1}{\rho-1}}\left(1-\frac{1}{\rho}\right)t^{\frac{\rho}{\rho-1}}\right)\quad\text{by taking }\mu=\left(\frac{t}{b\rho}\right)^{\frac{1}{\rho-1}}.\]
Observe that since \(\rho>1\), \(1-\frac{1}{\rho}>0\), which gives the result.
Now, assume that for any \(t\geqslant 0\), \(\mathbb{P}(\|X\|\geqslant t)\leqslant c\exp(-dt^{\rho/(\rho-1)})\) for some \(c,d>0\), then by the Cauchy-Schwarz inequality, for any \(\lambda\in\mathbb{R}^{D}\),
\[\mathbb{E}[e^{\langle\lambda,X\rangle}]\leqslant\mathbb{E}[e^{\|\lambda\| \|X\|}].\]
Then, using that for any nonnegative random variable \(Y\), \(\mathbb{E}[Y]=\int_{t\geqslant 0}\mathbb{P}(Y\geqslant t)dt\),
\[\mathbb{E}[e^{\langle\lambda,X\rangle}] \leqslant 1+\int_{t\geqslant 1}\mathbb{P}(e^{\|\lambda\|\|X\|}\geqslant t)dt\] \[\leqslant 1+\int_{s\geqslant 0}\mathbb{P}(\|X\|\geqslant s)\|\lambda\|\exp(\|\lambda\|s)ds\quad\text{with }t=e^{\|\lambda\|s}\] \[\leqslant 1+c\|\lambda\|\int_{s\geqslant 0}\exp(-ds^{\frac{\rho}{\rho-1}}+\|\lambda\|s)ds\] \[\leqslant 1+c\|\lambda\|^{\rho}\int_{s^{\prime}\geqslant 0}\exp\left(\|\lambda\|^{\rho}\left(-d{s^{\prime}}^{\frac{\rho}{\rho-1}}+s^{\prime}\right)\right)ds^{\prime}\quad\text{with }s=s^{\prime}\|\lambda\|^{\rho-1}.\]
Note that \(-d{s^{\prime}}^{\frac{\rho}{\rho-1}}+s^{\prime}\leqslant\frac{1}{\rho}\left(\frac{\rho-1}{\rho d}\right)^{\rho-1}\) for any \(s^{\prime}\geqslant 0\), and \(-d{s^{\prime}}^{\frac{\rho}{\rho-1}}+s^{\prime}\leqslant-s^{\prime}\) when \(s^{\prime}\geqslant\left(\frac{2}{d}\right)^{\rho-1}\). In particular,
\[\mathbb{E}[e^{\langle\lambda,X\rangle}] \leqslant 1+c\|\lambda\|^{\rho}\int_{s^{\prime}=0}^{(\frac{2}{d})^{\rho-1}}\exp\left(\|\lambda\|^{\rho}\left(-d{s^{\prime}}^{\frac{\rho}{\rho-1}}+s^{\prime}\right)\right)ds^{\prime}+c\|\lambda\|^{\rho}\int_{s^{\prime}\geqslant(\frac{2}{d})^{\rho-1}}\exp\left(\|\lambda\|^{\rho}\left(-d{s^{\prime}}^{\frac{\rho}{\rho-1}}+s^{\prime}\right)\right)ds^{\prime}\] \[\leqslant 1+c\|\lambda\|^{\rho}\left(\frac{2}{d}\right)^{\rho-1}\exp\left(\frac{\|\lambda\|^{\rho}}{\rho}\left(\frac{\rho-1}{\rho d}\right)^{\rho-1}\right)+c\exp\left(-\|\lambda\|^{\rho}\left(\frac{2}{d}\right)^{\rho-1}\right),\]
which proves that \(A(\rho)\) holds.
### Proof of Proposition 2
First, note that if \(U\) and \(V\) are independent random variables satisfying \(\mathrm{A}(\rho)\) then \(U+V\) satisfies also \(\mathrm{A}(\rho)\) with the same constant \(\rho\).
1. If \(U\) and \(V\) are independent, then for all \((z_{1},z_{2})\in\mathbb{C}^{d_{1}}\times\mathbb{C}^{d_{2}}\), \[\Phi_{U+V}(z_{1},z_{2})=\Phi_{U}(z_{1},z_{2})\Phi_{V}(z_{1},z_{2}).\] (9) Assume first that \(U\) and \(V\) satisfy (Adep). Suppose that there exists \(z_{0}\in\mathbb{C}^{d_{1}}\) such that for all \(z\in\mathbb{C}^{d_{2}}\), \(\Phi_{U+V}(z_{0},z)=0\). Then for all \(z\in\mathbb{C}^{d_{2}}\), \[\Phi_{U}(z_{0},z)\Phi_{V}(z_{0},z)=0.\] If \(Z_{U}^{(1)}(z_{0})=\{z\in\mathbb{C}^{d_{2}}\,|\,\Phi_{U}(z_{0},z)=0\}\) and \(Z_{V}^{(1)}(z_{0})=\{z\in\mathbb{C}^{d_{2}}\,|\,\Phi_{V}(z_{0},z)=0\}\), then \(Z_{U}^{(1)}(z_{0})\cup Z_{V}^{(1)}(z_{0})=\mathbb{C}^{d_{2}}\). Since \(\Phi_{U}(z_{0},\cdot)\) and \(\Phi_{V}(z_{0},\cdot)\) are not the null functions, Corollary 10 of [23], p. 9, implies that \(Z_{U}^{(1)}(z_{0})\cup Z_{V}^{(1)}(z_{0})\) has zero \(2d_{2}\)-Lebesgue measure, which contradicts the fact that \(Z_{U}^{(1)}(z_{0})\cup Z_{V}^{(1)}(z_{0})=\mathbb{C}^{d_{2}}\). If instead we suppose that there exists \(z_{0}\in\mathbb{C}^{d_{2}}\) such that for all \(z\in\mathbb{C}^{d_{1}}\), \(\Phi_{U+V}(z,z_{0})=0\), analogous arguments lead to a contradiction. Thus \(U+V\) satisfies (Adep). Assume now that \(U+V\) satisfies (Adep). Then (9) implies that \(\Phi_{U}(z_{1},\cdot)\), \(\Phi_{V}(z_{1},\cdot)\), \(\Phi_{U}(\cdot,z_{2})\), \(\Phi_{V}(\cdot,z_{2})\) cannot be the null function, so that \(U\) and \(V\) both satisfy (Adep).
2. Assume that \(U\) satisfies \(A(\rho)\) with constants \(a\) and \(b\). Then, for any \(\lambda\in\mathbb{R}^{D}\), \[\mathbb{E}\left[\exp\left(\lambda^{\top}V\right)\right] =\mathbb{E}\left[\exp\left(\lambda^{\top}\begin{pmatrix}A&0\\ 0&B\end{pmatrix}\begin{pmatrix}U^{(1)}\\ U^{(2)}\end{pmatrix}+\lambda^{\top}\begin{pmatrix}m_{1}\\ m_{2}\end{pmatrix}\right)\right]\] \[\leqslant a\exp\left(b\left\|\lambda^{\top}\begin{pmatrix}A&0\\ 0&B\end{pmatrix}\right\|_{2}^{\rho}+\|\lambda\|_{2}\left\|\begin{pmatrix}m_{1}\\ m_{2}\end{pmatrix}\right\|_{2}\right)\] \[\leqslant a\exp\left(b\left\|\begin{pmatrix}A&0\\ 0&B\end{pmatrix}\right\|_{op}^{\rho}\|\lambda\|_{2}^{\rho}+\|\lambda\|_{2}\left\| \begin{pmatrix}m_{1}\\ m_{2}\end{pmatrix}\right\|_{2}\right)\!.\]
Since \(\rho\geqslant 1\), \(\|\lambda\|_{2}\leqslant\|\lambda\|_{2}^{\rho}\) for \(\|\lambda\|_{2}\geqslant 1\), so that if \(U\) satisfies \(A(\rho)\) with constants \(a\) and \(b\), then \(V\) satisfies \(A(\rho)\) with constants \(a\exp\left(\left\|\left(\begin{matrix}m_{1}\\ m_{2}\end{matrix}\right)\right\|_{2}\right)\) and \(b\left\|\left(\begin{matrix}A&0\\ 0&B\end{matrix}\right)\right\|_{op}^{\rho}+\left\|\left(\begin{matrix}m_{1}\\ m_{2}\end{matrix}\right)\right\|_{2}\). The converse follows from applying the direct proof to \(V\) with \(-\left(\begin{matrix}A^{-1}&0\\ 0&B^{-1}\end{matrix}\right)\left(\begin{matrix}m_{1}\\ m_{2}\end{matrix}\right)\) and \(\left(\begin{matrix}A^{-1}&0\\ 0&B^{-1}\end{matrix}\right)\). Now, for all \((z_{1},z_{2})\in\mathbb{C}^{d_{1}}\times\mathbb{C}^{d_{2}}\), \[\Phi_{V}(z_{1},z_{2})=\exp\left(iz_{1}^{\top}m_{1}+iz_{2}^{\top}m_{2}\right)\Phi_{U}(A^{\top}z_{1},B^{\top}z_{2})\] and \[\Phi_{U}(z_{1},z_{2})=\exp\left(-iz_{1}^{\top}A^{-1}m_{1}-iz_{2}^{\top}B^{-1}m_{2}\right)\Phi_{V}((A^{-1})^{\top}z_{1},(B^{-1})^{\top}z_{2}),\] so that \(U\) verifies (Adep) if and only if \(V\) verifies (Adep).
3. Since \(U^{(1)}\) and \(U^{(2)}\) are independent, for all \(z_{1}\in\mathbb{C}^{d_{1}}\) and \(z_{2}\in\mathbb{C}^{d_{2}}\), \(\Phi_{U}(z_{1},z_{2})=\Phi_{U^{(1)}}(z_{1})\Phi_{U^{(2)}}(z_{2})\). Thus if \(U^{(1)}\) and \(U^{(2)}\) are deterministic or Gaussian random variables, \(U\) satisfies (Adep). Conversely, if \(U\) satisfies (Adep), then neither \(\Phi_{U^{(1)}}\) nor \(\Phi_{U^{(2)}}\) have any zero. By Hadamard's Theorem together with \(\mathrm{A}(\rho)\), reasoning variable by variable we obtain that \(\Phi_{U^{(1)}}=\exp(P_{1})\) and \(\Phi_{U^{(2)}}=\exp(P_{2})\) for some polynomials \(P_{1}\) and \(P_{2}\) with degree bounded by \(\rho\) in each variable. Now, for \(j=1,2\), for any \(\lambda\in\mathbb{R}^{d_{j}}\), \(t\mapsto\Phi_{U^{(j)}}(t\lambda)\) is the characteristic of the random variable \(\langle\lambda,X^{(j)}\rangle\) and writes \(\exp(P_{j}(t\lambda))\). But by Marcinkiewicz's theorem 2bis in [24], this implies that \(t\mapsto P_{j}(t\lambda)\) is of degree at most two. Since this is true for any \(\lambda\), we get that \(P_{1}\) and \(P_{2}\) are polynomials with total degree at most two. Thus the polynomials \(P_{1}\) and \(P_{2}\) are of the form \(i\langle A,X\rangle-\frac{1}{2}X^{\top}BX\) for some symmetric matrix \(B\) since characteristic functions are equal to \(1\) at zero and \(\Phi_{U}(-z)=\overline{\Phi_{U}(z)}\) for all \(z\in\mathbb{R}^{d}\). Therefore the distribution of \(U_{1}\) (resp. \(U_{2}\)) is a (possibly singular) Gaussian distribution.
### Proof of Theorem 2
Consider a random variable \(X\) satisfying \(\mathrm{A}(\rho)\). Theorem 2 is a direct consequence of the following Lemma. Indeed, for any \(z_{0}\in\mathbb{C}^{d_{1}}\) and \(z\in\mathbb{C}^{d_{2}}\),
\[\mathbb{E}\left[\exp\left(iz_{0}^{\top}X^{(1)}+iz^{\top}X^{(2)}\right)\right] =\mathbb{E}\left[\mathbb{E}\left[\exp\left(iz_{0}^{\top}X^{(1)}\right)\,|\,X^ {(2)}\right]\exp\left(iz^{\top}X^{(2)}\right)\right].\]
Usual arguments for multivariate analytic functions show that \(z\mapsto\mathbb{E}[\exp(iz_{0}^{\top}X^{(1)}+iz^{\top}X^{(2)})]\) is the null function if and only if \(\mathbb{E}[\exp(iz_{0}^{\top}X^{(1)})\,|\,X^{(2)}]\) is zero \(\mathbb{P}_{X^{(2)}}\)-a.s. Likewise, for any \(z_{0}\in\mathbb{C}^{d_{2}}\), \(z\mapsto\mathbb{E}[\exp(iz^{\top}X^{(1)}+iz_{0}^{\top}X^{(2)})]\) is the null function if and only if \(\mathbb{E}[\exp(iz_{0}\top X^{(2)})\,|\,X^{(1)}]\) is zero \(\mathbb{P}_{X^{(1)}}\)-a.s.
**Lemma 10**.: _Assume (H1) and (H2). Then, for all \(z\in\mathbb{C}^{d_{1}}\), \(\mathbb{E}[\exp\left(iz^{\top}X^{(1)}\right)|X^{(2)}]\) is not \(\mathbb{P}_{X^{(2)}}\)-a.s. the null random variable and for all \(z\in\mathbb{C}^{d_{2}}\), \(\mathbb{E}[\exp\left(iz^{\top}X^{(2)}\right)|X^{(1)}]\) is not \(\mathbb{P}_{X^{(1)}}\)-a.s. the null random variable_
Proof of Lemma 10. To begin with, by Proposition 2, we may assume without loss of generality that \(0\in B_{\Delta}\) in (H1) and (H2) (up to translation of \(X\)).
Let \(z\in\mathbb{C}^{d_{1}}\) be such that \(\mathbb{E}[\exp\left(iz^{\top}X^{(1)}\right)|X^{(2)}]\) is \(\mathbb{P}_{X^{(2)}}\)-a.s. the null random variable. Then for any \(\Delta>0\), if we denote \(A_{\Delta}\) a set given by (H1), \(\mathbb{E}[\exp\left(iz^{\top}X^{(1)}\right)|X^{(2)}]\mathbb{1}_{X^{(2)}\in A_{\Delta}}=0\) \(\mathbb{P}_{X^{(2)}}\)-a.s., and taking the real part of this equation shows that
\[\mathbb{E}[\cos(\mathrm{Re}(z)^{\top}X^{(1)})\exp\left(-\mathrm{Im}(z)^{\top}X^{(1)}\right)|X^{(2)}]\mathbb{1}_{X^{(2)}\in A_{\Delta}}=0\quad\mathbb{P}_{X^{(2)}}\text{-a.s.} \tag{10}\]
Using (H1), we can fix \(\Delta>0\) small enough such that if \(x\in B_{\Delta}\), then \(\cos(\operatorname{Re}(z)^{\top}x)>0\). But for such \(\Delta\), Equation (10) cannot hold since \(\mathbb{P}(X^{(1)}\in B_{\Delta}\,|\,X^{(2)}\in A_{\Delta})=1\). Thus \(\mathbb{E}[\exp{(iz^{\top}X^{(1)})}|X^{(2)}]\) is not \(\mathbb{P}_{X^{(2)}}\)-a.s. the null random variable.
The proof of the other part of Lemma 10 is analogous using (H2).
### Proof of Proposition 4
Let \(\mathcal{M}\) be a compact subset of \(\mathbb{R}^{D}\). Let us first prove that the function \(u\longmapsto\operatorname{Diam}(\{u\}\times\mathbb{R}^{d_{2}}\cap\mathcal{M})\) is upper semi-continuous.
Let \(u\in\mathbb{R}^{d_{1}}\). Since \(\mathcal{M}\) is compact, there exist sequences \(u_{n}\to u\) and \(x_{n}\), \(y_{n}\) in \((\{u_{n}\}\times\mathbb{R}^{d_{2}}\cap\mathcal{M})\) such that \(\|x_{n}-y_{n}\|_{2}=\operatorname{Diam}(\{u_{n}\}\times\mathbb{R}^{d_{2}}\cap\mathcal{M})\) and \(\lim_{n\to+\infty}\|x_{n}-y_{n}\|_{2}=\limsup_{v\to u}\operatorname{Diam}(\{v\}\times\mathbb{R}^{d_{2}}\cap\mathcal{M})\). Moreover, we may assume that there exist \(x\), \(y\) in \((\{u\}\times\mathbb{R}^{d_{2}}\cap\mathcal{M})\) such that \(x_{n}\to x\) and \(y_{n}\to y\). Taking the limit along those sequences shows that \(\operatorname{Diam}(\{u\}\times\mathbb{R}^{d_{2}}\cap\mathcal{M})\geqslant\|x-y\|=\limsup_{v\to u}\operatorname{Diam}(\{v\}\times\mathbb{R}^{d_{2}}\cap\mathcal{M})\), proving the claimed upper semi-continuity.
Now, since \(\mathcal{M}\) is compact, there exists \(R>0\) such that \(\mathcal{M}\subset\bar{B}(0,R)\). If moreover \(\mathcal{M}\in\mathcal{B}_{1}\), there exists \(x_{1}\in\mathbb{R}^{d_{1}}\) such that \(\operatorname{Diam}(\{x_{1}\}\times\mathbb{R}^{d_{2}}\cap\mathcal{M})=0\). Using the upper semi-continuity shows that \(\mathcal{M}\in\cap_{n\geqslant 1}\mathcal{A}_{2}(1/n,R)\). Likewise, if \(\mathcal{M}\in\mathcal{B}_{2}\), there exists \(x_{2}\in\mathbb{R}^{d_{2}}\) such that \(\operatorname{Diam}(\mathbb{R}^{d_{1}}\times\{x_{2}\}\cap\mathcal{M})=0\) and \(\mathcal{M}\in\cap_{n\geqslant 1}\mathcal{A}_{1}(1/n,R)\).
The end of the proof follows from Proposition 3 and the fact that any random variable with compact support satisfies A(1).
### Proof of Proposition 5
First, let us show that the set \(\mathcal{A}:=(\cap_{\Delta>0}\cup_{\varepsilon>0}\mathcal{A}_{1}(\Delta, \varepsilon))\bigcap(\cap_{\Delta>0}\cup_{\varepsilon>0}\mathcal{A}_{2}( \Delta,\varepsilon))\) is dense.
Let \(\delta>0\) and let \(\mathcal{M}\) be a closed subset of \(\mathbb{R}^{D}\), we show that there exists a closed \(\mathcal{M}^{\prime}\) in \(\cap_{\Delta>0}(\mathcal{A}_{1}(\Delta,\delta)\cap\mathcal{A}_{2}(\Delta,\delta))\) (and thus in \(\mathcal{A}\)) such that \(d_{H}(\mathcal{M},\mathcal{M}^{\prime})\leqslant 8\delta\).
Let \(z=(z_{1},z_{2})\in\mathcal{M}\) with \(z_{1}=\pi^{(1:d_{1})}(z)\) and \(z_{2}=\pi^{(d_{1}+1:D)}(z)\). The set \(\mathcal{M}^{\prime}\) is defined by cutting the space in half through \(z\), orthogonally to the space of the first \(d_{1}\) coordinates, and spreading the two halves apart, connecting them by a single segment to ensure the result is in \(\mathcal{A}_{2}(\Delta,\delta)\); we then cut and connect again orthogonally to the \((d_{1}+1)\)-th axis to be in \(\mathcal{A}_{1}(\Delta,\delta)\).
Formally, define \(\mathcal{M}^{\prime}\) as the union of:
* \(\{y\,|\,y=(y_{1},y_{2})\in\mathcal{M},\pi^{(1)}(y_{1})\leqslant\pi^{(1)}(z_{1})\) and \(\pi^{(1)}(y_{2})\leqslant\pi^{(1)}(z_{2})\}\),
* \(\{(y_{1}+4\delta(1,0,\ldots,0),y_{2})\,|\,y=(y_{1},y_{2})\in\mathcal{M},\pi^{ (1)}(y_{1})\geqslant\pi^{(1)}(z_{1})\) and \(\pi^{(1)}(y_{2})\leqslant\pi^{(1)}(z_{2})\}\),
* \(\{(y_{1},y_{2}+4\delta(1,0,\ldots,0))\,|\,y=(y_{1},y_{2})\in\mathcal{M},\pi^{ (1)}(y_{1})\leqslant\pi^{(1)}(z_{1})\) and \(\pi^{(1)}(y_{2})\geqslant\pi^{(1)}(z_{2})\}\),
* \(\{(y_{1}+4\delta(1,0,\ldots,0),y_{2}+4\delta(1,0,\ldots,0))\,|\,y=(y_{1},y_{2} )\in\mathcal{M},\pi^{(1)}(y_{1})\geqslant\pi^{(1)}(z_{1})\) and \(\pi^{(1)}(y_{2})\geqslant\pi^{(1)}(z_{2})\}\),
* the segments between \(z\) and \((z_{1}+4\delta(1,0,\ldots,0),z_{2})\) and between \(z\) and \((z_{1},z_{2}+4\delta(1,0,\ldots,0))\).
An illustration of this construction is given in Figure 3.
By construction, the Hausdorff distance between this set \(\mathcal{M}^{\prime}\) and \(\mathcal{M}\) is smaller than \(8\delta\) (the points in the first four sets have moved at most \(8\delta\) and the segments are at distance at most \(8\delta\) of \(z\)). \(\mathcal{M}^{\prime}\) is also closed, and taking \(x=(z_{1}+2\delta(1,0,\ldots,0),z_{2})\) and \(x_{2}=(z_{1},z_{2}+2\delta(1,0,\ldots,0))\) in the definition of \(\mathcal{A}_{1}(\Delta,\delta)\) and \(\mathcal{A}_{2}(\Delta,\delta)\) is enough to check that \(\mathcal{M}^{\prime}\in\mathcal{A}_{1}(\Delta,\delta)\cap\mathcal{A}_{2}(\Delta,\delta)\) for any \(\Delta>0\).
To show that the complement of \(\mathcal{A}\) is dense, let \(\mathcal{M}\) be a closed subset of \(\mathbb{R}^{D}\) and \(\eta>0\), and let \(\mathcal{M}^{\prime}=\{x+y\,|\,x\in\mathcal{M},y\in[-\eta,\eta]^{D}\}\). Then \(H(\mathcal{M},\mathcal{M}^{\prime})\leqslant\eta\sqrt{D}\) by construction, and for any \(\Delta\leqslant 2\eta\) and \(\varepsilon>0\), \(\mathcal{M}^{\prime}\notin\mathcal{A}_{1}(\Delta,\varepsilon)\), and thus \(\mathcal{M}^{\prime}\in\mathcal{A}^{\complement}\).
Note that if \(\mathcal{M}\) is the support of a random variable \(X\), then \(\mathcal{M}^{\prime}\) is the support of \(X+Y\), where \(Y\) is a uniform random variable on \([-\eta,\eta]^{D}\) that is independent of \(X\). In that case, by Proposition 2 (i), \(X+Y\) is a small perturbation of \(X\) that does not satisfy (Adep).
### Proof of Theorem 3
Let \(\mathcal{M}\) be a compact set of \(\mathbb{R}^{D}\). Since \(\varepsilon\) and \(G\) are independent, and thus \(f^{\varepsilon}\) and \(G(\mathcal{M})\) are independent, writing \(\mu_{G}\) the distribution of \(G(\mathcal{M})\):
\[\mathbb{P}(F(\mathcal{M})\in\mathcal{B}_{1}\cap\mathcal{B}_{2})=\int\mathbb{P} \left(f^{\varepsilon}(\frac{g}{\delta})\in\mathcal{B}_{1}\cap\mathcal{B}_{2} \right)d\mu_{G}(g)=1,\]
provided that for any compact set \(\mathcal{M}^{\prime}\subset\mathbb{R}^{D}\), \(f^{\varepsilon}(\mathcal{M}^{\prime})\in\mathcal{B}_{1}\cap\mathcal{B}_{2}\) almost surely.
Thus, it suffices to show that for any compact set \(\mathcal{M}\subset\mathbb{R}^{D}\), almost surely, \(f^{\varepsilon}(\mathcal{M})\) is in the set \(\mathcal{B}_{1}\) from Proposition 4. The proof for \(\mathcal{B}_{2}\) is identical.
We will show that \(\operatorname{Card}(\operatorname{arg}\,\max_{z\in f^{\varepsilon}(\mathcal{M} )}\pi^{(1)}(z))=1\), where \(\pi^{(1)}(z)\) is the first coordinate of \(z\). First, since \(\mathcal{M}\) is compact and \(f^{\varepsilon}\) is continuous, \(f^{\varepsilon}(\mathcal{M})\) is compact, therefore the supremum of \(\pi^{(1)}\) is reached at least at one point.
**Lemma 11**.: _The two following properties hold almost surely._
1. _Let_ \(F^{\prime}_{I},F^{\prime}_{J}\in\mathcal{P}^{\varepsilon}\) _be two different simplices, then at least one of the two following points holds:_ * \(\sup_{x\in f^{\varepsilon}(\mathcal{M})\cap\text{relint}(F^{\prime}_{I})}\pi^{(1)}(x)\neq\sup_{x\in f^{\varepsilon}(\mathcal{M})\cap\text{relint}(F^{\prime}_{J})}\pi^{(1)}(x)\)__ * \(\pi^{(1)}\) _does not reach its maximum on_ \(f^{\varepsilon}(\mathcal{M})\cap\text{relint}(F^{\prime}_{I})\) _or does not reach its maximum on_ \(f^{\varepsilon}(\mathcal{M})\cap\text{relint}(F^{\prime}_{J})\)_._
2. _Let_ \(F^{\prime}_{I}\in\mathcal{P}^{\varepsilon}\)_, then the supremum of_ \(\pi^{(1)}\) _on_ \(f^{\varepsilon}(\mathcal{M})\cap\text{relint}(F^{\prime}_{I})\) _is reached at at most one point of_ \(\text{relint}(F^{\prime}_{I})\)_._
A consequence of this lemma is that almost surely, the maximizer of \(\pi^{(1)}\) on \(f^{\varepsilon}(\mathcal{M})\) is unique, as all maximizers of \(\pi^{(1)}\) on \(f^{\varepsilon}(\mathcal{M})\) belong to the relative interior of one simplex of \(\mathcal{P}^{\varepsilon}\), which shows that \(f^{\varepsilon}(\mathcal{M})\) is almost surely in \(\mathcal{B}_{1}\).
Proof of Lemma 11.: The following functions will be of use in the proof. For any finite \(J\subset\mathbb{N}\) such that \(F^{\prime}_{J}=\{x_{i}+\varepsilon_{i}\}_{i\in J}\in\mathcal{P}^{\varepsilon}\), for any \(j\in J\) and \(\alpha\in(0,1]\), let
\[u_{\alpha,J}:e\in\mathbb{R}\longmapsto\sup\Bigg{\{}\alpha(\pi^{(1)}(x_{j})+e )+\sum_{k\in J\setminus\{j\}}\alpha_{k}\pi^{(1)}(x_{k}+\varepsilon_{k}),\, \text{where}\]
\[z=\alpha x_{j}+\sum_{k\in J\setminus\{j\}}\alpha_{k}x_{k}\in\mathcal{M},\,\, \alpha_{k}\in(0,1]\text{ and }\alpha+\sum_{k}\alpha_{k}=1\Bigg{\}}.\]
In other words, \(u_{\alpha J}\) is the supremum of \(\pi^{(1)}\) on the slice of \(f^{\varepsilon}(\mathcal{M})\cap\text{relint}(F^{\prime}_{J})\) that gives weight \(\alpha\) to the vertex \((x_{j}+\varepsilon_{j})\). To simplify the notations, let \(w_{k}:z\longmapsto\alpha_{k}\) be the "weight" functions. It is straightforward to check that
1. the function \(u_{\alpha,J}\) is linear with slope \(\alpha\),
2. \(\sup_{x\in f^{\varepsilon}(\mathcal{M})\cap\operatorname{relint}(F^{\prime}_{J})} \pi^{(1)}(x)=\sup_{\alpha\in(0,1]}u_{\alpha,J}(\pi^{(1)}(\varepsilon_{j}))\),
3. the function \(h:\pi^{(1)}(\varepsilon_{j})\longmapsto\sup_{x\in f^{\varepsilon}(\mathcal{M} )\cap\operatorname{relint}(F^{\prime}_{J})}\pi^{(1)}(x)\) (all coordinates of all \(\varepsilon_{k}\) other than \(\pi^{(1)}(\varepsilon_{j})\) being fixed) is convex,
4. if the supremum of \(\pi^{(1)}\) on the closure of \(f^{\varepsilon}(\mathcal{M})\cap\operatorname{relint}(F^{\prime}_{J})\) is reached at some point \(z\in F^{\prime}_{J}\) when \(\pi^{(1)}(\varepsilon_{j})=e\), then \(w_{j}(z)\) is a sub-gradient of \(h\) at \(e\),
5. since the number of points where the sub-gradient of a convex function on \(\mathbb{R}\) is not unique is at most countable, almost surely (whether all coordinates of all \(\varepsilon_{k}\) other than \(\pi^{(1)}(\varepsilon_{j})\) are fixed or not), \(h\) has a unique sub-gradient at \(\pi^{(1)}(\varepsilon_{j})\).
Let us now prove the first point of the lemma. Let \(F^{\prime}_{I}=\{x_{i}+\varepsilon_{i}\}_{i\in I}\) and \(F^{\prime}_{J}=\{x_{i}+\varepsilon_{i}\}_{i\in J}\) be two different simplices of \(\mathcal{P}^{\varepsilon}\), and let \(j\in J\setminus I\) (by exchanging the two simplices, we may assume without loss of generality that \(J\) is not a subset of \(I\)).
Consider the following, conditionally to \((\varepsilon_{n})_{n\neq j}\) and \(\pi^{(2:D)}(\varepsilon_{j})\). Assume that \(h(\pi^{(1)}(\varepsilon_{j}))=\sup_{x\in f^{\varepsilon}(\mathcal{M})\cap \operatorname{relint}(F^{\prime}_{I})}\pi^{(1)}(x)\) (otherwise we are in the first case of the first point of the lemma). We may assume without loss of generality (by point 5 above) that the sub-gradient of \(h\) at \(\pi^{(1)}(\varepsilon_{j})\) is unique. Two cases are possible:
* the sub-gradient of \(h\) at \(\pi^{(1)}(\varepsilon_{j})\) is \(0\). Then \(\pi^{(1)}\) does not reach its maximum on \(f^{\varepsilon}(\mathcal{M})\cap\operatorname{relint}(F^{\prime}_{J})\), since if \(z\) is a maximizer of \(\pi^{(1)}\), then \(w_{j}(z)=0\) by point 4,
* the sub-gradient of \(h\) at \(\pi^{(1)}(\varepsilon_{j})\) is positive, so there exists a single point \(e\) such that \(h(e)=\sup_{x\in f^{\varepsilon}(\mathcal{M})\cap\operatorname{relint}(F^{ \prime}_{I})}\pi^{(1)}(x)\). Since \(\pi^{(1)}(\varepsilon_{j})\) is uniform on \([-r,r]\) by construction, we almost surely have \(\pi^{(1)}(\varepsilon_{j})\neq e\), and thus this second case almost surely never happens.
For the second point of the lemma, by points 4 and 5, if the set of maximizers of \(\pi^{(1)}\) on \(f^{\varepsilon}(\mathcal{M})\cap\operatorname{relint}(F^{\prime}_{J})\) is a non-empty set \(\mathcal{Z}\), then for any \(j\in J\), almost surely, \(w_{j}\) is constant on \(\mathcal{Z}\). Since every point \(z\in F^{\prime}_{J}\) is characterized by the vector \((w_{j}(z))_{j\in J}\), this shows that \(\mathcal{Z}\) contains a single point, which concludes the proof.
### Proof of Proposition 6
For any \(\nu>0\) and \(h\in L^{2}([-\nu,\nu]^{D})\) (resp. \(L^{\infty}([-\nu,\nu]^{D})\)), write \(\|h\|_{2,\nu}\) (resp. \(\|h\|_{\infty,\nu}\)) its \(L^{2}\) (resp. \(L^{\infty}\)) norm.
Let \(\rho_{0}\in[1,2)\). Let us start with some preliminary results.
From [9], Section 7.1, for all \(\nu>0\), there exists \(b>0\), \(\eta>0\), \(c_{M}>0\) and \(c_{Z}>0\) such that, writing \(\epsilon(u)=b/\log\log(1/u)\), the following properties hold for any \(\rho^{\prime}\in[1,\rho_{0}]\).
* For all \(\phi\in\Upsilon_{\rho^{\prime},S}\) and for all \(h\in L^{2}([-\nu,\nu]^{D})\) such that \(\phi+h\in\Upsilon_{\rho^{\prime},S}\) and \(\|h\|_{2,\nu}\leqslant\eta\), \[M(\phi+h;\nu|\phi)\geqslant c_{\nu}^{4}\|h\|_{2,\nu}^{2+2\epsilon(\|h\|_{2,\nu} )}.\] (11)
* For all \(n\geqslant 1\), writing \(Z_{n}(t,\phi)=\sqrt{n}(\tilde{\phi}_{n}(t)-\phi(t)\Phi_{\varepsilon^{(1)}}(t_{1})\Phi_{\varepsilon^{(2)}}(t_{2}))\), one has for all \(\phi\in\Upsilon_{\rho^{\prime},S}\) and \(h\in L^{2}([-\nu,\nu]^{D})\) such that \(\phi+h\in\Upsilon_{\rho^{\prime},S}\), \[|M_{n}(\phi+h)-M(\phi+h;\nu_{\text{est}}|\phi)-(M_{n}(\phi)-M(\phi;\nu_{\text{est}}|\phi))|\leqslant c_{M}\frac{\|Z_{n}(\cdot,\phi)\|_{\infty,\nu_{\text{est}}}}{\sqrt{n}}\|h\|_{2,\nu_{\text{est}}}^{1-\epsilon(\|h\|_{2,\nu_{\text{est}}})}.\] (12)
* For all \(x\in[1,n]\), \[\mathbb{P}(\|Z_{n}(\cdot,\Phi_{X})\|_{\infty,\nu_{\text{est}}}\geqslant c_{Z} \sqrt{x})\leqslant e^{-x}.\] (13)
Moreover, from Lemma H.3 of [20], there exists a constant \(c_{T}>0\) such that for all \(\rho^{\prime}\in[1,\rho_{0}]\), \(m\geqslant\rho^{\prime}D\) and \(\phi\in\Upsilon_{\rho^{\prime},S}\),
\[\|\phi-T_{m}\phi\|_{\infty,\nu_{\text{est}}}\leqslant c_{T}(S\nu_{\text{est}})^ {m}m^{-m/\rho^{\prime}+D}.\]
Let \(\rho^{\prime}\in[1,\rho_{0}]\) and assume that \(m\geqslant 2\rho^{\prime}\frac{\log n}{\log\log n}\), then this equation becomes \(\|\phi-T_{m}\phi\|_{\infty,\nu_{\text{est}}}=O(n^{-2+o_{n}(1)})\), where \(o_{n}(1)\) denotes a sequence tending to \(0\) when \(n\) tends to infinity. In particular, there exists \(n_{0}\) such that for all \(n\geqslant n_{0}\),
\[\sup_{\rho^{\prime}\in[1,\rho_{0}]}\sup_{\nu\in(0,\nu_{\text{est}}]}\sup_{m \geqslant 2\rho^{\prime}\frac{\log n}{\log\log n}}\sup_{\phi\in\Upsilon_{\rho^ {\prime},S}}\|\phi-T_{m}\phi\|_{2,\nu}\leqslant\frac{1}{n} \tag{14}\]
and
\[\sup_{\rho^{\prime}\in[1,\rho_{0}]}\sup_{m\geqslant 2\rho^{\prime}\frac{\log n }{\log\log n}}\sup_{\phi\in\Upsilon_{\rho^{\prime},S}}|M_{n}(\phi)-M_{n}(T_{m }\phi)|\leqslant c\|\phi-T_{m}\phi\|_{\infty,\nu_{\text{est}}}\leqslant\frac{1 }{n}. \tag{15}\]
for some \(c>0\) that depends only on \(\nu_{\text{est}}\), \(\rho_{0}\) and \(S\), using that \(\sup_{\phi\in\Upsilon_{\rho_{0},S}}\|\phi\|_{\infty,\nu_{\text{est}}}<+\infty\). Finally, following the proof of equation (25) of Section A.3 of [19], for any \(\nu^{\prime}\geqslant\nu\), \(m\geqslant 1\) and \(\phi\in\mathbb{C}_{m}[X]\)
\[\|\phi\|_{2,\nu^{\prime}}\leqslant m^{D/2}(4\frac{\nu^{\prime}}{\nu})^{m+D/2} \|\phi\|_{2,\nu}. \tag{16}\]
Let us now prove the proposition. Let \(\rho\in[1,\rho_{0}]\) be such that \(\Phi_{X}\in\Upsilon_{\rho,S}\cap\mathcal{H}\). By definition, for any \(m\geqslant 1\) and \(\rho^{\prime}\in[\rho,\rho_{0}]\), \(\widehat{\Phi}_{n,m,\rho^{\prime}}\) is such that \(\widehat{\Phi}_{n,m,\rho^{\prime}}\in\Upsilon_{\rho^{\prime},S}\cap\mathcal{H}\) and
\[M_{n}(T_{m}\widehat{\Phi}_{n,m,\rho^{\prime}}) \leqslant\inf_{\phi\in\Upsilon_{\rho^{\prime},S}\cap\mathcal{H}} M_{n}(T_{m}\phi)+\frac{1}{n}\] \[\leqslant\inf_{\phi\in\Upsilon_{\rho,S}\cap\mathcal{H}}M_{n}(T_{m }\phi)+\frac{1}{n}\] \[\leqslant M_{n}(T_{m}\Phi_{X})+\frac{1}{n}\]
and thus, by (15),
\[\sup_{\rho^{\prime}\in[\rho,\rho_{0}]}\sup_{m\geqslant 2\rho^{\prime}\frac{ \log n}{\log\log n}}M_{n}(\widehat{\Phi}_{n,m,\rho^{\prime}})\leqslant M_{n} (\Phi_{X})+\frac{3}{n}. \tag{17}\]
Therefore, by (12), for any \(\nu\in(0,\nu_{\text{est}}]\), writing \(h_{m,\rho^{\prime}}=\widehat{\Phi}_{n,m,\rho^{\prime}}-\Phi_{X}\),
\[M(\widehat{\Phi}_{n,m,\rho^{\prime}};\nu|\Phi_{X}) \leqslant M(\widehat{\Phi}_{n,m,\rho^{\prime}};\nu_{\text{est}}| \Phi_{X})\] \[\leqslant c_{M}\frac{\|Z_{n}(\cdot,\Phi_{X})\|_{\infty,\nu_{\text{ est}}}}{\sqrt{n}}\|h_{m,\rho^{\prime}}\|_{2,\nu_{\text{est}}}^{1-\epsilon(\|h_{m,\rho^{ \prime}}\|_{2,\nu_{\text{est}}})}+\frac{3}{n}. \tag{18}\]
Let us show that we may apply (11). Combining (17) with Lemma A.1 of [19] shows that for any \(\delta>0\), there exist \(c_{\eta}>0\) and \(n_{0}\) (which do not depend on \(\rho\)) such that for all \(n\geqslant n_{0}\), with probability at least \(1-4e^{-c_{\eta}n}\),
\[\sup_{\rho^{\prime}\in[\rho,\rho_{0}]}\sup_{m\geqslant 2\rho^{\prime}\frac{\log n }{\log\log n}}M(\widehat{\Phi}_{n,m,\rho^{\prime}};\nu_{\text{est}}|\Phi_{X}) \leqslant\delta.\]
In addition, since \(\Upsilon_{\rho_{0},S}\cap\mathcal{H}\) is compact in \(L^{2}([-\nu_{\text{est}},\nu_{\text{est}}]^{D})\), \(\phi\mapsto M(\phi;\nu_{\text{est}}|\Phi_{X})\) is continuous on \(L^{2}([-\nu_{\text{est}},\nu_{\text{est}}]^{D})\), and \(M(\phi;\nu_{\text{est}}|\Phi_{X})=0\) implies \(\phi=\Phi_{X}\) for all \(\phi\in\mathcal{H}\cap\Upsilon_{\rho_{0},S}\) by Theorem 1, there exists \(\delta>0\) such that
\[\inf_{\phi\in\Upsilon_{\rho_{0},S}\cap\mathcal{H}\text{ s.t. }\|\phi-\Phi_{X}\|_{2,\nu_{\text{est}}} \geqslant\eta}M(\phi;\nu_{\text{est}}|\Phi_{X})>\delta.\]
Therefore, there exist \(c_{\eta}>0\) and \(n_{0}\) (which do not depend on \(\rho\)) such that for all \(n\geqslant n_{0}\), with probability at least \(1-4e^{-c_{\eta}n}\),
\[\sup_{\rho^{\prime}\in[\rho,\rho_{0}]}\sup_{m\geqslant 2\rho\frac{\log n}{\log \log n}}\|h_{m,\rho^{\prime}}\|_{2,\nu_{\text{est}}}\leqslant\eta, \tag{19}\]
which is what we need to apply (11).
Fix now \(\nu\in(0,\nu_{\text{est}}]\), \(c(\nu)>0\) and \(E>0\) such that \(\mathbb{Q}\in\mathcal{Q}^{(D)}(\nu,c(\nu),E)\). In particular, \(c_{\nu}\geqslant c(\nu)>0\). Then, by (11) and (18),
\[\|h_{m,\rho^{\prime}}\|_{2,\nu}^{2+2\epsilon(\|h_{m,\rho^{\prime}}\|_{2,\nu})} \leqslant\frac{2}{c(\nu)^{4}}\max\left(c_{M}\frac{\|Z_{n}(\cdot,\Phi_{X})\|_{ \infty,\nu_{\text{est}}}}{\sqrt{n}}\|h_{m,\rho^{\prime}}\|_{2,\nu_{\text{est}} }^{1-\epsilon(\|h_{m,\rho^{\prime}}\|_{2,\nu_{\text{est}}})},\frac{3}{n} \right). \tag{20}\]
By (14) and (16), assuming \(m\in[2\rho^{\prime}\frac{\log n}{\log\log n},C\frac{\log n}{\log\log n}]\) in the following series of inequalities (for some fixed \(C>2\rho^{\prime}\)),
\[\|h_{m,\rho^{\prime}}\|_{2,\nu_{\text{est}}} \leqslant 2\max\left(\|T_{m}h_{m,\rho^{\prime}}\|_{2,\nu_{\text{est}}},\frac{3}{n}\right)\qquad\text{by (14)}\] \[\leqslant 2\max\left(\|T_{m}h_{m,\rho^{\prime}}\|_{2,\nu}m^{\frac{D}{2}}(4\frac{\nu_{\text{est}}}{\nu})^{m+\frac{D}{2}},\frac{3}{n}\right)\qquad\text{by (16)}\] \[\leqslant 4\max\left(\|h_{m,\rho^{\prime}}\|_{2,\nu}m^{\frac{D}{2}}(4\frac{\nu_{\text{est}}}{\nu})^{m+\frac{D}{2}},\frac{3m^{\frac{D}{2}}(4\frac{\nu_{\text{est}}}{\nu})^{m+\frac{D}{2}}}{n}\right)\qquad\text{by (14)}\] \[\leqslant n^{\epsilon(1/n)}\max\left(\|h_{m,\rho^{\prime}}\|_{2,\nu},\frac{1}{n}\right), \tag{21}\]
up to increasing the constant \(b\) in the definition of \(u\mapsto\epsilon(u)\), which can be done without loss of generality. Together with (20) and (13), one gets for all \(x\in[1,c_{\eta}n]\) (assuming \(c_{\eta}\leqslant 1\) without loss of generality), with probability at least \(1-4e^{-c_{\eta}n}-e^{-x}\geqslant 1-5e^{-x}\) (on the event where \(\|Z(\cdot,\Phi_{X})\|_{\infty,\nu_{\text{est}}}\leqslant c_{Z}\sqrt{x}\) and (19) holds) that for all \(\rho^{\prime}\in[\rho,\rho_{0}]\) and \(m\in[2\rho^{\prime}\frac{\log n}{\log\log n},C\frac{\log n}{\log\log n}]\),
\[\|h_{m,\rho^{\prime}}\|_{2,\nu}^{2+2\epsilon(\|h_{m,\rho^{\prime}}\|_{2,\nu}) }\leqslant c\max\left(\sqrt{\frac{x}{n}}n^{\epsilon(1/n)}\left(\|h_{m,\rho^{ \prime}}\|_{2,\nu}\vee\frac{1}{n}\right)^{1-\epsilon(\|h_{m,\rho^{\prime}}\|_ {2,\nu_{\text{est}}})},\frac{1}{n}\right)\]
for some constant \(c>0\) that does not depend on \(\rho\), \(\rho^{\prime}\) or \(m\). Since \(\epsilon\) is increasing, recalling that \(\|h_{m,\rho^{\prime}}\|_{2,\nu_{\text{est}}}\leqslant\eta\) on the event considered, by (21),
\[\epsilon(\|h_{m,\rho^{\prime}}\|_{2,\nu_{\text{est}}}) \leqslant\begin{cases}\max(\epsilon(\|h_{m,\rho^{\prime}}\|_{2,\nu} n^{\epsilon(1/n)}),\epsilon(n^{-1+\epsilon(1/n)}))&\text{if }\|h_{m,\rho^{\prime}}\|_{2,\nu}\leqslant n^{-2\epsilon(1/n)},\\ \epsilon(\eta)&\text{always},\end{cases}\] \[\leqslant\begin{cases}2\epsilon(1/n)&\text{if }\|h_{m,\rho^{\prime}}\|_{2,\nu} \leqslant n^{-2\epsilon(1/n)},\\ \epsilon(\eta)&\text{always},\end{cases}\] \[\leqslant[\epsilon(\eta)\text{ {or }}2\epsilon(1/n)]\]
for \(n\) large enough (depending on \(b\)), up to decreasing \(\eta\), where for compactness of notations, \([A\text{ {or }}B]\) means \(\min(A,B)\) if \(\|h_{m,\rho^{\prime}}\|_{2,\nu}\leqslant n^{-2\epsilon(1/n)}\) and \(A\) otherwise in the following. Gathering the two previous equations shows that either
\[\|h_{m,\rho^{\prime}}\|_{2,\nu}^{1+3[\epsilon(\eta)\text{ {or }}2\epsilon(1/n)]}\leqslant c\sqrt{\frac{x}{n^{1-2 \epsilon(1/n)}}}\]
or
\[\|h_{m,\rho^{\prime}}\|_{2,\nu}^{2+2[\epsilon(\eta)\text{ {or }}2\epsilon(1/n)]}\leqslant\frac{c}{n}.\]
Therefore, assuming \(3\epsilon(\eta)\leqslant 1\) without loss of generality, \(\|h_{m,\rho^{\prime}}\|_{2,\nu}\leqslant n^{-\epsilon(1/n)}\) as soon as \(x\leqslant n^{1-10\epsilon(1/n)}/c^{2}\) and thus, up to changing the constant \(c\), for \(n\) large enough and for all
\(x\in[1,n^{1-10\epsilon(1/n)}/c^{2}]\), with probability at least \(1-4e^{-c_{\eta}n}-e^{-x}\), for all \(\rho^{\prime}\in[\rho,\rho_{0}]\) and \(m\in[2\rho^{\prime}\frac{\log n}{\log\log n},C\frac{\log n}{\log\log n}]\),
\[\|h_{m,\rho^{\prime}}\|_{2,\nu}^{2}\leqslant c\left(\frac{x}{n^{1-2\epsilon(1/ n)}}\right)^{1-6\epsilon(1/n)}.\]
Finally, note that \(4e^{-c_{\eta}n}e^{n^{1-10\epsilon(1/n)}}\longrightarrow 0\), so that the probability that the last equation holds is larger than \(1-2e^{-x}\) for \(n\) large enough, which concludes the proof for the version with \(\widehat{\Phi}_{n,m,\rho^{\prime}}\). The version for \(T_{m}\widehat{\Phi}_{n,m,\rho^{\prime}}\) follows from this and (14).
### Proof of Lemma 1
Let \(y\in\mathcal{M}_{G}\cap\mathcal{K}\). By property (III) of \(\psi_{A}\),
\[\bar{g}(y) =\frac{1}{h^{D}}\int\psi_{A}\left(\frac{\|y-u\|}{h}\right)dG(u)\] \[\geqslant\frac{1}{h^{D}}\int_{\|u-y\|_{2}\leqslant c_{A}h}\psi_{ A}(\frac{\|y-u\|}{h})dG(u)\] \[\geqslant\frac{1}{h^{D}}d_{A}G^{*}(B(y,c_{A}h))\] \[\geqslant\frac{1}{h^{D}}d_{A}a(c_{A}h)^{d}.\]
### Proof of Lemma 2
Recall the definition of \(\bar{g}\): for all \(y\in\mathbb{R}^{D}\),
\[\bar{g}(y)=\frac{1}{h^{D}}\int\psi_{A}(\frac{\|y-u\|}{h})dG(u).\]
Let \(C_{1}>0\) and \(\epsilon>0\). By Property (V) of \(\psi_{A}\) and (4), there exists \(T>0\) (depending on \(A\) and \(C_{1}\)) such that for any \(t\geqslant T\), \(\psi_{A}(t)\leqslant C_{1}\exp(-\beta_{A}t^{A/(A+1)})\). Take \(y\in\mathbb{R}^{D}\) such that \(d(y,\mathcal{M}_{G})>(\frac{\epsilon}{\beta_{A}})^{\frac{A+1}{A}}h\log(\frac{1}{h})^{\frac{A+1}{A}}\); then for all \(u\in\mathcal{M}_{G}\), \(\frac{\|y-u\|}{h}\geqslant(\beta_{A}^{-1}\log(\frac{1}{h^{\epsilon}}))^{\frac{A+1}{A}}\). Therefore, there exists \(h_{0}>0\) depending only on \(\epsilon\), \(D\), \(A\) and \(T\) (thus on \(C_{1}\)) such that \(h\leqslant h_{0}\) implies \(\frac{\|y-u\|}{h}\geqslant T\), and thus
\[\psi_{A}(\frac{\|y-u\|}{h}) \leqslant C_{1}\exp\left\{-\beta_{A}(\frac{\|y-u\|}{h})^{A/(A+1)}\right\}\] \[\leqslant C_{1}\exp\left\{-\log(\frac{1}{h^{\epsilon}})\right\}\] \[=C_{1}h^{\epsilon},\]
and finally \(\bar{g}(y)\leqslant C_{1}(\frac{1}{h})^{D-\epsilon}\) since \(G\) is a probability distribution. Lemma 2 follows by taking \(\epsilon=D\).
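Lemmas 1 and 2 together say that the smoothed mass \(\bar{g}\) is of order \((1/h)^{D-d}\) on the support of \(G\) and negligible at distance much larger than \(h\log(1/h)^{\frac{A+1}{A}}\) from it. A minimal numerical sketch of this behaviour, assuming a Gaussian kernel as a stand-in for \(\psi_{A}\) and the uniform measure on the unit circle in \(\mathbb{R}^{2}\) as \(G\) (assumptions of the sketch only):

```python
import numpy as np

# Sketch of Lemmas 1-2: the smoothed mass g_bar is large (order h^-(D-d)) on
# the support of G and essentially zero far away from it.
# Assumptions of the sketch: psi_A is replaced by a Gaussian kernel and G is
# the uniform measure on the unit circle (D = 2, d = 1).
rng = np.random.default_rng(0)
D, d, h = 2, 1, 0.05

theta = rng.uniform(0.0, 2.0 * np.pi, size=50_000)
support_points = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # draws from G

def g_bar(y):
    """Monte-Carlo approximation of (1/h^D) * int psi(||y - u|| / h) dG(u)."""
    dist = np.linalg.norm(support_points - y, axis=1)
    return np.mean(np.exp(-0.5 * (dist / h) ** 2)) / h ** D

print("on the support   :", g_bar(np.array([1.0, 0.0])), " (order h^-(D-d) =", 1 / h, ")")
print("far from support :", g_bar(np.array([3.0, 0.0])), " (negligible)")
```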
### Proof of Lemma 3
For \(y\in\mathbb{R}^{D}\),
\[\widehat{g}_{n,\kappa}(y)-\bar{g}(y)=(\frac{1}{2\pi})^{D}\int e^{-it^{\top}y} \mathcal{F}[\psi_{A}](th)(T_{m_{\kappa}}\widehat{\Phi}_{n,1/\kappa}(t)-\Phi_ {X}(t))dt.\]
Since \(\mathcal{F}[\psi_{A}](th)\) is \(0\) for \(\|t\|_{2}>1/h\),
\[\widehat{g}_{n,\kappa}(y)-\bar{g}(y) =(\frac{1}{2\pi})^{D}\int e^{-it^{\top}y}\mathcal{F}[\psi_{A}](th) (T_{m_{\kappa}}\widehat{\Phi}_{n,1/\kappa}(t)-\Phi_{X}(t))1_{\|t\|_{2}\leqslant 1 /h}dt\] \[=\mathcal{F}^{-1}[\mathcal{F}[\psi_{h}]\{(T_{m_{\kappa}}\widehat{ \Phi}_{n,1/\kappa}-\Phi_{X})1_{\|t\|_{2}\leqslant 1/h}\}](y)\] \[=\mathcal{F}^{-1}[\mathcal{F}[\psi_{A,h}]]*\mathcal{F}^{-1}[(T_{m _{\kappa}}\widehat{\Phi}_{n,1/\kappa}-\Phi_{X})1_{\|t\|_{2}\leqslant 1/h}](y). \tag{22}\]
By Young's convolution inequality,
\[\|\widehat{g}_{n,\kappa}-\bar{g}\|_{\infty}\leqslant\|\mathcal{F}^{-1}[\mathcal{F}[\psi_{A,h}]]\|_{2}\|\mathcal{F}^{-1}[(T_{m_{\kappa}}\widehat{\Phi}_{n,1/\kappa}-\Phi_{X})1_{\|t\|_{2}\leqslant 1/h}]\|_{2}.\]
Finally, using Parseval's equality and the fact that \(\mathcal{F}^{-1}[\mathcal{F}[\psi_{A,h}]]=\psi_{A,h}\),
\[\|\widehat{g}_{n,\kappa}-\bar{g}\|_{\infty}\leqslant\|\psi_{A,h}\|_{2}\|T_{m_ {\kappa}}\widehat{\Phi}_{n,1/\kappa}-\Phi_{X}\|_{2,1/h},\]
and use (5) to conclude the proof.
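For concreteness, under the normalisation \(\psi_{A,h}(x)=h^{-D}\psi_{A}(\|x\|/h)\) implicit in the definition of \(\bar{g}\) (an assumption of this remark), the first factor in the last bound scales as
\[\|\psi_{A,h}\|_{2}^{2}=\frac{1}{h^{2D}}\int_{\mathbb{R}^{D}}\psi_{A}\Big(\frac{\|x\|}{h}\Big)^{2}dx=\frac{1}{h^{D}}\|\psi_{A,1}\|_{2}^{2},\]
so the bound of Lemma 3 is of order \(h^{-D/2}\|T_{m_{\kappa}}\widehat{\Phi}_{n,1/\kappa}-\Phi_{X}\|_{2,1/h}\), which is presumably the role played by (5).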
### Proof of Theorem 4
Let \(\kappa_{0}\in(1/2,1]\), \(\nu\in(0,\nu_{\text{est}}]\), \(c(\nu)>0\), \(E>0\), \(S>0\) and \(C>0\). Let \(\kappa\in[\kappa_{0},1]\), \(\mathbb{Q}\in\mathcal{Q}^{(D)}(\nu,c(\nu),E)\) and \(G\in St_{\mathcal{K}}(a,d,r_{0})\cap\mathcal{L}(\kappa,S,\mathcal{H})\).
Using inequalities analogous to (28)-(29) p.17 of [19], we get that for all \(\kappa^{\prime}\in[\kappa_{0},\kappa]\) and all integer \(m\),
\[\|T_{m}\widehat{\Phi}_{n,1/\kappa^{\prime}}-\Phi_{X}\|_{2,1/h}^{2}\leqslant 4 U(h)+4m^{D}(2+2\frac{1}{h\nu})^{2m+D}\Bigg{(}2V(\nu)+\|\widehat{\Phi}_{n,1/ \kappa^{\prime}}-\Phi_{X}\|_{2,\nu}^{2}\Bigg{)}, \tag{23}\]
where
\[U(h)=ch^{-D-2m-2/\kappa^{\prime}}S^{2m}m^{-2\kappa^{\prime}m+2D} \exp(2\kappa^{\prime}(S/h)^{1/\kappa^{\prime}})\] \[\text{and}\quad V(\nu)=c(S\nu)^{2m+2/\kappa^{\prime}}m^{-2\kappa ^{\prime}m+2D}.\]
Thus, applying Lemma 3 and using \(h=c_{h}Sm_{\kappa^{\prime}}^{-\kappa^{\prime}}\), there exists \(C>0\) such that on the event where (23) holds:
\[\Gamma_{n,\kappa^{\prime}}^{2}\leqslant C(c_{h}S)^{-2D-2m_{\kappa ^{\prime}}-2/\kappa^{\prime}}m_{\kappa^{\prime}}^{2D(\kappa^{\prime}+1)+2}S^{ 2m_{\kappa^{\prime}}}\exp(2\kappa^{\prime}c_{h}^{-1/\kappa^{\prime}}m_{\kappa^ {\prime}})\\ +Cm_{\kappa^{\prime}}^{D(1+\kappa^{\prime})}(2+2\frac{m_{\kappa^ {\prime}}^{\kappa^{\prime}}}{c_{h}S\nu})^{2m_{\kappa^{\prime}}+D}\Bigg{(}(S\nu )^{2m_{\kappa^{\prime}}+2/\kappa^{\prime}}m_{\kappa^{\prime}}^{-2\kappa^{ \prime}m_{\kappa^{\prime}}+2D}+\|\widehat{\Phi}_{n,1/\kappa^{\prime}}-\Phi_{X} \|_{2,\nu}^{2}\Bigg{)}. \tag{24}\]
The first term of the upper bound is upper bounded as follows.
\[(c_{h}S)^{-2D-2m_{\kappa^{\prime}}-2/\kappa^{\prime}}m_{\kappa^{ \prime}}^{2D(\kappa^{\prime}+1)+2}S^{2m_{\kappa^{\prime}}}\exp(2\kappa^{ \prime}c_{h}^{-1/\kappa^{\prime}}m_{\kappa^{\prime}})\] \[=S^{-2D-2/\kappa^{\prime}}\exp\left\{(-2D-2m_{\kappa^{\prime}}-2/ \kappa^{\prime})\log(c_{h})+(2D(\kappa^{\prime}+1)+2)\log(m_{\kappa^{\prime}} )+2\kappa^{\prime}c_{h}^{-1/\kappa^{\prime}}m_{\kappa^{\prime}}\right\}\] \[\leqslant C\exp\left\{(-2\log(c_{h})+1)m_{\kappa^{\prime}}+2(D( \kappa^{\prime}+1)+1)\log(m_{\kappa^{\prime}})\right\} \tag{25}\] \[\leqslant C\exp\left\{(-2\log(c_{h})+3+2D(\kappa^{\prime}+1))m_{ \kappa^{\prime}}\right\}, \tag{26}\]
for another constant \(C>0\), where inequality (25) holds because \(2\kappa c_{h}^{1/\kappa}>1\) and inequality (26) holds because \(\log(m_{\kappa})\leqslant m_{\kappa}\). The second term of the upper bound is upper bounded by
\[m_{\kappa^{\prime}}^{D(1+\kappa^{\prime})}(2+2\frac{m_{\kappa^{ \prime}}^{\kappa^{\prime}}}{c_{h}S\nu})^{2m_{\kappa^{\prime}}+D}\Bigg{(}(S\nu )^{2m_{\kappa^{\prime}}+2/\kappa^{\prime}}m_{\kappa^{\prime}}^{-2\kappa^{ \prime}m_{\kappa^{\prime}}+2D}+\|\widehat{\Phi}_{n,1/\kappa^{\prime}}-\Phi_{X} \|_{2,\nu}^{2}\Bigg{)}\\ \leqslant C\ m_{\kappa^{\prime}}^{D(1+2\kappa^{\prime})}(2\kappa^ {\prime}m_{\kappa^{\prime}})^{2\kappa^{\prime}m_{\kappa^{\prime}}}(2\kappa^{ \prime})^{-2\kappa^{\prime}m_{\kappa^{\prime}}}(c_{h}S\nu)^{-2m_{\kappa^{ \prime}}-D}\\ \times\Bigg{(}(S\nu)^{2m_{\kappa^{\prime}}+2/\kappa^{\prime}}m_{ \kappa^{\prime}}^{-2\kappa^{\prime}m_{\kappa^{\prime}}+2D}+\|\widehat{\Phi}_{ n,1/\kappa^{\prime}}-\Phi_{X}\|_{2,\nu}^{2}\Bigg{)}\\ \leqslant C\Bigg{(}\exp\left\{(-2\log(c_{h})+(3D+2\kappa^{\prime}))m _{\kappa^{\prime}}\right\}\\ +(2\kappa^{\prime}m_{\kappa^{\prime}})^{2\kappa^{\prime}m_{\kappa^{ \prime}}}\exp\Bigg{\{}(-2\log(c_{h})+D(1+2\kappa^{\prime}))m_{\kappa^{ \prime}}\Bigg{\}}\|\widehat{\Phi}_{n,1/\kappa^{\prime}}-\Phi_{X}\|_{2,\nu}^{2} \Bigg{)}\]
for another constant \(C>0\). Putting all together, we get that for yet another constant \(C>0\),
\[\Gamma^{2}_{n,\kappa^{\prime}}\leqslant C\max\Bigg\{\exp\Big\{(-2\log(c_{h})+3+2D(\kappa^{\prime}+1))m_{\kappa^{\prime}}\Big\},\exp\Big\{(-2\log(c_{h})+(3D+2\kappa^{\prime}))m_{\kappa^{\prime}}\Big\},\] \[(2\kappa^{\prime}m_{\kappa^{\prime}})^{2\kappa^{\prime}m_{\kappa^{\prime}}}\exp\Big\{(-2\log(c_{h})+D(1+2\kappa^{\prime}))m_{\kappa^{\prime}}\Big\}\|\widehat{\Phi}_{n,1/\kappa^{\prime}}-\Phi_{X}\|^{2}_{2,\nu}\Bigg\}.\]
Choosing \(c_{h}\geqslant\exp\left\{2D+2\right\}\) and \(m_{\kappa^{\prime}}=\frac{1}{4\kappa^{\prime}}\frac{\log n}{\log\log n}\) for some \(\gamma\in(0,1)\), it follows that
\[\Gamma^{2}_{n,\kappa^{\prime}}\leqslant Ce^{-m_{\kappa^{\prime}}}\Bigg{[}1 \lor n^{1/2}\|\widehat{\Phi}_{n,1/\kappa^{\prime}}-\Phi_{X}\|^{2}_{2,\nu} \Bigg{]}. \tag{27}\]
By Proposition 6, taking \(x=\log n\) and \(\delta,\delta^{\prime\prime}\) such that \((1-\delta)(1-\delta^{\prime\prime})>1/2\), we obtain that with probability at least \(1-2/n\), for all \(\kappa^{\prime}\leqslant\kappa\), \(\Gamma^{2}_{n,\kappa^{\prime}}\leqslant Ce^{-m_{\kappa^{\prime}}}\longrightarrow 0\). Note that we could also take \(x=n^{1/2-\delta^{\prime\prime\prime}}\) for any \(\delta^{\prime\prime\prime}>0\) and still have \(\Gamma^{2}_{n,\kappa^{\prime}}\leqslant Ce^{-m_{\kappa^{\prime}}}\) with probability at least \(1-2e^{-x}\), up to changing the constant \(C\), by picking \(\delta\) and \(\delta^{\prime\prime}\) small enough in Proposition 6.
Now, by Lemma 1, for any \(h\leqslant(r_{0}/c_{A})\wedge 1\),
\[\inf_{y\in\mathcal{M}_{G}\cap\mathcal{K}}\widehat{g}_{n,\kappa^{\prime}}(y) \geqslant\inf_{y\in\mathcal{M}_{G}\cap\mathcal{K}}\bar{g}(y)-\Gamma_{n,\kappa^{\prime}}\] \[\geqslant c_{A}^{d}d_{A}a\left(\frac{1}{h}\right)^{D-d}-\Gamma_{n,\kappa^{\prime}}\] \[\geqslant\frac{c_{A}^{d}d_{A}a}{2}\left(\frac{1}{h}\right)^{D-d}\]
as soon as \(\Gamma_{n,\kappa^{\prime}}\leqslant\frac{c_{A}^{d}d_{A}a}{2}\), and this lower bound is strictly larger than \(\lambda_{n,\kappa}\) for any \(d\). This implies that on the event where \(\Gamma_{n,\kappa^{\prime}}\leqslant\frac{c_{A}^{d}d_{A}a}{2}\), \(\mathcal{M}_{G}\cap\mathcal{K}\subset\widehat{\mathcal{M}}_{\kappa^{\prime}} \cap\mathcal{K}\). Next,
\[\sup_{y\in\mathcal{K},d(y,\mathcal{M}_{G})\geqslant h\left[\frac{D}{\beta_{A }}\log\left(\frac{1}{h}\right)\right]^{\frac{A+1}{A}}}\widehat{g}_{n,\kappa^{ \prime}}(y)\leqslant\sup_{y\in\mathcal{K},d(y,\mathcal{M}_{G})\geqslant h \left[\frac{D}{\beta_{A}}\log\left(\frac{1}{h}\right)\right]^{\frac{A+1}{A}}} \bar{g}(y)+\Gamma_{n,\kappa^{\prime}}.\]
Choosing \(C_{1}=\frac{c_{A}^{d}d_{A}a}{16}\) and applying Lemma 2 we get that, on the event where \(\Gamma_{n,\kappa^{\prime}}\leqslant\frac{c_{A}^{d}d_{A}a}{16}\),
\[\sup_{y\in\mathcal{K},d(y,\mathcal{M}_{G})\geqslant h\left[\frac{D}{\beta_{A}}\log\left(\frac{1}{h}\right)\right]^{\frac{A+1}{A}}}\widehat{g}_{n,\kappa^{\prime}}(y)\leqslant 2C_{1}\]
for \(n\) large enough, and this upper bound is strictly less than \(\lambda_{n,\kappa^{\prime}}\) for any \(d\). This implies that
\[\left\{y:y\in\mathcal{K},d(y,\mathcal{M}_{G})>h\left[\frac{D}{\beta_{A}}\log \left(\frac{1}{h}\right)\right]^{\frac{A+1}{A}}\right\}\cap\widehat{\mathcal{M }}_{\kappa^{\prime}}=\varnothing.\]
We may now take \(h\) as in the statement of the Theorem. As a result, we have proved that: for all \(\kappa_{0}\in(1/2,1]\), \(S>0\), \(a>0\), \(d\leqslant D\), \(\nu\in(0,\nu_{\text{est}}]\), \(c(\nu)>0\) and \(E>0\), there exist \(c^{\prime}>0\) and \(n_{0}\) such that for all \(n\geqslant n_{0}\), for all \(\kappa\in[\kappa_{0},1]\), \(G\in St_{\mathcal{K}}(a,d,r_{0})\cap\mathcal{L}(\kappa,S,\mathcal{H})\) and \(\mathbb{Q}\in\mathcal{Q}^{(d)}(\nu,c(\nu),E)\), with \((G*\mathbb{Q})^{\otimes n}\)-probability at least \(1-\frac{2}{n}\),
\[\sup_{\kappa^{\prime}\in[\kappa_{0},\kappa]}\frac{\log(n)^{\kappa^{\prime}}}{ \log(\log(n))^{\kappa^{\prime}+\frac{A+1}{A}}}H_{\mathcal{K}}(\mathcal{M}_{G}, \widehat{\mathcal{M}}_{\kappa^{\prime}})\leqslant c^{\prime}. \tag{28}\]
Using the fact that \(H_{\mathcal{K}}(\mathcal{M}_{G},\widehat{\mathcal{M}}_{\kappa})\) is uniformly upper bounded, the theorem follows.
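For the reader's convenience, the orders of magnitude behind the rate in (28), under the choices \(m_{\kappa^{\prime}}\asymp\log n/\log\log n\) and \(h=c_{h}Sm_{\kappa^{\prime}}^{-\kappa^{\prime}}\) made above, are as follows (up to constants):
\[h\asymp\Big(\frac{\log\log n}{\log n}\Big)^{\kappa^{\prime}},\qquad\log\Big(\frac{1}{h}\Big)\asymp\log\log n,\]
so that the exclusion radius of Lemma 2 satisfies
\[h\Big[\frac{D}{\beta_{A}}\log\Big(\frac{1}{h}\Big)\Big]^{\frac{A+1}{A}}\asymp\frac{(\log\log n)^{\kappa^{\prime}+\frac{A+1}{A}}}{(\log n)^{\kappa^{\prime}}},\]
which is exactly the rate controlling \(H_{\mathcal{K}}(\mathcal{M}_{G},\widehat{\mathcal{M}}_{\kappa^{\prime}})\) in (28).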
### Proof of Lemma 4
**Case \(\kappa\neq 1\).** This case is based on [26]. In the following, we denote by the upper-case letters \(A\), \(B\) and \(C\) positive constants that may change from line to line. In [26], Theorem 2, the author defines, for any positive constants \(\mu>0\), \(q>1\) and \(a>0\), a function \(\zeta_{q,\mu,a}\) such that for \(x\in\mathbb{R}\),
\[\zeta_{q,\mu,a}(x)=-i\int_{\mathcal{C}}z^{\mu}\exp(z^{q}-qax^{2}z)dz,\]
where \(\mathcal{C}\) is a curve in the complex plane so that the maximum of \(|z^{\mu}\exp(z^{q}-qax^{2}z)|\) for \(z\in\mathcal{C}\) is attained on the positive real line. The author shows that \(\zeta_{q,\mu,a}\) and \(\zeta_{q,\mu,a}^{2}\) are integrable functions.
The author uses the saddle-point integration method to show that there exist \(A>0\) and \(B>0\) which depend on \(q\), \(\mu\) and \(a\) such that
\[|\mathcal{F}[\zeta_{q,\mu,a}](t)|\leqslant A\exp(-B|t|^{\frac{2q}{q+1}}). \tag{29}\]
Finally, for \(\kappa\in(1/2,1)\), fix \(\mu>0\), \(a>0\), and define
\[f_{\kappa}=c_{f_{\kappa}}\mathrm{Re}[\zeta_{\frac{1}{2\kappa-1},\mu,a}]^{2}* u_{1},\]
where \(u_{1}:x\in\mathbb{R}\mapsto\exp(-\frac{1}{1-4x^{2}})\mathbf{1}_{(-1/2,1/2)}(x)\) and \(c_{f_{\kappa}}\) is a constant that ensures that \(f_{\kappa}\) is a density.
Let us first prove that there exist positive constants \(A>0\) and \(B>0\) such that \(|\mathcal{F}[\mathrm{Re}[\zeta_{\frac{1}{2\kappa-1},\mu,a}]^{2}](t)|\leqslant A\exp(-B|t|^{1/\kappa})\).
\[|\mathcal{F}[\mathrm{Re}[\zeta_{\frac{1}{2\kappa-1},\mu,a}]^{2}](t)| =|\mathcal{F}[\mathrm{Re}[\zeta_{\frac{1}{2\kappa-1},\mu,a}]]*\mathcal{F}[\mathrm{Re}[\zeta_{\frac{1}{2\kappa-1},\mu,a}]](t)|\] \[\leqslant A\int_{\mathbb{R}}\exp(-B|t-y|^{1/\kappa}-B|y|^{1/\kappa})dy\] \[=A\int_{|y-t|\geqslant|t|/2}\exp(-B|t-y|^{1/\kappa}-B|y|^{1/\kappa})dy+A\int_{|y-t|<|t|/2}\exp(-B|t-y|^{1/\kappa}-B|y|^{1/\kappa})dy\] \[\leqslant A\exp(-B|t|^{1/\kappa}). \tag{30}\]
Finally, for all \(t\in\mathbb{R}\), using that \(|\mathcal{F}[u_{1}](t)|\leqslant\|u_{1}\|_{1}\),

\[|\mathcal{F}[f_{\kappa}](t)| =|\mathcal{F}[\mathrm{Re}[\zeta_{\frac{1}{2\kappa-1},\mu,a}]^{2}](t)|\ |\mathcal{F}[u_{1}](t)|\] \[\leqslant A\exp(-B|t|^{\frac{1}{\kappa}}).\]
For \(x\in\mathbb{R}\), \(\mathcal{F}[f_{\kappa}]^{\prime}=\mathcal{F}[x\mapsto xf_{\kappa}(x)]\) (up to a multiplicative constant) and
\[xf_{\kappa}(x)=c_{f_{\kappa}}v*\mathrm{Re}[\zeta_{\frac{1}{2\kappa-1},\mu,a}] ^{2}(x)+c_{f_{\kappa}}u_{1}*\tilde{\zeta}(x),\]
where \(v:x\in\mathbb{R}\mapsto xu_{1}(x)\) and \(\tilde{\zeta}:x\in\mathbb{R}\mapsto x\mathrm{Re}[\zeta_{\frac{1}{2\kappa-1}, \mu,a}]^{2}(x)\).
Following the same proof as Theorem 2 of [26], there exist \(A>0\) and \(B>0\) such that for all \(t\in\mathbb{R}\), \(|\mathcal{F}[x\mapsto x\mathrm{Re}[\zeta_{\frac{1}{2\kappa-1},\mu,a}](x)](t)|\leqslant A\exp(-B|t|^{1/\kappa})\), so that, following the proof of (30), \(|\mathcal{F}[\tilde{\zeta}](t)|\leqslant A\exp(-B|t|^{1/\kappa})\). Hence, there exist \(A>0\) and \(B>0\) such that \(|\mathcal{F}[f_{\kappa}]^{\prime}(t)|\leqslant A\exp(-B|t|^{1/\kappa})\).
Finally, note that \(f_{\kappa}\) is continuous as the convolution of an integrable function with a smooth function, and that for all \(x\in\mathbb{R}\), \(f_{\kappa}(x)>0\) since \(\mathrm{Re}[\zeta_{\frac{1}{2\kappa-1},\mu,a}]\) and \(u_{1}\) are not almost everywhere equal to the null function.
**Case \(\kappa=1\).** Let \(\delta\in(0,1)\) and define \(f_{1}:x\in\mathbb{R}\mapsto c_{f_{1}}(u_{\frac{1}{1-\delta}}*u_{\frac{1}{1-\delta}})(x)\), where \(c_{f_{1}}\) is a constant that ensures that \(f_{1}\) is a probability density.
There exist \(A>0\) and \(B>0\) such that \(\mathcal{F}[f_{1}](x)\leqslant A\exp(-B|x|^{\delta})\), see Lemma in [30]. Moreover, \(\mathcal{F}[f_{1}]^{\prime}(x)=2c_{f_{1}}\mathcal{F}[u_{\frac{1}{1-\delta}}](x )\mathcal{F}[u_{\frac{1}{1-\delta}}]^{\prime}(x)\leqslant A\|x\mapsto xu_{ \frac{1}{1-\delta}}(x)\|_{1}\exp(-B|x|^{\delta})\).
Finally, note that \(f_{1}\) is continuous and does not vanish on its support.
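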
### Proof of Lemma 5
First, by Lemma 4, for any \(\kappa\in(1/2,1]\), \(U(\kappa)\) satisfies \(A(1/\kappa)\).
Let \(i\in\{0,1\}\). For any \(\lambda=(\lambda_{1},\ldots,\lambda_{D})\in\mathbb{R}^{D}\),
\[\mathbb{E}[\exp(\lambda^{\top}X_{i}(\kappa))] =\mathbb{E}\left[\exp\left(\left(\alpha\lambda_{1}+\frac{\alpha}{2}\lambda_{2}\right)U(\kappa)+(-1)^{i}\gamma\lambda_{2}\frac{\alpha}{2}\cos\left(\frac{U(\kappa)}{\gamma}\right)\right)\right]\] \[\leqslant e^{\gamma\frac{\alpha}{2}|\lambda_{2}|}\mathbb{E}\left[\exp\left(\left(\alpha\lambda_{1}+\frac{\alpha}{2}\lambda_{2}\right)U(\kappa)\right)\right]. \tag{31}\]
Since \(U(\kappa)\) satisfies \(A(1/\kappa)\), there exist positive constants \(A>0\) and \(B>0\) such that for all \(s\in\mathbb{R}\), \(\mathbb{E}[\exp(sU(\kappa))]\leqslant A\exp(B|s|^{\frac{1}{\kappa}})\). Applying this in (31),
\[\mathbb{E}[\exp(\lambda^{\top}X_{i}(\kappa))] \leqslant A\exp\left(\gamma\frac{\alpha}{2}|\lambda_{2}|+B\left|\alpha\lambda_{1}+\frac{\alpha}{2}\lambda_{2}\right|^{\frac{1}{\kappa}}\right)\] \[\leqslant A^{\prime}\exp(B^{\prime}|\lambda|^{\frac{1}{\kappa}})\]
for some other constants \(A^{\prime}\) and \(B^{\prime}\), since \(1/\kappa\geqslant 1\), so that \(X_{i}(\kappa)\) satisfies \(A(1/\kappa)\).
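The absorption of the linear term in the last display only uses \(1/\kappa\geqslant 1\): for all \(s\geqslant 0\), \(s\leqslant 1+s^{1/\kappa}\), whence
\[e^{\gamma\frac{\alpha}{2}|\lambda_{2}|}\leqslant e^{\gamma\frac{\alpha}{2}}\,e^{\gamma\frac{\alpha}{2}|\lambda_{2}|^{1/\kappa}}\leqslant e^{\gamma\frac{\alpha}{2}}\,e^{\gamma\frac{\alpha}{2}|\lambda|^{1/\kappa}},\]
and this factor is absorbed into the constants \(A^{\prime}\) and \(B^{\prime}\) since \(\gamma\in[0,1]\).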
### Proof of Lemma 6
The proof is done in five steps.
1. We show that \(\gamma g_{\gamma}\) is \(1\)-Lipschitz.
2. For \(i\in\{0,1\}\) and \(\kappa\in(\frac{1}{2},1]\), we compute the density \(p_{i}\) of \(T_{i}(\kappa)\) with respect to the \(1\)-dimensional Hausdorff measure \(\mu_{H}\) and we show that for any compact set \(\mathcal{K}\), there exists \(b(\kappa,\mathcal{K})>0\) such that, for all \(u\in M_{i}(\gamma)\cap\mathcal{K}\), \(|p_{i}(u)|\geqslant b(\kappa,\mathcal{K})\).
3. We show that for \(i\in\{0,1\}\), \(\mu_{H}(\cdot\cap M_{i}(\gamma))\) is in \(St_{\mathcal{K}}(2,d,r_{0})\).
4. We deduce that for \(i\in\{0,1\}\) and \(d\geqslant 1\), \(T_{i}\) is in \(St_{\mathcal{K}}(2b(\kappa,\mathcal{K}),d,r_{0})\).
5. Finally, we show that for \(i\in\{0,1\}\), \(d\geqslant 1\) and \(a\) small enough, \(G_{i}(\kappa)\in St_{\mathcal{K}}(a,d,r_{0})\).
**Proof of 1** For all \(x\in\mathbb{R}\), \(|\gamma\tilde{g}_{\gamma}^{\prime}(x)|=|\sin(\frac{x}{\gamma})|\leqslant 1\), which implies that \(\gamma g_{\gamma}\) is \(1\)-Lipschitz.
**Proof of 2** Let us first compute the density \(p_{i}\) of \(T_{i}(\kappa)\) with respect to \(\mu_{H}\). For \(i\in\{0,1\}\), denote \(\zeta_{i}:x\in\mathbb{R}\mapsto(x,(-1)^{i}\gamma g_{\gamma}(x))\). Let \(\mathcal{B}\) be an open subset of \(\mathbb{R}^{D}\). For any \(\kappa\in(\frac{1}{2},1]\),
\[T_{i}(\kappa)(\mathcal{B})=\mathbb{P}[\zeta_{i}(U(\kappa))\in\mathcal{B}]= \mathbb{P}[U(\kappa)\in\zeta_{i}^{-1}(\mathcal{B})]=\int_{\zeta_{i}^{-1}( \mathcal{B})}f_{\kappa}(u)du.\]
Let \(J\zeta_{i}:u\in\mathbb{R}\mapsto\sqrt{1+\gamma^{2}\tilde{g}_{\gamma}^{\prime}(u)^{2}}\) be the Jacobian of \(\zeta_{i}\). By the Area Formula (see equation (2.47) in [4]),
\[T_{i}(\kappa)(\mathcal{B})=\int_{\zeta_{i}^{-1}(\mathcal{B})}\frac{f_{\kappa}(u)}{J\zeta_{i}(u)}J\zeta_{i}(u)du=\int_{\mathcal{B}\cap M_{i}(\gamma)}\frac{f_{\kappa}(\pi^{(1)}(u))}{J\zeta_{i}(\pi^{(1)}(u))}d\mu_{H}(u).\]
We then have that for all \(x\in\mathbb{R}^{D}\),
\[p_{i}(x)=\frac{f_{\kappa}(\pi^{(1)}(x))}{J\zeta_{i}(\pi^{(1)}(x))}\mathbf{1}_{M_{i}(\gamma)}(x).\]
Since \(f_{\kappa}\) is continuous and does not vanish on its support, and since for any compact set \(\mathcal{K}\), \(\pi^{(1)}(M_{i}(\gamma)\cap\mathcal{K})\) is a compact subset of the support of \(f_{\kappa}\), there exists \(c(\kappa,\mathcal{K})>0\) such that for all \(u\in M_{i}(\gamma)\cap\mathcal{K}\), \(f_{\kappa}(\pi^{(1)}(u))\geqslant c(\kappa,\mathcal{K})\). Moreover, for \(i\in\{0,1\}\), \(J\zeta_{i}(u)\leqslant\sqrt{2}\). Therefore, for all \(x\in M_{i}(\gamma)\cap\mathcal{K}\), \(|p_{i}(x)|\geqslant\frac{c(\kappa,\mathcal{K})}{\sqrt{2}}\).
**Proof of 3** Recall that the \(1\)-dimensional Hausdorff measure \(\mu_{H}\) is defined as the limit \(\lim_{\eta\to 0}\mu_{H}^{\eta}\), where for any set \(Z\)
\[\mu_{H}^{\eta}(Z)=\inf\left\{\sum_{i\in\mathbb{N}}\mathrm{Diam}(A_{i}):Z\subset\bigcup_{i}A_{i}\text{ and }\forall i,\mathrm{Diam}(A_{i})\leqslant\eta\right\}.\]
For any \(z\in M_{i}(\gamma)\), there exists \(x_{0}\in\mathbb{R}\) such that \(z=(x_{0},(-1)^{i}\gamma g_{\gamma}(x_{0}))\) and, for any \(r>0\),
\[B(z,r)\cap M_{i}(\gamma)\supset\{(x,(-1)^{i}\gamma g_{\gamma}(x)),x\in B(x_{0},r)\}\]
since \(|x-x_{0}|\leqslant r\) implies \(\|\gamma g_{\gamma}(x)-\gamma g_{\gamma}(x_{0})\|_{\infty}\leqslant r\).
Let \((A_{j})_{j\in\mathbb{N}}\) be a covering of \(\{(x,(-1)^{i}\gamma g_{\gamma}(x)),x\in B(x_{0},r)\}\) and set \(B_{j}=\pi^{(1)}(A_{j})\); then \((B_{j})_{j\in\mathbb{N}}\) is a covering of \(B(x_{0},r)\) with \(\mathrm{Diam}(B_{j})\leqslant\mathrm{Diam}(A_{j})\), since \(\pi^{(1)}\) is \(1\)-Lipschitz. For all \(\eta>0\),
\[\mu_{H}^{\eta}(\{(x,(-1)^{i}\gamma g_{\gamma}(x)),x\in B(x_{0},r)\})\geqslant \mu_{H}^{\eta}(B(x_{0},r)),\]
thus \(\mu_{H}(B(z,r)\cap M_{i}(\gamma))\geqslant\mu_{H}(B(x_{0},r))=2r\). If \(r_{0}\leqslant 1\), then for any \(r\leqslant r_{0}\),
\[\mu_{H}(B(z,r)\cap M_{i}(\gamma))\geqslant 2r^{d},\]
which proves 3.
**Proof of 4** Let \(x_{i}\in M_{i}(\gamma)\cap\mathcal{K}\) and \(r_{0}<1\). Then for all \(r\leqslant r_{0}\),
\[T_{i}(B(x_{i},r)\cap M_{i}(\gamma))=\int_{B(x_{i},r)\cap M_{i}(\gamma)}p_{i}( u)d\mu_{H}(u)\geqslant b(\kappa,\mathcal{K})\mu_{H}(B(x_{i},r)\cap M_{i}(\gamma)) \geqslant 2b(\kappa,\mathcal{K})r^{d}.\]
**Proof of 5** For \(i\in\{0,1\}\), let \(x_{i}\in A_{\alpha}M_{i}(\gamma)\cap\mathcal{K}\), \(r_{0}<1\), and take \(\tilde{\mathcal{K}}\) such that \(A_{\alpha}^{-1}\mathcal{K}\subset\tilde{\mathcal{K}}\). For all \(r\leqslant r_{0}\),
\[G_{i}(\kappa)(B(x_{i},r))=\mathbb{P}[A_{\alpha}S_{i}(\kappa)\in B(x_{i},r)] \geqslant\mathbb{P}\left[S_{i}(\kappa)\in B\left(A_{\alpha}^{-1}x_{i},\frac{r}{\|A_{\alpha}\|_{\mathrm{op}}}\right)\right]\] \[\geqslant\frac{2b(\kappa,A_{\alpha}^{-1}\mathcal{K})}{\|A_{\alpha}\|_{\mathrm{op}}}r\] \[\geqslant\frac{2b(\kappa,\tilde{\mathcal{K}})}{\alpha\|A_{1}\|_{\mathrm{op}}}r,\]
so that for some \(a_{0}\) depending on \(\alpha\) and all \(a\leqslant a_{0}\), \(G_{i}(\kappa)(B(x_{i},r))\geqslant ar^{d}\).
### Proof of Lemma 8
Let \((\phi_{n})_{n}\) be a sequence in \(\mathcal{H}^{\star}(\kappa,S,(c_{\Delta},A_{\Delta}^{(1)},B_{\Delta}^{(1)},A_{\Delta}^{(2)},B_{\Delta}^{(2)})_{\Delta>0})\) and \(\phi^{\star}\in L_{2}([-\nu,\nu]^{D})\) such that \(\|\phi_{n}-\phi^{\star}\|_{2,\nu}\to 0\). For each \(n\), there exists a random variable \(X_{n}\) such that \(\phi_{n}=\Phi_{X_{n}}\). Without loss of generality, we can assume that \(\phi_{n}\) converges almost everywhere to \(\phi^{\star}\) on \([-\nu,\nu]^{D}\). Since \(\Upsilon_{\kappa,S}\) is closed in \(L_{2}([-\nu,\nu]^{D})\), \(\phi^{\star}\in\Upsilon_{\kappa,S}\). Let us show that \(\phi^{\star}\) is the characteristic function of some random variable \(X^{\star}\). Let \(N\geqslant 1\), \((t_{k})_{1\leqslant k\leqslant N}\subset\mathbb{R}^{D}\) and \((\lambda_{k})_{1\leqslant k\leqslant N}\subset\mathbb{C}\), then
\[\sum_{k,l=1}^{N}\phi^{\star}(t_{k}-t_{l})\lambda_{k}\bar{\lambda}_{l}=\lim_{n \rightarrow\infty}\sum_{k,l=1}^{N}\phi_{n}(t_{k}-t_{l})\lambda_{k}\bar{\lambda}_{l} \geqslant 0.\]
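This non-negativity is the positive semi-definiteness required by Bochner's theorem, invoked below. A minimal numerical sketch, assuming the Gaussian characteristic function \(\phi(t)=e^{-t^{2}/2}\) purely as an illustration:

```python
import numpy as np

# Positive semi-definiteness of (phi(t_k - t_l))_{k,l} for a characteristic
# function phi; here phi(t) = exp(-t^2 / 2) (standard Gaussian, illustrative).
rng = np.random.default_rng(1)
t = rng.normal(size=8)                      # arbitrary points t_1, ..., t_N
phi = lambda s: np.exp(-0.5 * s ** 2)

K = phi(t[:, None] - t[None, :])            # matrix of pairwise evaluations
print("smallest eigenvalue:", np.linalg.eigvalsh(K).min())  # >= 0 up to round-off
```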
Since \(\phi^{\star}\) is continuous and \(\phi^{\star}(0)=1\), according to Bochner's Theorem, there exists \(X^{\star}\) such that \(\phi^{\star}=\Phi_{X^{\star}}\). Applying the Identity Theorem component-wise shows that for every \(t\in\mathbb{R}^{D}\), \(\phi_{n}(t)\to\Phi_{X^{\star}}(t)\), so that \(X_{n}\) converges in distribution to \(X^{\star}\). Now, we have to show that \(\Phi_{X^{\star}}\in\mathcal{H}^{\star}(\kappa,S,M,(c_{\Delta},A^{(1)}_{\Delta},B^{(1)}_{\Delta},A^{(2)}_{\Delta},B^{(2)}_{\Delta})_{\Delta>0})\).
Convergence in distribution of random vectors implies convergence in distribution of each coordinate, thus all the coordinates of \(X^{\star,(2)}\) are null except the first one. Moreover, by Theorem 1 of [29], \(X^{\star}\) satisfies (v). Let us prove that \(X^{\star}\) satisfies (iii) using the Portmanteau Theorem. Since \(A^{(1)}_{\Delta}\) is closed,
\[\mathbb{P}[(X^{\star})^{(1)}\in A^{(1)}_{\Delta}]\geqslant\limsup\mathbb{P}[X ^{(1)}_{n}\in A^{(1)}_{\Delta}]\geqslant c_{\Delta},\]
and
\[\mathbb{P}[(X^{\star})^{(2)}\in B^{(2)}_{\Delta}|(X^{\star})^{(1)}\in A^{(1)}_{\Delta}] =\frac{\mathbb{P}[X^{\star}\in A^{(1)}_{\Delta}\times B^{(2)}_{\Delta}]}{\mathbb{P}[(X^{\star})^{(1)}\in A^{(1)}_{\Delta}]}\] \[\geqslant\frac{\limsup\mathbb{P}[X_{n}\in A^{(1)}_{\Delta}\times B^{(2)}_{\Delta}]}{\lim\mathbb{P}[(X_{n})^{(1)}\in A^{(1)}_{\Delta}]}\text{ since }\mathbb{P}[(X^{\star})^{(1)}\in\partial A^{(1)}_{\Delta}]=0\] \[\geqslant\limsup\frac{\mathbb{P}[X_{n}\in A^{(1)}_{\Delta}\times B^{(2)}_{\Delta}]}{\mathbb{P}[(X_{n})^{(1)}\in A^{(1)}_{\Delta}]}=1.\]
Let us prove that \(\Phi_{X^{\star}}\) satisfies (iv). Since \(A^{(2)}_{\Delta}\) is closed,
\[\mathbb{P}[(X^{\star})^{(2)}\in A^{(2)}_{\Delta}]\geqslant\limsup\mathbb{P}[X ^{(2)}_{n}\in A^{(2)}_{\Delta}]\geqslant c_{\Delta}.\]
Moreover,
\[\mathbb{P}[(X^{\star})^{(1)}\in B^{(1)}_{\Delta}|(X^{\star})^{(2)}\in A^{(2)}_{\Delta}] =\frac{\mathbb{P}[X^{\star}\in B^{(1)}_{\Delta}\times A^{(2)}_{\Delta}]}{\mathbb{P}[(X^{\star})^{(2)}\in A^{(2)}_{\Delta}]}\] \[\geqslant\frac{\limsup\mathbb{P}[X_{n}\in B^{(1)}_{\Delta}\times A^{(2)}_{\Delta}]}{\lim\mathbb{P}[(X_{n})^{(2)}\in A^{(2)}_{\Delta}]}\text{ since }\mathbb{P}[(X_{n})^{(2)}\in\partial A^{(2)}_{\Delta}]=0\] \[\geqslant\limsup\frac{\mathbb{P}[X_{n}\in B^{(1)}_{\Delta}\times A^{(2)}_{\Delta}]}{\mathbb{P}[(X_{n})^{(2)}\in A^{(2)}_{\Delta}]}=1.\]
Therefore, \(\Phi_{X^{\star}}\in\mathcal{H}^{\star}(\kappa,S,M,(c_{\Delta},A^{(1)}_{\Delta},B^{(1)}_{\Delta},A^{(2)}_{\Delta},B^{(2)}_{\Delta})_{\Delta>0})\).
### Proof of Lemma 9
Let us write \(m_{i,\gamma}(x)=(x+(-1)^{i}\gamma\frac{\alpha}{2}\cos(\frac{x}{\alpha\gamma}),0,\ldots,0)\), so that \(X_{i}(\kappa)=(\alpha U(\kappa),m_{i,\gamma}(\alpha U(\kappa)))\). For \(i\in\{0,1\}\), let \(w_{i,\kappa,\gamma}\) be the density of the first coordinate of \(m_{i,\gamma}(\alpha U(\kappa))\), then
\[M_{\kappa}=\sup_{x\in\mathbb{R},\gamma\in[0,1],i\in\{0,1\}}\{w_{i,\kappa, \gamma}(x)\vee\frac{1}{\alpha}f_{\kappa}(\frac{x}{\alpha})\}\]
is an upper bound of the density of \(X_{i}(\kappa)^{(1)}\) and of the first coordinate of \(X_{i}(\kappa)^{(2)}\) with respect to the Lebesgue measure. Let us show that \(M_{\kappa}\) is finite. First, note that \(m_{i,\gamma}\) is one-to-one from \(\mathbb{R}\) to \(\mathbb{R}\times\{0\}^{D-2}\) and \(m_{i,\gamma}^{-1}\) is Lipschitz with Lipschitz constant upper bounded by \(1/2\). One can easily check that for all \(x\in\mathbb{R}\),
\[w_{i,\kappa,\gamma}(x)=\frac{1}{\alpha}f_{\kappa}\left(\frac{(m_{i,\gamma})_{1 }(x,0,\ldots,0)}{\alpha}\right)\frac{1}{(m_{i,\gamma})_{1}^{\prime}(m_{i, \gamma}^{-1}(x,0,\ldots,0))},\]
where \((m_{i,\gamma})_{1}(x)\) is the first coordinate of \(m_{i,\gamma}(x)\). Since \((m_{i,\gamma})_{1}^{\prime}\) is lower bounded by \(1/2\), \(M_{\kappa}\leqslant\sup_{x\in\mathbb{R}}\frac{1}{\alpha}f_{\kappa}(\frac{x}{ \alpha})\), which is finite.
For any \(\Delta>0\), define the sets:
\[A^{(1)}_{\Delta}=[-\Delta,\Delta]\quad\text{ and }\quad B^{(2)}_{\Delta} =\bar{B}\left(0,(\frac{\alpha}{2}+2)\Delta\right)\cap(\mathbb{R} \times\{0\}^{D-2}),\] \[A^{(2)}_{\Delta}=[-\Delta,\Delta]\times\{0\}^{D-2}\quad\text{ and }\quad B^{(1)}_{\Delta} =\bar{B}(0,\Delta)\cap\mathbb{R}.\]
Define \(c_{\Delta,\kappa,\alpha}=\mathbb{P}[\alpha U(\kappa)\in A^{(1)}_{\Delta}] \wedge\inf_{\gamma\in[0,1],i\in\{0,1\}}\mathbb{P}[m_{i,\gamma}(\alpha U(\kappa ))\in A^{(2)}_{\Delta}]\), and let us prove that \(c_{\Delta,\kappa,\alpha}>0\).
First, \(\mathbb{P}[\alpha U(\kappa)\in A^{(1)}_{\Delta}]>0\) since the density of \(\alpha U(\kappa)\) is positive everywhere on its support. Then, for \(i\in\{0,1\}\),
\[\mathbb{P}[m_{i,\gamma}(\alpha U(\kappa))\in A^{(2)}_{\Delta}] =\mathbb{P}\left(\alpha U(\kappa)+(-1)^{i}\frac{\alpha}{2}\gamma \cos\left(\frac{U(\kappa)}{\gamma}\right)\in[-\Delta,\Delta]\right)\] \[\geqslant\mathbb{P}\left(\alpha U(\kappa)\in[-\Delta/2,\Delta/2],(-1)^{i}\frac{\alpha}{2}\gamma\cos\left(\frac{U(\kappa)}{\gamma}\right)\in[- \Delta/2,\Delta/2]\right)\] \[\geqslant\mathbb{P}\left(\alpha U(\kappa)\in[-\Delta/2,\Delta/2],\cos\left(\frac{U(\kappa)}{\gamma}\right)\in[-\Delta/\gamma\alpha,\Delta/ \gamma\alpha]\right)\] \[\geqslant\mathbb{P}\left(U(\kappa)\in\left[-\frac{\Delta}{2 \alpha},\frac{\Delta}{2\alpha}\right]\cap\left[\arccos\left(\frac{\Delta}{ \alpha}\right),\pi-\arccos(\frac{\Delta}{\alpha})\right]\right),\]
which is positive.
It is clear that the sets satisfy (i) and (ii). It remains to prove that \(X_{i}(\kappa)\) satisfies (iii) and (iv).
For any \(\Delta>0\) define \(B^{(1)}_{\Delta,i,\gamma}=m^{-1}_{i,\gamma}(A^{(2)}_{\Delta})\). Then
\[\operatorname{Diam}(B^{(1)}_{\Delta,i,\gamma}) =\sup_{x,y\in B^{(1)}_{\Delta,i,\gamma}}|x-y|\] \[=\sup_{x,y\in A^{(2)}_{\Delta}}|m^{-1}_{i,\gamma}(x)-m^{-1}_{i, \gamma}(y)|\] \[\leqslant\frac{1}{2}\sup_{x,y\in A^{(2)}_{\Delta}}\|x-y\|\leqslant\Delta.\]
Thus, \(B^{(1)}_{\Delta,i,\gamma}\subset B^{(1)}_{\Delta}\), and
\[\mathbb{P}[(X_{i}(\kappa))^{(1)}\in B^{(1)}_{\Delta,i}|(X_{i}(\kappa))^{(2)} \in A^{(2)}_{\Delta}]\geqslant\mathbb{P}[(X_{i}(\kappa))^{(1)}\in B^{(1)}_{ \Delta,i,\gamma}|(X_{i}(\kappa))^{(2)}\in A^{(2)}_{\Delta}]=1.\]
Similarly, define \(B^{(2)}_{\Delta,i,\gamma}=m_{i,\gamma}(A^{(1)}_{\Delta})=\{(x+(-1)^{i}\gamma \frac{\alpha}{2}\cos(\frac{x}{\alpha\gamma}),0,\ldots,0)\,\ x\in[-\Delta,\Delta]\}\), then
\[\operatorname{Diam}(B^{(2)}_{\Delta,i,\gamma}) =\sup_{x,y\in A^{(1)}_{\Delta}}\left|x+(-1)^{i}\gamma\frac{\alpha }{2}\cos\left(\frac{x}{\alpha\gamma}\right)-y-(-1)^{i}\gamma\frac{\alpha}{2} \cos\left(\frac{y}{\alpha\gamma}\right)\right|\] \[\leqslant 2\Delta+\gamma\frac{\alpha}{2}\left|\cos\left(\frac{x}{ \alpha\gamma}\right)-cos\left(\frac{y}{\alpha\gamma}\right)\right|\] \[\leqslant\left(\frac{\alpha}{2}+2\right)\Delta.\]
Thus, \(B^{(2)}_{\Delta,i,\gamma}\subset B^{(2)}_{\Delta,i}\), and
\[\mathbb{P}[(X_{i}(\kappa))^{(2)}\in B^{(2)}_{\Delta,i}|(X_{i}(\kappa))^{(1)} \in A^{(1)}_{\Delta}]\geqslant\mathbb{P}[(X_{i}(\kappa))^{(2)}\in B^{(2)}_{ \Delta,i,\gamma}|(X_{i}(\kappa))^{(1)}\in A^{(1)}_{\Delta}]=1.\]
### Proof of Theorem 5
In the following, \(A\), \(B\) and \(C\) denote positive constants (written with upper-case letters) that may change from line to line. As in [19] and [21], we use the upper bound:
\[\|(G_{0}(\kappa)*Q)^{\otimes n}-(G_{1}(\kappa)*Q)^{\otimes n}\|_{TV}\leqslant 1 -\left(1-\|(G_{0}(\kappa)*Q)-(G_{1}(\kappa)*Q)\|_{TV}\right)^{n},\]
where \(\|\cdot-\cdot\|_{TV}\) denotes the total variation distance. Using Le Cam's two-point method, the minimax rate will be lower bounded by \(H(A_{\alpha}M_{0}(\gamma),A_{\alpha}M_{1}(\gamma))\), that is, \(\gamma\) (see Lemma 7), provided that there exists a constant \(C>0\) such that \(\|(G_{0}(\kappa)*Q)^{\otimes n}-(G_{1}(\kappa)*Q)^{\otimes n}\|_{TV}\leqslant C<1\), so that we only need to find \(C>0\) such that
\[\int_{\mathbb{R}^{D}}|d(G_{0}(\kappa)*Q)(x)-d(G_{1}(\kappa)*Q)(x)|\leqslant \frac{C}{n}.\]
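As a sanity check of this reduction (using the convention that the total variation distance is at most half of the \(L^{1}\) distance between densities): if \(t_{n}:=\|(G_{0}(\kappa)*Q)-(G_{1}(\kappa)*Q)\|_{TV}\leqslant C/(2n)\), then, by Bernoulli's inequality,
\[\|(G_{0}(\kappa)*Q)^{\otimes n}-(G_{1}(\kappa)*Q)^{\otimes n}\|_{TV}\leqslant 1-(1-t_{n})^{n}\leqslant nt_{n}\leqslant\frac{C}{2},\]
which stays bounded away from \(1\) whenever \(C<2\).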
Since \(Q\) has a density \(q\) over \(\mathbb{R}^{D}\), \(G_{0}(\kappa)*Q\) and \(G_{1}(\kappa)*Q\) also have a density over \(\mathbb{R}^{D}\). We first prove that for \(i\in\{0,1\}\),
\[\int\prod_{j=1}^{D}x_{j}^{2}\left|\frac{d(G_{i}(\kappa)*Q)}{dx}(x)\right|^{2} dx<+\infty. \tag{32}\]
Indeed,
\[\int\prod_{j=1}^{D}x_{j}^{2}\left|\frac{d(G_{i}(\kappa)*Q)}{dx}(x)\right|^{2} dx\leqslant\left\|\frac{d(G_{i}(\kappa)*Q)}{dx}\right\|_{\infty}\int\prod_{j=1}^ {D}x_{j}^{2}d(G_{i}(\kappa)*Q)(x).\]
First, \(\left\|\frac{d(G_{i}(\kappa)*Q)}{dx}\right\|_{\infty}\leqslant\|q\|_{\infty} ^{D}<\infty\). Moreover, for \(k\in\{1,\ldots,D\}\), writing \(X_{i}(\kappa)^{[k]}\) and \(\varepsilon^{[k]}\) for the \(k\)-th coordinate of \(X_{i}(\kappa)\) and \(\varepsilon\),
\[\int\prod_{j=1}^{D}x_{j}^{2}\left|d(G_{i}(\kappa)*Q)(x)\right| =\mathbb{E}\left[\prod_{k=1}^{D}(X_{i}(\kappa)^{[k]}+\varepsilon^{[k]})^{2}\right]\] \[=\mathbb{E}[(X_{i}(\kappa)^{[1]}+\varepsilon^{[1]})^{2}(X_{i}(\kappa)^{[2]}+\varepsilon^{[2]})^{2}]\prod_{k=3}^{D}\mathbb{E}[(\varepsilon^{[k]})^{2}]. \tag{33}\]
We have that \((X_{i}(\kappa)^{[2]}+\varepsilon^{[2]})^{2}\leqslant a^{2}(X_{i}(\kappa)^{[1]})^{2}+2\gamma X_{i}(\kappa)^{[1]}+2X_{i}(\kappa)^{[1]}\varepsilon^{[2]}+(1+\gamma)(\varepsilon^{[2]})^{2}+\gamma^{2}\). Using (33), the fact that \(\varepsilon^{[2]}\) is independent of all the other variables, and the fact that, for \(k\in\{1,2\}\), \(X_{i}(\kappa)^{[1]}\) is independent of \(\varepsilon^{[k]}\), we finally get that \(\int\prod_{j=1}^{D}x_{j}^{2}\left|d(G_{i}(\kappa)*Q)(x)\right|\) is upper bounded by products and sums of expectations of \(((\varepsilon^{[j]})^{2})_{j\in\{1,\ldots,D\}}\), \((X_{i}(\kappa)^{[1]})^{2}\), \((X_{i}(\kappa)^{[1]})^{3}\) and \((X_{i}(\kappa)^{[1]})^{4}\), which are all finite thanks to Lemma 4.
By the Cauchy-Schwarz inequality,
\[\int_{\mathbb{R}^{D}}|d(G_{0}(\kappa)*Q)(x)-d(G_{1}(\kappa)*Q)(x)| \\ \leqslant\pi^{D/2}\left(\int\prod_{j=1}^{D}(1+x_{j}^{2})\left| \frac{d((G_{0}(\kappa)-G_{1}(\kappa))*Q)}{dx}(x)\right|^{2}dx\right)^{1/2}. \tag{34}\]
By Parseval's identity, for all \(\eta\in\{0,1\}^{D}\),
\[\int_{\mathbb{R}^{D}}\prod_{j=1}^{D}x_{j}^{2\eta_{j}}\left|\frac {d((G_{0}(\kappa)-G_{1}(\kappa))*Q)}{dx}(x)\right|^{2}dx =\int_{\mathbb{R}^{D}}\left|\left(\prod_{j=1}^{D}\partial_{t_{j}}^ {\eta_{j}}\right)(\mathcal{F}[G_{0}(\kappa)]-\mathcal{F}[G_{1}(\kappa)])(t) \mathcal{F}[Q](t)\right|^{2}dt\] \[=\int_{[-c,c]^{D}}\left|\left(\prod_{j=1}^{D}\partial_{t_{j}}^{ \eta_{j}}\right)(\mathcal{F}[G_{0}(\kappa)]-\mathcal{F}[G_{1}(\kappa)])(t) \mathcal{F}[Q](t)\right|^{2}dt,\]
since \(\mathcal{F}[Q]\) and for \(\eta\in\{0,1\}^{D}\), \(\partial^{\eta}\mathcal{F}[Q]\) are supported on \([-c,c]^{D}\). Moreover, they are bounded
functions, so that there exists a constant \(C\) (depending only on \(d\)) such that
\[\int_{\mathbb{R}^{D}}|d(G_{0}(\kappa)*Q)(x)-d(G_{1}(\kappa)*Q)(x)| \leqslant C\sum_{\eta\in\{0,1\}^{D}}\int_{[-c,c]^{D}}\left|\left(\prod_{j=1}^{D}\partial_{t_{j}}^{\eta_{j}}\right)(\mathcal{F}[G_{0}(\kappa)]-\mathcal{F}[G_{1}(\kappa)])(t)\right|^{2}dt\] \[=\sum_{\eta\in\{0,1\}^{D}}\int_{[-c,c]^{D}}\left|\left(\prod_{j=1}^{D}\partial_{t_{j}}^{\eta_{j}}\right)(t\mapsto\mathcal{F}[S_{0}]-\mathcal{F}[S_{1}])(A_{\alpha}^{\top}t)\right|^{2}dt.\]
Using the change of variable \(u=A_{\alpha}^{\top}t\), and noticing that \(\{A_{\alpha}^{\top}t\;;\;t\in[-c,c]^{D}\}\subset[-(1+\alpha)c,(1+\alpha)c]^{D}\), there exists a constant \(C>0\) depending on \(d\) and \(a\) such that
\[\int_{\mathbb{R}^{D}}|d(G_{0}(\kappa)*Q)(x)-d(G_{1}(\kappa)*Q)(x)|\leqslant C \sum_{\eta\in\{0,1\}^{D}}\int_{[-(1+\alpha)c,(1+\alpha)c]^{D}}\left|\left( \prod_{j=1}^{D}\partial_{t_{j}}^{\eta_{j}}\right)(\mathcal{F}[S_{0}]- \mathcal{F}[S_{1}])(u)\right|^{2}du.\]
For all \(t=(t_{1},\ldots,t_{D})\in\mathbb{R}^{D}\), for \(i\in\{0,1\}\), \(\mathcal{F}[T_{i}](t)=\mathcal{F}[\tilde{T}_{i}](t_{1},t_{2})\), where \(\tilde{T}_{i}\) is the distribution of the first two coordinates of \(S_{i}(\kappa)\) under \(T_{i}\). There exists a constant \(C>0\) such that
\[\int_{\mathbb{R}^{D}}|d(G_{0}(\kappa)*Q)(x)-d(G_{1}(\kappa)*Q)(x)| \\ \leqslant C\sum_{\eta\in\{0,1\}^{2}}\int_{[-(1+\alpha)c,(1+ \alpha)c]^{2}}\left|\left(\prod_{j=1}^{2}\partial_{t_{j}}^{\eta_{j}}\right)( \mathcal{F}[\tilde{T}_{0}]-\mathcal{F}[\tilde{T}_{1}])(t)\right|^{2}dt. \tag{35}\]
Following the same approach as [21], we get that for all \(t=(t_{1},t_{2})\in\mathbb{R}^{2}\),
\[(\mathcal{F}[\tilde{T}_{0}]-\mathcal{F}[\tilde{T}_{1}])(t) =\int_{\mathbb{R}}\{e^{it_{1}u+i\gamma t_{2}\tilde{g}_{\gamma}(u) }-e^{it_{1}u-i\gamma t_{2}\tilde{g}_{\gamma}(u)}\}f_{\kappa}(u)du\] \[=2i\int_{\mathbb{R}}e^{it_{1}u}\sin(t_{2}\gamma\tilde{g}_{\gamma }(u))f_{\kappa}(u)du\] \[=2i\int_{\mathbb{R}}e^{it_{1}u}\sum_{k=0}^{\infty}\frac{(-1)^{k}t _{2}^{2k+1}\gamma^{2k+1}}{(2k+1)!}\tilde{g}_{\gamma}^{2k+1}(u)f_{\kappa}(u)du.\]
Since \(\sum_{k=0}^{\infty}\int_{\mathbb{R}}\frac{|t_{2}|^{2k+1}\gamma^{2k+1}}{(2k+1)!}|\tilde{g}_{\gamma}^{2k+1}(u)|f_{\kappa}(u)du\) is finite, we can exchange the sum and the integral by Fubini's theorem, so that
\[(\mathcal{F}[\tilde{T}_{0}]-\mathcal{F}[\tilde{T}_{1}])(t) =2i\sum_{k=0}^{\infty}\frac{(-1)^{k}t_{2}^{2k+1}\gamma^{2k+1}}{(2 k+1)!}\int_{\mathbb{R}}e^{it_{1}u}\tilde{g}_{\gamma}^{2k+1}(u)f_{\kappa}(u)du\] \[=2i\sum_{k=0}^{\infty}\frac{(-1)^{k}t_{2}^{2k+1}\gamma^{2k+1}}{(2 k+1)!}m_{k}(t_{1}),\]
with for all \(u\in\mathbb{R}\),
\[m_{k}(u)=\mathcal{F}[\tilde{g}^{2k+1}f_{\kappa}](u)=(\underbrace{\mathcal{F}[ \tilde{g}]*\mathcal{F}[\tilde{g}]*\ldots*\mathcal{F}[\tilde{g}]}_{2k+1\text{ times}}*\mathcal{F}[f_{\kappa}])(u). \tag{36}\]
Since
\[\mathcal{F}[x\mapsto\cos(\tfrac{x}{\gamma})]=\frac{1}{2}\delta_{-\frac{1}{\gamma}}+\frac{1}{2}\delta_{\frac{1}{\gamma}},\]
for all \(u\in\mathbb{R}\),
\[(\underbrace{\mathcal{F}[\tilde{g}]*\mathcal{F}[\tilde{g}]*\ldots*\mathcal{F}[\tilde{g}]}_{2k+1\text{ times}})(u) =(\underbrace{\mathcal{F}[\cos(\tfrac{\cdot}{\gamma})]*\ldots*\mathcal{F}[\cos(\tfrac{\cdot}{\gamma})]}_{2k+1\text{ times}})(u)\] \[=\left(\frac{1}{2}\right)^{2k+1}\sum_{j=0}^{2k+1}\binom{2k+1}{j}\delta_{a_{j}},\]
where \(a_{j}=(2j-2k-1)/\gamma\). By (36),
\[m_{k}(u)=\left(\frac{1}{2}\right)^{2k+1}\sum_{j=0}^{2k+1}\binom{2k+1}{j}\mathcal{ F}[f_{\kappa}](u-a_{j}).\]
Therefore,
\[\sup_{|t|\leqslant c}|m_{k}(t)|\leqslant\sup_{|t|\leqslant c,0\leqslant j \leqslant 2k+1}\left|\mathcal{F}[f_{\kappa}]\left(t-\frac{2j-2k-1}{\gamma} \right)\right|\]
and
\[\sup_{|t|\leqslant c}|m^{\prime}_{k}(t)|\leqslant\sup_{|t|\leqslant c,0 \leqslant j\leqslant 2k+1}\left|\mathcal{F}[f_{\kappa}]^{\prime}\left(t- \frac{2j-2k-1}{\gamma}\right)\right|.\]
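In both cases below, the key point is that the shifts \(a_{j}\) push the argument of \(\mathcal{F}[f_{\kappa}]\) away from the origin: since \(|2j-2k-1|\geqslant 1\) for \(0\leqslant j\leqslant 2k+1\), for all \(|t|\leqslant c\) and \(\gamma\leqslant\frac{1}{2c}\),
\[\Big|t-\frac{2j-2k-1}{\gamma}\Big|\geqslant\frac{1}{\gamma}-c\geqslant\frac{1}{2\gamma},\]
so each term defining \(m_{k}\) is evaluated where \(\mathcal{F}[f_{\kappa}]\) and \(\mathcal{F}[f_{\kappa}]^{\prime}\) enjoy the decay of Lemma 4.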
Assume first that \(\kappa\in(1/2,1)\). For \(\gamma\) that satisfies \(\gamma\leqslant\frac{1}{2c}\), by Lemma 4, there exist two constants \(A\), \(B\) independent of \(\gamma\) and \(k\) such that
\[\sup_{|t|\leqslant c,0\leqslant j\leqslant 2k+1}\left|\mathcal{F}[f_{\kappa}] \left(t-\frac{2j-2k-1}{\gamma}\right)\right|\leqslant A\exp(-B\gamma^{-\frac {1}{\kappa}})\]
and
\[\sup_{|t|\leqslant c,0\leqslant j\leqslant 2k+1}\left|\mathcal{F}[f_{\kappa}]^{ \prime}\left(t-\frac{2j-2k-1}{\gamma}\right)\right|\leqslant A\exp(-B\gamma^{- \frac{1}{\kappa}}).\]
Thus,
\[\sup_{|t|\leqslant c}|m_{k}(t)|\leqslant A\exp(-B\gamma^{-\frac{1}{\kappa}}), \tag{37}\]
and
\[\sup_{|t|\leqslant c}|m^{\prime}_{k}(t)|\leqslant A\exp(-B\gamma^{-\frac{1}{ \kappa}}). \tag{38}\]
For all \(\eta\in\{0,1\}^{2}\), and \(t\in[-c,c]\),
\[\left(\prod_{j=1}^{2}\partial_{t_{j}}^{\eta_{j}}\right)(\mathcal{F}[\tilde{T}_{0}]-\mathcal{F}[\tilde{T}_{1}])(t)=\prod_{j=1}^{2}\partial_{t_{j}}^{\eta_{j}}\left[2i\sum_{k=0}^{\infty}\frac{(-1)^{k}t_{2}^{2k+1}\gamma^{2k+1}}{(2k+1)!}m_{k}(t_{1})\right]\\ =2i\eta_{2}\sum_{k=0}^{\infty}\frac{(-1)^{k}t_{2}^{2k}\gamma^{2k+1}}{(2k)!}\partial_{t_{1}}^{\eta_{1}}m_{k}(t_{1})+2i(1-\eta_{2})\sum_{k=0}^{\infty}\frac{(-1)^{k}t_{2}^{2k+1}\gamma^{2k+1}}{(2k+1)!}\partial_{t_{1}}^{\eta_{1}}m_{k}(t_{1}),\]
so that
\[\left|\left(\prod_{j=1}^{2}\partial_{t_{j}}^{\eta_{j}}\right)(\mathcal{F}[\tilde{T}_{0}]-\mathcal{F}[\tilde{T}_{1}])(t)\right|\leqslant 2\sum_{k=0}^{\infty}\frac{|t_{2}|^{2k}\gamma^{2k+1}}{(2k)!}|\partial_{t_{1}}^{\eta_{1}}m_{k}(t_{1})|+2\sum_{k=0}^{\infty}\frac{|t_{2}|^{2k+1}\gamma^{2k+1}}{(2k+1)!}|\partial_{t_{1}}^{\eta_{1}}m_{k}(t_{1})|.\]
By (37) and (38), there exists a constant \(C>0\) which depends only on \(d\) and \(A\) such that
\[\left(\prod_{j=1}^{2}\partial_{t_{j}}^{\eta_{j}}\right)(\mathcal{F}[\tilde{S}_ {0}]-\mathcal{F}[\tilde{S}_{1}])(t)\leqslant C\exp(-B\gamma^{-\frac{1}{\kappa} })\sup_{|t_{2}|\leqslant c}\left(\gamma\cosh(|t_{2}|\gamma)+\sinh(|t_{2}| \gamma)\right).\]
For \(\gamma\) small enough, there exists a constant \(C>0\) which depends only on \(d\) and \(A\) such that
\[\left|\left(\prod_{j=1}^{2}\partial_{t_{j}}^{\eta_{j}}\right)(\mathcal{F}[ \tilde{T}_{0}]-\mathcal{F}[\tilde{T}_{1}])(t)\right|\leqslant C\exp(-B\gamma^{ -\frac{1}{\kappa}}).\]
Finally, using (35), there exist constants \(C>0\) and \(B>0\) which depend only on \(d\) such that
\[\int_{\mathbb{R}^{D}}|d(G_{0}(\kappa)*Q)(x)-d(G_{1}(\kappa)*Q)(x)|\leqslant C\exp(-B\gamma^{-\frac{1}{\kappa}}).\]
Taking \(\gamma=c_{\gamma}(\log n)^{-\kappa}\) with \(c_{\gamma}\leqslant B^{\kappa}\) shows that there exists \(C>0\) such that
\[\int_{\mathbb{R}^{D}}|d(G_{0}(\kappa)*Q)(x)-d(G_{1}(\kappa)*Q)(x)|\leqslant \frac{C}{n}\]
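Indeed, with \(\gamma=c_{\gamma}(\log n)^{-\kappa}\) and \(c_{\gamma}\leqslant B^{\kappa}\),
\[B\gamma^{-\frac{1}{\kappa}}=Bc_{\gamma}^{-\frac{1}{\kappa}}\log n\geqslant\log n,\qquad\text{hence}\qquad\exp(-B\gamma^{-\frac{1}{\kappa}})\leqslant\frac{1}{n}.\]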
Let us now consider the case \(\kappa=1\). For \(\gamma\) that satisfies \(\gamma\leqslant\frac{1}{2c}\), by Lemma 4, for all \(\delta\in(0,1)\), there exist two constants \(A>0\), \(B>0\) independent of \(\gamma\) and \(k\) such that
\[\sup_{|t|\leqslant c,0\leqslant j\leqslant 2k+1}\left|\mathcal{F}[f_{1}] \left(t-\frac{2j-2k-1}{\gamma}\right)\right|\leqslant A\exp(-B\gamma^{-\delta}),\]
and
\[\sup_{|t|\leqslant c,0\leqslant j\leqslant 2k+1}\left|\mathcal{F}[f_{1}]^{ \prime}\left(t-\frac{2j-2k-1}{\gamma}\right)\right|\leqslant A\exp(-B\gamma^{ -\delta}).\]
Thus, there exist constants \(A>0\) and \(B>0\) independent of \(\gamma\) and \(k\) such that
\[\sup_{|t|\leqslant c}|m_{k}(t)|\leqslant A\exp(-B\gamma^{-\delta}), \tag{39}\]
and
\[\sup_{|t|\leqslant c}|m_{k}^{\prime}(t)|\leqslant A\exp(-B\gamma^{-\delta}). \tag{40}\]
Doing the same computation as in the case \(\kappa\in(1/2,1)\) shows that for all \(\eta\in\{0,1\}^{2}\) and \(t\in[-c,c]\),
\[\left|\left(\prod_{j=1}^{2}\partial_{t_{j}}^{\eta_{j}}\right)( \mathcal{F}[\tilde{T}_{0}]-\mathcal{F}[\tilde{T}_{1}])(t)\right|\] \[\leqslant 2\sum_{k=0}^{\infty}\frac{|t_{2}|^{2k}\gamma^{2k+1}}{(2k)! }|\partial_{t_{1}}^{\eta_{1}}m_{k}(t_{1})|+2\sum_{k=0}^{\infty}\frac{|t_{2}|^ {2k+1}\gamma^{2k+1}}{(2k+1)!}|\partial_{t_{1}}^{\eta_{1}}m_{k}(t_{1})|.\]
By (39) and (40), there exist constants \(C>0\) and \(B>0\) which depend only on \(d\) such that
\[\left|\left(\prod_{j=1}^{2}\partial_{t_{j}}^{\eta_{j}}\right)(\mathcal{F}[\tilde{T}_{0}]-\mathcal{F}[\tilde{T}_{1}])(t)\right|\leqslant C\exp(-B\gamma^{-\delta})\sup_{|t_{2}|\leqslant c}\left(\gamma\cosh(|t_{2}|\gamma)+\sinh(|t_{2}|\gamma)\right).\]
For \(\gamma\) small enough, there exists a constant \(C>0\) which depends only on \(d\) such that
\[\left|\left(\prod_{j=1}^{2}\partial_{t_{j}}^{\eta_{j}}\right)(\mathcal{F}[ \tilde{T}_{0}]-\mathcal{F}[\tilde{T}_{1}])(t)\right|\leqslant C\exp(-B\gamma^ {-\delta}).\]
Finally, using (35), there exists a constant \(C>0\) which depends only on \(d\) such that
\[\int_{\mathbb{R}^{D}}|d(G_{0}(\kappa)*Q)(x)-d(G_{1}(\kappa)*Q)(x)|\leqslant C \exp(-B\gamma^{-\delta}).\]
Taking \(\gamma=c_{\gamma}(\log n)^{-\frac{1}{\delta}}\) with \(c_{\gamma}\leqslant B^{\frac{1}{\delta+1}}\) shows that there exists \(C>0\) such that
\[\int_{\mathbb{R}^{D}}|d(G_{0}(\kappa)*Q)(x)-d(G_{1}(\kappa)*Q)(x)|\leqslant \frac{C}{n}.\]
### Proof of Theorem 6
Fix \(\kappa_{0}\in(1/2,1]\), \(S>0\), \(a>0\), \(d\leq D\), \(\nu\in(0,\nu_{\text{est}}]\), \(c(\nu)>0\) and \(E>0\). Using the end of the proof of Theorem 4, there exist \(n_{0}\) and \(c^{\prime}\) such that for all \(\kappa\in[\kappa_{0},1]\), all \(G\in St_{\mathcal{K}}(a,d)\cap\mathcal{L}(\kappa,S,\mathcal{H})\) and all \(\mathbb{Q}\in\mathcal{Q}^{(d)}(\nu,c(\nu),E)\), with \((G*\mathbb{Q})^{\otimes n}\)-probability at least \(1-\frac{2}{n}\), (28) holds. Let us now choose \(c_{\sigma}=c^{\prime}\) and consider the event where (28) holds. By the triangle inequality, for any \(\kappa\in[\kappa_{0},1]\),
\[H_{\mathcal{K}}(\mathcal{M}_{G},\widehat{\mathcal{M}}_{\widehat {\kappa}_{n}}) \leqslant H_{\mathcal{K}}(\mathcal{M}_{G},\widehat{\mathcal{M}}_{ \kappa})+H_{\mathcal{K}}(\widehat{\mathcal{M}}_{\kappa},\widehat{\mathcal{M}}_ {\widehat{\kappa}_{n}})\] \[\leqslant\sigma_{n}(\kappa)+H_{\mathcal{K}}(\widehat{\mathcal{M} }_{\kappa},\widehat{\mathcal{M}}_{\widehat{\kappa}_{n}}).\]
Now, using the definition of \(B_{n}(\cdot)\), if \(\kappa\leqslant\widehat{\kappa}_{n}\), then
\[H_{\mathcal{K}}(\widehat{\mathcal{M}}_{\kappa},\widehat{\mathcal{M}}_{ \widehat{\kappa}_{n}})\leqslant B_{n}(\widehat{\kappa}_{n})+\sigma_{n}(\kappa)\]
while if \(\kappa\geqslant\widehat{\kappa}_{n}\), then
\[H_{\mathcal{K}}(\widehat{\mathcal{M}}_{\kappa},\widehat{\mathcal{M}}_{ \widehat{\kappa}_{n}})\leqslant B_{n}(\kappa)+\sigma_{n}(\widehat{\kappa}_{n})\]
so that in all cases,
\[H_{\mathcal{K}}(\widehat{\mathcal{M}}_{\kappa},\widehat{\mathcal{M}}_{ \widehat{\kappa}_{n}}) \leqslant B_{n}(\widehat{\kappa}_{n})+\sigma_{n}(\kappa)+B_{n}( \kappa)+\sigma_{n}(\widehat{\kappa}_{n})\] \[\leqslant 2B_{n}(\kappa)+2\sigma_{n}(\kappa)\]
using the definition of \(\widehat{\kappa}_{n}\), and therefore
\[H_{\mathcal{K}}(\mathcal{M}_{G},\widehat{\mathcal{M}}_{\widehat{\kappa}_{n}}) \leqslant 2B_{n}(\kappa)+3\sigma_{n}(\kappa).\]
By the triangle inequality and the definition of \(B_{n}(\cdot)\),
\[B_{n}(\kappa) \leqslant 0\vee\sup_{\kappa^{\prime}\in[\kappa_{0},\kappa]}\left\{H _{\mathcal{K}}(\widehat{\mathcal{M}}_{\kappa},\mathcal{M}_{G})+H_{\mathcal{K}} (\mathcal{M}_{G},\widehat{\mathcal{M}}_{\kappa^{\prime}})-\sigma_{n}(\kappa^{ \prime})\right\}\] \[\leqslant H_{\mathcal{K}}(\widehat{\mathcal{M}}_{\kappa},\mathcal{ M}_{G})+0\vee\sup_{\kappa^{\prime}\in[\kappa_{0},\kappa]}\left\{H_{\mathcal{K}}( \mathcal{M}_{G},\widehat{\mathcal{M}}_{\kappa^{\prime}})-\sigma_{n}(\kappa^{ \prime})\right\}\] \[\leqslant\sigma_{n}(\kappa).\]
Thus, for all \(\kappa\in[\kappa_{0},1]\), all \(G\in St_{\mathcal{K}}(a,d)\cap\mathcal{L}(\kappa,S,\mathcal{H})\) and all \(\mathbb{Q}\in\mathcal{Q}^{(d)}(\nu,c(\nu),E)\), with \((G*\mathbb{Q})^{\otimes n}\)-probability at least \(1-\frac{2}{n}\),
\[H_{\mathcal{K}}(\mathcal{M}_{G},\widehat{\mathcal{M}}_{\widehat{\kappa}_{n}}) \leqslant 5\sigma_{n}(\kappa),\]
and using the fact that \(H_{\mathcal{K}}(\mathcal{M}_{G},\widehat{\mathcal{M}}_{\widehat{\kappa}})\leqslant\sup_{x,x^{\prime}\in\mathcal{K}}d(x,x^{\prime})\) on the event of probability at most \(2/n\) where this does not hold, Theorem 6 follows.
### Proof of Theorem 7
We shall need two technical lemmas. The following one is easily proved following the arguments at the end of the proof of Theorem 4.
**Lemma 12**.: _Let \(G\) be a probability measure with compact support \(\mathcal{M}_{G}\). Assume \(G\in St_{\mathcal{M}_{G}}(a,d,r_{0})\) for some constants \(a>0\), \(d>0\) and \(r_{0}>0\). Recall that \(\Gamma_{n}:=\Gamma_{n,1}=\|\hat{g}_{n}-\bar{g}\|_{\infty}\). Then_
1. _For any_ \(C_{1}>0\) _and_ \(c>0\)_, there exists_ \(h_{0}>0\) _such that if_ \(h_{n}\leqslant h_{0}\)_, on the event where_ \[C_{1}+\Gamma_{n}<\lambda_{n}<\ ac_{A}^{d}d_{A}(\frac{1}{h_{n}})^{D-d}-\Gamma_{n},\] _it holds_ \[\mathcal{M}_{G}\subset\widehat{\mathcal{M}}\subset(\mathcal{M}_{G})_{c}.\]
2. _For_ \(m_{n}\)_,_ \(h_{n}\)_,_ \(\lambda_{n}\) _chosen as in Theorem_ 4_, for all_ \(C_{1}\in(0,ac_{A}^{d}d_{A})\) _and_ \(\delta^{\prime}>0\)_, there exists_ \(C>0\) _and_ \(n_{0}\geqslant 0\) _such that for all_ \(n\geqslant n_{0}\)_, with probability at least_ \(1-2\exp(-n^{1/2-\delta^{\prime}})\)_,_ \[\Gamma_{n}^{2}\leqslant Ce^{-m_{n}}\qquad\text{and}\qquad C_{1}+\Gamma_{n}< \lambda_{n}<\ ac_{A}^{d}d_{A}(\frac{1}{h_{n}})^{D-d}-\Gamma_{n}.\] _and in particular, for all_ \(c>0\) _and_ \(\delta^{\prime}>0\)_, there exists_ \(n_{0}^{\prime}\geqslant 0\) _such that for all_ \(n\geqslant n_{0}^{\prime}\)_, with probability at least_ \(1-2\exp(-n^{1/2-\delta^{\prime}})\)_,_ \[\mathcal{M}_{G}\subset\widehat{\mathcal{M}}\subset(\mathcal{M}_{G})_{c}.\] _In particular, since_ \(R_{n}\longrightarrow+\infty\) _and_ \(\mathcal{M}_{G}\) _is compact, up to increasing_ \(n_{0}\)_, on this event,_
\[\mathcal{M}_{G}\subset\widehat{\mathcal{M}}\cap\bar{B}(0,R_{n})\subset( \mathcal{M}_{G})_{c}.\]
In the rest of the proof of the Theorem, we shorten the notation \(\widehat{\mathcal{M}}\cap\bar{B}(0,R_{n})\) to \(\widehat{\mathcal{M}}\) (equivalently, we redefine the estimator \(\widehat{\mathcal{M}}\) as the intersection of the estimator of Section 3.2 with the closed Euclidean ball of radius \(R_{n}\)).
**Lemma 13**.: _Let \(G\) be a probability measure with compact support \(\mathcal{M}_{G}\). Assume \(G\in St_{\mathcal{M}_{G}}(a,d,r_{0})\) for some constants \(a>0\), \(d>0\) and \(r_{0}>0\). Then for any \(\alpha>0\) and \(c>0\), there exists \(C(\alpha,c)>0\) such that, on the event where_
\[\mathcal{M}_{G}\subset\widehat{\mathcal{M}}\subset(\mathcal{M}_{G})_{c},\]
_it holds_
\[\|\bar{g}\|_{L_{1}(\mathbb{R}^{D}\setminus(\widehat{\mathcal{M}})_{c})} \leqslant C(\alpha,c)h_{n}^{\alpha}\quad\text{and}\quad\int_{\mathbb{R}^{D} \setminus(\widehat{\mathcal{M}})_{c}}\|x\|^{2}|\bar{g}(x)|dx\leqslant C( \alpha,c)h_{n}^{\alpha}.\]
Proof.: By definition,
\[\|\bar{g}\|_{L_{1}(\mathbb{R}^{D}\setminus(\widehat{\mathcal{M}})_{c})}= \frac{1}{h_{n}^{D}}\int_{x\in\mathcal{M}_{G}}\int_{y\in\mathbb{R}^{D} \setminus(\widehat{\mathcal{M}})_{c}}\psi_{A}\left(\frac{\|y-x\|_{2}}{h_{n}} \right)dydG(x).\]
By (4), for any \(A>0\), there exists \(C>0\) such that for any \(x\in\mathcal{M}_{G}\) and \(y\in\mathbb{R}^{D}\setminus(\widehat{\mathcal{M}})_{c}\),
\[\psi_{A}\left(\frac{\|y-x\|_{2}}{h_{n}}\right)\leqslant C\exp\left(-\beta_{A} \frac{\|y-x\|_{2}^{A/(A+1)}}{h_{n}^{A/(A+1)}}\right)\leqslant C\exp\left(- \beta_{A}\frac{d(y,\mathcal{M}_{G})^{A/(A+1)}}{h_{n}^{A/(A+1)}}\right).\]
Since \(\mathcal{M}_{G}\subset\widehat{\mathcal{M}}\), for all \(y\in\mathbb{R}^{D}\setminus(\widehat{\mathcal{M}})_{c}\), \(d(y,\mathcal{M}_{G})\geqslant c\), so for any \(\alpha>0\), there exists a constant \(\tilde{C}>0\) such that
\[C\exp\left(-\beta_{A}\frac{d(y,\mathcal{M}_{G})^{A/(A+1)}}{h_{n}^{A/(A+1)}} \right)\leqslant\tilde{C}\frac{h_{n}^{D+\alpha}}{d(y,\mathcal{M}_{G})^{D+ \alpha}}.\]
Moreover, since \(\mathcal{M}_{G}\) is compact, \(\mathrm{Diam}(\mathcal{M}_{G})\) is finite, so that on the event where \(\mathcal{M}_{G}\subset\widehat{\mathcal{M}}\),
\[\int_{\mathbb{R}^{D}\setminus(\widehat{\mathcal{M}})_{c}}\frac{1}{d(y,\mathcal{M}_{G})^{D+\alpha}}dy \leqslant\int_{\mathbb{R}^{D}\setminus(\mathcal{M}_{G})_{c}}\frac{1}{d(y,\mathcal{M}_{G})^{D+\alpha}}dy\] \[\leqslant\int_{\mathbb{R}^{D}}\left(\frac{1}{c\vee(\|y\|-\mathrm{Diam}(\mathcal{M}_{G})/2)}\right)^{D+\alpha}dy<\infty.\]
Therefore, for all \(c>0\) and \(\alpha>0\), there exists \(C\) depending on \(A\), \(D\), \(c\), \(\alpha\) and \(\mathrm{Diam}(\mathcal{M}_{G})\) such that
\[\|\bar{g}\|_{L_{1}(\mathbb{R}^{D}\setminus(\widehat{\mathcal{M}})_{c})} \leqslant Ch_{n}^{\alpha}.\]
The proof that the same holds for \(\int_{\mathbb{R}^{D}\setminus(\widehat{\mathcal{M}})_{c}}\|x\|^{2}\bar{g}(x)dx\) is similar.
Let \(G\in St_{\mathcal{M}_{G}}(a,d,r_{0})\) be such that if \(X\sim G\), then \(\Phi_{X}\in\mathcal{H}\cap\Upsilon_{1,S}\). We use a bias-variance decomposition of \(W_{2}(G,\widehat{P}_{n,\eta})\) as
\[W_{2}(G,\widehat{P}_{n,\eta})\leqslant W_{2}(G,P_{\psi_{A,h}})+W_{2}(P_{\psi_{ A,h}},\widehat{P}_{n,\eta}).\]
The proof is done in several steps:
* We first show that there exists \(C>0\) depending only on \(A\) and \(D\) such that the bias satisfies \[W_{2}(G,P_{\psi_{A,h_{n}}})\leqslant Ch_{n}.\]
* We prove that for any \(\alpha\geqslant 1\), on the event where \[\mathcal{M}_{G}\subset\widehat{\mathcal{M}}\subset(\mathcal{M}_{G})_{c},\] there exists \(C^{\prime}>0\) such that \[W_{2}(P_{\psi_{A,h_{n}}},\widehat{P}_{n,\eta})\leqslant C^{\prime}(h_{n}^{ \alpha}+\Gamma_{n}).\]
* We show that the choice of the parameters \(m_{n}\), \(h_{n}\) and \(\lambda_{n}\) gives the result.
Proof of (1). Let \(Y_{\psi}\) be a random variable with density \(\psi_{A,h_{n}}\) and independent of \(X\), so that the distribution of \(X+Y_{\psi}\) is \(P_{\psi_{A,h_{n}}}\). Then, by definition of \(W_{2}\),
\[W_{2}^{2}(G,P_{\psi_{A,h_{n}}})\leqslant\mathbb{E}(\|X+Y_{\psi}-X\|_{2}^{2})= \mathbb{E}(\|Y_{\psi}\|_{2}^{2})=h_{n}^{2}\int_{\mathbb{R}^{D}}\|u\|^{2}\psi_{ A,1}(u)du.\]
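Spelling out the conclusion of this step: by (4), \(\psi_{A,1}\) decays faster than any polynomial, so the last integral is a finite constant depending only on \(A\) and \(D\), and therefore

\[W_{2}(G,P_{\psi_{A,h_{n}}})\leqslant Ch_{n}\qquad\text{with}\qquad C=\Big(\int_{\mathbb{R}^{D}}\|u\|^{2}\psi_{A,1}(u)du\Big)^{1/2},\]

which is the bound announced in (1).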
Proof of (2). If \(\nu\) and \(\mu\) are probability measures on \(\mathbb{R}^{D}\) having respective densities \(f\) and \(g\) with respect to the Lebesgue measure, Lemma 1 in [8] ensures that
\[W_{2}^{2}(\nu,\mu)\leqslant 2\min_{a\in\mathbb{R}^{D}}\int_{\mathbb{R}^{D}}\|x- a\|^{2}|f(x)-g(x)|dx. \tag{41}\]
This entails
\[W_{2}^{2}(P_{\psi_{A,h_{n}}},\widehat{P}_{n,\eta})\leqslant 2\min_{a\in\mathbb{R}^{D}}\int_{\mathbb{R}^{D}}\|x-a\|^{2}|\bar{g}(x)-c_{n}\widehat{g}_{n}^{+}(x)\mathbb{1}_{(\widehat{\mathcal{M}})_{\eta}}(x)|dx\leqslant 2\int_{(\widehat{\mathcal{M}})_{\eta}}\|x\|^{2}|\bar{g}(x)-c_{n}\widehat{g}_{n}^{+}(x)|dx+2\int_{\mathbb{R}^{D}\setminus(\widehat{\mathcal{M}})_{\eta}}\|x\|^{2}\bar{g}(x)dx. \tag{42}\]
For \(S\) compact subset of \(\mathbb{R}^{D}\), write \(M_{S}=\sup_{x\in S}\|x\|^{2}\) and \(\mathrm{Vol}(S)\) for the Lebesgue measure of \(S\). Then
\[\int_{(\widehat{\mathcal{M}})_{\eta}}\|x\|^{2}|\bar{g}(x)-c_{n} \widehat{g}_{n}^{+}(x)|dx \leqslant M_{(\widehat{\mathcal{M}})_{\eta}}\int_{(\widehat{ \mathcal{M}})_{\eta}}|\widehat{g}_{n}^{+}(x)-\bar{g}(x)|dx+2M_{(\widehat{ \mathcal{M}})_{\eta}}\frac{|c_{n}-1|}{c_{n}}\] \[\leqslant M_{(\widehat{\mathcal{M}})_{\eta}}\mathrm{Vol}((\widehat{ \mathcal{M}})_{\eta})\Gamma_{n}+2M_{(\widehat{\mathcal{M}})_{\eta}}\frac{|c_{n} -1|}{c_{n}}.\]
We also have
\[\frac{|c_{n}-1|}{c_{n}}=\left|\frac{1}{c_{n}}-1\right| =\left|\int_{(\widehat{\mathcal{M}})_{\eta}}(\widehat{g}_{n}^{+}( y)-\bar{g}(y))dy-\int_{\mathbb{R}^{D}\setminus(\widehat{\mathcal{M}})_{\eta}} \bar{g}(y)dy\right|\] \[\leqslant\|\widehat{g}_{n}-\bar{g}\|_{L_{1}((\widehat{\mathcal{M} })_{\eta})}+\|\bar{g}\|_{L_{1}(\mathbb{R}^{D}\setminus(\widehat{\mathcal{M}})_{ \eta})}.\]
Using Hölder's inequality,
\[\|\widehat{g}_{n}-\bar{g}\|_{L_{1}((\widehat{\mathcal{M}})_{\eta})}\leqslant \mathrm{Vol}((\widehat{\mathcal{M}})_{\eta})\Gamma_{n}.\]
By Lemma 13, for any \(\alpha>0\), there exists \(C\) such that
\[\int_{(\widehat{\mathcal{M}})_{\eta}}\|x\|^{2}|\bar{g}(x)-c_{n}\widehat{g}_{n}^{+ }(x)|dx\leqslant 4M_{(\widehat{\mathcal{M}})_{\eta}}\mathrm{Vol}((\widehat{ \mathcal{M}})_{\eta})\Gamma_{n}+2M_{(\widehat{\mathcal{M}})_{\eta}}Ch_{n}^{ \alpha}. \tag{43}\]
For any \(c>0\), when \(\widehat{\mathcal{M}}\subset(\mathcal{M}_{G})_{c}\), one has \((\widehat{\mathcal{M}})_{\eta}\subset(\mathcal{M}_{G})_{\eta+c}\). This inclusion entails \(M_{(\widehat{\mathcal{M}})_{\eta}}\leqslant M_{(\mathcal{M}_{G})_{\eta+c}}\) and \(\mathrm{Vol}((\widehat{\mathcal{M}})_{\eta})\leqslant\mathrm{Vol}((\mathcal{ M}_{G})_{\eta+c})\). Therefore, for any \(c>0\),
\[\int_{(\widehat{\mathcal{M}})_{\eta}}\|x\|^{2}|\bar{g}(x)-c_{n}\widehat{g}_{n} ^{+}(x)|dx\leqslant 4M_{(\mathcal{M}_{G})_{\eta+c}}\mathrm{Vol}((\mathcal{ M}_{G})_{\eta+c})\Gamma_{n}+2M_{(\mathcal{M}_{G})_{\eta+c}}Ch_{n}^{\alpha}. \tag{44}\]
Again by Lemma 13, on the event where \(\mathcal{M}_{G}\subset\widehat{\mathcal{M}}\subset(\mathcal{M}_{G})_{\eta}\),
\[\int_{\mathbb{R}^{D}\setminus(\widehat{\mathcal{M}})_{\eta}}\|x\|^{2}|\bar{g} (x)|dx\leqslant C^{\prime}h_{n}^{\alpha}. \tag{45}\]
Finally, using (42), (44) and (45), for any \(\alpha\geqslant 1\), there exists \(C>0\) such that
\[W_{2}^{2}(P_{\psi_{A,h_{n}}},\widehat{P}_{n,\eta})\leqslant C(h_{n}^{\alpha}+ \Gamma_{n}).\]
Proof of (3). Using **(1)** and **(2)**, for sequences \(h_{n}\), \(m_{n}\) and \(\lambda_{n}\) satisfying the assumptions of Theorem 7, on the event where \(\mathcal{M}_{G}\subset\widehat{\mathcal{M}}\subset(\mathcal{M}_{G})_{\eta}\), for any \(\alpha\geqslant 2\), there exists \(C>0\) such that
\[W_{2}(G,\widehat{P}_{n,\eta})\leqslant C(h_{n}+\sqrt{h_{n}^{\alpha}+\Gamma_{n }})\leqslant 2C(h_{n}+\sqrt{\Gamma_{n}}).\]
We may assume \(h_{n}\leqslant 1\) for all \(n\) without loss of generality. As stated in Lemma 12, for any \(\delta^{\prime}>0\), there exist \(C^{\prime}\) and \(n_{0}\) such that for all \(n\geqslant n_{0}\), with probability at least \(1-2\exp(-n^{1/2-\delta^{\prime}})\), \(\sqrt{\Gamma_{n}}\leqslant C^{\prime}e^{-m_{n}/4}\) and \(\mathcal{M}_{G}\subset\widehat{\mathcal{M}}\subset(\mathcal{M}_{G})_{\eta}\), and therefore
\[W_{2}(G,\widehat{P}_{n,\eta^{\prime}})\leqslant Cm_{n}^{-1}\]
on this event, up to changing the constant \(C\).
On the event of probability at most \(2\exp(-n^{1/2-\delta^{\prime}})\) where this doesn't hold, since the support of \(\widehat{P}_{n,\eta^{*}}\) is a subset of \(\bar{B}(0,R_{n})\), \(W_{2}(G,\widehat{P}_{n,\eta^{*}})\leqslant 2R_{n}\).
Therefore, taking \(\delta^{\prime}<\delta\) where \(\delta\) is as defined in the statement of the Theorem, there exists \(C>0\) such that for \(n\geqslant n_{0}\),
\[\mathbb{E}_{(G\triangleleft\mathbb{Q})^{\otimes n}}[W_{2}(G,\widehat{P}_{n, \eta^{*}})]\leqslant Cm_{n}^{-1},\]
which concludes the proof.
### Proof of Theorem 8
Let \(\widehat{P}_{n}\) be an estimator of \(G\). According to [31],
\[\sup_{\begin{subarray}{c}G\in St_{K}(a,d,r_{0})\cap\mathcal{L}(1,S,\mathcal{H})\\ \mathbb{Q}\in\mathcal{Q}^{(D)}(\nu,c(\nu,E))\end{subarray}}\mathbb{E}_{(G*Q)^{\otimes n}}[W_{p}(G,\widehat{P}_{n})]\geqslant\frac{1}{2}W_{p}(G_{0}(\kappa),G_{1}(\kappa))\left(1-\|G_{0}(\kappa)*Q-G_{1}(\kappa)*Q\|_{1}\right)^{n}.\]
We have shown in Theorem 5 that there exists a constant \(C>0\) such that
\[\|G_{0}(\kappa)*Q-G_{1}(\kappa)*Q\|_{TV}\leqslant\frac{C}{n},\]
taking \(\gamma\) of the form \(c\log(n)^{-1-\delta}\) for any \(\delta>0\) and \(c\) small enough, which implies that the minimax risk is lower bounded, up to a constant factor, by \(W_{p}(G_{0}(\kappa),G_{1}(\kappa))\). We now show that there exist a constant \(c>0\) and \(n_{0}>0\) such that for \(n\geqslant n_{0}\),
\[W_{p}(G_{0}(\kappa),G_{1}(\kappa))\geqslant c\gamma.\]
Let \(\mathcal{U}_{\gamma}\) be the set of \(u\in\mathbb{R}\) such that \(|\cos(\frac{u}{\gamma})|\geqslant 1/2\), that is \(\mathcal{U}_{\gamma}=\bigcup_{k\in\mathbb{Z}}[k\pi\gamma-\frac{\pi\gamma}{3},k \pi\gamma+\frac{\pi\gamma}{3}]\). For each \(k\in\mathbb{Z}\), let \(I_{k,\gamma}:=[k\pi\gamma-\frac{\pi\gamma}{2},k\pi\gamma+\frac{\pi\gamma}{2}]\). Let us also define, for any two sets \(A\) and \(B\) of \(\mathbb{R}^{d}\), \(d(A,B)=\inf_{x\in A,y\in B}\|x-y\|_{2}\). We first show that
\[d(M_{0}(\gamma)\cap(\mathcal{U}_{\gamma}\times\mathbb{R}^{D-1}),M_{1}(\gamma) )\geqslant\gamma(\frac{\alpha}{4\sqrt{2}}\wedge\frac{\pi}{6}).\]
Let \(x\in M_{0}(\gamma)\cap(\mathcal{U}_{\gamma}\times\mathbb{R}^{D-1})\) and \(y\in M_{1}(\gamma)\). There exists \(k\in\mathbb{Z}\) such that \(x\in M_{0}(\gamma)\cap((\mathcal{U}_{\gamma}\cap I_{k,\gamma})\times\mathbb{ R}^{D-1})\). If \(y\in I_{k,\gamma}\times\mathbb{R}^{D-1}\) (that is, if the first coordinate of \(x\) and \(y\) are in the same interval \(I_{k,\gamma}\)), then
\[\|x-y\|_{2}\geqslant d(M_{0}(\gamma)\cap((\mathcal{U}_{\gamma}\cap I_{k, \gamma})\times\mathbb{R}^{D-1}),M_{1}(\gamma)\cap(I_{k,\gamma}\times\mathbb{ R}^{D-1})).\]
All points of \(M_{0}(\gamma)\) are of the form \((\alpha u,\alpha u+\frac{\alpha}{2}\gamma\cos(\frac{u}{\gamma}),0,\ldots 0)^{\top}\) and the distance between \((\alpha u,\alpha u+\frac{\alpha}{2}\gamma\cos(\frac{u}{\gamma}),0,\ldots,0)^{\top}\) and the diagonal defined by \(\mathcal{D}_{\alpha}:=\{(\alpha u,\alpha u,0,\ldots,0)^{\top}:u\in\mathbb{R}\}\) is \(\frac{\alpha}{4\sqrt{2}}\gamma|\cos(\frac{u}{\gamma})|\). Since the sets \(M_{0}(\gamma)\cap((\mathcal{U}_{\gamma}\cap I_{k,\gamma})\times\mathbb{R}^{D -1})\) and \(M_{1}(\gamma)\cap(I_{k,\gamma}\times\mathbb{R}^{D-1})\) are on opposite sides of the diagonal \(\mathcal{D}_{\alpha}\),
\[d(M_{0}(\gamma)\cap((\mathcal{U}_{\gamma}\cap I_{k,\gamma}) \times\mathbb{R}^{D-1}),M_{1}(\gamma)\cap(I_{k,\gamma}\times\mathbb{R}^{D-1})) \geqslant d(M_{0}(\gamma)\cap((\mathcal{U}_{\gamma}\cap I_{k, \gamma})\times\mathbb{R}^{D-1}),\mathcal{D}_{\alpha})\] \[=\frac{\alpha}{4\sqrt{2}}\gamma,\]
so that \(\|x-y\|_{2}\geqslant\frac{\alpha}{4\sqrt{2}}\gamma\). If now \(y\notin I_{k,\gamma}\times\mathbb{R}^{D-1}\),
\[d(M_{0}(\gamma)\cap((\mathcal{U}_{\gamma}\cap I_{k,\gamma}) \times\mathbb{R}^{D-1}),M_{1}(\gamma)\cap((\mathbb{R}\setminus I_{k,\gamma}) \times\mathbb{R}^{D-1})) \geqslant d(\mathcal{U}_{\gamma}\cap I_{k,\gamma},\mathbb{R} \setminus I_{k,\gamma})\] \[=\frac{\pi\gamma}{6},\]
so that \(\|x-y\|_{2}\geqslant\frac{\pi\gamma}{6}\), and thus \(d(M_{0}(\gamma)\cap(\mathcal{U}_{\gamma}\times\mathbb{R}^{D-1}),M_{1}(\gamma)) \geqslant\gamma(\frac{\alpha}{4\sqrt{2}}\wedge\frac{\pi}{6})\).
Now, let us show that \(W_{p}(G_{0}(1),G_{1}(1))\geqslant\gamma(\frac{\alpha}{8\sqrt{2}}\wedge\frac{ \pi}{12})\). Let \(\pi\) be a transport plan between \(G_{0}(1)\) and \(G_{1}(1)\), then
\[\int_{M_{0}(\gamma)\times M_{1}(\gamma)}\|x-y\|_{2}^{p}d\pi(x,y) \geqslant\int_{M_{0}(\gamma)\cap(\mathcal{U}_{\gamma}\times \mathbb{R}^{D-1})\times M_{1}(\gamma)}\|x-y\|_{2}^{p}d\pi(x,y)\] \[\geqslant d(M_{0}(\gamma)\cap(\mathcal{U}_{\gamma}\times\mathbb{R}^{D -1}),M_{1}(\gamma))^{p}\ \pi(M_{0}(\gamma)\cap(\mathcal{U}_{\gamma}\times\mathbb{R}^{D-1}) \times M_{1}(\gamma))\] \[=d(M_{0}(\gamma)\cap(\mathcal{U}_{\gamma}\times\mathbb{R}^{D-1}),M_{1}(\gamma))^{p}\ G_{0}(1)(M_{0}(\gamma)\cap(\mathcal{U}_{\gamma}\times \mathbb{R}^{D-1}))\] \[=d(M_{0}(\gamma)\cap(\mathcal{U}_{\gamma}\times\mathbb{R}^{D-1}),M_{1}(\gamma))^{p}\ \mathbb{P}[U(1)\in\mathcal{U}_{\gamma}]\]
since \(G_{1}(1)\) has support \(M_{1}(\gamma)\) and by definition of \(G_{0}(1)\). Therefore, by taking the infimum on all transport plans between \(G_{0}(1)\) and \(G_{1}(1)\),
\[W_{p}(G_{0}(1),G_{1}(1))\geqslant\gamma(\frac{\alpha}{4\sqrt{2}}\wedge\frac{ \pi}{6})\mathbb{P}[U(1)\in\mathcal{U}_{\gamma}]^{1/p}.\]
\(U(1)\) admits a density \(f_{1}\) with respect to the Lebesgue measure that is supported on \([-1,1]\) and continuous. Let us write \(\omega\) for a modulus of continuity of \(f_{1}\). We have
\[\mathbb{P}[U(1)\in\mathcal{U}_{\gamma}] =\int_{[-1,1]}f_{1}(x)1|_{\mathcal{U}_{\gamma}}(x)dx\] \[=\sum_{k\in[-1/(\pi\gamma),1/(\pi\gamma)]}\int_{[k\pi\gamma-\frac{ \pi\gamma}{3},k\pi\gamma+\frac{\pi\gamma}{3}]}f_{1}(x)dx\] \[\leqslant\sum_{k\in[-1/(\pi\gamma),1/(\pi\gamma)]}\left(\int_{[k \pi\gamma-\frac{\pi\gamma}{3},k\pi\gamma+\frac{\pi\gamma}{3}]}f_{1}(k\pi\gamma )dx+\frac{2\pi\gamma}{3}\omega(\pi\gamma/3)\right)\] \[\leqslant\sum_{k\in[-1/(\pi\gamma),1/(\pi\gamma)]}\frac{2}{3}\int _{[k\pi\gamma-\frac{\pi\gamma}{2},k\pi\gamma+\frac{\pi\gamma}{2}]}f_{1}(k\pi \gamma)dx+\frac{3}{\pi\gamma}\frac{2\pi\gamma}{3}\omega(\pi\gamma/3)\] \[\leqslant\sum_{k\in[-1/(\pi\gamma),1/(\pi\gamma)]}\frac{2}{3}\int _{[k\pi\gamma-\frac{\pi\gamma}{2},k\pi\gamma+\frac{\pi\gamma}{2}]}f_{1}(x)dx+ \frac{3}{\pi\gamma}\left(\frac{2\pi\gamma}{3}\omega(\pi\gamma/3)+\pi\gamma \omega(\pi\gamma/2)\right)\] \[\leqslant\frac{2}{3}\int_{[-1,1]}f_{1}(x)dx+3\left(\frac{2}{3} \omega(\pi\gamma/3)+\omega(\pi\gamma/2)\right)\] \[\xrightarrow[\gamma\to 0]{}\frac{2}{3}\int_{[-1,1]}f_{1}(x)dx= \frac{2}{3}.\]
Therefore, there exists \(n_{0}\) such that for all \(n\geqslant n_{0}\), \(W_{p}(G_{0}(1),G_{1}(1))\geqslant\gamma(\frac{\alpha}{8\sqrt{2}}\wedge\frac{ \pi}{12})\).
|
2307.06225 | Practical quantum imaging with undetected photons | Infrared (IR) imaging is invaluable across many scientific disciplines, from
material analysis to diagnostic medicine. However, applications are often
limited by detector cost, resolution and sensitivity, noise caused by the
thermal IR background, and the cost, portability and tunability of infrared
sources. Here, we describe a compact, portable, and low-cost system that is
able to image objects at IR wavelengths without an IR source or IR detector.
This imaging with undetected photons (IUP) approach uses quantum interference
and correlations between entangled photon pairs to transfer image information
from the IR to the visible, where it can be detected with a standard silicon
camera. We also demonstrate a rapid analysis approach to acquire both phase and
transmission image information. These developments provide an important step
towards making IUP a commercially viable technique. | Emma Pearce, Nathan R. Gemmell, Jefferson Flórez, Jiaye Ding, Rupert F. Oulton, Alex S. Clark, Chris C. Phillips | 2023-07-12T15:17:26Z | http://arxiv.org/abs/2307.06225v1 | # Practical quantum imaging with undetected photons
###### Abstract
Infrared (IR) imaging is invaluable across many scientific disciplines, from material analysis to diagnostic medicine. However, applications are often limited by detector cost, resolution and sensitivity, noise caused by the thermal IR background, and the cost, portability and tunability of infrared sources. Here, we describe a compact, portable, and low-cost system that is able to image objects at IR wavelengths without an IR source or IR detector. This imaging with undetected photons (IUP) approach uses quantum interference and correlations between entangled photon pairs to transfer image information from the IR to the visible, where it can be detected with a standard silicon camera. We also demonstrate a rapid analysis approach to acquire both phase and transmission image information. These developments provide an important step towards making IUP a commercially viable technique.
## I Introduction
The infrared (IR) spectral region provides a wealth of information. In the near-IR and shortwave-IR (SWIR), higher harmonics of the vibrational modes of molecules and combinations of them can be probed. In the mid-IR, fundamental vibrational absorption bands occur, which provide both greater molecular specificity [1; 2] and the ability to perform quantitative chemical analysis. Sensing applications include studies of molecular structure, agriculture and food quality control, pharmaceutical monitoring, and biological imaging [3; 4; 5; 6; 7; 8; 9].
However, the IR is technologically poor in comparison with the visible, particularly at longer MIR wavelengths. IR cameras have much lower pixel counts than their visible silicon counterparts and, when operated at room temperature, are orders of magnitude noisier. Even expensive cryogenically-cooled IR detectors [10] are susceptible to noise arising from the ever-present 300 K black-body radiation background (the so-called BLIP limit).
One approach to avoiding IR cameras is to image the sample with IR photons which then undergo frequency up-conversion to visible wavelengths before reaching the camera. However, this requires an IR source of photons and relies on high-power laser sources and/or cavities to combat the low conversion efficiencies [11; 12]. This also leads to the sample being exposed to far more photons than are detected, which can be detrimental to photosensitive samples [13; 14]. So-called 'ghost imaging' uses non-degenerate correlated photon pairs. The IR photon probes the sample before being detected with a single-channel IR detector whilst its visible partner is logged by a camera [15; 16]. This circumvents some of the limitations of IR cameras, but it still suffers from the poor IR detector sensitivity, and from the influence of thermal IR background.
In contrast, the IUP approach circumvents both the requirement to have a direct source of IR photons and the ability to detect them [17; 18]. This works by generating photon pairs via spontaneous parametric down-conversion (SPDC) in a nonlinear crystal, where each pair consists of one visible photon (signal) and one infrared photon (idler). By passing the pump through this crystal twice, it is possible to generate a pair in the first and/or second pass of the pump. If the signal photons from these two passes are precisely overlapped, and the idler photons are similarly overlapped, it is impossible to determine which pass generated the photon pair. This lack of 'which-way' (or indeed, 'which-pass') information means that optical interference is seen in the count rate of the signal photons. Blocking one of the idler beams with an object effectively restores this 'which-way' information, destroying the interference. Note that coincident detection is not required, as merely the possibility to detect distinguishing information will impact the interference. The presence or absence of interference due to an object can therefore be readily recorded in the visible photon channel, i.e. using photons that have not themselves interacted with the object. Crucially, the image transfer process leaves the thermal background behind, allowing for detection sensitivities that are considerably improved over direct IR detection [19]. This principle has since been demonstrated for a variety of applications, including microscopy [20; 21], hyperspectral imaging[22], spectroscopy [23; 24; 25; 26], optical coherence tomography [27; 28], and holography [29].
IUP certainly offers a promising alternative to direct infrared imaging, but it is important to address the potential barriers to practical implementation, and a literature is emerging that tackles issues of size, stability, and speed [29; 30; 31], with compact near- and mid-IR technologies already seeing use in environmental and agricultural
studies [32; 33].
Here, we demonstrate two generations of compact, fully self-contained, wavelength-tunable, and low cost devices for IUP, both of which enable SWIR imaging using only a basic silicon CMOS camera. We also discuss a rapid analysis approach which uses a pixel-wise Fourier transform to extract both transmission and phase information from as few as three image frames. These developments have allowed us to make dramatic reductions to the size, weight, cost, and power (SWaP-C) of IUP.
## II Methods
Figure 1(a) shows the experimental setup of the first generation of our compact device. A 532 nm diode-pumped solid-state continuous-wave laser (CrystaLaser CL532-050) pumps a periodically-poled lithium niobate (PPLN) crystal with 35 mW of input power to produce signal (visible) and idler (IR) photons by SPDC. The dichroic mirror DM2 splits the idler photons from the signal and pump photons. The IR idler photons are sent to the sample, mounted on the sample mirror, while visible and pump photons propagate towards the scanning mirror, which is scanned to generate the interference fringes. All three wavelengths are then reflected back through the crystal, and as the pump makes its second pass, there is again a probability of generating a signal-idler photon pair.
As previously discussed, if the optical modes from the second pass are perfectly overlapped with those from the first then a sinusoidal modulation appears in the signal photon flux detected at the camera as the scanning mirror is moved. At these powers, the actual probability of generating a photon pair at each pass is low, allowing us to neglect the possibility of any stimulated down-conversion in the second process by photons originating from the first.
An object placed on the sample mirror can introduce a loss and/or phase change to the idler from the first pair, introducing distinguishability and a proportional change in the amplitude and/or phase of the visible interference fringes. The amplitude variations can be imaged directly, but the phase changes are only detectable by moving the scanning mirror.
Both signal and idler photons are separated from the pump by dichroic mirror DM1 and sent to another dichroic mirror DM3, which sends only the visible photons to the camera. The idler photons go completely undetected. In fact, the silicon camera we use would not see them even if they were not removed by DM3.
To form the imaging system, each arm of the interferometer contains an \(f\,=\,\)50 mm focal length lens (L2 and L3) such that both sample and scanning mirrors are in the image plane of the PPLN crystal, and the interferometer output is subsequently imaged onto the camera with a \(f\,=\,\)75 mm focal length lens. A number of camera frames are recorded at different scanning mirror positions, and the pixel-wise intensity variations are Fourier transformed [34], allowing the transmission and phase information of the object's IR response to be extracted.
The PPLN crystal can be translated perpendicular to the pump beam to access regions with different poling periods, in a way that tunes the signal and idler wavelengths. Also, it can be temperature tuned to further extend the overall wavelength coverage, allowing us to generate signal photons from 706-839 nm and idler photons from 1455-2159 nm.
The whole system sits within a 60 cm \(\times\,\)45 cm footprint, and is 40 cm high, including the laser and the temperature-control and scanning electronics (Figure 1(b)). An additional enclosure can be added to reduce background light and allow safe operation outside of a lab environment, as demonstrated in Fig. 1(c). In this case, the idler beam passes through an AR-coated silicon window (which is opaque to both pump and signal) to reach the sample, which sits outside of the enclosure. Typically, no realignment was required after local transportation and, once aligned, the system can be operated by an untrained user assuming familiarity with the image acquisition software. The whole system is assembled from standard off-the-shelf components for \(\sim\)£7,000 (excluding the laser).
## III Results
Figure 1: (a) Schematic of a compact setup for imaging with undetected photons. Green indicates the pump beam. Blue and red represent signal (visible) and idler (SWIR) beams, respectively. (b) Implementation of the compact device. The top breadboard (\(45\times 30\) cm) contains the imaging system, with an additional \(60\times 45\) cm breadboard to house system controls. (c) Demonstration of imaging with undetected photons outside of a laboratory, featuring system enclosure and real-time analysis output to a graphical user interface.
Figure 2 shows analysis of a thin-film gold interdigitated ring array microelectrode (Figure 2(a), Micrux IDRA1) using 3 (d), 4 (e), 8 (f), and 15 (g) acquired images. In each case, the images are taken with a \(200\,\mathrm{ms}\) exposure time at equally spaced piezo voltages over one oscillation of the interference pattern. The period of this oscillation is determined solely by the idler wavelength, as both signal and pump fields are scanned together, as shown in Figure 2(b). The scanning mirror moves by close to half the idler wavelength, as the path is travelled twice. The detected wavelength is \(808\,\mathrm{nm}\), which is filtered using a \(10\,\mathrm{nm}\) wide \(810\,\mathrm{nm}\) bandpass filter, while the probe wavelength is \(1559\,\mathrm{nm}\). The images have a pixel size of \(5.2\times 5.2\,\mathrm{\mu m}\) and are \(1024\times 1280\) pixels, i.e. somewhat greater than those available with current IR cameras.
From the analysed images in Fig. 2(d), it is clear that both the phase and amplitude features of the sample can be identified reliably, even when working right at the Nyquist limit when as few as 3 recorded images are used for the analysis. This approach drastically reduces both acquisition and processing times. We define visibility as
\[\mathcal{V}=\frac{N_{max}-N_{min}}{N_{max}+N_{min}}=2\frac{F_{1}}{F_{0}} \tag{1}\]
where \(N_{max}\) (\(N_{min}\)) are the maximum (minimum) pixel values recorded on the camera during a phase scan, \(F_{1}\) is the amplitude of the Fourier component which corresponds to the frequency of the interference oscillation, and \(F_{0}\) is the amplitude of the DC Fourier component.

Figure 2: (a) Visible image of an epoxy/gold interdigitated ring array microelectrode used as a test sample. (b) Typical interference fringe profile seen at one pixel of the camera as the scanning mirror is moved. (c) Time taken to calculate contrast, visibility, and phase from varying numbers of input images—times are averaged over 100 runs with error bars given by the standard deviation of these runs. Visibility, contrast, and phase profiles of the microelectrode shown in (a), generated from a pixel-wise Fourier transform of (d) 3, (e) 4, (f) 8, and (g) 15 images acquired while scanning the interferometer. Images were acquired with \(200\,\mathrm{ms}\) exposure.

Contrast is defined as
\[\mathcal{C}=N_{max}-N_{min}=F_{1}\,. \tag{2}\]
Contrast is dependent on the overall brightness of a given system but can be a useful metric if there is a high detector noise floor as this will be subtracted, unlike the case with visibility. Phase is defined as
\[\phi=\arctan\frac{\mathrm{Im}(F_{1})}{\mathrm{Re}(F_{1})}\,. \tag{3}\]
Although as few as 3 images are sufficient for a qualitative analysis of the relative phase and transmission across a sample, unsurprisingly, more images improve accuracy if further parameter extraction is desired.
Features appear brighter in Figure 2(d) than in Figure 2(e) because of where the frequency of the interference oscillation falls relative to the sampled Fourier frequencies. Leakage into more than one Fourier component is seen as a loss of signal, and thus visibility is reduced. This could be avoided by altering the Fourier transform length (via zero-padding) to better match the interference frequency to one of the sampled Fourier frequencies.
The time taken to perform the Fourier transform and calculate the above parameters is plotted against the number of input images in Figure 2(c). The Fourier transform is implemented using a Python wrapper of the Fastest Fourier Transform in the West (FFTW)[35; 36] on a typical laboratory machine (Intel Xeon W-2102 processor, 4 cores). Processing times do not include saving or displaying the data. There is no delay between acquisitions required for image postprocessing [29] and we require both fewer images and shorter exposure times than seen in Ref. [30].
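To make the pixel-wise analysis concrete, the sketch below computes the \(F_{0}\) and \(F_{1}\) components and the visibility, contrast, and phase maps of Eqs. (1)-(3) from a stack of frames spanning one fringe period. It uses NumPy's FFT rather than the pyFFTW wrapper timed above, and the function name, synthetic test data, and the guard against division by zero are illustrative choices, not the authors' code.

```python
import numpy as np

def fringe_metrics(frames):
    """Pixel-wise Fourier analysis of a stack of interference images.

    frames: array of shape (n_frames, height, width), recorded at equally
    spaced scanning-mirror positions covering one fringe period.
    Returns visibility, contrast and phase maps following Eqs. (1)-(3).
    """
    frames = np.asarray(frames, dtype=float)
    spectrum = np.fft.fft(frames, axis=0)                  # transform along the scan axis
    f0 = np.abs(spectrum[0])                               # DC amplitude per pixel
    f1_complex = spectrum[1]                               # component at the fringe frequency
    f1 = np.abs(f1_complex)

    visibility = 2.0 * f1 / np.maximum(f0, 1e-12)          # Eq. (1)
    contrast = f1                                          # Eq. (2), up to the FFT normalisation
    phase = np.arctan2(f1_complex.imag, f1_complex.real)   # Eq. (3)
    return visibility, contrast, phase

# Synthetic example: 8 frames of a noisy fringe with 40% visibility.
rng = np.random.default_rng(0)
steps = np.arange(8)
fringe = 100.0 + 40.0 * np.cos(2.0 * np.pi * steps / 8 + 0.3)
frames = fringe[:, None, None] + rng.normal(0.0, 1.0, (8, 64, 64))
vis, con, phi = fringe_metrics(frames)
print(vis.mean(), phi.mean())
```

If zero-padding is used to realign the fringe frequency with a sampled bin, the `n` argument of `np.fft.fft` provides it, with the relevant component then read at the nearest non-DC bin rather than at index 1.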
Another example of SWIR imaging is shown in Figure 3, this time with an organic sample of a fly wing. This provides an object with continuously varying IR transmission across the image, rather than the example of the electrode which only has regions of either total transmission or no transmission. Both samples shown thus far have been measured in transmission. Regions of high visibility are seen where SWIR light passes through the sample to be reflected by the sample mirror. This requires that the path from the crystal to the sample mirror and the path from the crystal to the scanning mirror must be equal to within the SPDC coherence length (\(\approx\) 0.1 mm). Any optical path length introduced by transmission through the sample must also be considered.
Figure 4 shows the wavelength tunability of the system while imaging the gold contacts of the electrode, shown in the bottom half of Figure 2(a). In Fig. 4(a), the crystal is kept at 125\({}^{\circ}\)C and the pump beam enters a region with a poling period of 7.40 \(\mu\)m. These conditions result in a probe wavelength of 1558 nm and a detected wavelength of 808 nm (filtered with a 10 nm wide 810 nm bandpass). The crystal is then translated perpendicular to the pump beam to access a poling period of 7.71 \(\mu\)m and heated to 200\({}^{\circ}\)C. This extends the probe wavelength to 1818 nm, beyond the sensitivity of a typical InGaAs camera, with the visible detection at 752 nm (filtered with a 10 nm wide 750 nm bandpass). The interferometer remains aligned throughout these types of wavelength sweep.
Both the spatial resolution (\(\Delta x=f_{u}\lambda_{u}/(\sqrt{2}\pi w_{p})\)) and magnification (\(M=f_{c}\lambda_{d}/(f_{u}\lambda_{u})\)) are reduced as the idler probe wavelength increases. Here, \(f_{u}\) is the focal length of the lens in the undetected path, \(f_{c}\) is the focal length of the lens in front of the camera, \(\lambda_{d}\) is the detected wavelength, \(\lambda_{u}\) is the undetected probe wavelength, and \(w_{p}\) is the beam waist of the pump [37]. The focal lengths of the lenses are also likely to vary from their nominal values as the wavelengths change. This change in magnification leads to all 4 contacts being visible in Figure 4(b), while only 3 can be seen in Figure 4(a), although chromatic aberrations in the lenses may also be limiting the resolution. Here the sample is being imaged in reflection, with high visibility in regions where the idler probe is reflected by the gold features, in which case it is the path from the crystal to the front face of the sample (rather than the sample mirror) that must be matched to the path from the crystal to the scanning mirror.
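The wavelength pairing and scaling described above can be checked numerically. The snippet below assumes energy conservation of the down-conversion process (\(1/\lambda_{pump}=1/\lambda_{signal}+1/\lambda_{idler}\)) with the 532 nm pump, uses the \(f=50\) mm and \(f=75\) mm lenses of the first-generation system together with the resolution formula and magnification ratio quoted above, and takes a purely illustrative pump waist, so the absolute numbers are indicative only.

```python
import math

PUMP_NM = 532.0                  # pump wavelength from the paper
F_U_MM, F_C_MM = 50.0, 75.0      # lens focal lengths of the first-generation system
W_P_MM = 0.5                     # pump beam waist: placeholder value, not taken from the paper

def idler_from_signal(signal_nm, pump_nm=PUMP_NM):
    """Energy conservation in SPDC: 1/pump = 1/signal + 1/idler."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

def resolution_um(idler_nm, f_u_mm=F_U_MM, w_p_mm=W_P_MM):
    """Delta x = f_u * lambda_u / (sqrt(2) * pi * w_p), returned in micrometres."""
    return (f_u_mm * 1e3) * (idler_nm * 1e-3) / (math.sqrt(2) * math.pi * (w_p_mm * 1e3))

for signal_nm in (808.0, 752.0):                 # detected wavelengths used in Fig. 4
    idler_nm = idler_from_signal(signal_nm)
    dx = resolution_um(idler_nm)
    mag = (F_C_MM * signal_nm) / (F_U_MM * idler_nm)   # f_c * lambda_d / (f_u * lambda_u)
    print(f"detected {signal_nm:.0f} nm -> probe {idler_nm:.0f} nm, "
          f"resolution ~{dx:.0f} um, magnification ~{mag:.2f}")
```

Under these assumptions the probe wavelengths come out near the 1558 nm and 1818 nm values quoted above, and the magnification falls as the probe wavelength grows, consistent with more of the electrode fitting into the field of view in Figure 4(b).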
Figure 4: Visibility, contrast, and phase of gold electrodes at different probe wavelengths, reached by translating crystal and adjusting temperature. (a) Probe wavelength 1558 nm and detected wavelength 808 nm. (b) Probe wavelength 1818 nm and detected wavelength 752 nm.
Figure 3: (a) Visibility, (b) contrast, and (c) phase images of a fly wing, probed at 1559 nm and detected at 808 nm.
## IV Further development: 'EntangleCam'
Figure 5 shows the next design iteration of the system, the so-called 'EntangleCam', using smaller optomechanics and simplifying the layout by removing and combining some of the components. The PBS that was used in the previous system to filter the pump polarisation has been replaced by a laser line PBS with an AR coating that doubles as a dichroic mirror to separate the signal from pump. The lens in front of the camera has been removed so that the detector array samples the far-field of the crystal directly, and the original laser has been replaced by a simple 532 nm laser diode (OdicForce green laser module) providing less than 30 mW of light to the crystal. The pump shaping preparation is handled entirely by a single lens after the diode, with a bandpass filter (BP) to eliminate any unwanted IR light.
The breadboard footprint has shrunk to \(30\times 20\,\mathrm{cm}^{2}\) and is only 15 cm high. The control electronics run from a single mains socket and add only \(25\times 22.5\times 7.5\,\mathrm{cm}^{3}\) of volume, while the total component cost is reduced to \(\sim\)£6,000, in this case including the laser.
The system still retains the same wavelength tunability via crystal heating and translation with crystal translation stage (CTS), but even at a fixed room temperature, significant wavelength tunability is available. This is seen in Figure 6, imaging another gold electrode sample (identical to that seen in Figure 2), using only a room temperature crystal (\(24.5^{\circ}\)C) without active temperature control. The bandpass filter on the camera was replaced with a long-pass filter to reject any residual pump light while also removing the need to change detection filters when moving between different poling periods. Images used in this analysis are taken with a 200 ms exposure time. It can be seen that despite the reduction in size, the system retains its imaging capabilities and broad tunability. Depending on the probe wavelength desired for a particular application, this shows that the device could be designed to operate without any need for temperature control, further reducing SWaP-C.
## V Discussion
In conclusion, we have demonstrated compact IUP systems that can perform infrared imaging with visible detection and are robust and portable enough to be used outside of a laboratory environment. We have also demonstrated rapid analysis that allows a real-time quantitative measure of transmission and phase shift of a sample using as few as 3 recorded images. These systems represent a significant step forwards in the affordability and practicality of IUP as an alternative to direct IR imaging.
Future operation speeds could be enhanced by reducing the acquisition time required at each scanning mirror position, simply by using a higher power laser and/or a more sensitive camera. Both are readily available without major cost implications. The spatial resolution of the system can be further enhanced by increasing the momentum correlations between the signal and idler photons; in the current configuration this would be achieved by increasing the width of the pump beam [37].
Figure 5: (a) Schematic of second generation of compact imaging with undetected photons, with further reduced SWaP-C. Green indicates the pump beam. Blue and red represent signal (visible) and idler (SWIR) beams, respectively. (b) Implementation of the second generation compact device. (c) Demonstration of the device outside of a laboratory, featuring system enclosure and real-time analysis output to a graphical user interface.
Figure 6: Visibility, contrast, and phase of gold microelectrode at different probe wavelengths using the system in Figure 5. Different wavelengths are reached by translating crystal whilst leaving its temperature at room temperature. (a) Probe wavelength 1450 nm and detected wavelength 840 nm. (b) Probe wavelength 1620 nm and detected wavelength 792 nm. (c) Probe wavelength 1783 nm and detected wavelength 758 nm.
Furthermore, there are a number of ways in which the operating wavelengths can be extended towards the mid-IR, including different pump wavelengths [14], different poling periods [38, 39], and different nonlinear materials [40, 26, 41]. By moving towards the mid-IR, we anticipate our system will be a valuable tool for chemically and medically relevant applications [3, 8, 9, 42].
###### Acknowledgements.
We acknowledge funding from the UK National Quantum Hub for Imaging (QUANTIC, No. EP/T00097X/1), an EPSRC DTP, and the Royal Society (No. UF160475). The authors declare no conflicts of interest.
|
2308.07966 | Understanding DNS Query Composition at B-Root | The Domain Name System (DNS) is part of critical internet infrastructure, as
DNS is invoked whenever a remote server is accessed (an URL is visited, an API
request is made, etc.) by any application. DNS queries are served in
hierarchical manner, with most queries served locally from cached data, and a
small fraction propagating to the top of the hierarchy - DNS root name servers.
Our research aims to provide a comprehensive, longitudinal characterization of
DNS queries received at B-Root over ten years. We sampled and analyzed a
28-billion-query large dataset from the ten annual Day in the Life of the
Internet (DITL) experiments from 2013 through 2022. We sought to identify and
quantify unexpected DNS queries, establish longitudinal trends, and compare our
findings with published results of others. We found that unexpected query
traffic increased from 39.57% in 2013 to 67.91% in 2022, with 36.55% of queries
being priming queries. We also observed growth and decline of
Chromium-initiated, random DNS queries. Finally, we analyzed the largest DNS
query senders and established that most of their traffic consists of unexpected
queries. | Jacob Ginesin, Jelena Mirkovic | 2023-08-15T18:02:17Z | http://arxiv.org/abs/2308.07966v1 | # Understanding DNS Query Composition at B-Root
###### Abstract
The Domain Name System (DNS) is part of critical internet infrastructure, as DNS is invoked whenever a remote server is accessed (an URL is visited, an API request is made, etc.) by any application. DNS queries are served in hierarchical manner, with most queries served locally from cached data, and a small fraction propagating to the top of the hierarchy - DNS root name servers. Our research aims to provide a comprehensive, longitudinal characterization of DNS queries received at B-Root over ten years. We sampled and analyzed a 28-billion-query large dataset from the ten annual "Day in the Life of the Internet (DITL)" experiments from 2013 through 2022. We sought to identify and quantify unexpected DNS queries, establish longitudinal trends, and compare our findings with published results of others. We found that unexpected query traffic increased from 39.57% in 2013 to 67.91% in 2022, with 36.55% of queries being priming queries. We also observed growth and decline of Chromium-initiated, random DNS queries. Finally, we analyzed the largest DNS query senders and established that most of their traffic consists of unexpected queries.
## 1 Introduction
The Domain Name System (DNS) is the Internet's system for mapping alphanumeric resource names (e.g. the name of a web or a mail server) to their respective values (in most cases an IP address). This system is critical for basic functioning of the Internet. DNS queries are issued whenever a remote server is accessed by a client application, usually to obtain an IP address for a given server's name. A missing or an incorrect reply to such queries can halt all communication between the server and the client.
In many cases DNS queries are answered by a _caching resolver_ on the client's local network, which oftentimes has the full response in its cache. If the local caching resolver does not know the answer to a query, it will interact with other DNS participants to obtain, cache, and return the answer to the client. The DNS system is organized as a hierarchy of DNS name servers, with servers at the higher levels of the hierarchy containing information about servers one level lower than themselves. At the top of the hierarchy reside 13 DNS roots. Most DNS queries are answered by lower levels of the DNS hierarchy, but some queries propagate to DNS roots.
We consider queries that do not follow DNS server naming conventions or occur too frequently with respect to historical data to be _unexpected queries_. Previous work analyzing DNS trace data has revealed a surprising number of unexpected queries hitting the root servers [10, 11, 19]. Yet, none of the previous work provides a full characterization of unexpected and expected traffic into disjoint and meaningful categories. Such a classification would help us better understand the root causes and severity of different types of unexpected traffic. Our research aims to develop a comprehensive classification of DNS queries, and use it to study trends in DNS query traffic at B-root over the past ten years.
We make the following contributions in this paper:
1. We propose a detailed, comprehensive DNS query classification scheme to cover main root causes of unexpected DNS traffic.
2. We quantify unexpected DNS query traffic at B-root, one of 13 DNS roots, both in aggregate and per class of interest. We study longitudinal trends in unexpected DNS queries over the course of ten years, using annually collected "Day in the Life of the Internet (DITL)" data [4]. We find an increase in unexpected traffic -- from 39.57% in 2013 to 67.91% in 2022. We additionally find 36.55% of traffic in 2022 is due to empty queries.
3. We identify top senders of DNS queries to B-root, then classify the traffic coming from each sender. We reveal most traffic from top senders consists of unexpected queries.
## 2 Background and Related Work
In this section we provide more details about DNS hierarchy and query resolution, as well as discuss prior work characterizing traffic at DNS query resolvers.
### DNS Hierarchy
DNS queries are issued by applications and operating systems whenever a connection is established with a remote server. For example, if a user types the URL www.example.com into their browser, a DNS query containing the aforementioned name is sent to discover the corresponding IP address. The query is first sent to a _caching resolver_ - usually a server in the same local network as the query sender. The caching resolver (_resolver_ for short) will attempt to respond by searching for the query's answer in its cache. If the full answer is not in the cache, the resolver will interact with different _authoritative name servers_ to try to determine the full answer. Such an answer will be returned to the client, then saved in cache to respond to potential future queries.
The DNS utilizes a distributed, hierarchical zoning system in order to designate authority, ensure the system's robustness, and effectively distribute query traffic across servers. Each name, e.g. www.example.com, can be viewed as a collection of name segments separated by dots [2]. Each name segment resides in a separate name space, each of which has a designated _authoritative name server_. Such servers will answer queries about names within that name space, either by providing the full answer or by directing the query sender to an authoritative name server for a subset of the given name space. In our example, the caching resolver trying to find the IP address for www.example.com may have in its cache the name and IP address of the name server authoritative for the .com _top-level domain (TLD)_. The resolver may repeat its query to the TLD name server, and receive back the name and IP address of the name server authoritative for the example.com _second-level domain (sTLD)_. The resolver will cache this new information, then repeat its query to this sTLD name server and receive a full response, which will be cached and returned to the client.
If the resolver from our example does not have information about the relevant TLD server (e.g., .com), it will send its query to one of the DNS root name servers. The names and IP addresses of root name servers are often hard-coded in operating system releases, thus a resolver always knows how to reach a DNS root. The root zone is served by 13 logical root name servers (13 root server names, A-M) and hundreds of physical servers.1 The root name server will provide the name and IP address of the relevant TLD server, which will be cached by the caching resolver. The rest of name resolution continues as described in the previous paragraph.
Footnote 1: The number of root servers used to be capped at 13 due to the size limitation of UDP reply packets. After the introduction and rollout of anycast, the number of physical servers per DNS root has increased.
Responses from DNS replies carry a "time-to-live (TTL)" value, a number of seconds that the authoritative server suggests they should be cached. A resolver can decide to cache a DNS record for a shorter time than the recommended TTL. DNS records for names higher in the hierarchy, such as sTLD and TLD servers, usually have TTL values in hours or even days. Caching should ensure that a resolver can quickly reply to most client DNS queries, and that higher levels of DNS hierarchy do not receive too frequent queries from any resolver.
Two recent extensions to the DNS protocol introduce changes to the query resolution process. Query minimization (QMIN) RFC 7816 [9] instructs DNS resolvers to protect their clients' privacy by only asking each authoritative name server for the name segment that the resolver is currently trying to resolve. In our example, the resolver would not send the full www.example.com query to each authoritative name server. Instead, it would send a com query to a root name server, an example.com query to the TLD name server, and the full query to the sTLD name server. RFC 8109 [13] introduces _priming queries_, i.e., queries of type nameserver (NS) for the root zone ".". Such queries can be sent to any root, and the reply should specify all root server names and IP addresses. Priming queries can help a resolver learn a new IP address for a root name server. In practice, the mapping between root server names and IP addresses has been stable enough not to require additional root servers to be introduced [12].
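As a small illustration of the minimization behaviour just described, the sketch below lists the query names a QMIN resolver would expose at each level of the hierarchy while resolving a full name; it is a toy walk over the label structure, not an implementation of RFC 7816.

```python
def qmin_sequence(qname):
    """Query names a minimizing resolver reveals while walking from the
    root zone down to the full name (RFC 7816 behaviour, simplified)."""
    labels = [label for label in qname.rstrip(".").split(".") if label]
    return [".".join(labels[-i:]) + "." for i in range(1, len(labels) + 1)]

print(qmin_sequence("www.example.com."))
# -> ['com.', 'example.com.', 'www.example.com.']
```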
Finally, while the most popular DNS query maps a DNS name into an IP address (query type A for an IPv4 address, query type AAAA for an IPv6 address), there are other types of DNS queries. An NS query returns the name and often the IP address of the authoritative name server for the specified query name. A pointer record (PTR) query asks for a reverse mapping from an IP address into a DNS name. A start-of-authority (SOA) query requests some metadata about the name, such as the email address of the administrator, when the domain was last updated, and how long the server should wait between refreshes. Mail exchanger (MX) queries are for mail servers serving a given name. There exist several other, less frequent, query types [1].
### DNS Query Classification
Castro et al. [10] analyzed DITL datasets at eight root servers from years 2006 through 2008. They uncovered very high volumes of unexpected traffic across the root zone -- almost 98% of queries were identified as unexpected. Unexpected queries were specified to fall into any one of the following categories: invalid query class (a field in a DNS query with five valid values), A or AAAA queries where the query name is an address, queries with invalid TLDs, queries with non printable characters or underscores, PTR queries for a private IP address, identical queries (same class, type, name and ID), repeated queries (same class, type, and name but different ID), and queries where referral records (TLD and sTLD) have not been properly cached. Our study is in a sense a modern sequel to this previous work. We extend the
Castro et al. study in a few ways: (1) we dive deeper into invalid query categories, characterizing them by their root cause, (2) our analysis covers newer DNS query trends, like query minimization and priming queries, (3) our analysis spans ten years of DITL data albeit at only one root, and (4) we analyze top senders in several invalid query categories to investigate if there are any commonalities between them that would explain their querying behavior.
### DNS Sender Analysis
A recent study measured centralization in senders to B-Root, with a specific focus on tracking the 5 top cloud providers [14]. This study reveals that in 2020, more than 30% of all queries to two TLD name servers and B-root were sent from five large cloud providers: Google, Amazon, Microsoft, Facebook, and Cloudflare. We extend upon this work by examining and ranking all senders at B-root instead of just cloud providers. We find that senders which send at rates that are too high predominantly send queries with invalid TLDs. A similar trend was observed by Castro et al. [10] in 2006-2008.
## 3 Dataset
Each year, most root servers and several TLD servers collect and publish all their query traffic on a specific, predetermined day. This effort is known as "Day in the Life of the Internet" or DITL, and is undertaken to produce useful data for research [4]. Although DITL data has been collected at other root name servers since 2006, B-Root joined the experiment in 2013; accordingly, our sample covers just ten years of traffic (2013-2022) [16]. Ideally, we would have analyzed all roots' data from the DITL collection. However, this data is only available on OARC servers, which have limited computational power. For this reason, we started with B-root data, which was available locally at our servers, and we plan to extend our analysis to other roots' data in the future.
To speed up our analysis, we analyzed samples from B-Root's DITL data. For each year of DITL experiment data, we utilized the sample function from Python's random package [17] to generate ten year-specific subsets of data. For each of our 10 subsets, we generate 4 additional subsets each denoting one of four time segments: 12-1am, 6-7am, 12-1pm, and 6-7pm. Table 1 shows the details of the dataset we analyze in this paper. In total, we study 28 billion DNS queries spread over 10 years, one day per year.
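A minimal sketch of this per-year subsampling is shown below; it assumes each year's traces have already been parsed into records carrying an hour-of-day field, and the record layout, sampling fraction, and seed are placeholders rather than the actual pipeline.

```python
import random

def sample_year(records, fraction=0.1, seed=2022):
    """Uniform random subset of one year's DITL query records."""
    rng = random.Random(seed)
    return rng.sample(records, int(len(records) * fraction))

def hour_segment(records, start_hour):
    """Records whose timestamp falls in [start_hour, start_hour + 1)."""
    return [r for r in records if start_hour <= r["hour"] < start_hour + 1]

# Toy example standing in for one year's parsed trace.
rng = random.Random(0)
year_records = [{"hour": rng.uniform(0, 24), "qname": "example.com."} for _ in range(10_000)]
main_sample = sample_year(year_records)
segments = {h: hour_segment(year_records, h) for h in (0, 6, 12, 18)}
print(len(main_sample), {h: len(v) for h, v in segments.items()})
```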
## 4 Methodology
In this section we describe our methodology to determine query classes, and how we implemented our approach.
### Classification Goals
While previous works have primarily focused on opportunistically measuring some aspects of unexpected queries, our goal is to provide a more comprehensive, general classification. We seek to define a method to allow us to stratify DNS queries into sections denoting different root causes of unexpected traffic. To do this, we consider two qualities to be of interest when creating our classification method: full-coverage and mutual exclusivity. A method that has full-coverage places every single query of a given dataset into a single, defined category at each level of classification. A method that is mutually exclusive ensures there's no overlap between query categories at the same classification level.

\begin{table}
\begin{tabular}{c c c c c c}
**Date (Y/M/D)** & **Main** & **12-1am** & **6-7am** & **12-1pm** & **6-7pm** \\
2013/05/28 & 1.00B & 19.96M & 37.68M & 69.44M & 48.53M \\ \hline
2014/04/28 & 0.98B & 255.10M & 61.24M & 11.19M & 46.80M \\ \hline
2015/04/13 & 0.96B & 35.87M & 42.82M & 63.72M & 29.55M \\ \hline
2016/04/05 & 4.22B & 148.62M & 154.42M & 344.51M & 172.56M \\ \hline
2017/04/11 & 3.50B & 134.81M & 128.26M & 183.75M & 172.21M \\ \hline
2018/04/10 & 3.90B & 97.01M & 121.28M & 255.69M & 181.40M \\ \hline
2019/04/08 & 3.63B & 93.38M & 157.30M & 295.90M & 103.52M \\ \hline
2020/05/05 & 3.52B & 67.86M & 144.47M & 267.39M & 116.75M \\ \hline
2021/04/13 & 2.07B & 79.78M & 69.06M & 112.40M & 93.40M \\ \hline
2022/04/12 & 4.11B & 92.40M & 105.70M & 326.66M & 87.48M \\ \end{tabular}
\end{table}
Table 1: Evaluated Datasets
We opt to primarily classify queries based on query names; yet, we also recognize it's possible to classify queries by other features (e.g. query types, cached/uncached queries, repeated queries) and we hope to do so in the future. In developing our name-based method that fulfills the aforementioned qualities, we consider the DNS zoning hierarchy as defined by RFC 1035 [2]. In moving down the zoning hierarchy from the root zone (right to left in the context of a textual DNS query), we recognize three mutually exclusive possibilities: the query is empty ("."), the query ends following some text (e.g. "foo."), or the query continues (e.g. "[more query].foo."). This method is recursive on the latter query case, which is relevant for processing queries whose subdomains are multi-level [2].
In accordance with our method, we stratified our data into three all-encompassing, mutually exclusive categories - _empty_ ("."), _has-TLD_ (e.g. "example.com"), and _one-word_ (e.g. "foolbar."). Next we split the has-TLD category into _valid-TLD_ and _invalid-TLD_ by comparing the TLD of each query to IANA's maintained valid TLD list [3]. Within the one-word category we also attempt to detect presence of valid TLDs, which can occur due to query minimization [9]. We further stratified the valid-TLD category by categorizing valid TLDs by frequency.
Before categorizing invalid-TLD queries by TLD frequency, we separated classifications of queries we deemed interesting. We quantified queries that contained top-level domains consisting of entirely numbers because they're deemed invalid by RFC1034 [1]. We quantified queries from Appletalk, a discontinued proprietary suite of networking protocols for Apple products, as it could potentially indicate legacy Apple product usage [8], leaking private data into the public Internet. We quantified queries with TLDs containing "bad encoding" (ASCII depicted as "\(\backslash\)xxx\(\backslash\)xxx") because of its high frequency in DITL data. Because Chromium-initiated queries are known to occasionally contain an invalid-TLD [15], we quantified those as well (the importance of Chromium-initiated queries is discussed further in Section 5.5).
We separated Chromium-initiated queries from within the one-word category due to their overabundance in certain years. Chromium-initiated queries are discussed further in Section 5.5. We quantified minimized queries (minimized queries at root servers look like one-word queries whose content is a valid top-level domain) in our collections after the technique's introduction in March of 2016 [9]. Minimized queries and their importance are discussed further in Section 5.6.
Our implementation of our classification method involves use of dictionary-based matching and regular expressions. We achieve exclusivity by enforcing the order in which we apply classification criteria within a Python program.
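As a concrete illustration of that ordered, name-based stratification, the sketch below applies the criteria in a fixed order against a small stand-in TLD set and simple regular expressions. The tiny TLD list, the Chromium heuristic (7-15 random lowercase letters), and the subcategory labels are simplifications for illustration, not the authors' implementation.

```python
import re

# Stand-in for the IANA valid-TLD list used in the paper (the real list has ~1500 entries).
VALID_TLDS = {"com", "net", "org", "arpa", "edu", "gov"}
CHROMIUM_RE = re.compile(r"^[a-z]{7,15}$")   # random probe names: 7-15 lowercase letters

def classify(qname):
    """Return (category, subcategory) for a query name; criteria are applied
    in a fixed order so the categories stay mutually exclusive."""
    name = qname.rstrip(".")
    if name == "":                                   # empty (root) queries
        return ("empty", None)
    labels = name.split(".")
    tld = labels[-1].lower()
    if len(labels) == 1:                             # one-word queries
        if tld in VALID_TLDS:
            return ("one-word", "minimized")         # QMIN traffic looks like a bare TLD
        if CHROMIUM_RE.match(tld):
            return ("one-word", "chromium-like")
        return ("one-word", "other")
    if tld in VALID_TLDS:
        return ("valid-TLD", tld)
    if tld.isdigit():                                # all-number TLDs are invalid per RFC 1034
        return ("invalid-TLD", "all-number")
    if "\\x" in tld:                                 # badly encoded labels
        return ("invalid-TLD", "bad-encoding")
    return ("invalid-TLD", tld)

for q in [".", "com.", "ajkdhfqwe.", "www.example.com.", "host.internal.", "10.0.0.1."]:
    print(q, "->", classify(q))
```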
## 5 Results
In this section we present our results. We show the breakdown of DNS query traffic in 2013 and 2022 in Section 5.1. We analyze trends in query types in Section 5.2. We analyze longitudinal trends in Section 5.3. We explore top senders of queries to B-Root in Section 5.4. We specifically explore Chromium-initiated queries in Section 5.5. We quantify the increasing presence of minimized queries in Section 5.6. We explore empty queries in Section 5.7.
### 2013 & 2022 B-Root Traffic Breakdown and Comparison
We applied our classification method to DITL collections of B-Root DNS traces from 2013 and 2022 with the intent of revealing trends over the past ten years (see Figures 1 and 2 respectively). After splitting our data into the four previously designated categories - empty, valid-TLD, invalid-TLD, and one-word - we did additional work to further stratify each category. Within valid-TLD, we quantified the highest frequency valid top-level domains. Within invalid-TLD, we quantified Appletalk queries [8], queries with top-level domains that are incorrectly encoded (e.g. "[query].\(\backslash\)xxx\(\backslash\)xxx\(\backslash\)xxx"), queries with all-number top-level domains2, and Chromium-initiated queries (see Section 5.5). Beyond these specific categories within invalid-TLD, we quantified the highest frequency unique invalid top-level domains. Within the One-Word category, we quantified Chromium-resulting queries. Because minimized queries were introduced in 2016, we characterized them only in our 2022 dataset.
Footnote 2: All-number top-level domains are explicitly specified as invalid by RFC1034.
Between 2013 and 2022, we see a 34% increase in empty queries, a 36% reduction in valid-TLD queries, a 10% reduction in invalid-TLD queries, and a 10% increase in one-word queries. The large increase in empty queries is significant and could be due to priming queries, as discussed in Section 5.7.
Within valid-TLD, we see a sharp reduction in the percentage of .com queries (21.43% to 5.67%), .net queries (13.83% to 3.74%), and .org queries (2.77% to 0.60%). Surprisingly, .arpa queries stay at approximately the same percentage across the 10-year gap (2.66% to 2.87%).

Within invalid-TLD, we see a small increase in .internal queries despite an overall reduction in the category--this is potentially indicative of a persistent, growing leak. Appletalk queries decrease from 1.13% to 0.57%, which is expected given Appletalk is long defunct [8].
### Query Types
Figure 3 shows the distribution of queries by type at B-Root from 2013 through 2022. For all years, A-type queries, used to request an IPv4 address for a given query name, are the most common (60% of the total prior to 2022). AAAA-type queries, used to request an IPv6 address, are generally the second most common query type (15% of the total prior to 2022). In 2022, we measure a large reduction in A and AAAA-type queries and a large increase in NS-type queries. The increase in NS-type queries is associated with the implementation of resolver priming [13]--priming queries are further discussed in Section 5.6.
### Longitudinal Trends
We applied our classification method to each of our collections of B-Root DNS traces from 2013 through 2022 with the intent of discovering longitudinal trends. Figure 4 shows the breakdown of empty, one-word, invalid-TLD, and valid-TLD queries for each year 2013-2022. Valid-TLD queries consistently decline from 57.82% in 2013 to 22.84% in 2022. Invalid-TLD queries stay approximately constant through the 10-year sample, hovering between 20% and 30% of all queries. One-word queries see a steady increase from 8.26% in 2013 to 68.45% in 2020, followed by a sharp decline to 18.87% in 2022. This rise and fall is largely due to Chromium-resulting queries, as further discussed in Section 5.5. Empty queries hovered around 3% until jumping to 37.42% in 2022. This sudden increase is thought to be a result of excessive priming queries, as discussed in Section 5.7.
### Top Senders
We identified resolvers that are top senders in the 2022 DITL dataset, and show them and their query composition in Figure 5. Amazon Web Services (AWS) accounts for the 1st, 2nd, 3rd, 8th, and 10th highest IP host groups and accounts for approximately 14% of all queries to B-Root. AWS sends almost entirely invalid-TLD and one-word queries to B-Root. Microsoft Azure, another cloud computing platform, has a similar query classification breakdown to AWS. This is potentially indicative of rented cloud machines being misconfigured or used for malicious purposes. Charter and Compudyn, both internet service providers, account for 3.43% of all traffic to B-Root. Both providers primarily send invalid-TLD queries, potentially indicating a misconfiguration. Additionally, empty and valid-TLD queries aren't present in significant quantities from these large senders.
### Chromium-Resulting Queries
Chromium is an open-source web browser project primarily maintained by Google. In addition to Google Chrome, several other major web browsers, including Microsoft Edge, Opera, Brave, Samsung Internet, and Amazon Silk, are based on the Chromium codebase. In total, approximately 75% of the web-browser market share is Chromium-based [7].

Figure 1: Classification of 1.00 billion DNS traces at B-Root in 2013
Chromium includes a feature titled Omnibox, which allows users to enter website names, URLs, or search terms. Chromium then decides whether the entered term is a URL or a search term by performing a DNS query. A URL will result in a valid response, while a search term will not -- Chromium can then supply search results from Google. However, a user's machine may be behind a captive portal (e.g., in a hotel), which intercepts each DNS query and responds with either the correct response (e.g., in the URL case) or with a redirect to an internal Web site (e.g., in the search term case). This situation would interfere with Chromium's response to user input. For this reason, each Chromium browser attempts to detect the presence of captive portals by sending three randomly generated query names [6][5]. These queries contain 7-15 lowercase alphabetic characters (e.g., "daozjwend.").
As a consequence of this feature combined with Chromium's high market share, root zone name servers have reported a very high quantity of Chromium-originating queries. Our findings at B-Root agree with the findings of previous work quantifying these queries [15]. We see a gradual increase in Chromium-originating queries from 2013 through 2020, followed by a sharp decline after 2020, when Omnibox's probing process was changed [18]. This trend is shown in Figure 6. Because Chromium-resulting queries have been known to appear both with and without a TLD [15], we quantify both types.
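For illustration, a heuristic along the following lines could be used to flag Chromium-like probe names. The exact matching rule (a single random label of 7-15 lowercase letters, optionally followed by one more label) is our reading of the description above rather than the precise filter used in this paper or in [15].

```python
import re

# Assumed pattern for the random label generated by Omnibox probes.
CHROMIUM_LABEL = re.compile(r"^[a-z]{7,15}$")

def looks_like_chromium_probe(qname):
    """Heuristically flag Chromium captive-portal probe queries (a sketch)."""
    labels = [l for l in qname.rstrip(".").split(".") if l]
    # Probes have been observed both as a bare label and with a TLD appended.
    if len(labels) not in (1, 2):
        return False
    return CHROMIUM_LABEL.match(labels[0]) is not None

print(looks_like_chromium_probe("daozjwend."))       # True
print(looks_like_chromium_probe("www.example.com"))  # False
```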
### DNS Query Name Minimization
We seek to quantify the presence of minimized queries (these appear at B-Root as one-word valid TLDs, e.g., com) since the inception of the query minimization specification in March of 2016 [9]. Because the DNS is highly distributed and controlled by hundreds of different organizations, a change in the DNS protocol takes time to propagate. Thus, studying the quantity of minimized queries hitting the root zone could provide insight into the speed at which DNS protocol changes propagate throughout the entire DNS network, potentially aiding in future DNS specification development. In our data, we find a generally steady increase in minimized queries after 2016, as shown in Figure 7. The distribution of minimized queries across TLDs matches the frequency of valid TLDs found in Section 5.1, as expected.
Figure 2: Classification of 4.11 billion DNS traces at B-Root in 2022
Figure 4: Breakdown of longitudinal trends from 2013 through 2022 in DITL datasets at B-Root
Figure 3: Breakdown of DNS query types at B-Root from 2013 through 2022
Figure 5: Top query senders in 2022 DITL dataset at B-root
Figure 6: Chromium-initiated queries from 2013 through 2022 in DITL datasets at B-Root
Figure 7: Breakdown of minimized DNS queries at B-Root from 2016 through 2022
### Empty Queries
One of the most notable outliers we discovered in our data is the overabundance of empty queries in 2022--37.42% of queries to B-Root in 2022 are empty. Before 2022, empty queries only ever occupied as much as 4% of all traces. Upon investigation, we find that the top senders of empty queries account for only a small fraction of all empty queries, as shown in Figure 8. Similarly, we find each sender, on average, sends 2.8 empty queries to B-Root. We also find 97.20% of empty queries sent to B-Root in 2022 are type NS. The decentralization and query type indicate that the majority of the empty queries hitting B-Root in 2022 are _Priming Queries_, a standard introduced in March of 2017 [13]. In contrast with the gradual growth of minimized query traffic after the introduction of RFC7816, the growth of empty queries (shown in Figure 4) was nonexistent, then sudden. To the best of our knowledge, we are the first to identify this pattern, and in the future we hope to track down the root cause.
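A simple check of this kind (our own sketch; the field names are assumptions) is enough to separate likely priming queries from other empty queries:

```python
def is_likely_priming_query(qname, qtype):
    """Flag queries that match the priming pattern discussed above:
    an empty (root) query name asked with query type NS."""
    return qname.strip().strip(".") == "" and qtype.upper() == "NS"

print(is_likely_priming_query(".", "NS"))  # True
print(is_likely_priming_query(".", "A"))   # False
```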
## 6 Conclusion and Further Directions
This investigation into B-Root's DNS traces collected from the annual DITL experiment over ten years characterized longitudinal trends, as well as modern issues, such as a high volume of priming queries. Future work involves characterizing valid-TLD traffic at B-Root and identifying unexpected queries in that category. We would also like to analyze other roots' TLD data and see if trends identified at B-Root apply to other roots. We encourage other DNS operators to implement our classification method. We also hope to extend our classification approach with more categories in the future.
## 7 Acknowledgement
This work was performed as a part of a Research Experience for Undergraduates (REU) program, supported by National Science Foundation (NSF) grant #2051101.
|
2310.12847 | Correspondence between Color Glass Condensate and High-Twist Formalism | The Color Glass Condensate (CGC) effective theory and the collinear
factorization at high-twist (HT) are two well-known frameworks describing
perturbative QCD multiple scatterings in nuclear media. It has long been
recognized that these two formalisms have their own domain of validity in
different kinematics regions. Taking direct photon production in proton-nucleus
collisions as an example, we clarify for the first time the relation between
CGC and HT at the level of a physical observable. We show that the CGC
formalism beyond shock-wave approximation, and with the
Landau-Pomeranchuk-Migdal interference effect is consistent with the HT
formalism in the transition region where they overlap. Such a unified picture
paves the way for mapping out the phase diagram of parton density in nuclear
medium from dilute to dense region. | Yu Fu, Zhong-Bo Kang, Farid Salazar, Xin-Nian Wang, Hongxi Xing | 2023-10-19T15:57:27Z | http://arxiv.org/abs/2310.12847v1 | # Correspondence between Color Glass Condensate and High-Twist Formalism
###### Abstract
The Color Glass Condensate (CGC) effective theory and the collinear factorization at high-twist (HT) are two well-known frameworks describing perturbative QCD multiple scatterings in nuclear media. It has long been recognized that these two formalisms have their own domain of validity in different kinematics regions. Taking direct photon production in proton-nucleus collisions as an example, we clarify for the first time the relation between CGC and HT at the level of a physical observable. We show that the CGC formalism beyond shock-wave approximation, and with the Landau-Pomeranchuk-Migdal interference effect is consistent with the HT formalism in the transition region where they overlap. Such a unified picture paves the way for mapping out the phase diagram of parton density in nuclear medium from dilute to dense region.
_Introduction.-_ In high-energy scatterings involving heavy nuclei, many interesting nuclear dependent phenomena have been observed [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. The essential ingredient for understanding novel nuclear dependence in different collision systems is the description of multiple parton scattering inside the nuclei. It is thus critical to elucidate these multiple scatterings in perturbative QCD (pQCD) in different kinematic regimes [15; 16; 17; 18] of the nuclear medium.
The Color Glass Condensate (CGC) effective theory [19; 20; 21; 22; 23; 24; 25] and the collinear factorization at high-twist (HT) [26; 27; 28] are two well-known theoretical frameworks describing QCD multiple scatterings in nuclear media. They have been extensively used to describe the phase diagram of parton density in nucleon/nuclei as shown schematically in Fig. 1, as a function of parton momentum fraction \(x\) and the associated hard scale \(Q\). In the dilute region where \(x\sim\mathcal{O}(1)\), the corresponding pQCD collinear factorized formalism at leading twist [29] has been very successful and set as a benchmark theory for high-energy physics. In the relatively dense region where \(x\lesssim\mathcal{O}(1)\), the high-twist expansion approach based on the generalized QCD collinear factorization theorem [27; 28] provides a robust framework to describe multiple scatterings in nuclear medium order by order, which are essentially power corrections to the leading twist cross-section. Such an approach has been successfully applied to calculate the incoherent multiple scattering at the next-to-leading power [30; 31], and to the study of jet quenching in cold nuclei [32; 33]. In the high energy limit, \(x\sim 1/\sqrt{s}\to 0\), the gluon density grows rapidly resulting in a high gluon occupation number. It is expected that the gluon density is tamed by non-linear QCD effects at sufficiently small-\(x\)[34; 35]. The CGC provides an effective description of this saturated regime, with many experimental consequences [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52].
It has long been recognized that the HT and CGC approaches have drastic differences. One of the main differences is the QCD factorization theorems they rely on. The HT approach follows the generalized QCD collinear factorization, in which the medium property is encoded in the multi-parton quantum correlation functions satisfying the DGLAP-type evolution [53; 54; 55]. The CGC, on the other hand, follows transverse momentum-dependent factorization at small-\(x\), and the corresponding medium properties are encoded in correlators of light-like Wilson lines, which satisfy the Jalilian-Marian-Iancu-McLerran-Weigert-Leonidov-Kovner/Balitsky-Kovchegov nonlinear evolution [56; 57; 58; 59; 60; 61; 62; 63; 64]. In terms of multiple scattering, extra soft rescatterings are considered order by order in a power series in addition to the hard scattering in HT approach, while in the CGC analysis, all scatterings are treated on the same footing and within the eikonal approximation which allows for their exponentiation into the light-like Wilson line.
Despite the many successes of the HT and CGC formalisms, each was limited to its own domain of validity. It is believed that they have to agree with each other in the overlap region where both are applicable, simply because of the universality of the medium property they probe. There have been tremendous efforts to show the correspondence between CGC and QCD collinear factorization, aiming to extend the applicability of CGC from the small-\(x\) (dense) to the large-\(x\) (dilute) region, with particular emphasis on the sub-eikonal corrections to the parton propagators [65; 66; 67; 68; 69; 70; 71; 72; 73; 74], the rapidity evolution of unintegrated gluon distributions [75; 76; 77; 78; 79], as well as new semiclassical approaches [80; 81; 82; 83; 84; 85; 86]1. However, no consensus has yet been reached on the relations between HT and CGC and the identification of transition mechanisms from dilute to dense regions.
Footnote 1: Sub-eikonal corrections are also necessary to describe the physics of spin at small-\(x\)[87; 88; 89; 90; 91; 92; 93; 94; 95].
In this letter, we clarify for the first time the correspondence between HT and CGC formalisms at the level of a physical observable. In particular, taking direct photon production in \(pA\) collisions as an example, we present a systematic treatment of the nuclear enhanced initial- and final-state double scatterings, as well as their interference. We prove the consistency between HT and CGC by going beyond the shock wave approximation and including the Landau-Pomeranchuk-Migdal (LPM) interference effect [96; 97]. We argue that the generalization of such an approach to all hard scattering processes is straightforward. Therefore, our results provide a unified picture of dilute-dense dynamics in nuclear media. It paves the way to mapping out the phase diagram of atomic nuclei in terms of parton density as shown in Fig. 1, and to understanding the underlying multiple scattering mechanisms.
_Dilute versus dense regions.-_ In order to show explicitly the correspondence between the CGC and the generalized collinear factorization formalism, we take direct photon production in \(pA\) collisions as an example, \(p(P_{p}^{-})+A(P_{A}^{+})\to\gamma(p_{\gamma})+X\), where \(P_{p}^{-}\), \(p_{\gamma}\) and \(P_{A}^{+}\) are, respectively, the momentum for the incoming proton, the observed photon and the averaged momentum per nucleon inside the nucleus. The direct photon production has a unique advantage to test QCD multiple scattering effects due to the absence of strong interaction between the photon and the nuclear medium. We focus on the interactions between quarks from the proton and gluons from the nucleus. The extension to other channels and processes can be performed in a similar fashion.
In collisions involving a large nucleus with mass number \(A\), effects of multiple scatterings can be enhanced by powers of the nuclear size, \(L_{A}^{-}\sim A^{1/3}\), thus becoming important. Such multiple scatterings can be described by the generalized factorization formalism order by order as illustrated in Fig. 2, i.e. \(\mathrm{d}\sigma=\mathrm{d}\sigma^{\mathrm{LT}}+\mathrm{d}\sigma^{\mathrm{T4} }+\ldots\), where LT and T4 stand for leading twist and twist-4, respectively. In the dilute region, the cross-section is dominated by partonic processes of single scattering, as shown in Fig. 2(a). The standard LT factorization yields [29]
\[E_{\gamma}\frac{\mathrm{d}^{3}\sigma^{\mathrm{LT}}}{\mathrm{d}^{3}\mathbf{p_{ \gamma}}}= f_{q/p}(x_{q})\otimes xf_{g/A}(x)\otimes H_{q+g\to\gamma+q}^{(2)}\,, \tag{1}\]
where \(\otimes\) stands for the convolution of the LT quark \(f_{q/p}\) and gluon \(f_{g/A}\) distribution function and the hard partonic part \(H_{q+g\to\gamma+q}^{(2)}=(e_{q}^{2}\alpha_{em}\alpha_{s}/N_{c})\xi^{2}[1+(1- \xi)^{2}]/p_{\gamma}^{4}\) for single scattering \(q+g\to\gamma+q\)[98], where \(\xi=p_{\gamma}^{-}/(x_{q}P_{p}^{-})\). The summation over the quark flavor index is implicit throughout this Letter.
The leading nuclear corrections beyond the single scattering picture can be formulated within the framework of generalized factorization. Previous studies of direct photon production in \(pA\) collisions have focused on contributions from initial-state double scattering [30; 99], which are dominant in the large-\(x\) region. In this study, using the generalized factorization, we go beyond the large-\(x\) region and calculate for the first time the complete result including both initial and final state double scatterings, as well as their interference. The final results can be written schematically as,
\[E_{\gamma}\frac{\mathrm{d}^{3}\sigma^{\mathrm{T4}}}{\mathrm{d}^{3}\mathbf{p_{ \gamma}}}= f_{q/p}\otimes\mathcal{D}_{X}T_{gg}\otimes H_{q+gg\to\gamma+q}^{(4)}\,, \tag{2}\]
Figure 1: Phase diagram of parton density in nuclear medium in terms of momentum fraction \(x\) and probing scale \(Q\).
Figure 2: Schematic diagrams for single (a) and multiple (b) scatterings for direct photon production in \(pA\) collisions, the circles represent quark-gluon hard interaction.
for each cut diagram, where \(H^{(4)}\) is the corresponding hard function at twist-4, \(T_{gg}\)'s are twist-4 gluon-gluon correlation functions in the nucleus, for example,
\[T_{gg}(\{x_{i}\})= \int_{\{y_{i}^{-}\}}e^{iP_{A}^{+}\left[x_{1}y^{-}+x_{2}(y_{1}^{-}-y _{2}^{-})+x_{3}y_{2}^{-}\right]}\theta(y^{-}-y_{1}^{-})\] \[\times\theta(-y_{2}^{-})\langle P_{A}|F_{\alpha}^{+}(y_{2}^{-})F^ {\beta+}(0)F_{\beta}^{+}(y^{-})F^{+\alpha}(y_{1}^{-})|P_{A}\rangle, \tag{3}\]
with \(\int_{\{y_{i}^{-}\}}\equiv[1/(4\pi^{2}P_{A}^{+})]\int dy^{-}dy_{1}^{-}dy_{2}^{-}\). The short-hand notation \(\mathcal{D}_{X}\) stands for the linear combination of derivatives \(\partial/\partial x_{i}\) and \(\partial^{2}/(\partial x_{i}\partial x_{j})\) that act on \(T_{gg}(\{x_{i}\})\equiv T_{gg}(x_{1},x_{2},x_{3})\), and we call it non-derivative contribution when \(\mathcal{D}_{X}=1\). Unlike the LT parton distribution functions, which possess a probability density interpretation, the twist-4 matrix elements characterize the quantum parton-parton correlations inside the nucleus. The detailed derivation will be given in a forthcoming companion paper [100], and we provide the complete expressions for Eq. (2) in the supplemental material.
In the extremely dense region (\(x\to 0\)) as shown in Fig. 1, the gluon occupation number becomes large, and the probe coherently interacts with the entire nucleus, where the coherence length \(\lambda_{c}\sim 1/xP_{A}^{+}\gg L_{A}^{-}\). In this regime, one must resum coherent multiple scatterings to all orders. In the CGC effective theory, within the hybrid factorization formalism [101], the cross-section can be written as
\[E_{\gamma}\frac{\mathrm{d}^{3}\sigma^{\mathrm{CGC}}}{\mathrm{d} ^{3}\mathbf{p}_{\gamma}}= \frac{e_{q}^{2}\alpha_{em}}{2\pi^{2}}\xi^{2}[1+(1-\xi)^{2}]\otimes f _{q/p}(x_{q})\] \[\otimes\int\mathrm{d}^{2}\mathbf{l}_{\perp}\frac{\mathbf{l}_{\perp}^{2}F( x,\mathbf{l}_{\perp})}{(\xi\mathbf{l}_{\perp}-\mathbf{p}_{\gamma\perp})^{2}\mathbf{p}_{\gamma \perp}^{2}}\,, \tag{4}\]
where the dipole distribution is defined as
\[F(x,\mathbf{l}_{\perp})= \int\frac{\mathrm{d}^{2}\mathbf{y}_{\perp}}{2\pi}\int\frac{\mathrm{d} ^{2}\mathbf{y}_{\perp}^{\prime}}{2\pi}e^{-\mathbf{l}_{\perp}\cdot(\mathbf{y}_{\perp}-\mathbf{ y}_{\perp}^{\prime})}\] \[\times\frac{1}{N_{c}}\langle\mathrm{Tr}[V^{\dagger}(\mathbf{y}_{\perp} ^{\prime})V(\mathbf{y}_{\perp})]\rangle\,, \tag{5}\]
and \(V(\mathbf{y}_{\perp})=\mathcal{P}\left[\exp\left(ig\int\mathrm{d}y^{-}A^{+}(\mathbf{y} _{\perp},y^{-})\right)\right]\) stands for the light-like Wilson line, encoding the multiple eikonal scattering of the projectile quark with the nucleus. Here, \(\langle\dots\rangle_{x}\) stands for the average over different classical color charge configurations in the CGC. The cross-section in Eq. (4) has a collinear divergence at \(\mathbf{p}_{\gamma\perp}=\xi\mathbf{l}_{\perp}\), which can be regularized by the redefinition of the photon fragmentation function [102; 103].
_Mismatch between HT and power expansion of CGC._ - In this Letter, we are aiming to find the link between the CGC and HT beyond the small-\(x\) limit. Such a kinematic region can be realized when \(p_{\gamma\perp}\) is larger than the saturation scale \(Q_{s}\sim\langle l_{\perp}\rangle\), which is the typical transverse momentum in the multiple parton scattering. To see the connection to the high-twist formalism, we perform Taylor (or collinear) expansion of the CGC result in powers of \(Q_{s}^{2}/\mathbf{p}_{\gamma\perp}^{2}\) for \(p_{\gamma\perp}>Q_{s}\). We also use the following relations between the collinear gluon distributions and the moments of dipole distribution,
\[\lim_{x\to 0}xf_{g/A}(x)= \frac{N_{c}}{2\pi^{2}\alpha_{s}}\int\mathrm{d}^{2}\mathbf{l}_{\perp} \mathbf{l}_{\perp}^{2}F(x,\mathbf{l}_{\perp})\,, \tag{6}\] \[\lim_{x\to 0}T_{gg}(x,0,0)= \frac{N_{c}^{2}}{2(2\pi)^{4}\alpha_{s}^{2}}\int\mathrm{d}^{2}\bm {l}_{\perp}\mathbf{l}_{\perp}^{4}F(x,\mathbf{l}_{\perp})\,. \tag{7}\]
The first relation above has been long established in Refs. [104; 105], whereas the second equation is derived for the first time and will be presented in the companion paper [100]. The CGC result in Eq. (4) after the collinear expansion in small-\(x\) limit becomes,
\[E_{\gamma}\frac{\mathrm{d}^{3}\sigma^{\mathrm{CGC}}}{\mathrm{d} ^{3}\mathbf{p}_{\gamma}} = f_{q/p}(x_{q})\otimes H_{q+g\to\gamma+q}^{(2)}\] \[\otimes\left[xf_{g/A}(x)+\frac{(2\pi)^{2}\alpha_{s}}{N_{c}}\frac{ 4\xi^{2}}{p_{\gamma\perp}^{2}}T_{gg}(x,0,0)+\cdots\right]_{x\to 0}. \tag{8}\]
One immediately sees that the first term matches the LT result in Eq. (1) in the limit of \(x\to 0\), where the longitudinal phase \(e^{ixP_{A}^{+}y^{-}}\) in \(f_{g/A}\) can be neglected. Such matching between CGC and LT results has been realized in other processes [105; 106; 107]. However, the matching to the HT formalism has never been established. The second term in Eq. (8) reproduces the non-derivative term in the twist-4 result in Eq. (2) if one neglects all the longitudinal phases in Eq. (3) and assumes all the twist-4 distributions reduce to the universal object at small \(x\) in Eq. (7). Since the derivative terms in Eq. (2) also arise from the longitudinal phases that get entangled with the collinear expansion, they are one of the primary reasons for the mismatch between the CGC and HT formalism.
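As an intermediate step that is not spelled out above (and assuming the dipole distribution depends only on \(|\mathbf{l}_{\perp}|\), so that odd angular moments vanish), the collinear expansion behind Eq. (8) can be sketched as
\[\frac{1}{(\xi\mathbf{l}_{\perp}-\mathbf{p}_{\gamma\perp})^{2}}=\frac{1}{\mathbf{p}_{\gamma\perp}^{2}}\left[1+\frac{2\xi\,\mathbf{l}_{\perp}\cdot\mathbf{p}_{\gamma\perp}-\xi^{2}\mathbf{l}_{\perp}^{2}}{\mathbf{p}_{\gamma\perp}^{2}}+\frac{4\xi^{2}(\mathbf{l}_{\perp}\cdot\mathbf{p}_{\gamma\perp})^{2}}{\mathbf{p}_{\gamma\perp}^{4}}+\mathcal{O}\left(\frac{l_{\perp}^{3}}{p_{\gamma\perp}^{3}}\right)\right],\]
so that, after the angular averages \(\langle\mathbf{l}_{\perp}\cdot\mathbf{p}_{\gamma\perp}\rangle=0\) and \(\langle(\mathbf{l}_{\perp}\cdot\mathbf{p}_{\gamma\perp})^{2}\rangle=\mathbf{l}_{\perp}^{2}\mathbf{p}_{\gamma\perp}^{2}/2\),
\[\int\mathrm{d}^{2}\mathbf{l}_{\perp}\frac{\mathbf{l}_{\perp}^{2}F(x,\mathbf{l}_{\perp})}{(\xi\mathbf{l}_{\perp}-\mathbf{p}_{\gamma\perp})^{2}\mathbf{p}_{\gamma\perp}^{2}}\simeq\frac{1}{\mathbf{p}_{\gamma\perp}^{4}}\int\mathrm{d}^{2}\mathbf{l}_{\perp}\,\mathbf{l}_{\perp}^{2}F(x,\mathbf{l}_{\perp})\left[1+\frac{\xi^{2}\mathbf{l}_{\perp}^{2}}{\mathbf{p}_{\gamma\perp}^{2}}\right],\]
whose two moments are traded for \(xf_{g/A}(x)\) and \(T_{gg}(x,0,0)\) via Eqs. (6) and (7).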
_Sub-eikonal phases and LPM interference._ - A rigorous proof of the matching between CGC and HT factorization at finite \(x\) is nontrivial, and has recently triggered various efforts [65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85]. In this Letter, we reveal for the first time two key ingredients for the matching: sub-eikonal phases and the LPM effect, in proving the exact correspondence at twist-4 level in finite-\(x\) region.
In the CGC, the eikonal scatterings between the fast projectile and the nucleus' small-\(x\) background field can be resummed into an effective vertex, known as the shock wave approximation, allowing one to write down the dipole distribution in the compact form shown in Eq. (5). The price paid for such a compact expression is the neglect of the information encoded in the longitudinal phase factors, which is essential at finite-\(x\). Therefore, we must first bring back the longitudinal sub-eikonal phases to restore the information associated with the non-zero longitudinal momentum transfer. Since these sub-eikonal phases cannot be easily exponentiated to all orders, we examine the corresponding twist contributions from the CGC effective vertex by expanding the light-like Wilson
line in powers of gauge field \(A^{+}\). At leading order in the expansion, the interacting vertex between the quark projectile and the nuclear medium becomes:
\[\Gamma_{q}(l)= (2\pi)\delta(l^{-})\gamma^{-}\int_{y}e^{-i\mathbf{l}_{\perp}\cdot\mathbf{y} _{\perp}}e^{il^{+}y^{-}}igA^{+}(y^{-},\mathbf{y}_{\perp})\,,\]
where \(l\) denotes the momentum transfer from the medium to the quark, and we introduced the short-hand \(\int_{y}\equiv\int\mathrm{d}^{2}\mathbf{y}_{\perp}\int\mathrm{d}y^{-}\). Armed with this vertex we find that the single scattering contribution reads
\[E_{\gamma}\frac{\mathrm{d}^{3}\sigma_{\mathrm{S}}^{\mathrm{CGC} _{\mathrm{sub}}}}{\mathrm{d}^{3}\mathbf{p}_{\mathbf{\gamma}}}= f_{q/p}(x_{q})\otimes\int_{y,y^{\prime}}\!\!\mathcal{H}_{ \mathrm{S}}\langle\mathrm{Tr}\left[A^{+}(y)A^{+}(y^{\prime})\right]\rangle. \tag{9}\]
The explicit expression for the perturbative factor \(\mathcal{H}_{\mathrm{S}}\) is given in the supplemental material. The leading term in the expansion of \(\mathcal{H}_{\mathrm{S}}\) in inverse powers of \(p_{\gamma\perp}^{2}\) is
\[\mathcal{H}_{\mathrm{S}}(p_{\gamma},y,y^{\prime})=\frac{2}{\pi}H _{q+g\rightarrow\gamma+q}^{(2)}e^{ixP_{A}^{+}(y^{-}-y^{\prime-})}\] \[\times\delta^{(2)}(\mathbf{y}_{\perp}-\mathbf{y}_{\perp}^{\prime})( \partial_{\mathbf{y}_{\perp}}\cdot\partial_{\mathbf{y}_{\perp}^{\prime}})+\mathcal{O} (1/\mathbf{p}_{\mathbf{\gamma}\perp}^{6})\,. \tag{10}\]
The derivatives convert the gauge fields \(A^{+}\) into field strength tensors, \((\partial_{\mathbf{y}_{\perp}}\cdot\partial_{\mathbf{y}_{\perp}^{\prime}})A^{+}A^{+}\to F_{\perp}^{+}\cdot F_{\perp}^{+}\), which eventually leads to the exact matching to the standard leading twist collinear factorization result shown in Eq. (1), including the longitudinal phase factor \(e^{ixP_{A}^{+}y^{-}}\). As is customary, to make this identification, we employed the correspondence between the CGC average and the quantum average [108], \(\langle\mathcal{O}\rangle=\langle P_{A}|\mathcal{O}|P_{A}\rangle/\langle P_{A}|P_{A}\rangle\) with \(\langle P_{A}|P_{A}^{\prime}\rangle=2P_{A}^{+}\delta^{(3)}(P_{A}-P_{A}^{\prime})\).
In connecting to the complete twist-4 contribution, we need to consider the expansion of the light-like Wilson line in the CGC up to three gluon fields at amplitude level, corresponding to triple scatterings. In the following, we take double scattering as an example to show explicitly the matching. The extension to single-triple interference process can be carried out in the same fashion and will be detailed in Ref. [100].
As shown in Fig. 3, there are three diagrams that contribute to double scatterings at the amplitude level. In the eikonal approximation employed in the CGC, emissions between scatterings are omitted [101], thus only diagrams \((a,b)\) contribute, corresponding to initial- and final-state double scattering, respectively. However, by keeping track of the longitudinal sub-eikonal phases, one observes that the computation of diagram \((c)\) yields two different contributions that only differ by an overall phase factor. In terms of the formation time of the radiated photon, \(\tau_{\gamma}=2x_{q}P_{p}^{-}\xi(1-\xi)/(\mathbf{p}_{\mathbf{\gamma}\perp}-\xi\mathbf{l}_{\perp})^{2}\), this phase difference can be expressed as \(1-e^{i\Delta y^{-}/\tau_{\gamma}}\), where \(\Delta y^{-}\) is the distance between the scattering locations. It is clear that in the high-energy limit \(P_{p}^{-}\rightarrow\infty\), there is perfect destructive interference and thus this diagram vanishes. This cancellation displays the characteristic LPM effect [96; 97], revealing the fact that when the photon formation time is much larger than the separation between the two scattering centers, \(\tau_{\gamma}\gg\Delta y^{-}\), the photon becomes coherent and cannot resolve the two different scatterings. However, in the finite-\(x\) region, the phases do not cancel each other completely and therefore there remains a net contribution at the twist-4 level, which is required to establish the matching with the HT formalism. The LPM effect has been studied extensively in the context of parton energy loss [109; 110; 111; 112; 113; 114; 115; 116; 117], but this is the first time it is emphasized within the context of matching CGC and HT.
Similar types of diagrams are also neglected in the single-triple interference processes from CGC expansion. We emphasize again that such types of diagrams are non-negligible in the finite-\(x\) region due to the LPM effect, which is another important ingredient in the exact matching between CGC and HT. Including these two missing ingredients, we obtain the following result
\[E_{\gamma}\frac{\mathrm{d}^{3}\sigma_{\mathrm{D}}^{\mathrm{CGC} _{\mathrm{sub}}}}{\mathrm{d}^{3}\mathbf{p}_{\mathbf{\gamma}}}=f_{q/p}(x_{q})\otimes \int_{y,y^{\prime}_{1},y_{2}}\Theta(y,y^{\prime},y_{1},y_{2})\] \[\times\mathcal{H}_{\mathrm{D}}\langle\mathrm{Tr}\left[A^{+}(y_{2 })A^{+}(y^{\prime})A^{+}(y)A^{+}(y_{1})\right]\rangle\,, \tag{11}\]
for each cut diagram, each possessing different step functions \(\Theta\) that reflect different orderings as well as different perturbative factors \(\mathcal{H}_{\mathrm{D}}\). Their complete expressions are shown in the supplemental material. As in the single scattering case, we expand \(\mathcal{H}_{\mathrm{D}}\) in inverse powers of \(p_{\gamma\perp}^{2}\) and we find (up to next-to-leading order):
\[\mathcal{H}_{\mathrm{D}}(p_{\gamma},y,y^{\prime},y_{1},y_{2})=8\alpha_{s}H_{q+g\rightarrow\gamma+q}^{(2)}e^{ixP_{A}^{+}(y^{-}-y^{\prime-})}\] \[\times\delta^{(2)}(\mathbf{y}_{\perp}-\mathbf{y}_{1\perp})\delta^{(2)}(\mathbf{y}_{\perp}^{\prime}-\mathbf{y}_{2\perp})\delta^{(2)}(\mathbf{y}_{1\perp}-\mathbf{y}_{2\perp})\] \[\times\left[1+\frac{\mathcal{D}_{X}}{\mathbf{p}_{\mathbf{\gamma}\perp}^{2}}(\partial_{\mathbf{y}_{\perp}}\cdot\partial_{\mathbf{y}_{2\perp}})\right](\partial_{\mathbf{y}_{\perp}}\cdot\partial_{\mathbf{y}_{\perp}^{\prime}})+\mathcal{O}(1/\mathbf{p}_{\mathbf{\gamma}\perp}^{8})\,. \tag{12}\]
As in the LT case, the derivatives \((\partial_{\mathbf{y}_{\perp}}\cdot\partial_{\mathbf{y}_{\perp}^{\prime}})\) transform two gauge fields into field strength tensors. Then, the first term in the square bracket in Eq. (12) contributes to the gauge link in the LT gluon distribution function. The two additional derivatives \((\partial_{\mathbf{y}_{\perp}}\cdot\partial_{\mathbf{y}_{2\perp}})\), on the second term in the square bracket, promote the two remaining gauge fields into field strength tensors, thus providing the genuine T4 contribution for double (and triple-single interference) scattering. The final result matches exactly the twist-4 result in Eq. (2), including all the longitudinal phase factors and the corresponding derivative operators \(\mathcal{D}_{X}\). Thus, by including sub-eikonal phases and the diagrams that are responsible for the LPM effect in the finite-\(x\) region, we finally achieve an exact matching between the CGC and HT.

Figure 3: Double scattering diagrams for direct photon production in \(pA\) collisions.
_Summary_- We proved for the first time the consistency between the CGC and the collinear factorization formalism of twist expansion up to the twist-4 level for direct photon production in \(pA\) collisions. We clarify explicitly that the naive collinear expansion of the CGC in terms of multiple scattering reproduces the leading twist result in the small-\(x\) limit, while only recovering part of the complete result at twist-4. We emphasize two important missing ingredients in the CGC that lead to the mismatch, i.e., sub-eikonal phases and diagrams related to the LPM interference, both of which are important at finite-\(x\). Including these two missing ingredients in the CGC, we show the exact matching to collinear factorization at leading twist in the dilute region and at twist-4 in the relatively dense region,
\[E_{\gamma}\frac{\mathrm{d}\sigma^{\mathrm{CGC}_{\mathrm{sub}}}}{\mathrm{d}^{3}\mathbf{p}_{\gamma}}\bigg{|}_{p_{\gamma\perp}>Q_{s}}=E_{\gamma}\frac{\mathrm{d}\sigma^{\mathrm{LT}}}{\mathrm{d}^{3}\mathbf{p}_{\gamma}}+E_{\gamma}\frac{\mathrm{d}\sigma^{\mathrm{T4}}}{\mathrm{d}^{3}\mathbf{p}_{\gamma}}+\cdots. \tag{13}\]
The methodology developed in this paper can be easily extended to any other processes, such as single inclusive hadron production in \(pA\) collisions, and dijet production in deep inelastic scattering, as long as the CGC factorization is valid. Therefore, one can take full advantage of these processes to calculate their perturbative hard parts using our approach, and then map out the phase diagram from dilute to dense regions from existing RHIC and LHC data, and future measurements at Electron Ion Colliders [18; 118; 119]. We thus expect a very broad application of our new framework in \(eA\) and \(pA\) collisions, which can provide robust theoretical input for searching for signatures of gluon saturation.
This work is supported by the NSFC under Grants Nos. 12022512, 12035007, 11890714 and 1935007, by the Guangdong Major Project of Basic and Applied Basic Research No. 2020B0301030008 (H.X.), by the US NSF Grant No. PHY-1945471 (Z.K., F.S.) and No. OAC-2004571 within the X-SCAPE Collaboration (F.S., X.W.), by the U.S. DOE under Contract No. DE-AC02-05CH11231, and within the framework of the SURGE Collaboration.
|
2304.08758 | Characterization, synthesis, and optimization of quantum circuits over
multiple-control $\textit{Z}$-rotation gates: A systematic study | We conduct a systematic study of quantum circuits composed of
multiple-control $Z$-rotation (MCZR) gates as primitives, since they are
widely-used components in quantum algorithms and also have attracted much
experimental interest in recent years. Herein, we establish a
circuit-polynomial correspondence to characterize the functionality of quantum
circuits over the MCZR gate set with continuous parameters. An analytic method
for exactly synthesizing such quantum circuit to implement any given diagonal
unitary matrix with an optimal gate count is proposed, which also enables the
circuit depth optimal for specific cases with pairs of complementary gates.
Furthermore, we present a gate-exchange strategy together with a flexible
iterative algorithm for effectively optimizing the depth of any MCZR circuit,
which can also be applied to quantum circuits over any other commuting gate
set.
Besides the theoretical analysis, the practical performances of our circuit
synthesis and optimization techniques are further evaluated by numerical
experiments on two typical examples in quantum computing, including diagonal
Hermitian operators and Quantum Approximate Optimization Algorithm (QAOA)
circuits with tens of qubits, which can demonstrate a reduction in circuit
depth by 33.40\% and 15.55\% on average over relevant prior works,
respectively. Therefore, our methods and results provide a pathway for
implementing quantum circuits and algorithms on recently developed devices. | Shihao Zhang, Junda Wu, Lvzhou Li | 2023-04-18T06:34:18Z | http://arxiv.org/abs/2304.08758v1 | Characterization, synthesis, and optimization of quantum circuits over multiple-control _Z_-rotation gates: A systematic study
###### Abstract
We conduct a systematic study of quantum circuits composed of multiple-control \(Z\)-rotation (MCZR) gates as primitives, since they are widely-used components in quantum algorithms and also have attracted much experimental interest in recent years. Herein, we establish a circuit-polynomial correspondence to characterize the functionality of quantum circuits over the MCZR gate set with continuous parameters. An analytic method for exactly synthesizing such quantum circuit to implement any given diagonal unitary matrix with an optimal gate count is proposed, which also enables the circuit depth optimal for specific cases with pairs of complementary gates. Furthermore, we present a gate-exchange strategy together with a flexible iterative algorithm for effectively optimizing the depth of any MCZR circuit, which can also be applied to quantum circuits over any other commuting gate set. Besides the theoretical analysis, the practical performances of our circuit synthesis and optimization techniques are further evaluated by numerical experiments on two typical examples in quantum computing, including diagonal Hermitian operators and Quantum Approximate Optimization Algorithm (QAOA) circuits with tens of qubits, which can demonstrate a reduction in circuit depth by \(33.40\%\) and \(15.55\%\) on average over relevant prior works, respectively. Therefore, our methods and results provide a pathway for implementing quantum circuits and algorithms on recently developed devices.
## I Introduction
With the arrival of the noisy intermediate-scale quantum (NISQ) era [1], the synthesis and optimization of quantum gate circuits have become the crucial step towards harnessing the power of quantum computing on realistic devices [2; 3]. While single-qubit rotation and two-qubit controlled-NOT (CNOT) gates have received long-term investigations as they constitute an elementary gate set capable of universal quantum computation [4; 5], the multiple-control rotation (MCR) gates defined to act on more qubits also attract a great deal of interest from both fundamental and practical aspects:
* Theoretically, MCR operations often serve as important components in many quantum algorithms or quantum computing models, such as preparing quantum hypergraph states [6; 7], building a circuit-based quantum random access memory [8; 9], and participating in Shor's factoring algorithm [10], different types of quantum search algorithms [11; 12; 13], quantum walks [14], and fault-tolerant quantum computation [15; 16]. Therefore, a good understanding of MCR circuits can facilitate the design and analysis of new quantum information processing schemes. In fact, MCR gates have been included as basic building blocks in some popular quantum computing software frameworks, such as Qiskit [17] and PennyLane [18].
* Instead of performing concatenated single- and two-qubit gates in conventional experiments [11; 19; 20], recent experimental progress has also been made for direct implementations of MCR gates in a variety of physical systems, including ion traps [21], neutral atoms [22], linear and nonlinear quantum optics [23; 24; 25], and superconducting circuit systems [26; 27; 28]. In particular, MCR gates have been used as \(native\) quantum gates in practical experiments for demonstrating quantum algorithms [29; 30] and quantum error correction [31]. Therefore, quantum circuits over suitable MCR gates need to be specifically considered for benchmarking and exploiting this emerging quantum hardware.
To our knowledge, several notable works have investigated quantum circuit models at the level of MCR gates with various techniques and results. For example, discussions about the use of multiple-control Toffoli gates as basic building blocks in circuit synthesis were presented in early years, including the use of Reed-Muller Spectra [32], Boolean satisfiability (SAT) techniques [33], or NCV-\(|v_{1}\rangle\) libraries [34]. Notably, in 2014 the issue of decomposing diagonal Hermitian quantum gates into a set consisting solely of multiple-controlled Pauli \(Z\) operators was studied [35] by introducing a binary representation of these gates. In 2016, different circuit identities that can replace certain configurations of the multiple-control Toffoli gates with their simpler multiple-control relative-phase implementations were reported [36], showing optimized resource counts. Given these promising results, quantum circuits based on a wider range of multiple-control quantum gates and their applications are worthy of more in-depth exploration as well.
In this paper, we develop a systematic characterization, synthesis and optimization of quantum circuits over multiple-control \(Z\)-rotation (MCZR) gates with continuous parameters, each of which would apply a \(Z\)-axis rotation gate \(R_{Z}(\theta)=diag\{1,e^{i\theta}\}\) with a real-valued \(\theta\) to the target qubit only when all its control qubits are set to \(1\). In fact, such quantum gates play a prominent role in quantum state generation [6, 37, 38, 39], quantum circuit construction [40, 41, 42], and fault-tolerant quantum computation [15, 16]. Accordingly, schemes aimed at realizing fast and high-fidelity special or general MCZR gates are constantly being proposed [43, 44, 45, 46, 47, 48] as well as experimentally demonstrated [22, 27, 29, 30, 31] in recent years. In 2017, one-step implementation of the two-qubit \(CZ\), three-qubit \(CCZ\), and four-qubit \(CCCZ\) gates was realized with an experimental fidelity of about \(0.94\), \(0.868\), and \(0.817\), respectively, based on the continuous-variable geometric phase in a superconducting circuit [27]. In 2020, a multimode superconducting processor circuit with all-to-all connectivity that can implement near-perfect generalized \(CCZ(\theta)\) gates with an arbitrary angle \(\theta\) as the native three-qubit controlled operations was presented [29] and used to experimentally demonstrate the three-qubit Grover's search algorithm and the quantum Fourier transform. Hence, how to perform quantum computing tasks over such gates with a low gate count and circuit depth is of practical significance, motivating us to conduct a systematic study in this work. For a general consideration, the number of control qubits, the set of acting qubits and the angle parameters \(\theta\) of MCZR gates are all unrestricted. Our main contributions are as follows:
* In Section III, we put forward a convenient polynomial representation to describe the functionality of the MCZR circuits, indicating that any realizable unitary matrix must be a diagonal one (see Eq. (6)).
* In Section IV, we analytically derive a circuit synthesis method that can provide an optimal gate-count for implementing any given diagonal unitary matrix, which also achieves an optimal circuit depth for cases consisting of well-defined pairs of complementary gates (see **Theorem** 3).
* In Section V, we consider how to reduce the circuit depth of any given MCZR circuit by proposing a gate-exchange strategy (see **Lemma** 2) together with a flexible iterative depth-optimization algorithm (see **Algorithm** 1), which can yield better optimization results at the cost of more execution time.
* In Section VI, we validate the performance of our synthesis and depth-optimization methods for MCZR circuits by experimental evaluations on two typical examples, including the diagonal Hermitian quantum operators and Quantum Approximate Optimization Algorithm (QAOA) circuits, both of which show improvements over previous results. For the former, our constructed circuits on average can achieve a \(33.40\%\) depth reduction over the prior work [35] for the circuit size \(n\in[2,12]\). For the latter, our optimized circuit depth ranges from \(3.00\) to \(4.05\) for \(n\in[6,50]\), and on average can reduce the circuit depth up to \(58.88\%\) over randomly selected circuits and \(15.55\%\) over the results from Ref. [49], respectively. Notably, here we achieve a nearly-constant depth for moderate-size QAOA circuits on 3-regular graphs.
We expect the methods and results of this paper would be beneficial to the study of implementing quantum circuits and algorithms on specific quantum systems, and some further directions are discussed in Section VII.
## II Notation
For convenience, here we introduce some notations used throughout this paper. We denote the set \(\{a,a+1,a+2,\ldots,b\}\) by \([a,b]\) with \(a,b\) being integers and \(a\leq b\). When \(a=1\), notation \([a,b]\) is simplified to \([b]\). For a binary number \(x\), we use \(q(x)=bin2dec(x)\) to represent its corresponding decimal number. The symbols \(||v||\) and \(|S|\) indicate the Hamming weight of a binary string \(v\) (i.e. the number of \(1\)s in \(v\)) and the size of the set \(S\) (i.e. the number of its elements), respectively. For an \(n\)-bit string \(v=v_{1}v_{2}\ldots v_{n}\), we denote the set of positions of all '\(1\)' bits as \(P_{v}=\{p_{1},p_{2},\cdots,p_{||v||}\}\) such that \(v_{p_{1}}=v_{p_{2}}=\cdots=v_{p_{||v||}}=1\). We use \(I_{m\times n}\) to denote the size \(m\times n\) identity matrix, and the symbol \(\circ\) is used to concatenate \(m\) (\(m\geq 2\)) subcircuits \(\{QC_{1},QC_{2},\ldots,QC_{m}\}\) to form a circuit \(QC\) such that \(QC=QC_{1}\circ QC_{2}\circ\ldots\circ QC_{m}\).
## III Characterization of MCZR circuits
To characterize the functionality of the MCZR circuit, we first establish a useful circuit-polynomial correspondence and then illustrate its unitary matrix representation.
The MCZR gate family for an \(n\)-qubit quantum circuit can be denoted as \(\{C^{(k)}Z(\theta_{c,t}):c\subset[n],t\in[n],k=|c|\}\), with \(c\) being the control set, \(t\) being the target, and \(\theta_{c,t}\) being a \(Z\)-rotation angle parameter. By definition, the action of a MCZR gate on each computational basis state is
\[C^{(k)}Z(\theta_{c,t}):\left|x_{1},x_{2},\ldots,x_{n}\right>\] \[\mapsto \exp(i\theta_{c,t}x_{t}\prod_{j\in c}x_{j})\left|x_{1},x_{2}, \ldots,x_{n}\right>. \tag{1}\]
The global phase factor in Eq. (1) indicates that the function of gate \(C^{(k)}Z(\theta_{c,t})\) remains unchanged under any permutation of the \(k\) control and one target qubits in the set
\(act=c\bigcup t\). Therefore, we can simply denote each MCZR gate acting on all qubits in a set \(act\subseteq[n]\) as \(G(act,\theta_{act})\) such that
\[G(act,\theta_{act}):\ket{x_{1},x_{2},\ldots,x_{n}}\] \[\mapsto \exp(i\theta_{act}\prod_{j\in act}x_{j})\ket{x_{1},x_{2},\ldots,x_ {n}}. \tag{2}\]
In this way, any quantum circuit \(QC\) consisting of \(m\) MCZR gates \(G(act_{1},\theta_{act_{1}})\), \(G(act_{2},\theta_{act_{2}})\),..., \(G(act_{m},\theta_{act_{m}})\) can transform each basis state as
\[QC:\ket{x_{1},x_{2},\ldots,x_{n}}\] \[\mapsto \exp(i\cdot p(x_{1},x_{2},\ldots,x_{n}))\ket{x_{1},x_{2},\ldots,x_ {n}}, \tag{3}\]
with
\[p(x_{1},x_{2},\ldots,x_{n})=\sum_{k=1}^{m}\theta_{act_{k}}\left(\prod_{j\in act _{k}}x_{j}\right) \tag{4}\]
being a \(phase\) \(polynomial\) associated with the circuit \(QC\). That is to say, any given \(n\)-qubit MCZR circuit \(QC\) corresponds to a unique phase polynomial with real coefficients and degree at most \(n\).
Now we turn to the unitary matrix representation of \(n\)-qubit MCZR circuits. Eq. (2) reveals that each MCZR gate can be explicitly expressed as a diagonal unitary matrix of size \(2^{n}\times 2^{n}\) as
\[G(act,\theta_{act})=\sum_{x\in\{0,1\}^{n}}\exp(i\theta_{act}\prod_{j\in act}x _{j})\ket{x}\bra{x}, \tag{5}\]
with all its diagonal elements being 1 or \(e^{i\theta_{act}}\). Since all MCZR gates are diagonal and commutative, two or more MCZR gates that act on the same set of qubits in a circuit can be merged into one by just adding their angle parameters. Without loss of generality, in this paper we focus on the non-trivial MCZR circuit \(QC\) such that all the constituent \(m\) gates have distinct qubit sets \(act_{k}\) \((k=1,2,\ldots,m)\), and its unique phase polynomial in Eq. (4) exactly has degree \(\max\{|act_{k}|:k=1,2,\ldots,m\}\) and \(m\) terms with real coefficients being the angle parameters \(\{\theta_{act_{k}}:k=1,2,\ldots,m\}\).
\[D(QC)=\sum_{x\in\{0,1\}^{n}}\exp(i\cdot p(x))\ket{x}\bra{x}, \tag{6}\]
with the polynomial \(p(x=x_{1},x_{2},\ldots,x_{n})\) defined in Eq. (4). Obviously, two MCZR circuits over different gate sets would implement two distinct diagonal unitary matrices. For clarity, we display an instance circuit with \(n\)=3 and its polynomial as well as unitary matrix representation in Fig. 1.
## IV Optimal synthesis of MCZR circuits
In above section, we have revealed that a MCZR circuit can implement a diagonal unitary matrix. This in turn raises a natural question: can an arbitrary diagonal operator be implemented by a MCZR gate circuit exactly? This is an attractive subject since diagonal unitary matrices have a wide range of applications in quantum computing and quantum information [49, 50, 51, 52, 14].
In this section, we address this issue by proposing a circuit synthesis method to construct an \(n\)-qubit gate-count optimal MCZR circuit for implementing a size \(N\times N\) (\(N=2^{n}\)) diagonal unitary matrix
\[D(\overrightarrow{\alpha}=[\alpha_{0},\alpha_{1},\ldots,\alpha_ {N-1}]) =\left[\begin{array}{ccccc}e^{i\alpha_{0}}&0&\cdots&0&0\\ 0&e^{i\alpha_{1}}&\cdots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&0&e^{i\alpha_{N-1}}\end{array}\right]\] \[=\sum_{x\in\{0,1\}^{n}}\exp(i\alpha_{q(x)})\ket{x}\bra{x} \tag{7}\]
with \(q(x)=bin2dec(x)\); the method also makes the circuit depth optimal for specific cases. In particular, we emphasize that the \(optimality\) mentioned in this paper always indicates an \(exact\) optimal value for the gate count and circuit depth rather than an \(asymptotically\) optimal result, meaning that our optimal results cannot be improved any further. For convenience, here we rewrite each available gate \(G(act,\theta_{act})\) in Eq. (2) as \(G(v,\theta_{v})\) by associating \(act\) with an \(n\)-bit string \(v=v_{1}v_{2}\ldots v_{n}\in\{0,1\}^{n}\) such that
\[v_{j}:=\begin{cases}1,&j\in act;\\ 0,&j\in[n]\backslash act.\end{cases} \tag{8}\]
Our main results in this section are summarized as **Theorems** 1, 2, and 3.
**Theorem 1.**_The MCZR gate set \(\{G(v,\theta_{v})\}\) for implementing a target diagonal unitary matrix \(D(\overrightarrow{\alpha})\) in Eq. (7) with \(2^{n}\) given parameters \([\alpha_{0},\alpha_{1},\ldots,\alpha_{N-1}]\) is unique, and each gate parameter can be computed analytically as_
\[\theta_{v}=(-1)^{||v||}\sum_{x:P_{x}\subseteq P_{v}}(-1)^{||x||}\alpha_{q(x)},\quad v\in\{0,1\}^{n}, \tag{9}\]
_with \(q(x)\), \(P_{v}(P_{x})\), and \(||v||(||x||)\) defined in Section II. Since \(\theta_{v}=0\) indicates a trivial identity gate that can be omitted, the optimal gate-count for implementing \(D(\overrightarrow{\alpha})\) is thus \(|\{G(v,\theta_{v}\neq 0)\}|\) with \(\theta_{v}\) from Eq. (9)._
Proof.: According to Eq. (8), there are in total \(2^{n}{-}1\) different types of gates \(\{G(v,\theta_{v}):v\in\{0,1\}^{n}\backslash 00..0\}\) available to construct a MCZR circuit \(QC\) that functions as Eq. (6), with its phase polynomial \(p(x)\) in Eq. (4) rewritten as
\[p(x)=\sum_{v\in\{0,1\}^{n}\backslash 00..0}\theta_{v}({x_{1}}^{v_{1}}{x_{2}}^{v _{2}}\ldots{x_{n}}^{v_{n}}). \tag{10}\]
Since two quantum circuits which differ only by a global phase factor are equivalent, we suppose that a circuit \(QC\) described by Eq. (6) can perform the target diagonal matrix \(D(\overrightarrow{\alpha})\) in Eq. (7) as
\[e^{i\theta_{00..0}}\sum_{x\in\{0,1\}^{n}}\exp(i\cdot p(x))\ket{x} \bra{x}\\ =\sum_{x\in\{0,1\}^{n}}\exp(i\alpha_{q(x)})\ket{x}\bra{x}, \tag{11}\]
leading to
\[\theta_{00..0}+p(x)=\alpha_{q(x)},\quad x\in\{0,1\}^{n} \tag{12}\]
with \(\theta_{00..0}\) being a global phase factor, \(p(x)\) in Eq. (10) and \(q(x)=bin2dec(x)\) defined in Section II. In total, Eq. (12) gives us \(2^{n}\) linear equations as
\[\left\{\begin{array}{l}x=00..00:\ \theta_{00..00}=\alpha_{0};\\ x=00..01:\ \theta_{00..00}+\theta_{00..01}=\alpha_{1};\\ x=00..10:\ \theta_{00..00}+\theta_{00..10}=\alpha_{2};\\ x=00..11:\ \theta_{00..00}+\theta_{00..01}+\theta_{00..10}+\theta_{00..11}=\alpha_{3};\\ \vdots\\ x=11..11:\ \sum\limits_{v\in\{0,1\}^{n}}\theta_{v}=\alpha_{N-1}.\end{array}\right. \tag{13}\]
Thus, if we can solve a set of \(2^{n}\) angle parameters \(\{\theta_{v}:v\in\{0,1\}^{n}\}\) satisfying Eq. (13) for any given \(\overrightarrow{\alpha}=[\alpha_{0},\alpha_{1},\ldots,\alpha_{N-1}]\), then we obtain a MCZR circuit over the gate set \(\{G(v,\theta_{v})\}\) for implementing any \(D(\overrightarrow{\alpha})\) in Eq. (7). In the following, we give an exact analytical expression of the solution to Eq. (13) and prove its uniqueness.
The linear equations in Eq. (13) can be succinctly summarized into a standard form as
\[J\cdot\begin{pmatrix}\theta_{00..00}\\ \theta_{00..01}\\ \theta_{00..10}\\ \theta_{00..11}\\ \vdots\\ \theta_{11..11}\end{pmatrix}=\begin{pmatrix}\alpha_{0}\\ \alpha_{1}\\ \alpha_{2}\\ \alpha_{3}\\ \vdots\\ \alpha_{N-1}\end{pmatrix} \tag{14}\]
such that the size \(2^{n}\times 2^{n}\) coefficient matrix \(J\) has elements
\[J_{\vec{q}(x),\vec{q}(v)}=\begin{cases}1,&P_{v}\subseteq P_{x};\\ 0,&otherwise,\end{cases}\quad x,v\in\{0,1\}^{n}, \tag{15}\]
where the function \(\vec{q}(\cdot)=bin2dec(\cdot)+1\) transforms a binary string into a decimal number as the row or column index of a matrix, and the set \(P_{x(v)}\) about a string \(x(v)\) is defined in Section II. Consider another size \(2^{n}\times 2^{n}\) matrix denoted \(K\) with elements
\[K_{\vec{q}(v),\vec{q}(x)}=\begin{cases}(-1)^{||v||+||x||},&P_{x}\subseteq P _{v};\\ 0,&otherwise,\end{cases}\quad x,v\in\{0,1\}^{n}, \tag{16}\]
here we can prove that the product of the two matrices in Eqs. (16) and (15), \(Q=K\cdot J\), is exactly the identity matrix of size \(2^{n}\times 2^{n}\). By definition, the matrix elements of \(Q\) are
\[Q_{\vec{q}(v_{1}),\vec{q}(v_{2})} =\sum_{x\in\{0,1\}^{n}}K_{\vec{q}(v_{1}),\vec{q}(x)}J_{\vec{q}(x),\vec{q}(v_{2})}\] \[=(-1)^{||v_{1}||}\sum_{x:P_{v_{2}}\subseteq P_{x}\subseteq P_{v _{1}}}(-1)^{||x||}+0,\] \[v_{1},v_{2}\in\{0,1\}^{n}. \tag{17}\]
For the diagonal element of \(Q\) with \(v_{1}=v_{2}\) and \(P_{v_{1}}=P_{v_{2}}\), Eq. (17) turns into
\[Q_{\vec{q}(v_{1}),\vec{q}(v_{1})}=(-1)^{||v_{1}||}\cdot(-1)^{||v_{1}||}=1, \quad v_{1}\in\{0,1\}^{n} \tag{18}\]
by taking \(x=v_{1}\). For the off-diagonal elements of \(Q\) with \(v_{1}\neq v_{2}\) and \(P_{v_{1}}\neq P_{v_{2}}\), we have two cases:
1. \(P_{v_{2}}\not\subset P_{v_{1}}\), then no string \(x\) can satisfy \(P_{v_{2}}\subseteq P_{x}\subseteq P_{v_{1}}\), leading Eq. (17) to \(Q_{\vec{q}(v_{1}),\vec{q}(v_{2})}=0\);
2. \(P_{v_{2}}\subset P_{v_{1}}\), then there are totally \(2^{||v_{1}||-||v_{2}||}\) strings \(x\) that can satisfy \(P_{v_{2}}\subseteq P_{x}\subseteq P_{v_{1}}\), wherein \(||x||\) is even for exactly half of these \(x\) and odd for the other half, leading Eq. (17) to \(Q_{\vec{q}(v_{1}),\vec{q}(v_{2})}=0\).
At this point, we have proved that \(K\cdot J=I_{2^{n}\times 2^{n}}\), and thus, by standard linear algebra, the square matrix \(K\) defined in Eq. (16) is the unique inverse of the coefficient matrix \(J\) in Eq. (14). By multiplying both sides of Eq. (14) with \(K\) and using Eq. (16), we obtain an analytic form of the solutions \(\{\theta_{v}\}\) to Eq. (14) as
\[\theta_{v} =\sum_{x\in\{0,1\}^{n}}K_{\vec{q}(v),\vec{q}(x)}\alpha_{q(x)}\] \[=(-1)^{||v||}\sum_{x:P_{x}\subseteq P_{v}}(-1)^{||x||}\alpha_{q(x )},\quad v\in\{0,1\}^{n}, \tag{19}\]
with \(q(x)\), \(P_{v}(P_{x})\), and \(||v||(||x||)\) defined in Section II.
In summary, Eq. (19) represents a unique set of solutions so that the resultant MCZR circuit for implementing \(D(\overrightarrow{\alpha})\) in Eq. (7) naturally achieves an optimal
gate count. The angle parameter \(\theta_{v}=0\) indicates its associated MCZR gate \(G(v,\theta_{v})\) is a trivial identity gate that can be omitted. Therefore, the optimal gate count for realizing any diagonal unitary operator in Eq. (7) is \(|\{G(v,\theta_{v}\neq 0)\}|\) with the gate parameters obtained from Eq. (19), and in the worst case is \(2^{n}-1\) when all angle parameters are solved to be non-zero. For clarity, an example with \(n=3\) is shown in Figs. 2 (a) and (b).
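A direct transcription of Eq. (19) into code might look as follows (a sketch under our own naming conventions, not an artifact of the paper); for each string \(v\) it enumerates all strings \(x\) with \(P_{x}\subseteq P_{v}\):

```python
from itertools import product

def solve_thetas(alpha):
    """Solve the MCZR angle parameters of Eq. (19) from the 2**n target phases
    alpha = [alpha_0, ..., alpha_{N-1}] of the diagonal unitary in Eq. (7)."""
    n = (len(alpha) - 1).bit_length()
    assert len(alpha) == 2 ** n
    thetas = {}
    for v in product((0, 1), repeat=n):
        ones = [i for i, b in enumerate(v) if b == 1]
        total = 0.0
        # sum over all x whose '1' positions form a subset of those of v
        for mask in product((0, 1), repeat=len(ones)):
            x = [0] * n
            for pos, bit in zip(ones, mask):
                x[pos] = bit
            q = int("".join(map(str, x)), 2)        # q(x) = bin2dec(x)
            total += (-1) ** sum(x) * alpha[q]
        thetas[v] = (-1) ** sum(v) * total          # theta_{00..0} is the global phase
    return thetas
```

For \(n=2\), for instance, this returns \(\theta_{01}=\alpha_{1}-\alpha_{0}\) and \(\theta_{11}=\alpha_{0}-\alpha_{1}-\alpha_{2}+\alpha_{3}\), in agreement with Eq. (13).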
As a by-product, the uniqueness of the gate set \(\{G(v,\theta_{v})\}\) for implementing a diagonal unitary matrix as declared in **Theorem** 1 gives us **Lemma** 1.
**Lemma 1**.: _All MCZR gates in \(\{G(v,\theta_{v}):v\in\{0,1\}^{n},\theta_{v}\in[0,2\pi)\}\) are independent, that is, none of them can be decomposed into a combination of the others._
Besides the gate count, the circuit depth is another important circuit cost metric that needs attention, since a reduced circuit-depth means less circuit execution time. A quantum circuit can be represented as a directed acyclic graph (DAG) in which each node corresponds to a circuit's gate and each edge corresponds to the input/output of a gate. Then the circuit depth \(d\) is defined as the maximum length of a path flowing from an input of the circuit to an output [53]. Equivalently speaking, \(d\) is the number of layers of quantum gates that compactly act on disjoint sets of qubits [54; 55]. For example, the depth of the circuit in Fig. 1 with three non-zero angle parameters is \(d=2\). Notice that a set of MCZR gates may form distinct layer configurations with respective circuit depths, as exemplified by the comparison between the depth-4 circuit in Fig. 2(c) and the depth-5 circuit in Fig. 2(d). More generally, in **Theorem** 2 we reveal the optimal circuit depth of any MCZR circuit constructed from pairs of complementary gates as defined in **Definition** 1.
**Definition 1**.: _We call a pair of MCZR gates \(G(v_{1},\theta_{v_{1}})\) and \(G(v_{2},\theta_{v_{2}})\) are complementary if and only if they satisfy \(v_{1}\oplus v_{2}=11..11\)._
**Theorem 2**.: _The optimal circuit depth of any MCZR circuit constructed from \(d_{1}\) pairs of complementary gates is exactly \(d_{1}\)._
Proof.: Suppose we construct an \(n\)-qubit MCZR circuit over \(d_{1}\) pairs of complementary gates \(\{G(v,\theta_{v})\}\) by arranging them into \(d\) layers denoted \(\{L_{1},L_{2},\ldots,L_{d}\}\) such that all gates in each layer \(L_{i}\) (\(i=1,2,\ldots,d\)) are disjoint. Here we prove the minimum value of \(d\) is \(d_{1}\).
Figure 2: Example with \(n=3\) to show the gate-count optimal synthesis of quantum MCZR circuits. To construct a circuit for realizing a given diagonal unitary matrix \(D(\overrightarrow{\alpha})\) of size \(8\times 8\) in (a), we can first use Eq. (9) to solve the angle parameters \(\{\theta_{v}:v\in\{0,1\}^{3}\}\) of all employed MCZR gates as linear combinations of given \(\{\alpha_{0},\alpha_{1},\ldots,\alpha_{7}\}\) with non-zero coefficients marked green shown in (b). Note the angle parameter \(\theta_{v}=0\) indicates a trivial identity gate that can be removed in the circuit. Then, these gates are arranged in different layers to give a circuit layer configuration. For a general case, we present a circuit consisting of all gates in complementary pairs with a depth \(d=4\) in (c), while another circuit with a depth \(d=5\) is depicted in (d) for comparison. As a summary, the circuit in (c) to implement (a) can be directly obtained by **Theorem**3.
For brevity, we denote each gate layer \(L_{i}\) by an \(n\)-bit string as
\[s(L_{i})=\sum_{v:G(v,\theta_{v})\in L_{i}}v,\quad i=1,2,\ldots,d, \tag{20}\]
and all \(d\) such strings contain \(nd\) bits in total. On the other hand, since each complementary pair of gates covers all \(n\) qubits, the total number of '\(1\)' bits in the \(2d_{1}\) strings \(v\) representing these gates is \(nd_{1}\). Therefore, we have
\[nd\geq nd_{1} \tag{21}\]
and the lower bound of circuit depth as
\[d\geq d_{1}. \tag{22}\]
The equality in Eq. (22) is achieved when every gate layer \(L_{i}\) (\(i=1,2,\ldots,d\)) contains exactly one pair of complementary gates, thus forming a circuit with the optimal depth \(d_{1}\).
A typical application of **Theorem** 2 is to construct a depth-optimal MCZR circuit in the case that all \(2^{n}-1\) gate parameters solved from **Theorem** 1 for a given diagonal operator are non-zero. That is, when all these gates are arranged into \((2^{n}-2)/2=2^{n-1}-1\) layers of complementary gates as \(L_{1}=[v=00..01,v=11..10]\), \(L_{2}=[v=00..10,v=11..01]\),..., \(L_{2^{n-1}-1}=[v=01..11,v=10..00]\), plus a sole gate in \(L_{2^{n-1}}=[v=11..11]\), a circuit with the optimal depth \(2^{n-1}\) is obtained. For clarity, a circuit example with \(n=3\) and the optimal depth \(d=4\) is shown in Fig. 2(c), while another circuit with a larger depth \(d=5\) is shown in Fig. 2(d) for comparison.
Finally, the combination of **Theorem** 1 and **Theorem** 2 leads to a pair-wise circuit synthesis method described as **Theorem** 3.
**Theorem** 3 (Pair-wise MCZR circuit synthesis).: _An MCZR circuit \(QC\) over the gate set \(\{G(v,\theta_{v})\}\) for implementing an arbitrary diagonal unitary matrix \(D(\overrightarrow{\alpha})\) in Eq. (7) can be synthesized by computing each gate parameter \(\theta_{v}\) according to Eq. (9) and arranging the gates pair-wise as \(L_{1}=[v=00..01,v=11..10]\), \(L_{2}=[v=00..10,v=11..01]\),..., \(L_{2^{n-1}-1}=[v=01..11,v=10..00]\), \(L_{2^{n-1}}=[v=11..11]\) such that \(QC=L_{1}\circ L_{2}\circ\ldots\circ L_{2^{n-1}}\). Note that \(G(v,\theta_{v}=0)\) is an identity gate that does not appear in \(QC\), and thus \(QC\) has the optimal gate count \(m_{D}=|\{G(v,\theta_{v}\neq 0)\}|\) for any \(D(\overrightarrow{\alpha})\). Moreover, \(QC\) has the optimal circuit depth when the implementation of \(D(\overrightarrow{\alpha})\) only employs pairs of complementary gates. For example, this theorem gives the circuit in Fig. 2(c) implementing Fig. 2(a)._
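To make the layer arrangement of **Theorem** 3 concrete, the following sketch (names ours) takes the non-trivial gates as a dict \(\{v:\theta_{v}\}\) of bit masks, such as the output of solve_angles above, and groups each gate with its complementary partner when that partner is present.

```
def pairwise_layers(theta, n):
    """Theorem 3 layer arrangement: each layer holds a gate and, when present,
    its complementary partner (bit masks v and v ^ 11..11)."""
    full = (1 << n) - 1
    layers, leftover, seen = [], [], set()
    for v in sorted(theta):
        if v in seen:
            continue
        partner = v ^ full
        if partner in theta and partner not in seen:
            layers.append([(v, theta[v]), (partner, theta[partner])])
            seen.update({v, partner})
        else:
            leftover.append((v, theta[v]))   # e.g. v = 11..11, or partner has zero angle
            seen.add(v)
    return layers, leftover
```

Gates returned in leftover (for instance the sole all-ones gate) still need layers of their own unless they can be packed together by the depth optimization of Sec. V, which is exactly the second stage of Workflow 1 described later.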
In summary, we have provided a gate-count optimal circuit synthesis (**Theorem** 3) for realizing a given diagonal unitary matrix in Eq. (7), which also achieves the optimal circuit depth when all obtained non-zero angle parameters correspond to pairs of complementary gates. In the following, we further consider how to optimize the depth of other types of MCZR circuits.
## V Depth optimization of MCZR circuits
Since all MCZR gates are diagonal and commutative, the task of optimizing the depth of any given MCZR circuit is equivalent to rearranging all its gates into as few disjoint layers as possible. In this section, we propose a gate-exchange strategy together with a flexible algorithm for effectively reducing the circuit depth.
### A gate-exchange strategy for optimizing the circuit depth
First of all, we present a simple but useful strategy in **Lemma** 2 that can reduce (or retain) the depth of any MCZR circuit.
**Lemma** 2.: _For a depth-\(d_{1}\) MCZR circuit \(QC_{1}\) over the gate set \(S=\{G(v,\theta_{v})\}\), suppose that (1) a pair of complementary gates \(G(v_{1},\theta_{v_{1}})\) and \(G(v_{2},\theta_{v_{2}})\) are located in two different layers of \(QC_{1}\), and (2) the gate \(G(v_{1},\theta_{v_{1}})\) and a subset of gates \(\{G(v^{\prime},\theta_{v^{\prime}})\}\subset S\) are located in the same layer of \(QC_{1}\). Then, the exchange of \(\{G(v^{\prime},\theta_{v^{\prime}})\}\) and \(G(v_{2},\theta_{v_{2}})\) in \(QC_{1}\) would arrange \(G(v_{1},\theta_{v_{1}})\) and \(G(v_{2},\theta_{v_{2}})\) into one layer, leading to a new depth-\(d_{2}\) circuit \(QC_{2}\) with \(d_{2}\leq d_{1}\)._
We give an intuitive explanation of **Lemma** 2. In the original depth-\(d_{1}\) circuit \(QC_{1}\), suppose the gate \(G(v_{1},\theta_{v_{1}})\) and the gates in \(\{G(v^{\prime},\theta_{v^{\prime}})\}\) are located in a layer indexed by \(L_{1}\), while the gate \(G(v_{2},\theta_{v_{2}})\) is located in another layer indexed by \(L_{2}\). The exchange of \(G(v_{2},\theta_{v_{2}})\) and \(\{G(v^{\prime},\theta_{v^{\prime}})\}\) then moves the former into layer \(L_{1}\) and the latter into layer \(L_{2}\). Since the support of \(G(v_{2},\theta_{v_{2}})\) contains the supports of all gates in \(\{G(v^{\prime},\theta_{v^{\prime}})\}\) (these gates are disjoint from \(G(v_{1},\theta_{v_{1}})\), whose support is exactly the complement of that of \(G(v_{2},\theta_{v_{2}})\)), such a gate-exchange operation leads to two possible situations for the resultant circuit \(QC_{2}\): (1) \(QC_{2}\) has the same depth \(d_{1}\) as \(QC_{1}\), or (2) some (or all) of the gates in \(\{G(v^{\prime},\theta_{v^{\prime}})\}\) can be merged into the layers adjacent to \(L_{2}\), thus causing a depth reduction over \(QC_{1}\).
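As a concrete rendering of this exchange (names and data layout ours): layers are lists of gates given as qubit sets, layer `i` currently holds \(G(v_{1},\theta_{v_{1}})\) together with the subset \(\{G(v^{\prime},\theta_{v^{\prime}})\}\), and layer `j` holds the complementary gate \(G(v_{2},\theta_{v_{2}})\). The swap itself never breaks disjointness, because the supports of the moved gates lie inside the support of \(v_{2}\).

```
def exchange_complementary(layers, i, j, v1, v2):
    """Lemma 2 sketch: bring the complementary pair (v1, v2) together in layer i
    by swapping v2 with every other gate currently sharing layer i with v1."""
    moved = [g for g in layers[i] if g != v1]            # the subset {G(v')}
    layers[i] = [v1, v2]                                 # layer i is now a complementary pair
    layers[j] = [g for g in layers[j] if g != v2] + moved
    return layers
```

Any depth reduction happens afterwards, when the moved gates merge with the layers adjacent to layer `j`; that re-packing is essentially what the greedy procedure of **Algorithm 1** below performs.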
Based on **Lemma** 2, we can derive a two-step framework for achieving a depth-optimal MCZR circuit as described in **Lemma** 3.
**Lemma** 3.: _In principle, the optimal circuit depth \(d_{opt}\) of the MCZR circuits constructed from a given gate set \(S=\{G(v,\theta_{v})\}\) with \(|S|=m\) can be achieved by two steps: (1) arrange all \(d_{1}\) pairs of complementary gates in \(S\) into a depth-\(d_{1}\) configuration, and (2) find a depth-optimal circuit over the other \(r=(m-2d_{1})\) gates. Then \(d_{opt}\) is equal to the total depth of these two parts._
A special case of **Lemma** 3 is **Theorem** 2, for which \(m=2d_{1}\) gives \(d_{opt}=d_{1}\). In general, the second step of **Lemma** 3 can be accomplished by comparing at most \(r!\) different layer configurations to find the depth-optimal circuit over a given gate set \(S\). However, for \(S\) with even a moderate value of \(r\), the number of possible layer configurations can be quite large, and thus the optimal depth is usually hard to determine. To deal with such complicated cases, in the following we propose a flexible iterative algorithm for optimizing the depth of a circuit with no complementary gates, and reveal its flexibility with a use case.
### A flexible iterative depth-optimization algorithm
In this section, we propose an iterative algorithm denoted **Algorithm 1** for optimizing the depth of MCZR circuits with no complementary gates, and reveal its flexibility with a use case.
The input of **Algorithm 1** includes a given MCZR circuit \(QC\) with its constituent gates listed from left to right as a sequence \(SEQ=[act_{k}:k=1,2,\ldots,m]\), with \(act_{k}\) being the set of qubits acted upon by the \(k\)th gate, and an iteration number \(iter\in\mathbb{N}^{+}\). The output is a circuit over the gates in \(SEQ\) whose depth is smaller than or equal to that of \(QC\). Two subroutines, Greedy_Layer_Formation and Generate_New_GateSeq, are introduced here: the former receives a gate sequence \(SEQ\) and arranges as many disjoint gates of \(SEQ\) into each layer as possible to form a circuit layer configuration \(R\), while the latter generates a new gate sequence \(SEQ\) from a given circuit \(R=\{L_{i}:i=1,2,\ldots,d\}\) by extracting and regrouping the gates in the original layers \(L_{i}\). Since applying our greedy layer-formation procedure to different sequences over a given MCZR gate set may result in distinct circuits, we iteratively use these two functions in our **main program** to seek circuits with the shortest possible depth, as follows.
First, since two gates that act on the same qubit must be located in different layers of a circuit, a depth lower bound \(LB\) on all possible circuits constructed from the input gate set \(SEQ\) can be derived as:
\[LB(SEQ)=max\{\textsc{Count}(j,SEQ):j\in[n]\}, \tag{23}\]
where \(\textsc{Count}(j,SEQ)\) denotes the number of times the qubit index \(j\) appears in \(SEQ\). Second, we apply the function Greedy_Layer_Formation to the input gate sequence \(SEQ\) and obtain a new depth-\(d^{(1)}\) circuit with layer configuration \(R^{(1)}\) such that \(d^{(1)}\leq d\). Third, if \(d^{(1)}>LB\) and \(iter\geq 2\), we iteratively generate a new gate sequence \(SEQ^{(t)}\) from the previous circuit \(R^{(t-1)}\) via Generate_New_GateSeq, followed by applying Greedy_Layer_Formation to obtain a new circuit \(R^{(t)}\) of depth \(d^{(t)}\) in each loop iteration \(t\geq 2\). The loop terminates early once the optimal depth \(d^{(t)}=LB\) is reached. Finally, we choose the circuit with the shortest depth among all constructed \(\{R^{(t)}\}\) as the output depth-optimized circuit \(R=\{L_{1},L_{2},\ldots,L_{d_{opt}}\}\). As a result, **Algorithm 1** ensures that (1) \(d_{opt}\leq d^{(1)}\leq d\), and (2) \(d_{opt_{2}}\leq d_{opt_{1}}\) for any two iteration numbers \(iter_{2}\geq iter_{1}\). Therefore, **Algorithm 1**, controlled by the iteration number \(iter\), is a flexible depth-optimization algorithm that trades optimization time for circuit-depth reduction.
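Before the full listing, note that Eq. (23) itself is a one-liner; a minimal Python sketch (function name ours), assuming each gate in \(SEQ\) is given as a set of qubit indices:

```
def depth_lower_bound(seq):
    """Eq. (23): no circuit over SEQ can be shallower than the largest number
    of gates that act on one and the same qubit."""
    counts = {}
    for gate in seq:            # each gate is a set of qubit indices
        for q in gate:
            counts[q] = counts.get(q, 0) + 1
    return max(counts.values())
```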
```
Input: A depth-\(d\) MCZR circuit \(QC\) with its constituent gates located from left to right as a sequence \(SEQ=[act_{k}:k=1,2,\ldots,m]\), with \(act_{k}\) being the qubit set of the \(k\)th gate; an iteration number \(iter\geq 1\).
Output: A circuit \(QC_{opt}\) over gates in \(SEQ\) with a layer configuration \(R=\{L_{i}:i=1,2,\ldots,d_{opt}\}\) such that \(d_{opt}\leq d\).

main program:
    Calculate the circuit depth lower bound \(LB\) for \(SEQ\) by Eq. (23);
    \([R^{(1)},d^{(1)}]=\) Greedy_Layer_Formation(\(SEQ\)); \(t\gets 1\);
    if \(d^{(1)}>LB\) && \(iter\geq 2\) then    // Perform iterative layer formation.
        for \(t\gets 2\) to \(iter\) do
            \(SEQ^{(t)}=\) Generate_New_GateSeq(\(R^{(t-1)}\));
            \([R^{(t)},d^{(t)}]=\) Greedy_Layer_Formation(\(SEQ^{(t)}\));
            if \(d^{(t)}==LB\) then break; end if
        end for
    end if
    \(d_{opt}\gets d^{(p)}=min\{d^{(q)}:q\in[t]\}\); \(R\gets R^{(p)}\);
    return \([R,d_{opt}]\).

function Greedy_Layer_Formation(\(SEQ\)):
    \(i\gets 0\);
    while \(|SEQ|\neq 0\) do
        \(i\gets i+1\); \(c\gets 0\); \(L_{i}\leftarrow\varnothing\); \(remove\_set\leftarrow\varnothing\);
        for \(k\gets 1\) to \(|SEQ|\) do    // Greedily form the layer \(L_{i}\).
            if \(L_{i}\) and \(SEQ[k]\) have no integers in common then
                \(c\gets c+1\); \(L_{i}[c]\gets SEQ[k]\); \(remove\_set[c]\gets k\);
            end if
        end for
        Delete \(SEQ[remove\_set]\);
    end while
    \(d\gets i\);
    return \([R=\{L_{1},L_{2},\ldots,L_{d}\},d]\).
end function

function Generate_New_GateSeq(\(R=\{L_{i}=[act^{i}_{1},act^{i}_{2},\ldots,act^{i}_{|L_{i}|}]:i=1,2,\ldots,d\}\)):
    \(SEQ=[act^{1}_{1},act^{2}_{1},\ldots,act^{d}_{1},act^{1}_{2},\ldots,act^{d}_{2},\ldots,act^{p}_{|L_{p}|}]\) with the layer index \(p\) such that \(|L_{p}|=max\{|L_{i}|:i\in[d]\}\);
    return \(SEQ\).
end function
```
**Algorithm 1**: An iterative depth-optimization algorithm for MCZR circuits.
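The listing above is pseudocode; a compact Python rendering (ours, not the authors' implementation — the experiments of Sec. VI were run in MATLAB) is sketched below, assuming each gate is represented by a frozenset of qubit indices. It mirrors the two subroutines and the main loop.

```
def greedy_layer_formation(seq):
    """Pack gates (qubit sets) greedily into disjoint layers, scanning in order."""
    remaining, layers = list(seq), []
    while remaining:
        layer, used, leftover = [], set(), []
        for gate in remaining:
            if used.isdisjoint(gate):
                layer.append(gate)
                used |= gate
            else:
                leftover.append(gate)
        layers.append(layer)
        remaining = leftover
    return layers

def generate_new_gate_seq(layers):
    """Re-read the layer table column by column: first gates of every layer,
    then second gates, and so on."""
    width = max(len(layer) for layer in layers)
    return [layer[c] for c in range(width) for layer in layers if c < len(layer)]

def optimize_depth(seq, iters=1):
    """Algorithm 1: alternate re-sequencing and greedy packing, keep the best circuit."""
    lb = max(sum(q in g for g in seq) for q in set().union(*seq))   # Eq. (23)
    current = greedy_layer_formation(seq)
    best = current
    for _ in range(1, iters):
        if len(best) == lb:
            break
        current = greedy_layer_formation(generate_new_gate_seq(current))
        if len(current) < len(best):
            best = current
    return best, len(best)
```

The re-sequencing step reads the previous layer table column by column, which is exactly the regrouping used in the worked example that follows.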
A demonstrative example of **Algorithm 1** is shown in Fig. 3. The gate sequence of the 6-qubit, depth-7 circuit \(QC\) consisting of 9 two-qubit \(CZ(\theta)\) gates shown in Fig. 3(a) is
\[SEQ=[\{1,2\},\{1,3\},\{2,3\},\{1,4\},\{4,5\},\] \[\{5,6\},\{2,5\},\{3,6\},\{4,6\}], \tag{24}\]
and we apply **Algorithm 1** with \(iter=2\) to achieve a depth-optimized circuit as follows:
1. First, we calculate the depth lower bound on circuits for \(SEQ\) by Eq. (23) as \(LB=3\).
2. Second, we apply Greedy_Layer_Formation to \(SEQ\) in Eq. (24) and obtain a new circuit \(QC^{(1)}\) of depth \(d^{(1)}=4\) as shown in Fig. 3(b), which has a layer configuration \(R^{(1)}=\{L_{1},L_{2},L_{3},L_{4}\}\) with \[L_{1}=[act_{1}^{1}=\{1,2\},act_{2}^{1}=\{4,5\},act_{3}^{1}=\{3,6\}],\] \[L_{2}=[act_{1}^{2}=\{1,3\},act_{2}^{2}=\{5,6\}],\] \[L_{3}=[act_{1}^{3}=\{2,3\},act_{2}^{3}=\{1,4\}],\] \[L_{4}=[act_{1}^{4}=\{2,5\},act_{2}^{4}=\{4,6\}].\] (25) Intuitively, the comparison between the circuit \(QC\) in Fig. 3(a) and \(QC^{(1)}\) in Fig. 3(b) reveals that the working principle of our function Greedy_Layer_Formation is to move the gates in the right column of original circuit to fill the vacancies in the left column as much as possible, thus causing a circuit depth reduction.
3. Third, we apply Generate_New_GateSeq to \(R^{(1)}\) in Eq. (25) due to the condition \(d^{(1)}>LB\) and \(iter>1\), and generate a new gate sequence \(SEQ^{(2)}\) shown in Fig. 3(c) as \[SEQ^{(2)}=[\{1,2\},\{1,3\},\{2,3\},\{2,5\},\{4,5\},\] \[\{5,6\},\{1,4\},\{4,6\},\{3,6\}].\] (26)
4. Finally, we apply Greedy_Layer_Formation again to Eq. (26) and obtain a new layer configuration \(\{L_{1},L_{2},L_{3}\}\), that is, the circuit \(QC^{(2)}\) of depth \(d^{(2)}=3\) as shown in Fig. 3(d).
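These four steps can be cross-checked with the Python sketch given after Algorithm 1 (gate sets written as Python frozensets); the layer contents found may differ from Fig. 3, but the resulting depths agree.

```
seq = [frozenset(s) for s in
       [{1, 2}, {1, 3}, {2, 3}, {1, 4}, {4, 5},
        {5, 6}, {2, 5}, {3, 6}, {4, 6}]]          # Eq. (24)

_, d1 = optimize_depth(seq, iters=1)   # single greedy pass, cf. Fig. 3(b)
_, d2 = optimize_depth(seq, iters=2)   # one re-sequencing round, cf. Fig. 3(d)
print(d1, d2)                          # expected: 4 3
```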
Note that if we apply **Algorithm 1** with only \(iter=1\) to \(SEQ\) in Fig. 3(a), the resultant depth-optimized circuit is just \(QC^{(1)}\) in Fig. 3(b). This simple example suggests that applying Greedy_Layer_Formation to more distinct gate sequences generated by Generate_New_GateSeq is likely to yield a more significant depth reduction over the original circuit, at the expense of more optimization time. More practical cases of **Algorithm 1** are demonstrated in Section VI.
## VI Experimental evaluation
To further evaluate the performance of the proposed synthesis and optimization methods, here we refine them into two explicit workflows and consider their application to two typical use cases in quantum computing. All experiments are performed with MATLAB 2022a on an Intel Core i5-12500 CPU operating at 3.00 GHz with 16 GB of RAM.
### Workflow of our synthesis and optimization methods
For convenience, here we summarize the main results in Secs. IV and V into the workflow to fulfill two types of tasks as follows:
Figure 3: An example to demonstrate **Algorithm 1** with \(iter=2\). (a) A given 6-qubit MCZR circuit \(QC\) of depth \(d=7\), with its 9 two-qubit gates \(CZ(\theta_{(i,j)})\) being separated by green dashed lines as \(\{L_{1},L_{2},\ldots,L_{7}\}\) and in a sequence \(SEQ=[\{1,2\},\{1,3\},\ldots,\{3,6\},\{4,6\}]\). The circuit depth lower bound for \(SEQ\) is \(LB=3\) by Eq. (23). Then, we apply the function Greedy_Layer_Formation to (a) and obtain a circuit \(QC^{(1)}\) of depth \(d^{(1)}=4\) as shown in (b), where its four gate layers are separated by red dashed lines as \(R^{(1)}=\{L_{1},L_{2},L_{3},L_{4}\}\) and Eq. (25). Due to \(d^{(1)}>LB\) and \(iter=2\), next we apply the function Generate_New_GateSeq to \(R^{(1)}\) and generate a new gate sequence \(SEQ^{(2)}\) in (c). Once again, we apply Greedy_Layer_Formation to (c) and obtain a new circuit \(QC^{(2)}\) of depth \(d^{(2)}=3\) in (d), achieving the optimal circuit depth \(LB\).
**Task 1**: How to construct a gate-count optimal MCZR circuit followed by further depth-optimization for implementing a given diagonal unitary matrix in Eq. (7)?
**Workflow 1**: First, we synthesize a gate-count optimal MCZR circuit according to **Theorem** 3 with \(m\) gates, which includes two parts: (i) \(d_{1}\) layers of complementary gates denoted \(QC_{1}\), and (ii) the other \((m-2d_{1})\) gates. Second, we apply **Algorithm** 1 with a specified parameter \(iter\) to optimize the part (ii) into a depth-\(d_{2}\) circuit \(QC_{2}\). Finally, the overall output circuit is \(QC=QC_{1}\circ QC_{2}\) of depth \(d_{1}+d_{2}\).
**Task 2**: How to optimize the circuit depth of a given MCZR circuit \(QC\) over the gate set \(S=\{G(v,\theta_{v})\}\) with \(|S|=m\)?
**Workflow 2**: First, we perform the gate-exchange operation to \(QC\) according to **Lemma** 2, which arranges all \(d_{1}\) pairs of complementary gates in \(S\) into a depth-\(d_{1}\) circuit denoted \(QC_{1}\). Second, we apply **Algorithm** 1 to the other \((m-2d_{1})\) gates and obtain a circuit \(QC_{2}\) of depth \(d_{2}\). Finally, putting these results together gives a depth-optimized circuit \(QC_{opt}=QC_{1}\circ QC_{2}\) of depth \(d_{1}+d_{2}\).
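A sketch of **Workflow 2** built on the helpers above (all names ours): complementary pairs, given as bit masks with their angles in a dict \(\{v:\theta_{v}\}\), are peeled off into their own layers first, and the remaining gates, converted to qubit sets, are compacted with optimize_depth from the Algorithm 1 sketch. The same second stage also serves Workflow 1 once the pair-wise synthesis of **Theorem** 3 has produced the gate set.

```
def workflow2(theta, n, iters=5):
    """Workflow 2: complementary-pair layers first, then Algorithm 1 on the rest."""
    full = (1 << n) - 1
    pair_layers, rest, seen = [], [], set()
    for v in theta:
        if v in seen:
            continue
        partner = v ^ full
        if partner in theta and partner not in seen:
            pair_layers.append([v, partner])        # one complementary pair per layer
            seen.update({v, partner})
        else:
            rest.append(v)
            seen.add(v)
    rest_sets = [frozenset(q for q in range(n) if (v >> q) & 1) for v in rest]
    rest_layers, d2 = optimize_depth(rest_sets, iters) if rest_sets else ([], 0)
    return pair_layers, rest_layers, len(pair_layers) + d2
```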
In the following, we demonstrate the utility of the above workflows for two practical quantum computing tasks: (1) constructing diagonal Hermitian quantum operators, and (2) optimizing the depth of QAOA circuits.
### Diagonal Hermitian quantum operators
We use \(D_{H}^{(n)}\) to denote an \(n\)-qubit diagonal Hermitian quantum operator with diagonal elements \(\pm 1\); there are in total \(2^{2^{n}-1}\) different such operators, since \(D_{H}^{(n)}\) and \(-D_{H}^{(n)}\) are essentially equivalent. Note that operators of this type act as the oracle operator or a fixed operator in the well-known Deutsch-Jozsa algorithm [56; 57], Grover's algorithm [58], and some recent algorithms showing a quantum advantage for string learning and identification [59; 60; 52]. Therefore, an efficient construction of \(D_{H}^{(n)}\) over MCZR gates would facilitate the implementation of the relevant quantum algorithms on specific devices [27; 30].
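For concreteness, a hedged sketch (ours) of how such a \(D_{H}^{(n)}\) maps onto the synthesis of Sec. IV: its diagonal entries \(\pm 1\) correspond to phases \(\alpha_{x}\in\{0,\pi\}\), and, under the convention assumed for solve_angles earlier, every recovered \(\theta_{v}\) is then a multiple of \(\pi\), i.e. a multiple-controlled Pauli-\(Z\) gate. The snippet reuses solve_angles and pairwise_layers from the Sec. IV sketches and is illustrative only.

```
import math
import random

def random_dh_phases(n, seed=0):
    """Phases alpha_x in {0, pi} of a random D_H^(n); the first entry is fixed
    to +1, reflecting the identification of D_H^(n) with -D_H^(n)."""
    rng = random.Random(seed)
    return {x: (0.0 if x == 0 else math.pi * rng.randint(0, 1))
            for x in range(2 ** n)}

alpha = random_dh_phases(4)
theta = solve_angles(alpha, 4)            # every kept angle equals pi (up to rounding)
pairs, leftover = pairwise_layers(theta, 4)
print(len(theta), len(pairs), len(leftover))
```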
Prior work [35] revealed that \(D_{H}^{(n)}\) can be synthesized with at most \(2^{n}-1\) multiple-controlled Pauli \(Z\) gates, that is, MCZR gates with the fixed angle parameter \(\pi\), based on a binary representation and solving linear equations over the binary field \(\mathbb{F}_{2}\). As a comparison, here we apply our synthesis and optimization methods to construct circuits realizing such operators; specifically, our strategies are: the pair-wise synthesis method of **Theorem** 3 (\(app01\)), and our **Workflow 1** of Sec. VI.1 with \(iter=1\) (\(app02\)), \(iter=5\) (\(app03\)), and \(iter=20\) (\(app04\)), respectively. We perform experiments on all 8, 128, and 32768 diagonal Hermitian operators \(D_{H}^{(n)}\) for \(n=2,3,4\), respectively, as well as on 100 randomly selected ones for each \(5\leq n\leq 12\), and compare our results with the previous work. Owing to the uniqueness property, our constructed circuits have the same MCZR gate set as that obtained from Ref. [35], and therefore we mainly illustrate our circuit depth reduction. The detailed experimental results are presented in Fig. 4.
In Fig. 4(a), we present the average circuit depth of \(n\)-qubit MCZR circuits (\(n\in[2,12]\)) constructed from the previous work [35] and our four strategies \(app01\), \(app02\), \(app03\), and \(app04\), shown by the blue, purple, orange, green, and red curves, respectively. Accordingly, the average execution time of constructing a circuit of size \(n\) by these strategies is recorded in Fig. 4(b). Typically, the time growth of our sole circuit synthesis algorithm \(app01\) as a function of \(n\) agrees well with the total time complexity of calculating Eq. (9), that is, \(\propto n3^{n}\). In comparison, the time of the previous work [35] increases more drastically with \(n\), since its most time-consuming procedure, solving linear equations over \(\mathbb{F}_{2}\) to determine whether each MCZR gate exists or not, requires time scaling roughly as \(O(N^{3})=O(8^{n})\). It is worth noting that all four of our strategies achieve both a reduced circuit depth and less execution time compared with the previous work. In Fig. 4(c), the circuit-depth-reduction curve for each of our strategies shows a clear upward trend as the circuit size \(n\) increases, reaching as high as 28.88%, 40.51%, 41.40%, and 42.27% for constructing a circuit of \(n=12\) on average, in times of 38.40s, 38.79s, 40.16s, and 45.78s, respectively. Also, the usefulness of **Algorithm** 1 is reflected by the fact that \(app02\) achieves an \(11.57\%\) smaller depth than the sole synthesis algorithm \(app01\) at the expense of only \(1.03\%\) more time for circuits of \(n=12\), while \(app03\) and \(app04\) give ever shorter depths as \(iter\) increases. Finally, in Fig. 4(d) we evaluate the overall average performance of our strategies \(app01\), \(app02\), \(app03\), and \(app04\) over all involved circuit instances with \(n\in[2,12]\), including the average depth reduction of 23.29%, 32.16%, 32.88%, and 33.40%, and the average time ratio of 36.93%, 37.31%, 38.59%, and 43.67% with respect to the previous work, respectively. For such circuit instances, the average depth reduction rises slowly as the iteration number \(iter\) in **Workflow 1** increases.
In summary, we have demonstrated our **Workflow 1** for synthesizing and optimizing MCZR circuits by taking diagonal Hermitian operators as an example, showing substantial improvement over the previous work in terms of both circuit depth and execution time. In addition, our results empirically validate that a shorter circuit depth is likely to be achieved by increasing the iteration number \(iter\) in **Algorithm** 1 at the cost of more time (see Fig. 4(d)). In the following, we focus on another example to highlight the flexibility of **Algorithm** 1 for realizing controllable depth optimization.
### Phase-separation part in QAOA circuit
Quantum Approximate Optimization Algorithm (QAOA) is a well-known hybrid quantum-classical
algorithm designed to solve combinatorial optimization problems. A typical stage of the QAOA circuit for the MaxCut problem consists of three parts: a layer of Hadamard gates, a phase-separation part consisting of \(CZ(\theta)\) gates, and a layer of \(R_{x}\) rotation gates. Here we focus on reducing the depth of the middle part in \(n\)-qubit MaxCut-QAOA circuits of 3-regular graphs [49] by using our **Workflow 2** in Sec. VI.1, which is thus **Algorithm 1** for \(n\geq 6\).
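As a rough illustration (ours, not necessarily the paper's procedure) of how such instances can be generated and fed to the Algorithm 1 sketch: networkx supplies random 3-regular graphs, each edge becomes one \(CZ(\theta)\) gate, and the lower bound of Eq. (23) equals the maximum degree, i.e. 3. The snippet reuses optimize_depth from the earlier sketch and makes no attempt to reproduce the exact averages reported in Fig. 5.

```
import networkx as nx

def phase_separation_depth(n_nodes, iters=5, seed=0):
    """Depth of the CZ(theta) phase-separation block for one random 3-regular graph."""
    graph = nx.random_regular_graph(3, n_nodes, seed=seed)
    seq = [frozenset(edge) for edge in graph.edges()]
    _, depth = optimize_depth(seq, iters=iters)
    return depth

print([phase_separation_depth(n) for n in (6, 10, 20, 50)])
```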
To our knowledge, prior work [49] used a so-called min-layer formation (MLF) procedure for reducing the number of \(CZ(\theta)\) gate layers in QAOA circuits, which is exactly a particular case of our **Algorithm 1** with the iteration number taken as \(iter=1\). For comparison, here we apply **Algorithm 1** with \(iter=1,2,3,4,5\), respectively, to optimize the phase-separation part consisting of two-qubit \(CZ(\theta)\) gates in QAOA circuits. Since every vertex of a 3-regular graph is connected to exactly three other vertices, the circuit depth lower bound in Eq. (23) is \(3\) for any circuit instance input to **Algorithm 1**. As an example, the depth optimization of a 6-qubit phase-separation circuit \(QC\) of depth 7 with \(iter=2\) has already been presented in Fig. 3. More broadly, here we pick the \(n\)-qubit circuit instances corresponding to \(n\)-node 3-regular graphs with \(n\) being an even number in the range of 6 to 50, and for each size \(n\) we randomly pick 100 graphs. Thus, a total of \(23\times 100=2300\) MaxCut-QAOA circuit instances are used for the evaluation. The experimental results are presented in Fig. 5.
The average circuit depth of 100 original randomly selected \(n\)-qubit QAOA circuits for \(n\in[6,50]\) is shown as the black curve in Fig. 5(a), where the blue, purple, orange, green, and red curves indicate the optimized
Figure 4: Experimental results of constructing diagonal Hermitian operators with size \(n\in[2,12]\) by applying a previous method [35], our circuit synthesis method in **Theorem 3** (\(app01\)), and our **Workflow 1** with \(iter=1(app02),5(app03)\), and \(20(app04)\), respectively. (a) The blue, purple, orange, green, and red curves indicate the average depth of circuits obtained from previous work and \(app01\) to \(app04\) for each \(n\). Accordingly, the execution time and circuit depth reduction over the previous work as a function of \(n\) on average are respectively recorded in (b) and (c), indicating that our four strategies can achieve both a reduced circuit depth and less execution time compared to previous work. Notably, all our strategies can have a more significant depth reduction for large-size \(n\), and the effectiveness of our depth-optimization **Algorithm 1** can be reflected by comparing \(app02\)-\(app04\) with \(app01\). (d) As an overall performance evaluation, the average depth reduction and time ratio of our four strategies over the previous work for the entire set of instances are displayed in dark blue and dark red lines, respectively, such that on average we can achieve a 33.40% depth reduction with only 43.67% time by \(app04\).
circuit depth obtained from performing **Algorithm** 1 with \(iter=1\) (that is, the MLF procedure of Ref. [49]) as well as \(iter=2,3,4,5\), respectively. Specifically, the optimized circuit depth indicated by the red line in Fig. 5(a) with \(iter=5\) grows quite slowly and ranges from \(3.00\) to \(4.05\) for \(n\in[6,50]\). Accordingly, Figs. 5(b) and 5(c) show the average circuit depth reduction and execution time for each instance size \(n\), respectively. In particular, the depth-reduction curve for each setting of \(iter\) grows overall as the circuit size \(n\) increases, and reaches as high as \(63.45\%\) for \(n=50\) in less than \(0.05\)s when adopting \(iter=5\). Furthermore, Fig. 5(d) shows the overall performance of **Algorithm** 1 with \(iter=1,2,3,4,5\) on all \(2300\) circuit instances, where on average we achieve a depth reduction of \(51.19\%\), \(56.17\%\), \(57.71\%\), \(58.44\%\), and \(58.88\%\) over an original randomly selected QAOA circuit instance using times of \(0.0046\)s, \(0.0090\)s, \(0.0135\)s, \(0.0178\)s and \(0.0222\)s for \(iter\in[1,5]\), respectively. Notably, the average execution time scales nearly linearly as \(iter\) increases from \(1\) to \(5\), and the average depth obtained with \(iter=5\) is \(15.55\%\) smaller than that with \(iter=1\) at the expense of a \(4.81\)X increase in time. Once again, these results reflect the flexibility of **Algorithm** 1, which can achieve a shorter circuit depth at the expense of more execution time. Therefore, for such QAOA circuits one can run **Algorithm** 1 with a gradually increasing iteration number \(iter\) to seek the best possible result.
Finally, we point out that the depth-optimization time overhead is especially worthwhile in the QAOA use case, since the obtained circuit needs to be executed on the quantum hardware many times to solve the MaxCut problem, and thus a shorter circuit depth obtained from the preceding optimization
Figure 5: Experimental results of optimizing the depth of the phase-separation parts in \(100\) randomly selected \(n\)-qubit QAOA circuits with even \(n\in[6,50]\) by applying **Algorithm** 1 with \(iter=1,2,3,4,5\), respectively. (a) The black, blue, purple, orange, green, and red curves indicate the average circuit depth of the \(100\) original random \(n\)-qubit instances as well as of the optimized ones with \(iter=1\) to \(5\), respectively. Accordingly, the average circuit depth reduction and execution time as functions of \(n\) are recorded in (b) and (c), respectively, both of which show an upward trend on the whole. Note that the results for \(iter=1\) are equivalent to the previous min-layer formation method aimed at optimizing QAOA circuits [49], while in comparison our **Algorithm** 1 is more flexible and useful since it can achieve a more significant circuit depth reduction by adjusting the parameter \(iter\) at the cost of more execution time. (d) As an overall performance evaluation, the average depth reduction and execution time for all \(2300\) circuit instances with different \(iter\) are displayed in dark blue and dark red, respectively, where the time cost shows a nearly linear growth with increasing \(iter\).
procedure could save a large amount of time in the subsequent process of running the QAOA circuit. As a result, our depth-optimized circuits could be executed on scalable quantum processors with non-local connectivity [61], or serve as a better starting point for further circuit compilation if needed [49].
## VII Discussion and Conclusion
In this work, we have presented a systematic study of quantum circuits over multiple-control \(Z\)-rotation gates with continuous parameters. Based on an established polynomial representation, we derive a gate-count optimal synthesis of such circuits for implementing any diagonal unitary matrix, which is also depth-optimal for specific MCZR circuits. Furthermore, we propose practical optimization strategies for reducing the circuit depth of any given MCZR circuit, which show substantial performance improvement over prior works for typical examples in quantum computing. Compared with the conventional approach of implementing diagonal unitary operators over single- and two-qubit gate sets [62; 63; 64], here we provide an alternative scheme utilizing a multiqubit gate set as the computational primitive, which matches quantum experimental progress in certain directions, such as neutral atoms [22] and superconducting systems [27; 29]. In addition, while the above techniques are designed for general cases, we point out that other useful ideas may exist for special-case circuits. For example, particular quantum graph states [54] or hypergraph states [65] can be prepared with linearly many MCZR gates and constant depth by exploiting their underlying lattice graphs. Interested readers could explore such specific cases further.
Although this paper mainly focuses on quantum circuits over MCZR gates, it may also inform research on other types of circuits. First, the circuit-polynomial correspondence put forward to characterize MCZR circuits extends the concept of the _phase polynomial_ representation [66], again implying that an appropriate representation can facilitate circuit synthesis and/or optimization. Second, the depth-optimization strategies introduced in Section V are actually suitable for any quantum circuit over commuting gates, such as the IQP (instantaneous quantum polynomial-time) circuits used to demonstrate quantum advantage [67]. Finally, this study sheds light on implementing diagonal unitary operators over other available gate sets, such as multiply-controlled Toffoli gates acting on fewer qubits, by considering gate simulation [4]. We would like to investigate these interesting topics in future work.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China (Grant Nos. 62102464, 62272492, 61772565), the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2020B1515020050), and Project funded by China Postdoctoral Science Foundation (Grant Nos. 2020M683049, 2021T140761). We appreciate Dr. Li Zhang from South China Normal University for useful discussions on the data analysis.
|
2307.09116 | Asymmetric One-Sided Semi-Device-Independent Steerability of Quantum
Discordant States | Superlocality and superunsteerability provide operational characterization of
quantum correlations in certain local and unsteerable states respectively. Such
quantum correlated states have a nonzero quantum discord. A two-way nonzero
quantum discord is necessary for quantum correlations pointed out by
superlocality. On the other hand, in this work, we demonstrate that a two-way
nonzero quantum discord is not necessary to demonstrate superunsteerability. To
this end, we demonstrate superunsteerability for one-way quantum discordant
states. This in turn implies the existence of one-way superunsteerability and
also the presence of superunsteerability without superlocality.
Superunsteerability for nonzero quantum discord states implies the occurrence of
steerability in a one-sided semi-device-independent way. Just like one-way
steerability occurs for certain Bell-local states in a one-sided
device-independent way, our result shows that one-way steerability can also
occur for certain nonsuperlocal states but in a one-sided
semi-device-independent way. | Chellasamy Jebarathinam, Debarshi Das, R. Srikanth | 2023-07-18T10:04:45Z | http://arxiv.org/abs/2307.09116v4 | # Asymmetric One-Sided Semi-Device-Independent Steerability of Quantum Discordant States
###### Abstract
Superlocality and superunsteerability provide an operational characterization of quantum correlations in certain local and unsteerable states, respectively. Such quantum correlated states have a nonzero quantum discord. A two-way nonzero quantum discord is necessary for the quantum correlations pointed out by superlocality. On the other hand, in this work, we demonstrate that a two-way nonzero quantum discord is not necessary to demonstrate superunsteerability. To this end, we demonstrate superunsteerability for one-way quantum discordant states. This in turn implies the existence of one-way superunsteerability and also the presence of superunsteerability without superlocality. Superunsteerability for nonzero quantum discord states implies the occurrence of steerability in a one-sided semi-device-independent way. Just as one-way steerability occurs for certain Bell-local states in a one-sided device-independent way, our result shows that one-way steerability can also occur for certain nonsuperlocal states, but in a one-sided semi-device-independent way.
_Introduction:-_ Local quantum measurements on entangled states can be used to demonstrate quantum nonlocality, originating from the experimental situation proposed by Einstein, Podolsky and Rosen [1] and the Bohm-Aharonov version of it [2]. Bell proposed a framework to distinguish quantum nonlocality from a local realistic description of the measurement results by introducing an inequality that is satisfied by any local hidden variable model for the observed correlations between space-like separated observers [3]. Such an inequality is violated by certain quantum correlations, and the phenomenon is referred to as Bell nonlocality [4]. There exists another form of quantum nonlocality, as pointed out by Schrödinger [5]. This form of quantum nonlocality is called quantum steering, and a framework for it analogous to Bell's framework was proposed by Wiseman, Jones and Doherty (WJD) [6]. Apart from being fundamental aspects of quantum theory, both forms of quantum nonlocality find applications in quantum technologies (see Sec. IV in [4] and Sec. V in [7] for applications of Bell nonlocality and quantum steering, respectively). In contrast to Bell nonlocality, quantum steering is an asymmetric form of quantum correlations, both from a fundamental and from an applications point of view. Quantum steering can exist in only one way, that is, certain entangled states are steerable from only one side [8] (see also Sec. III. D in [7]), and quantum steering can only provide one-sided device-independent applications [9] (see also Sec. V in [7]).
Quantification of quantum resources through appropriate quantifiers is an important aspect of quantum information science [10]. A quantification of quantum correlations beyond entanglement, called quantum discord, was proposed in [11; 12]. This kind of quantum correlation has also emerged as a quantum resource for applications in quantum information science [13; 14] (see also Sec. VI in [15]). From a quantum foundational perspective [16], quantum discord was proposed as capturing Bohr's notion of non-mechanical disturbance [17]. Certain features distinguishing quantum discord from quantum entanglement have been characterized, such as the absence of sudden death for discord [18] and the fact that quantum discord may increase under certain decoherence conditions [19].
The simulation of certain local and unsteerable states using finite shared randomness motivates treating the amount of shared randomness as a resource [20; 21]. Superlocality [22] and superunsteerability [23] have recently been formalized to demonstrate a quantum advantage in simulating certain local and unsteerable correlations, respectively, in terms of the local Hilbert space dimension over the minimal amount of shared randomness required to simulate them. Such a quantum advantage has been invoked to provide an operational characterization of quantum correlations in certain local and unsteerable states having a nonzero quantum discord [24; 25; 26]. Superlocality or superunsteerability has also been found to be useful for certifying quantum discord in a measurement-device-independent way [27], and as a resource for measurement-device-independent quantum key distribution protocols [28], quantum random access codes [29] and quantum random number generation [27].
Studying the precise relationships among quantum discord, superunsteerability and superlocality could provide a better understanding of quantum correlations as well as of their role as a resource in quantum information processing. Superlocality is inequivalent to quantum discord [24]. This raises the question of whether superunsteerability is inequivalent to superlocality or quantum discord. Also, as superunsteerability is an asymmetric concept, a natural question is whether superunsteerability can occur for nonzero quantum discord states with the one-wayness property, analogous to one-way quantum steering in the case of certain entangled states. In this work, we answer this question in the affirmative by demonstrating that quantum correlations in certain one-way quantum discordant states can be operationally captured by superunsteerability. For such states, superunsteerability cannot occur both ways, because the state has zero quantum discord on one side. Thus, in this work, we demonstrate the existence of one-way superunsteerability. This in turn implies that superunsteerability is inequivalent to superlocality.
2302.13790 | A remark on 0-cycles as modules over algebras of finite correspondences | Given a smooth projective variety $X$ over a field, consider the $\mathbb
Q$-vector space $Z_0(X)$ of 0-cycles (i.e. formal finite $\mathbb Q$-linear
combinations of the closed points of $X$) as a module over the algebra of
finite correspondences. Then the rationally trivial 0-cycles on $X$ form an
absolutely simple and essential submodule of $Z_0(X)$. | M. Rovinsky | 2023-02-27T14:09:18Z | http://arxiv.org/abs/2302.13790v2 | # A remark on 0-cycles as modules over algebras of finite correspondences
###### Abstract.
Given a smooth projective variety \(X\) over a field, consider the \(\mathbb{Q}\)-vector space \(Z_{0}(X)\) of 0-cycles (i.e. formal finite \(\mathbb{Q}\)-linear combinations of the closed points of \(X\)) as a module over the algebra of finite correspondences. Then the socle of \(Z_{0}(X)\) is absolutely simple, essential and consists of all rationally trivial 0-cycles.
Let \(k\) be a field. There are several ways and versions in which the zero-cycles on \(k\)-schemes can be considered as a functor. In each of these versions, we want this functor to be an object of an abelian category, and we study its structure ("composition series").
Consider a set \(S\) of smooth projective varieties over a fixed field. Let \(Z_{0}(S)\) be the direct sum of the \(\mathbb{Q}\)-vector spaces of 0-cycles (i.e. formal finite linear combinations of the closed points) on varieties in \(S\) with rational coefficients.
We consider \(Z_{0}(S)\) as a module over the algebra of finite correspondences.
The aim of this note is to show that the socle \(\operatorname{Soc}(S)\) of the module \(Z_{0}(S)\) is absolutely simple and consists of rationally trivial 0-cycles. Assuming the Beilinson-Bloch motivic filtration conjecture, we show that the radical filtration on \(Z_{0}(S)/\operatorname{Soc}(S)\) is a modification of the conjectural motivic filtration on Chow groups of 0-cycles.
## 1. Category algebras and non-degenerate modules
For any small preadditive category \(\mathcal{C}\), set \(A_{\mathcal{C}}:=\bigoplus_{X,Y\in\mathcal{C}}\operatorname{Hom}_{\mathcal{C} }(X,Y)\).
The composition pairings \(\operatorname{Hom}_{\mathcal{C}}(X,Y)\times\operatorname{Hom}_{\mathcal{C}}(Y,Z)\to\operatorname{Hom}_{\mathcal{C}}(X,Z)\) (and the zero pairings \(\operatorname{Hom}_{\mathcal{C}}(W,X)\times\operatorname{Hom}_{\mathcal{C}}(Y,Z)\to A_{\mathcal{C}}\) for all quadruples \(W,X,Y,Z\) with \(X\neq Y\)) induce an associative ring structure on the abelian group \(A_{\mathcal{C}}\).
The ring \(A_{\mathcal{C}}\) is unital if and only if there are only finitely many objects in \(\mathcal{C}\). However, even if \(A_{\mathcal{C}}\) is not unital, it is idempotent (in the sense of [1, Definition 4]), i.e. for every finite collection \(B\) of elements of \(A_{\mathcal{C}}\) there is an idempotent \(e\in A_{\mathcal{C}}\) such that \(ea=a\) for all \(a\in B\), namely the sums of identities \(\operatorname{id}_{X}\in\operatorname{Hom}_{\mathcal{C}}(X,X)\subseteq A_{ \mathcal{C}}\) for all objects \(X\) in a finite set containing the union of the supports of the elements of \(B\). (By definition, the support of an element \(a\) is the smallest set \(\operatorname{Supp}(a)\) such that \(a\in\bigoplus_{X,Y\in\operatorname{Supp}}\operatorname{Hom}_{\mathcal{C}}(X,Y)\subseteq A_{\mathcal{C}}\).)
Recall, cf. e.g. [2, p.113], that a left module \(M\) over an associative ring \(A\) is called non-degenerate if \(AM=M\). Obviously, \(A_{\mathcal{C}}\) is a non-degenerate left \(A_{\mathcal{C}}\)-module.
Denote by \(\operatorname{Mod}_{\mathcal{C}}\) the category of non-degenerate left \(A_{\mathcal{C}}\)-modules.
Denote by \(\mathcal{C}^{\vee}\) the category of additive functors from \(\mathcal{C}\) to the category of abelian groups.
**Lemma 1.1** (Morita equivalence).: _If \(\mathcal{C}\) is a small preadditive category then \(\mathcal{C}^{\vee}\) and \(\operatorname{Mod}_{\mathcal{C}}\) are equivalent abelian categories. In particular, if two small preadditive categories \(\mathcal{C}\) and \(\mathcal{C}^{\prime}\) are equivalent then the categories \(\operatorname{Mod}_{\mathcal{C}}\) and \(\operatorname{Mod}_{\mathcal{C}^{\prime}}\) are equivalent as well._
Proof.: We send any functor \(\mathcal{F}:\mathcal{C}\to\{\text{abelian groups}\}\) to \(\bigoplus_{X\in\mathcal{C}}\mathcal{F}(X)\), which is a non-degenerate \(A_{\mathcal{C}}\)-module in an obvious way. In the opposite direction, given a \(A_{\mathcal{C}}\)-module
and an object \(X\), we set \(\mathcal{F}(X):=\operatorname{id}_{X}(M)\). Any morphism \(f\in\operatorname{Hom}_{\mathcal{C}}(X,X^{\prime})\subseteq A_{\mathcal{C}}\) induces \(\mathcal{F}(X)=\operatorname{id}_{X}(M)\stackrel{{ f}}{{\to}}f \circ\operatorname{id}_{X}(M)=\operatorname{id}_{X^{\prime}}\circ f\circ \operatorname{id}_{X}(M)\subseteq\operatorname{id}_{X^{\prime}}(M)=\mathcal{F} (X^{\prime})\).
It is easy to see that these two functors are quasi-inverse equivalences. In particular, we get a chain of equivalences: \(\operatorname{Mod}_{\mathcal{C}}\simeq\mathcal{C}^{\vee}\simeq(\mathcal{C}^{ \prime})^{\vee}\simeq\operatorname{Mod}_{\mathcal{C}^{\prime}}\).
The Yoneda embedding \(\mathcal{C}\to\mathcal{C}^{\vee}\simeq\operatorname{Mod}_{\mathcal{C}}\), \(X\mapsto h_{X}:=\operatorname{Hom}_{\mathcal{C}}(X,-)\) is a fully faithful functor. We are interested in the structure of the \(A_{\mathcal{C}}\)-module \(h_{X}\) for the 'unit' object \(X\).
## 2. Algebras of finite correspondences and their modules
Fix a field \(k\). For each pair of smooth \(k\)-varieties \(X\) and \(Y\), define \(\operatorname{Cor}(X,Y)_{\mathbb{Q}}\) as the \(\mathbb{Q}\)-vector space with a basis given by the irreducible closed subsets of \(X\times_{k}Y\) whose associated integral subschemes are finite, flat and surjective over a connected component of \(X\).
For each triple of smooth \(k\)-varieties \((X,Y,Z)\), define the bilinear pairing \(\operatorname{Cor}(X,Y)_{\mathbb{Q}}\times\operatorname{Cor}(Y,Z)_{\mathbb{Q} }\to\operatorname{Cor}(X,Z)_{\mathbb{Q}}\) by \((\alpha,\beta)\mapsto\operatorname{pr}_{XZ*}(\alpha\times Z\cap X\times\beta)\).
These pairings as compositions, turn the category of smooth \(k\)-varieties with morphisms \(\operatorname{Cor}(-,-)_{\mathbb{Q}}\) into an additive category \(\operatorname{SmCor}_{k}\). Denote by \(\operatorname{SmCor}_{k}^{\operatorname{proj}}\) the full subcategory of projective \(k\)-varieties.
Given a set \(S\) of smooth \(k\)-varieties, we may consider \(S\) as a full subcategory of \(\operatorname{SmCor}_{k}\). As the category \(S\) is preadditive, the ring \(A_{S}:=\bigoplus_{X,Y\in S}\operatorname{Cor}(X,Y)_{\mathbb{Q}}\) is already defined.
### The socle of \(Z_{0}(s)\)
For each smooth variety \(Y\) over \(k\), let \(Z_{0}(Y):=\operatorname{Cor}(\operatorname{Spec}(k),Y)_{\mathbb{Q}}\) be the \(\mathbb{Q}\)-vector space of 0-cycles on \(Y\), and \(Z_{0}^{\circ}(Y)\) be the subspace of 0-cycles of degree 0 on each connected component of \(Y\).
For each set \(S\) of smooth varieties over \(k\), consider \(Z_{0}(S):=\bigoplus_{X\in S}Z_{0}(X)\). Then the above pairings \(\operatorname{Cor}(Y,Z)_{\mathbb{Q}}\times Z_{0}(Y)\to Z_{0}(Z)\) (with \(X=\operatorname{Spec}(k)\)), given by \((\alpha,\beta)\mapsto\operatorname{pr}_{Z*}(\alpha\times Z\cap\beta)\), induce an \(A_{S}\)-module structure on \(Z_{0}(S)\).
Obviously, \(Z_{0}^{\circ}(S):=\bigoplus_{X\in S}Z_{0}^{\circ}(X)\) is an \(A_{S}\)-submodule of \(Z_{0}(S)\).
**Theorem 2.1**.: _Let \(S\) be a set of smooth projective varieties over \(k\). Then_
1. _any non-zero_ \(A_{S}\)_-submodule of_ \(Z_{0}(S)\) _contains the submodule_ \(Z_{0}^{\operatorname{rat}}(S):=\bigoplus_{X\in S}Z_{0}^{\operatorname{rat}}(X)\) _of 0-cycles on all_ \(X\in S\) _rationally equivalent to 0;_
2. _any proper_ \(A_{S}\)_-submodule of_ \(Z_{0}(S)\) _is contained in the submodule_ \(Z_{0}^{\circ}(S)\)_;_
3. _for any non-empty_ \(S\)_, the modules_ \(Z_{0}^{\operatorname{rat}}(S)\) _and_ \(Z_{0}(S)/Z_{0}^{\circ}(S)\) _are absolutely simple, i.e._ \(\operatorname{End}_{A_{S}}(Z_{0}^{\operatorname{rat}}(S))=\operatorname{End}_{A _{S}}(Z_{0}(S)/Z_{0}^{\circ}(S))=\mathbb{Q}\)_._
Proof.: It is clear that if \(S^{\prime}\) is the set of connected components of varieties in \(S\) then \(A_{S^{\prime}}\) and \(A_{S}\) are naturally isomorphic, while \(Z_{0}(S^{\prime})\) and \(Z_{0}(S)\) coincide as \(A_{S}\)-modules, so we may assume that all varieties in \(S\) are connected. Given any non-zero element \(\xi=(\xi_{X})_{X\in S}\in Z_{0}(S)\), there is \(X\in S\) such that \(\xi_{X}\neq 0\), so \(\xi^{\prime}:=\operatorname{id}_{X}\xi\neq 0\).
If \(\deg(\xi_{X})\neq 0\) then, for any \(Y\in S\) and any closed point \(y\in Y\), the finite correspondence \([X\times_{k}y]\in\operatorname{Cor}(X,Y)_{\mathbb{Q}}\) maps \(\xi_{X}\) to a non-zero multiple of \([y]\in Z_{0}(Y)\), so \(\xi^{\prime}\) (and \(\xi\)) generates the whole \(A_{S}\)-module \(Z_{0}(S)\). This shows (2).
We may therefore assume further that \(\deg(\xi_{X})=0\) and, as \(\xi_{X}\neq 0\), that \(\dim X>0\).
Let \(\xi_{X}=\sum_{i=1}^{N}m_{i}[p_{i}]\) for non-zero \(m_{i}\in\mathbb{Q}\) and closed points \(p_{i}\in X\).
By a refinement of the projective version of the Noether normalization lemma proved in [7], \(X\) admits a morphism \(\varphi:X\to\mathbb{P}_{k}^{n}\), where \(n:=\dim X\), which maps \(p_{2},\dots,p_{N}\) into
a hyperplane \(H\subset\mathbb{P}^{n}_{k}\) and maps \(p_{1}\) to the complement of \(H\). Set \(p^{\prime}_{i}:=\varphi(p_{i})\) for all \(i\), so \(p^{\prime}_{2},\ldots,p^{\prime}_{N}\in H\), \(p^{\prime}_{1}\in\mathbb{P}^{n}_{k}\smallsetminus H\). This means that \(\varphi_{*}\xi_{X}=\sum_{i=1}^{N}m_{i}[p^{\prime}_{i}]\neq 0\).
Let us show by induction on \(N\geq 2\) that there exists a finite endomorphism \(\psi:\mathbb{P}^{n}_{k}\to\mathbb{P}^{n}_{k}\) sending the points \(p^{\prime}_{2},\ldots,p^{\prime}_{N}\) to a single \(k\)-rational point \(p\in\mathbb{P}^{n}_{k}\) and sending the point \(p^{\prime}_{1}\) to a distinct \(k\)-rational point \(q\in\mathbb{P}^{n}_{k}\), \(q\neq p\). For the induction step, let \(W_{0},\ldots,W_{n}\) be homogeneous coordinates on \(\mathbb{P}^{n}_{k}\) such that \(H\) is given by the equation \(W_{0}=0\), while both \(p^{\prime}_{2}\) and \(p^{\prime}_{3}\) do not lie on the hyperplane given by the equation \(W_{1}=0\). For each \(2\leq i\leq n\), set \(w_{i}:=W_{i}/W_{1}\), and let \(P_{ij}\) be the minimal polynomial of \(w_{i}(p^{\prime}_{j})\) over \(k\).
Set \(d:=\max\limits_{2\leq i\leq n}\deg(P_{i2}P_{i3})\), and \(P_{i}:=P_{i2}(w_{i})P_{i3}(w_{i})w_{i}^{d-\deg(P_{i2}P_{i3})}W_{1}^{d}\). Then the map
\[g:(W_{0}:\ldots:W_{N})\mapsto(W_{0}^{d}:W_{1}^{d}:P_{1}:P_{2}:\ldots:P_{N})\]
is a well-defined endomorphism of \(\mathbb{P}^{n}_{k}\), \(g\) preserves \(H\), \(g(p^{\prime}_{2})=g(p^{\prime}_{3})\) is \(k\)-rational, and \(g\) transforms \(\varphi_{*}\xi_{X}\) to \(m_{1}[p^{\prime\prime}_{1}]+(m_{2}+m_{3})[p^{\prime\prime}_{3}]+\sum_{i=4}^{N }m_{i}[p^{\prime\prime}_{i}]\), where \(p^{\prime\prime}_{3},\ldots,p^{\prime\prime}_{N}\in H\) and \(p^{\prime\prime}_{1}\notin H\).
Then \(\psi_{*}\varphi_{*}\xi_{X}\) is a non-zero multiple of \([p]-[q]\).
Let \(\Upsilon\) be an \(n\)-dimensional variety admitting a non-constant morphism \(h:\Upsilon\to\mathbb{P}^{1}_{k}\) (e.g., \(\Upsilon=\mathbb{P}^{n-1}\times\mathbb{P}^{1}_{k}\) and \(h:\Upsilon\to\mathbb{P}^{1}_{k}\) is the projection). Fix a fibre \(D\) of \(h\), and a hyperplane \(H^{\prime}\subset\mathbb{P}^{n}_{k}\) containing \(p\) but not \(q\). By the same [7, Theorem 1], there exists a finite morphism \(\pi:\Upsilon\to\mathbb{P}^{n}_{k}\) such that \(\pi(D)=H^{\prime}\), so \(D\) meets \(\pi^{-1}(p)\) but not \(\pi^{-1}(q)\), and therefore, \(h_{*}\pi^{*}\psi_{*}\varphi_{*}\xi_{X}\neq 0\). Then \(h_{*}(^{t}\Gamma_{\pi})_{*}\psi_{*}\varphi_{*}\xi_{X}=h_{*}\pi^{*}\psi_{*} \varphi_{*}\xi_{X}\) is a non-zero divisor \(E=\sum\limits_{i=0}^{n}a_{i}[q_{i}]\) on \(\mathbb{P}^{1}_{k}\) for some \(a_{i}\neq 0\) and pairwise distinct \(q_{i}\).
Choose a morphism \(f:\mathbb{P}^{1}_{k}\to\mathbb{P}^{1}_{k}\) such that \(f(q_{0})=0\), \(f(q_{i})=\infty\) for all \(1\leq i\leq n\), so \(f_{*}h_{*}(^{t}\Gamma_{\pi})_{*}\psi_{*}\varphi_{*}\xi_{X}=a_{0}([0]-[\infty])\). Finally, for each \(Y\in S\), any \(0\)-cycle on \(Y\) rationally equivalent to \(0\) is a linear combination of images of the cycle \([0]-[\infty]\) under finite correspondences \(\gamma\) from \(\mathbb{P}^{1}_{k}\) to \(Y\), i.e. of elements \((\gamma\circ f\circ h\circ{}^{t}\Gamma_{\pi}\circ\psi\circ\varphi)_{*}\xi_{X}\) for appropriate \(\gamma\)'s.
Note that the same proof shows the simplicity of the \(A_{S}\otimes F\)-modules \(Z^{\operatorname{rat}}_{0}(S)\otimes F\) and \((Z_{0}(S)/Z^{\circ}_{0}(S))\otimes F\) for any field \(F\) of characteristic \(0\), and thus, the \(A_{S}\)-modules \(Z^{\operatorname{rat}}_{0}(S)\) and \(Z_{0}(S)/Z^{\circ}_{0}(S)\) are absolutely simple.
_Remark 2.2_.: Let \(S\) be a set of irreducible smooth projective varieties over \(k\). Then the \(A_{S}\)-module \(Z_{0}(S)/Z^{\circ}_{0}(S)\) is isomorphic to \(V:=\bigoplus_{S}\mathbb{Q}\), where \(A_{S}\) acts via its quotient isomorphic to the algebra of endomorphisms of \(V\) with kernels containing coordinate subspaces of finite codimension. The latter algebra is obviously dense in the algebra \(\mathfrak{gl}(V)\) of all endomorphisms of \(V\).
### Motivic \(A_{s}\)-modules
Recall (see, e.g., [8]), that an effective Grothendieck motive over \(k\) modulo an 'adequate' equivalence relation \(\sim\) is defined as a pair \((X,\pi)\) consisting of a smooth projective variety \(X\) over \(k\) and a projector \(\pi\) in the algebra of self-correspondences on \(X\) of dimension \(\dim X\) with coefficients in \(\mathbb{Q}\) modulo \(\sim\). The morphisms between pairs \((X,\pi)\) and \((X^{\prime},\pi^{\prime})\) are algebraic cycles \(\alpha\) on \(X\times_{k}X^{\prime}\) of dimension \(\dim X\) modulo \(\sim\), and such that \(\alpha=\pi^{\prime}\circ\alpha\circ\pi\).
The motives over \(k\) modulo an equivalence relation \(\sim\) form a pseudo-abelian tensor category, denoted by \(\mathcal{M}^{\sim}_{k,\operatorname{eff}}\), under \((X,\pi)\otimes(X^{\prime},\pi^{\prime}):=(X\times_{k}X^{\prime},\pi\times\pi^ {\prime})\).
Denote by \(\mathbb{M}^{\sim}:\operatorname{SmCor}_{k}^{\operatorname{proj}}\to\mathcal{M}^{ \sim}_{k,\operatorname{eff}}\) the additive functor \(X\mapsto(X,\Delta_{X})\), where \(\Delta_{X}\) is the class of the diagonal in \(X\times_{k}X\). In particular, \(\mathbb{M}^{\sim}(\mathbb{P}^{1}_{k})\cong\mathbb{M}^{\sim}(\operatorname{Spec}(k ))\oplus\mathbb{L}\) for an object
\(\mathbb{L}\) such that the natural map \(\operatorname{Hom}_{\mathcal{M}^{\sim}_{k,\mathrm{eff}}}(U,V)\to\operatorname{Hom} _{\mathcal{M}^{\sim}_{k,\mathrm{eff}}}(U\otimes\mathbb{L},V\otimes\mathbb{L})\) is bijective for all effective motives \(U\) and \(V\).
Denote by \(\mathcal{M}^{\sim}_{k}\) the category of triples \((X,\pi,n)\), where \((X,\pi)\) are as above and \(n\) is an integer, while \(\operatorname{Hom}_{\mathcal{M}^{\sim}_{k}}((X,\pi,n),(X^{\prime},\pi^{\prime},n^{\prime})):=\operatorname{Hom}_{\mathcal{M}^{\sim}_{k,\mathrm{eff}}}((X, \pi)\otimes\mathbb{L}^{\otimes(m+n-n^{\prime})},(X^{\prime},\pi^{\prime}) \otimes\mathbb{L}^{\otimes m})\) for any integer \(m>|n^{\prime}-n|\). We consider \(\mathcal{M}^{\sim}_{k,\mathrm{eff}}\) as a full subcategory of \(\mathcal{M}^{\sim}_{k}\) under \((X,\pi)\mapsto(X,\pi,0)\).
**Theorem 2.3**.: _The functor \(\mathbb{M}^{\sim}\) is full. In other words, the natural ring homomorphism \(A_{S}\to\bigoplus_{X,Y\in S}CH^{\dim Y}(X\times_{k}Y)_{\mathbb{Q}}\) is surjective._
Proof.: This is a particular case of [3, Theorem 7.1].
Each motive \(N\in\mathcal{M}^{\sim}_{k}\) gives rise to an \(A_{S}\)-module \(\mathfrak{M}^{\sim}_{N}(S):=\bigoplus_{X\in S}\operatorname{Hom}_{\mathcal{M} ^{\sim}_{k}}(N,\mathbb{M}^{\sim}(X))\).
We omit \(\sim\) from the notation when it is the numerical equivalence \(\sim_{\mathrm{num}}\).
**Corollary 2.4**.: _For any motive \(N\in\mathcal{M}_{k}\), the \(A_{S}\)-module \(\mathfrak{M}_{N}(S)\) is semisimple._
Proof.: The \(A_{S}\)-action on \(\mathfrak{M}_{N}(S)\) factors through an action of \(A_{S}/\sim_{\mathrm{num}}\), while \(A_{S}/\sim_{\mathrm{rat}}\cong\bigoplus_{X,Y\in S}CH^{\dim Y}(X\times_{k}Y)_{ \mathbb{Q}}\), so \(A_{S}/\sim_{\mathrm{num}}\cong\bigoplus_{X,Y\in S}CH^{\dim Y}(X\times_{k}Y)_{ \mathbb{Q}}/\sim_{\mathrm{num}}\). By [5], \(\mathcal{M}_{k}\) is an abelian semisimple category, and therefore, any non-degenerate (\(A_{S}/\sim_{\mathrm{num}}\))-module is semisimple. In particular, so is \(\mathfrak{M}_{N}(S)\).
## 3. Loewy filtrations on \(Z_{0}(s)\)
Modifying slightly the standard definition (see, e.g. [4]), a filtration of a module \(M\) is called a Loewy filtration if it is finite, its successive quotients are semisimple and its length is minimal under these assumptions.
Let \(S\) be a set of smooth irreducible projective varieties over a field \(k\). We are interested in Loewy filtrations on the \(A_{S}\)-module \(Z_{0}(S)\).
By Theorem 2.1, the socle (i.e. the maximal semisimple submodule) of the \(A_{S}\)-module \(Z_{0}(S)\) is \(Z_{0}^{\mathrm{rat}}(S)\), while the radical (i.e. the intersection of all maximal submodules) of the \(A_{S}\)-module \(Z_{0}(S)\) is \(Z_{0}^{\circ}(S)\), and \(Z_{0}^{\mathrm{rat}}(S)\) is an essential submodule of \(Z_{0}(S)\).
The \(A_{S}\)-action on the quotient \(CH_{0}(S)_{\mathbb{Q}}:=Z_{0}(S)/Z_{0}^{\mathrm{rat}}(S)\) factors through an action of the quotient \(A_{S}/\sim_{\mathrm{rat}}\) of \(A_{S}\) by the rational equivalence.
### The case of curves
**Proposition 3.1**.: _Let \(S\) be a set of smooth projective curves over \(k\)._
_Then \(Z_{0}^{\mathrm{rat}}(S)\subset Z_{0}^{\circ}(S)\subset Z_{0}(S)\) is the unique Loewy filtration on the \(A_{S}\)-module \(Z_{0}(S)\)._
Proof.: By Theorem 2.1, the socle of the \(A_{S}\)-module \(Z_{0}(S)\) is simple and coincides with \(Z_{0}^{\mathrm{rat}}(S)\), while \(Z_{0}^{\circ}(S)\) is the unique maximal submodule of the \(A_{S}\)-module \(Z_{0}(S)\). There remains only to check the semisimplicity of \(Z_{0}^{\circ}(S)/Z_{0}^{\mathrm{rat}}(S)\).
One has \(A_{S}/\sim_{\mathrm{rat}}=\bigoplus_{X,Y\in S}\operatorname{Pic}(X\times_{k}Y )_{\mathbb{Q}}\) and \(I:=\bigoplus_{X,Y\in S}\operatorname{Pic}^{\circ}(X\times_{k}Y)_{\mathbb{Q}}\) is an ideal in \(A_{S}/\sim_{\mathrm{rat}}\) with \(I^{2}=0\), while \((A_{S}/\sim_{\mathrm{rat}})/I=\bigoplus_{X,Y\in S}\operatorname{NS}(X\times_{k }Y)_{\mathbb{Q}}\) is a semisimple algebra. Here \(\operatorname{Pic}\) is the Picard group, \(\operatorname{Pic}^{\circ}\) is the subgroup of algebraically trivial elements, \(\operatorname{NS}:=\operatorname{Pic}/\operatorname{Pic}^{\circ}\) is the Neron-Severi group.
Then, for any \((A_{S}/\sim_{\mathrm{rat}})\)-module \(M\), the submodule \(IM\) and the quotient \(M/IM\) can be considered as \((A_{S}/\sim_{\mathrm{rat}})/I\)-modules, and thus, they are semisimple. Applying this to \(M=Z_{0}(S)/Z_{0}^{\mathrm{rat}}(S)\), we see that the \(A_{S}\)-module \(IM=Z_{0}^{\circ}(S)/Z_{0}^{\mathrm{rat}}(S)\) is semisimple.
### Consequences of the filtration conjecture
According to the Bloch-Beilinson filtration conjecture (e.g., [6, Conjecture 2.3]), there should exist an abelian \(\mathbb{Q}\)-linear category \(\mathcal{MM}_{k}\) (of mixed motives over \(k\)) containing the category \(\mathcal{M}_{k}\) as the full subcategory of the semisimple objects, contravariant functors \(H^{i}(-,\mathbb{Q}(j))\) from the category of varieties over \(k\) to \(\mathcal{MM}_{k}\), and a functorial descending filtration \(\mathcal{F}^{\bullet}\) on the Chow groups \(CH^{q}(X)_{\mathbb{Q}}\) for smooth projective \(k\)-varieties \(X\) such that \(gr^{i}_{\mathcal{F}}CH^{q}(X)_{\mathbb{Q}}=\operatorname{Ext}^{i}_{\mathcal{ MM}_{k}}(\mathbb{Q}(0),H^{2q-i}(X,\mathbb{Q}(q)))\).
As a part of the filtration conjecture, it is natural to assume Grothendieck's 'semisimplicity' conjecture on the coincidence of the homological (\(\otimes\mathbb{Q}\)) and numerical equivalences, so that the motive \(H^{2q-i}(X,\mathbb{Q}(q))\) is semisimple by [5].
A simple effective motive \(P\in\mathcal{M}_{k}\) of weight \(i\geq 0\) is called primitive if, for any smooth projective variety \(Y\) of dimension \(<i\), one has \(\mathcal{M}_{k}(P,M(Y\times\mathbb{P}^{1}))=0\).
The Chow groups \(CH_{0}(X)_{\mathbb{Q}}\) are covariant functorial. Setting
\[H_{i}(X,\mathbb{Q}):=H^{2\dim X-i}(X,\mathbb{Q}(\dim X))\text{ for the Poincare dual of }H^{i}(X,\mathbb{Q}),\]
the Beilinson formula can be rewritten as
\[gr^{i}_{\mathcal{F}}CH_{0}(X)_{\mathbb{Q}}=\operatorname{Ext}^{ i}_{\mathcal{MM}_{k}}(\mathbb{Q}(0),H_{i}(X,\mathbb{Q}))\\ =\bigoplus_{P}\operatorname{Ext}^{i}_{\mathcal{MM}_{k}}(\mathbb{ Q}(0),P(i))\otimes_{\operatorname{End}_{\mathcal{M}_{k}}(P)}\operatorname{Hom}_{ \mathcal{MM}_{k}}(P(i),H_{i}(X,\mathbb{Q})),\]
where \(P\) runs over the isomorphism classes of simple primitive motives of weight \(i\), and we see that the spaces \(\mathcal{F}^{i}CH_{0}(X)_{\mathbb{Q}}\) should be covariant functorial as well.
For each set \(S\) of smooth irreducible projective varieties over a field \(k\), and each \(i\geq 0\), consider \(\mathcal{F}^{i}CH_{0}(S)_{\mathbb{Q}}:=\bigoplus_{X\in S}\mathcal{F}^{i}CH_{0} (X)_{\mathbb{Q}}\). By the functoriality of \(\mathcal{F}^{\bullet}\), this is an \(A_{S}\)-submodule of \(CH_{0}(S)_{\mathbb{Q}}\).
The algebra \(A_{S}\) acts on \(gr^{i}_{\mathcal{F}}CH_{0}(S)_{\mathbb{Q}}\) via its action on the motives \(H_{i}(X,\mathbb{Q})\), so the \(A_{S}\)-action on \(gr^{i}_{\mathcal{F}}CH_{0}(S)_{\mathbb{Q}}\) factors through an action of the quotient \(A_{S}/\sim_{\operatorname{num}}\) of \(A_{S}\), i.e. of the algebra \(B_{S}:=\bigoplus_{X,Y\in S}CH^{\dim Y}(X\times_{k}Y)_{\mathbb{Q}}/\sim_{ \operatorname{num}}\). As the algebra \(B_{S}\) is semisimple, the \(A_{S}\)-module \(gr^{i}_{\mathcal{F}}CH_{0}(S)_{\mathbb{Q}}\) is semisimple as well.
In particular, if dimensions of varieties in \(S\) are \(\leq d\) then the length \(\ell(S)\) of any Loewy filtration of \(CH_{0}(S)_{\mathbb{Q}}\) is \(\leq d+1\). (More precisely, \(\ell(S)-1\) does not exceed the number of those \(0\leq i\leq d\) for which \(H_{i}(X,\mathbb{Q})\) is not a Tate twist of an effective motive of weight \(<i\) for at least one \(X\in S\).)
It seems that the radical filtration on \(CH_{0}(S)_{\mathbb{Q}}\) (i.e. the strictly descending sequence of the iterated radicals) is the motivic one, but with the repeating terms omitted.
Acknowledgements. The study has been funded within the framework of the HSE University Basic Research Program. I am grateful to Ivan Panin and Vadim Vologodsky for helpful discussions.
|
2305.18662 | Entanglement partners and monogamy in de Sitter universes | We investigate entanglement of local spatial modes defined by a quantum field
in a de Sitter universe. The introduced modes show dis-entanglement behavior
when the separation between two regions where local modes are assigned becomes
larger than the cosmological horizon. To understand the emergence of
separability between these local modes, we apply the monogamy inequality
proposed by S. Camalet. We embed the focusing bipartite mode defined by the
quantum field in a pure four-mode Gaussian state, and identify its partner
modes. Then applying a Gaussian version of the monogamy relation, we show that
the external entanglement between the bipartite mode and its partner modes
constrains the entanglement of the bipartite mode. Thus the emergence of
separability of local modes in the de Sitter universe can be understood from
the perspective of entanglement monogamy. | Yasusada Nambu, Koji Yamaguchi | 2023-05-30T00:03:21Z | http://arxiv.org/abs/2305.18662v2 | # Entanglement partners and monogamy in de Sitter universes
###### Abstract
We investigate entanglement of local spatial modes defined by a quantum field in a de Sitter universe. The introduced modes show disentanglement behavior when the separation between two regions where local modes are assigned becomes larger than the cosmological horizon. To understand the emergence of separability between these local modes, we apply the monogamy inequality proposed by S. Camalet. We embed the focusing bipartite mode defined by the quantum field in a pure four-mode Gaussian state, and identify its partner modes. Then applying a Gaussian version of the monogamy relation, we show that the external entanglement between the bipartite mode and its partner modes constrains the entanglement of the bipartite mode. Thus the emergence of separability of local modes in the de Sitter universe can be understood from the perspective of entanglement monogamy.
## I Introduction
Cosmic inflation explains the origin of structure in our Universe by preparing the seeds of primordial fluctuations as quantum vacuum fluctuations of a scalar field called the inflaton. The energy density of this quantum field acts as a cosmological constant, leading to the accelerated expansion of the Universe. During the rapid expansion of the Universe, vacuum fluctuations are parametrically amplified, and the resulting fluctuations evolve into "classical" seed fluctuations that cause gravitational instability and later form the large-scale structures [1]. Although this is the accepted scenario of structure formation based on cosmic inflation in standard cosmology, the mechanism of the "quantum-to-classical transition" of the primordial fluctuations has not yet been well understood.
Entanglement is a key concept to differentiate quantum systems from classical ones and a crucial tool to investigate the quantum nature of the initial stage of our universe. In our previous studies [2; 3; 4; 5; 6; 7], local oscillator modes defined from the quantum scalar field in a de Sitter universe were investigated, and it was found that the initial entangled state becomes separable; that is, two local modes A and B, which are assigned at two spatial regions, become separable when their separation exceeds the Hubble horizon scale. This disentanglement behavior can be explained as follows: the "thermal noise" with the Gibbons-Hawking temperature associated with the de Sitter horizon breaks quantum correlations between two spatial regions, and therefore, the entangled bipartite state of modes A and B becomes separable. After these two modes become separable, only classical correlations survive between them.
The mechanism of disentanglement can also be studied from the property of multipartite entanglement. The bipartite system AB is defined as a subsystem of the total system, i.e., the field in the entire universe. Although the total system is assumed to be in a pure state, modes AB are in a mixed state since they are correlated with its complement \(\overline{\text{AB}}\). It is always possible to find a subsystem in \(\overline{\text{AB}}\) that purifies AB, which is called the partner mode of AB. Then, we can understand the disentanglement of AB as a result of entanglement sharing between these modes and their partners. More concretely, the disentanglement of AB can be analyzed from the perspective of entanglement monogamy in multipartite quantum systems [8; 9; 10; 11; 12; 13]. Monogamy of entanglement is an intrinsic property of quantum correlations that is not amenable to classical explanations. For the bipartite state AB and its complement \(\text{C}:=\overline{\text{AB}}\), the conventional monogamy relation is expressed as the following inequality
\[E(\text{A:B})+E(\text{A:C})\leq E(\text{A:BC}), \tag{1}\]
where \(E(\text{X:Y})\) denotes a suitably chosen entanglement measure for a bipartite system XY. The inequality restricts the amount of the bipartite entanglement \(E(\text{A:B})\) as sharing of correlations in the tripartite system ABC. However, this inequality does not provide such a tight constraint as to derive a condition on the separability \(E(\text{A:B})=0\) for multipartite Gaussian states [6]. See also Appendix A where we review the monogamy property (1) for a pure four-mode Gaussian state.
A slightly different form of the monogamy relation was proposed by Camalet [14; 15; 16; 17; 18], which relates "internal" and "external" quantum correlations in multipartite states. Here, for a bipartite system AB of interest, the correlation between A and B is internal, while the one between AB and another subsystem X in the complementary system \(\overline{\text{AB}}\) is external. Based on assumptions of general correlation measures, a new kind of monogamy inequality was derived, which states that internal entanglement and external entanglement obey a trade-off relation. As a consequence, explicit forms of the monogamy inequality are obtained in terms of entanglement measures for finite-dimensional systems, such as qubits.
In this paper, we investigate the entanglement behavior of local bipartite modes AB of a quantum field in the de Sitter universe from the viewpoint of the monogamy of entanglement. For this purpose, we identify partner modes that purify the bipartite modes AB by using the formalism proposed in [19; 20; 21; 22; 23]. We then prove a Camalet-type trade-off relation between internal and external correlations for these modes, i.e., a monogamy relation on the entanglement between A and B, and the entanglement between AB and their partners. Based on these formalisms, we find that the emergence of separability between local modes in the de Sitter universe can be understood from the viewpoint of entanglement monogamy.
The paper is organized as follows. In Sec. II, we introduce a quantum scalar field in the de Sitter universe and show the disentanglement behavior of local modes assigned at two spatial points. In Sec. III, we review our method of construction of the partner modes of the two local modes based on the formulas in [19; 20; 21; 22; 23]. In Sec. IV, based on the result obtained in Sec. III, we examine Camalet's monogamy relation for Gaussian modes with which the emergence of separability in the de Sitter universe is analyzed. Section V is devoted to summary and conclusion. We adopt units of \(c=\hbar=1\) throughout this paper.
## II Scalar field and local modes
To comprehend the behavior of the entanglement of quantum fields in the de Sitter universe, we consider a minimally coupled massless scalar field \(\hat{\phi}\) in a (3+1)-dimensional flat Friedmann-Lemaitre-Robertson-Walker (FLRW) universe. The scalar field obeys the Klein-Gordon equation \(\Box\hat{\phi}=0\). The metric of the FLRW universe with the conformal time \(\eta\) and the comoving coordinate \(\mathbf{x}=(x,y,z)\) is
\[ds^{2}=a_{\text{sc}}^{2}(\eta)(-d\eta^{2}+d\mathbf{x}^{2}), \tag{2}\]
where \(a_{\text{sc}}(\eta)\) is the scale factor of the universe. We will later fix its functional form as that which corresponds to the de Sitter universe. The rescaled scalar field \(\hat{\varphi}=a_{\text{sc}}\hat{\phi}\) obeys the following field equation
\[\hat{\varphi}^{\prime\prime}-\left(\frac{a_{\text{sc}}^{\prime\prime}}{a_{ \text{sc}}}+\delta^{ij}\partial_{i}\partial_{j}\right)\hat{\varphi}=0,\quad i,j=x,y,z, \tag{3}\]
where \({}^{\prime}=d/d\eta\). We adopt this mode equation in a (3+1)-dimensional spacetime, but we assume that excitation propagates only in one spatial direction to simplify the analysis. Then the field operators of the massless scalar field are expressed as
\[\hat{\varphi}(x)=\int_{-\infty}^{\infty}\frac{dk}{\sqrt{2\pi}} \,\hat{\varphi}_{k}\,e^{ikx},\quad\hat{\varphi}_{k}=f_{k}(\eta)\hat{a}_{k}+f_ {k}^{*}(\eta)\hat{a}_{-k}^{\dagger}, \tag{4}\] \[\hat{\pi}(x)=\int_{-\infty}^{\infty}\frac{dk}{\sqrt{2\pi}}\,\hat {\pi}_{k}\,e^{ikx},\quad\hat{\pi}_{k}=(-i)(g_{k}(\eta)\hat{a}_{k}-g_{k}(\eta)^{ *}\hat{a}_{-k}^{\dagger}),\] (5) \[[\hat{a}_{k_{1}},\hat{a}_{k_{2}}^{\dagger}]=\delta(k_{1}-k_{2}), \quad f_{k}^{\prime\prime}+\left(k^{2}-\frac{a_{\text{sc}}^{\prime\prime}}{a_ {\text{sc}}}\right)f_{k}=0,\quad g_{k}=i\left(f_{k}^{\prime}-\frac{a_{\text{ sc}}^{\prime}}{a_{\text{sc}}}f_{k}\right),\quad f_{k}g_{k}^{*}+f_{k}^{*}g_{k}=1, \tag{6}\]
where \(\hat{a}_{k}\) and \(\hat{a}_{k}^{\dagger}\) are annihilation and creation operators, and \(\hat{\pi}(x)\) is the conjugate momentum of the field variable \(\hat{\varphi}(x)\). In terms of the Fourier components of the field operators, the creation and the annihilation operators can be represented as
\[\hat{a}_{k}=g_{k}^{*}\hat{\varphi}_{k}+if_{k}^{*}\hat{\pi}_{k}, \quad\hat{a}_{-k}^{\dagger}=g_{k}\hat{\varphi}_{k}-if_{k}\hat{\pi}_{k}. \tag{7}\]
We assume that the scalar field is in the vacuum state \(\left|\psi\right\rangle\) associated with the annihilation operator \(\hat{a}_{k}\) such that
\[\hat{a}_{k}\left|\psi\right\rangle=0. \tag{8}\]
The equal-time commutation relations for the field operators are given by
\[[\hat{\varphi}(\eta,x),\hat{\pi}(\eta,y)]=i\delta(x-y),\quad[\hat{ \varphi}(\eta,x),\hat{\varphi}(\eta,y)]=[\hat{\pi}(\eta,x),\hat{\pi}(\eta,y)]=0. \tag{9}\]
Covariances of the field operators are calculated as
\[M_{11}(x,y) := \langle\{\hat{\varphi}(x),\hat{\varphi}(y)\}\rangle=\frac{2}{\pi} \int_{0}^{\infty}dk|f_{k}|^{2}\cos(k(x-y)), \tag{10}\] \[M_{22}(x,y) := \langle\{\hat{\pi}(x),\hat{\pi}(y)\}\rangle=\frac{2}{\pi}\int_{0 }^{\infty}dk|g_{k}|^{2}\cos(k(x-y)),\] (11) \[M_{12}(x,y) := \langle\{\hat{\varphi}(x),\hat{\pi}(y)\}\rangle=\frac{1}{\pi} \int_{0}^{\infty}dk\,i(f_{k}g_{k}^{*}-f_{k}^{*}g_{k})\cos(k(x-y)), \tag{12}\]
where \(\langle\,\rangle\) denotes the expectation value with respect to the state \(|\psi\rangle\).
### Local modes
We consider measurement of the field operators \(\hat{\varphi},\hat{\pi}\) at spatial points \(x_{\rm A}\) and \(x_{\rm B}\). The measurement process can be represented as the interaction between the field operators and dynamical variables of the measurement apparatus such as Unruh-DeWitt detectors [24]. In the present analysis, we do not specify details of the apparatus but just assume the interaction Hamiltonian between the field operators and the apparatus has the following form:
\[H_{\rm int}=\sum_{j={\rm A,B}}\lambda_{j}(t)g_{j}(\hat{q}_{D}, \hat{p}_{D})\otimes\int dx(w_{1j}(x)\hat{\varphi}(x)+w_{2j}(x)\hat{\pi}(x)), \tag{13}\]
where \(g_{j}\) is a function of canonical variables of the measurement apparatus \((\hat{q}_{D},\hat{p}_{D})\), \(w_{1j}(x),w_{2j}(x)\) are window functions defining a spatial local mode of the field at \(x_{\rm A,B}\), and \(\lambda_{j}(t)\) is a switching function of the interaction. In the present analysis, we do not treat details of measurement protocols but only pay attention to the behavior of the local modes of the quantum field introduced by the window functions.
Let us introduce local operators at \(x_{j}\) (\(j={\rm A,B}\)) using a window function \(w_{j}(x)=w(x-x_{j})\) as
\[\hat{q}_{j} := \int_{-\infty}^{\infty}dxw_{j}(x)\hat{\varphi}(x)=\int_{-\infty} ^{\infty}dk\,\hat{\varphi}_{k}\,e^{ikx_{j}}w_{k}, \tag{14}\] \[\hat{p}_{j} := \int_{-\infty}^{\infty}dxw_{j}(x)\hat{\pi}(x)=\int_{-\infty}^{ \infty}dk\,\hat{\pi}_{k}\,e^{ikx_{j}}w_{k}, \tag{15}\]
where the window function is assumed to be localized around \(x_{j}\), and \(w_{k}\) denotes the Fourier component of the window function:
\[w_{k}=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}dx\,w(x)e^{ikx },\quad w_{k}=w_{-k}^{*}. \tag{16}\]
We require that the window function is fixed so that the local operators \((\hat{q}_{j},\hat{p}_{j})\) define independent modes. In other words, they satisfy the canonical commutation relations given by
\[[\hat{q}_{i},\hat{p}_{j}] = i\int_{-\infty}^{\infty}dk\,e^{ik(x_{i}-x_{j})}|w_{k}|^{2}\equiv i \delta_{ij}, \tag{17}\] \[[\hat{q}_{i},\hat{q}_{j}] = [\hat{p}_{i},\hat{p}_{j}]=0. \tag{18}\]
Note that these commutators are independent of the state of the quantum field. Covariances for the local operators are
\[c_{1}(i,j) := \langle\{\hat{q}_{i},\hat{q}_{j}\}\rangle=4\int_{0}^{\infty}dk|w_ {k}|^{2}|f_{k}|^{2}\cos k\Delta_{ij}, \tag{19}\] \[c_{2}(i,j) := \langle\{\hat{p}_{i},\hat{p}_{j}\}\rangle=4\int_{0}^{\infty}dk|w_ {k}|^{2}|g_{k}|^{2}\cos k\Delta_{ij},\] (20) \[c_{3}(i,j) := \langle\{\hat{q}_{i},\hat{p}_{j}\}\rangle=2\int_{0}^{\infty}dk|w _{k}|^{2}\,i(f_{k}g_{k}^{*}-f_{k}^{*}g_{k})\cos(k\Delta_{ij}), \tag{21}\]
where \(\Delta_{ij}:=x_{i}-x_{j}\).
_Window function._ We adopt a \(k\)-top hat window function in this study: \(w_{k}=w_{0}\,\theta(k_{c}-|k|)\theta(|k|-k_{0}),\ k_{c}\geq k_{0}\), where \(k_{0}\) is the infrared (IR) cutoff corresponding to the total system size (comoving size of the total universe) and \(k_{c}\) is the ultraviolet (UV) cutoff defining the size of the localized modes. This type of window function was adopted in the stochastic approach to inflation [25], a phenomenological treatment of long-wavelength quantum fluctuations in the de Sitter universe that describes the dynamics of the quantum inflaton field as a classical stochastic variable obeying a Langevin equation. The normalization \(w_{0}\) is determined by (17):
\[\delta_{ij}=2w_{0}^{2}\int_{k_{0}}^{k_{c}}dk\cos(k\Delta_{ij})=2w_{0}^{2}\frac {\sin(k_{c}\Delta_{ij})-\sin(k_{0}\Delta_{ij})}{\Delta_{ij}}. \tag{22}\]
For \(\Delta_{\rm AA}=\Delta_{\rm BB}=0\), Eq. (22) fixes the normalization of the window function as \(w_{0}^{2}=1/(2(k_{c}-k_{0}))\). For \(\Delta_{\rm AB}\neq 0\), Eq. (22) requires \(\sin(k_{c}\Delta_{\rm AB})-\sin(k_{0}\Delta_{\rm AB})=0\), which determines
\[\Delta_{\rm AB}=\frac{(2n-1)\pi}{k_{c}+k_{0}},\frac{(2n)\pi}{k_{c}-k_{0}},\quad n =1,2,\ldots. \tag{23}\]
As a value of \(\Delta_{\rm AB}\), we adopt the following in our analysis:
\[|\Delta_{\rm AB}|=\frac{\pi}{k_{c}+k_{0}}=:\Delta. \tag{24}\]
The quantity \(\Delta\) represents the distance between the two adjacent local regions A and B, with \(x_{\rm B}-x_{\rm A}=\Delta\) (Fig. 1); it also sets the size of each local region. The spatial profile of the window function is given by
\[w(x)=\frac{2}{\sqrt{2\pi}}\int_{k_{0}}^{k_{c}}dk\,w_{0}e^{-ikx}=\frac{1}{\sqrt {\pi(k_{c}-k_{0})}}\frac{\sin(k_{c}x)-\sin(k_{0}x)}{x}. \tag{25}\]
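The choices (22)–(25) can be cross-checked numerically. The following is a minimal sketch (Python with NumPy; the cutoff values and sample points are illustrative, and the helper names are ours, not taken from the text):

```python
import numpy as np

# Illustrative cutoffs: k0 = IR cutoff, kc = UV cutoff (kc > k0).
k0, kc = 1.0, 5.0
w0_sq = 1.0 / (2.0 * (kc - k0))      # normalization fixed by Eq. (22) at Delta = 0
Delta = np.pi / (kc + k0)            # separation of the two regions, Eq. (24)

def overlap(d):
    """Right-hand side of Eq. (22): 2 w0^2 [sin(kc d) - sin(k0 d)] / d."""
    if np.isclose(d, 0.0):
        return 2.0 * w0_sq * (kc - k0)        # limit d -> 0
    return 2.0 * w0_sq * (np.sin(kc * d) - np.sin(k0 * d)) / d

print(overlap(0.0))     # 1.0  : [q_A, p_A] = i
print(overlap(Delta))   # ~0.0 : [q_A, p_B] = 0 for the choice (24)

def w(x):
    """Spatial profile of the k-top-hat window, Eq. (25)."""
    x = np.where(np.isclose(x, 0.0), 1e-9, x)  # avoid the removable singularity at x = 0
    return (np.sin(kc * x) - np.sin(k0 * x)) / (x * np.sqrt(np.pi * (kc - k0)))

print(w(np.linspace(-3.0, 3.0, 7)))  # a few sample values of the profile
```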
Covariances for the local operators are calculated as
\[c_{1}(\Delta) = \frac{2}{k_{c}-k_{0}}\int_{k_{0}}^{k_{c}}dk|f_{k}|^{2}\cos k \Delta=\frac{2}{1-\delta}\int_{\delta}^{1}dz|f_{k}|^{2}\cos\left(\frac{\pi z} {1+\delta}\right), \tag{26}\] \[c_{2}(\Delta) = \frac{2}{k_{c}-k_{0}}\int_{k_{0}}^{k_{c}}dk|g_{k}|^{2}\cos k \Delta=\frac{2}{1-\delta}\int_{\delta}^{1}dz|g_{k}|^{2}\cos\left(\frac{\pi z} {1+\delta}\right),\] (27) \[c_{3}(\Delta) = \frac{1}{k_{c}-k_{0}}\int_{k_{0}}^{k_{c}}dk\,i(f_{k}g_{k}^{*}-f_{ k}^{*}g_{k})\cos(k\Delta)=\frac{1}{1-\delta}\int_{\delta}^{1}dz\,i(f_{k}g_{k}^{*}-f_{ k}^{*}g_{k})\cos\left(\frac{\pi z}{1+\delta}\right), \tag{28}\]
where \(z=k/k_{c},\delta=k_{0}/k_{c}\). The parameter \(\delta\) represents the size of the local region normalized by the total system size: \(\delta=(k_{0}\Delta/\pi)(1-k_{0}\Delta/\pi)^{-1}\).
The covariance matrix of the bipartite system AB defined by \((\hat{q}_{\rm A},\hat{p}_{\rm A},\hat{q}_{\rm B},\hat{p}_{\rm B})\) is given by
\[\mathbf{m}_{\rm AB}=\begin{bmatrix}a_{1}&a_{3}&c_{1}&c_{3}\\ a_{3}&a_{2}&c_{3}&c_{2}\\ c_{1}&c_{3}&a_{1}&a_{3}\\ c_{3}&c_{2}&a_{3}&a_{2}\end{bmatrix}, \tag{29}\]
Figure 1: Left panel: setup of spatial regions A and B. Right panel: the window functions for A: \(w(x+\Delta/2)\) and B: \(w(x-\Delta/2)\) with \(k_{0}/k_{c}=0.2\). Although the two window functions overlap, local modes A and B are independent and well-defined as the commutation relation (17) is satisfied.
where \(a_{i}:=c_{i}(\Delta=0)\) for \(i=1,2,3\). Owing to the homogeneity of the universe represented by the metric (2), the covariance matrices of each mode A and B have the same components; i.e., the bipartite system AB is in a symmetric Gaussian state. Symplectic eigenvalues of the covariance matrix \(\mathbf{m}_{\rm AB}\) are calculated as
\[(\nu_{1})^{2}=a_{1}a_{2}-a_{3}^{2}+c_{1}c_{2}-c_{3}^{2}+|a_{1}c_{2 }+a_{2}c_{1}-2a_{3}c_{3}|, \tag{30}\] \[(\nu_{2})^{2}=a_{1}a_{2}-a_{3}^{2}+c_{1}c_{2}-c_{3}^{2}-|a_{1}c_{ 2}+a_{2}c_{1}-2a_{3}c_{3}|. \tag{31}\]
The state represented by the covariance matrix (29) is physical, i.e., compatible with the canonical uncertainty relations, if and only if \(1\leq\nu_{2}\leq\nu_{1}\).
The partially transposed covariance matrix, which is obtained by reversing the sign of mode B's momentum, has the following two symplectic eigenvalues:
\[(\tilde{\nu}_{1})^{2}=a_{1}a_{2}-a_{3}^{2}-c_{1}c_{2}+c_{3}^{2}+|(a_{1}c_{2}-a_{2}c_{1})^{2}+4(a_{1}c_{3}-a_{3}c_{1})(a_{2}c_{3}-a_{3}c_{2})|^{1/2}, \tag{32}\] \[(\tilde{\nu}_{2})^{2}=a_{1}a_{2}-a_{3}^{2}-c_{1}c_{2}+c_{3}^{2}-|(a_{1}c_{2}-a_{2}c_{1})^{2}+4(a_{1}c_{3}-a_{3}c_{1})(a_{2}c_{3}-a_{3}c_{2})|^{1/2}. \tag{33}\]
Based on the positivity criterion of the partially transposed covariance matrix for a bipartite Gaussian state [26; 27; 28], the negativity gives a measure of entanglement between modes A and B, which is defined as [29; 30]
\[N_{\rm A:B}:=\frac{1}{2}{\rm max}\left(\frac{1}{\tilde{\nu}_{2}}-1,0\right). \tag{34}\]
The modes A and B are entangled if \(N_{\rm A:B}>0\), while the modes A and B are separable if \(N_{\rm A:B}=0\).
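As a cross-check of Eqs. (29)–(34), the sketch below (Python with NumPy; the covariance-matrix entries are made-up illustrative numbers, not outputs of the model) obtains the symplectic spectra from the eigenvalues of \(i\mathbf{\Omega}\mathbf{m}\) and compares the smaller partially transposed eigenvalue with the closed-form expression (33):

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
Omega = np.kron(np.eye(2), J)                     # two-mode symplectic form

def symplectic_eigs(m):
    """Symplectic eigenvalues = moduli of the eigenvalues of i*Omega*m (each appears twice)."""
    return np.sort(np.abs(np.linalg.eigvals(1j * Omega @ m)))[::2]

def negativity(m):
    """Negativity of Eq. (34), from the partial transpose of mode B (sign flip of p_B)."""
    P = np.diag([1.0, 1.0, 1.0, -1.0])
    nu2 = symplectic_eigs(P @ m @ P)[0]
    return max(1.0 / nu2 - 1.0, 0.0) / 2.0

# Illustrative entries of the symmetric covariance matrix (29)
a1, a2, a3 = 1.8, 1.5, 0.2
c1, c2, c3 = 0.9, -0.7, 0.1
m_AB = np.array([[a1, a3, c1, c3],
                 [a3, a2, c3, c2],
                 [c1, c3, a1, a3],
                 [c3, c2, a3, a2]])

# Closed-form smaller eigenvalue of the partial transpose, Eq. (33)
disc = np.sqrt((a1*c2 - a2*c1)**2 + 4*(a1*c3 - a3*c1)*(a2*c3 - a3*c2))
nu2_closed = np.sqrt(a1*a2 - a3**2 - c1*c2 + c3**2 - disc)

print(symplectic_eigs(m_AB))                 # (nu_2, nu_1) of Eqs. (30)-(31); both >= 1
print(nu2_closed, negativity(m_AB))          # Eq. (33) vs the numerically obtained negativity
```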
### Entanglement of local modes in the de Sitter universe
We adopt the de Sitter expansion of the scale factor \(a_{\rm sc}=-1/(H\eta),-\infty<\eta<0\), where \(H\) is the Hubble constant. Mode functions corresponding to the Bunch-Davies vacuum state, which coincides with the Minkowski vacuum state in the short wavelength limit, are given by
\[f_{k}=\frac{1}{\sqrt{2|k|}}\left(1+\frac{1}{i|k|\eta}\right)e^{-i|k|\eta}, \quad g_{k}=\sqrt{\frac{|k|}{2}}e^{-i|k|\eta}. \tag{35}\]
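The properties of the Bunch-Davies mode functions quoted in Eq. (6) can be verified symbolically. A small sketch (Python with SymPy; purely a consistency check, not part of the derivation):

```python
import sympy as sp

k, H = sp.symbols('k H', positive=True)
eta = sp.symbols('eta', negative=True)            # conformal time, eta < 0
a_sc = -1 / (H * eta)                             # de Sitter scale factor

f = (1 / sp.sqrt(2 * k)) * (1 + 1 / (sp.I * k * eta)) * sp.exp(-sp.I * k * eta)
g = sp.sqrt(k / 2) * sp.exp(-sp.I * k * eta)

# Mode equation f'' + (k^2 - a''/a) f = 0, with a''/a = 2/eta^2
print(sp.simplify(sp.diff(f, eta, 2) + (k**2 - sp.diff(a_sc, eta, 2) / a_sc) * f))  # -> 0

# Relation g_k = i (f_k' - (a'/a) f_k) from Eq. (6)
print(sp.simplify(sp.I * (sp.diff(f, eta) - sp.diff(a_sc, eta) / a_sc * f) - g))    # -> 0

# Wronskian normalization f_k g_k^* + f_k^* g_k = 1
print(sp.simplify(f * sp.conjugate(g) + sp.conjugate(f) * g))                       # -> 1
```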
Covariances of the field operators are calculated as
\[M_{11} =\frac{1}{\pi}\int_{0}^{\infty}\frac{dk}{k}\left(1+\frac{1}{k^{2} \eta^{2}}\right)\cos k(x-y), \tag{36}\] \[M_{22} =\frac{1}{\pi}\int_{0}^{\infty}dk\,k\cos k(x-y),\] (37) \[M_{12} =\frac{1}{\pi\eta}\int_{0}^{\infty}\frac{dk}{k}\cos k(x-y). \tag{38}\]
We choose the UV and IR cutoff in the window functions as \(k_{c}=\pi H/\delta,k_{0}=\pi H\). The IR cutoff represents the physical size of the whole universe \(a_{\rm sc}H^{-1}\) and the UV cutoff represents the physical size of the focusing spatial region \(\delta\times a_{\rm sc}H^{-1},0\leq\delta\leq 1\). Covariances (26), (27), (28) of the local modes are
\[c_{1} =\frac{1}{\pi H}\frac{\delta}{1-\delta}\int_{\delta}^{1}\frac{dz} {z}\left(1+\frac{a_{\rm sc}^{2}\,\delta^{2}}{\pi^{2}z^{2}}\right)\cos\left( \frac{\pi z}{1+\delta}\right), \tag{39}\] \[c_{2} =\frac{\pi H}{\delta(1-\delta)}\int_{\delta}^{1}dzz\cos\left( \frac{\pi z}{1+\delta}\right),\] (40) \[c_{3} =-\frac{1}{\pi H}\frac{a_{\rm sc}H\delta}{1-\delta}\int_{\delta} ^{1}\frac{dz}{z}\cos\left(\frac{\pi z}{1+\delta}\right). \tag{41}\]
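To make the content of Fig. 2 concrete, the integrals (39)–(41), together with their \(\Delta=0\) counterparts \(a_{i}\), can be evaluated by one-dimensional quadrature and fed into the negativity (33)–(34). The following is a minimal sketch (Python with NumPy/SciPy; we set \(H=1\); the values of \(\delta\) and of the scale factor are illustrative, and the helper names are ours). For \(\delta=0.1\) the printed negativity should drop to zero once \(a_{\rm sc}\delta\) exceeds order one, in line with Fig. 2:

```python
import numpy as np
from scipy.integrate import quad

H = 1.0

def local_covariances(a_sc, delta, Delta):
    """Covariances (19)-(21) for the k-top-hat window and the Bunch-Davies modes (35)."""
    k0, kc = np.pi * H, np.pi * H / delta
    w2 = 1.0 / (2.0 * (kc - k0))                              # |w_k|^2 on the band
    f2 = lambda k: (1.0 + (a_sc * H / k) ** 2) / (2.0 * k)    # |f_k|^2
    g2 = lambda k: k / 2.0                                    # |g_k|^2
    fg = lambda k: -a_sc * H / k                              # i (f_k g_k^* - f_k^* g_k)
    c1 = 4 * w2 * quad(lambda k: f2(k) * np.cos(k * Delta), k0, kc)[0]
    c2 = 4 * w2 * quad(lambda k: g2(k) * np.cos(k * Delta), k0, kc)[0]
    c3 = 2 * w2 * quad(lambda k: fg(k) * np.cos(k * Delta), k0, kc)[0]
    return c1, c2, c3

def negativity_AB(a_sc, delta):
    k0, kc = np.pi * H, np.pi * H / delta
    Delta = np.pi / (kc + k0)                                 # Eq. (24)
    a1, a2, a3 = local_covariances(a_sc, delta, 0.0)          # a_i = c_i(Delta = 0)
    c1, c2, c3 = local_covariances(a_sc, delta, Delta)
    disc = np.sqrt((a1*c2 - a2*c1)**2 + 4*(a1*c3 - a3*c1)*(a2*c3 - a3*c2))
    nu2t = np.sqrt(a1*a2 - a3**2 - c1*c2 + c3**2 - disc)      # Eq. (33)
    return max(1.0 / nu2t - 1.0, 0.0) / 2.0                   # Eq. (34)

for a_sc in [0.1, 1.0, 4.0, 10.0, 20.0]:                      # illustrative scale factors
    print(a_sc, negativity_AB(a_sc, delta=0.1))
```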
The left panel of Fig. 2 shows the evolution of the negativity of the bipartite mode AB with a fixed comoving size \(\delta\). The initially nonzero negativity drops to zero at a specific value of the scale factor. The physical size of a local region is characterized by \(\delta_{p}:=a_{\rm sc}\delta\). The right panel of Fig. 2 shows the negativity as a function of \((\delta_{p},a_{\rm sc})\). For a fixed \(\delta\in(0,1)\), this figure shows that the entanglement (quantum correlation) between the two local modes A and
B is lost after the physical size of the comoving region exceeds the Hubble horizon scale and the "classical" behavior of the quantum field emerges [2; 3; 4; 5; 6].
The disentanglement behavior in this figure can be intuitively understood as a result of the fact that "thermal" noise at the Gibbons-Hawking temperature \(T_{H}=H/(2\pi)\) associated with the cosmological horizon destroys quantum correlations between A and B. The rest of this paper aims to provide a more quantitative understanding of the disentanglement phenomenon in terms of entanglement monogamy. The bipartite state AB is usually mixed because it is defined as a subsystem embedded in the total universe. As the monogamy relation proposed by Camalet [14; 15; 16; 17; 18] suggests, the amount of quantum correlation between A and B (i.e., internal correlation) is affected by the amount of quantum correlation between AB and its complement (i.e., external correlation). Therefore, we look for the partner modes that purify the bipartite mode AB and investigate the entanglement structure among them in the following sections.
## III Purification of local Gaussian modes in quantum fields
To understand the behavior of entanglement between spatial local modes, we look for their partners, i.e., the modes that purify them. In [31], a partner mode of a given mode is calculated in specific examples, including a system with Hawking radiation. A general partner formula that identifies a partner mode for a single mode in any pure Gaussian state is proven in [19]. Generalizing these results, a systematic method to identify any number of modes in a pure state has been developed in [21; 22; 23]. Such a subsystem composed of modes in a pure state is called a quantum information capsule (QIC). Although the QIC formula in [21; 22; 23] provides an algebraic way to identify modes in a pure state, it cannot be directly used for our purpose of analyzing the disentanglement structure of two local modes AB [2; 3; 4; 5] from the viewpoint of monogamy. In this section, we derive a more useful formula to identify the partner modes that purify given two modes AB.
In Sec. III.1, we briefly review the QIC argument with which modes in a pure state are identified. In Sec. III.2, we derive a formula identifying the partner mode of a single mode, which reproduces the partner formula in [19]. In Sec. III.3, we generalize the partner formula to identify the partner of a two-mode system.
### QICs in Gaussian states
The partners here are a special class of QIC. In [21; 22; 23], it has been shown that a linear map, denoted by \(f_{\psi}\) for a pure Gaussian state \(|\psi\rangle\), plays a key role in identifying a QIC. We here briefly review the results in [21; 22; 23], including the properties of \(f_{\psi}\).
Figure 2: Behavior of negativity. Left panel: dependence on the scale factor with different values of the comoving size \(\delta\). Right panel: dependence on the physical size of the local region \(\delta_{p}=a_{\rm sc}\delta\) and the scale factor \(a_{\rm sc}\). The dotted line is \(\delta_{p}=a_{\rm sc}\) that represents the evolution of the total size of the universe. The modes A and B are initially entangled (\(a_{\rm sc}=0\)) and become separable after the physical separation between them exceeds the Hubble horizon scale \(\sim H^{-1}\) with \(k_{c}/a_{\rm sc}\ll k_{0}\) and the effect of the IR cutoff becomes negligible.
Let us first consider a system composed of \(N\) harmonic oscillators, which is assumed to be in a pure Gaussian state \(|\psi\rangle\). The canonical variables are defined by \(\hat{\mathbf{r}}=(\hat{q}_{1},\hat{p}_{1},\ldots,\hat{q}_{N},\hat{p}_{N})^{T}\). For simplicity, we assume that the first moments of the canonical variables vanish, i.e., \(\langle\hat{\mathbf{r}}\rangle=0\), where \(\langle\,\rangle\) denotes the expectation value in \(|\psi\rangle\). The covariance matrix with respect to these canonical variables is defined by \(\mathbf{m}:=\langle\{\hat{\mathbf{r}},\hat{\mathbf{r}}^{T}\}\rangle\).1 Because the total system is assumed to be in a pure Gaussian state, all the symplectic eigenvalues of \(\mathbf{m}\) are one. That is, there exists a symplectic matrix \(\mathbf{S}\) such that
Footnote 1: \([\hat{\mathbf{r}},\hat{\mathbf{r}}^{T}]\) is the antisymmetric part of the operator \(\hat{\mathbf{r}}\hat{\mathbf{r}}^{T}\) and \(\{\hat{\mathbf{r}},\hat{\mathbf{r}}^{T}\}\) is the symmetric part of the operator \(\hat{\mathbf{r}}\hat{\mathbf{r}}^{T}\).
\[\mathbf{m}=\mathbf{S}\mathbf{S}^{T}, \tag{42}\]
where \(\mathbf{S}\) satisfies \(\mathbf{S}^{T}\mathbf{\Omega}_{N}\mathbf{S}=\mathbf{\Omega}_{N}\) and \([\hat{\mathbf{r}},\hat{\mathbf{r}}^{T}]=i\mathbf{\Omega}_{N}\) for
\[\mathbf{\Omega}_{N}:=\bigoplus_{i=1}^{N}\mathbf{J},\quad\mathbf{J}=\begin{bmatrix}0&1\\ -1&0\end{bmatrix}. \tag{43}\]
Note that the pure state condition (42) is equivalent to the following relation:
\[\mathbf{m}\mathbf{\Omega}_{N}\mathbf{m}=\mathbf{\Omega}_{N}. \tag{44}\]
If we introduce a new basis \(\hat{\mathbf{r}}^{\prime}\) for the canonical variables by \(\hat{\mathbf{r}}^{\prime}=\mathbf{S}^{-1}\hat{\mathbf{r}}\), we get
\[\mathbf{m}^{\prime}=\langle\{\hat{\mathbf{r}}^{\prime},\hat{\mathbf{r}}^{\prime T}\}\rangle=\mathbf{S}^{-1}\mathbf{m}(\mathbf{S}^{T})^{-1}=\mathbb{I}_{2N},\quad[\hat{\mathbf{r}}^{\prime},\hat{\mathbf{r}}^{\prime T}]=i\mathbf{\Omega}_{N}. \tag{45}\]
The canonical variables defined by \(\hat{\mathbf{r}}^{\prime}\) specify \(N\) uncorrelated modes, each of which is in a pure state. Now we define a linear map \(f_{\psi}\) by
\[f_{\psi}(\hat{q}^{\prime}_{i})=\hat{p}^{\prime}_{i},\quad f_{\psi}(\hat{p}^{ \prime}_{i})=-\hat{q}^{\prime}_{i}, \tag{46}\]
or equivalently,
\[f_{\psi}(\hat{\mathbf{r}}^{\prime})=\mathbf{\Omega}_{N}\hat{\mathbf{r}}^{\prime}. \tag{47}\]
This map has the following properties:
\[[\hat{\mathbf{r}}^{\prime},f_{\psi}(\hat{\mathbf{r}}^{\prime})^{T}] =i\mathbb{I}_{2N}, \tag{48}\] \[\langle\{\hat{\mathbf{r}}^{\prime},\hat{\mathbf{r}}^{\prime T}\}\rangle =\mathbb{I}_{2N},\quad\langle\{\hat{\mathbf{r}}^{\prime},f_{\psi}( \hat{\mathbf{r}}^{\prime})^{T}\}\rangle=0,\quad\langle\{f_{\psi}(\hat{\mathbf{r}}^{ \prime}),f_{\psi}(\hat{\mathbf{r}}^{\prime})^{T}\}\rangle=\mathbb{I}_{2N}. \tag{49}\]
On the one hand, Eq. (48) means that \([\hat{r_{j}}^{\prime},f_{\psi}(\hat{r_{j}}^{\prime})]=i\); i.e., \((\hat{r_{j}}^{\prime},f_{\psi}(\hat{r_{j}}^{\prime}))\) defines a mode for any \(j=1,\ldots,2N\). On the other hand, Eq. (49) implies that the mode defined by \((\hat{r_{j}}^{\prime},f_{\psi}(\hat{r_{j}}^{\prime}))\) is in a pure state. Of course, these are straightforward consequences of the definition in Eq. (46).
Let us generalize this observation. Consider an operator \(\hat{O}\) given by a linear combination of canonical variables. We first rescale this operator as \(\hat{O}\to\hat{O}/\sqrt{2\langle\hat{O}^{2}\rangle}\) so that \(\langle\hat{O}^{2}\rangle=1/2\). Expanding \(\hat{O}\) in the basis \(\hat{\mathbf{r}}^{\prime}\) as
\[\hat{O}:=\sum_{i=1}^{2N}w^{\prime}_{i}\hat{r}^{\prime}_{i}=\mathbf{w}^{\prime T}\hat{\mathbf{r}}^{\prime}, \tag{50}\]
where \(\mathbf{w}^{\prime T}=(w^{\prime}_{1},w^{\prime}_{2},\ldots,w^{\prime}_{2N})\), the normalization condition is equivalent to
\[\mathbf{w}^{\prime T}\mathbf{w}^{\prime}=1. \tag{51}\]
Introducing operators \((\hat{Q},\hat{P}):=(\hat{O},f_{\psi}(\hat{O}))\) for \(\hat{O}\) with this normalization, from Eqs. (48), (49) and (51), we find
\[[\hat{Q},\hat{P}] =i \tag{52}\] \[\langle\{\hat{Q},\hat{Q}\}\rangle =1,\quad\langle\{\hat{Q},\hat{P}\}\rangle=\langle\{\hat{P},\hat{Q} \}\rangle=0,\quad\langle\{\hat{P},\hat{P}\}\rangle=1. \tag{53}\]
Equation (52) means that \((\hat{Q},\hat{P})\) satisfies the canonical commutation relation and hence defines a mode. Further, Eq. (53) implies that the covariance matrix for this mode is equal to the \(2\times 2\) identity matrix, implying that it is in a pure state.
In a series of studies [21; 22; 23; 32] on the carriers of information, the smallest subsystem in a pure state that carries the whole encoded information is termed a QIC. The encoded information can be fully retrieved by extracting a QIC from the system. Equations (52) and (53) imply that the mode defined by \((\hat{Q},\hat{P}):=(\hat{O},f_{\psi}(\hat{O}))\) is the QIC when the encoding operation is generated by \(\hat{O}\). In a more general case where the encoding operation is generated by \(\{\hat{O}_{i}\}_{i=1}^{n}\), where each of which is assumed to be a linear combination of canonical variables, it is proven [23] that the QIC is given by a subsystem composed of (at most) \(n\) modes, which is algebraically defined by
\[\{(\hat{O}_{i},f_{\psi}(\hat{O}_{i}))\}_{i=1}^{n}. \tag{54}\]
It is shown that operators \(\{(\hat{Q}_{i},\hat{P}_{i})\}_{i}\) defined in Eq. (73) in [23] satisfy
\[[\hat{Q}_{j},\hat{P}_{k}] =i\delta_{jk},\quad[\hat{Q}_{j},\hat{Q}_{k}]=[\hat{P}_{j},\hat{P} _{k}]=0 \tag{55}\] \[\langle\{\hat{Q}_{j},\hat{Q}_{k}\}\rangle =\delta_{jk},\quad\langle\{\hat{Q}_{j},\hat{P}_{k}\}\rangle= \langle\{\hat{P}_{j},\hat{Q}_{k}\}\rangle=0,\quad\langle\{\hat{P}_{j},\hat{P}_ {k}\}\rangle=\delta_{jk}, \tag{56}\]
which generalizes Eqs. (52) and (53). This implies that the modes characterized by \(\{(\hat{Q}_{i},\hat{P}_{i})\}_{i}\) are in a pure state when the total system is in \(\ket{\psi}\); therefore, they form the QIC, as the encoded information is carried by them.
Although the subsystem composed of \(n\) modes playing the role of QIC is uniquely determined, there are several ways to decompose it into \(n\) independent modes. From Eq. (56), the Gaussian state \(\ket{\psi}\) of the total system is decomposed into
\[\ket{\psi}=\ket{\psi^{\prime}}_{12\cdots n}\otimes\ket{\psi^{ \prime\prime}}_{\overline{12\cdots n}},\quad\ket{\psi^{\prime}}_{12\cdots n}: =\bigotimes_{j=1}^{n}\ket{0}_{j} \tag{57}\]
where \(\ket{0}_{j}\) denotes the "vacuum" state for the \(j\)th mode annihilated by \(\hat{a}_{j}:=(\hat{Q}_{j}+i\hat{P}_{j})/\sqrt{2}\) and \(\ket{\psi^{\prime\prime}}_{\overline{12\cdots n}}\) denotes a pure state for the complementary system. Since each of the \(n\) modes is in a pure state in this decomposition, the analysis of the entanglement structure is not straightforward. In the following subsections, we introduce another decomposition that is more useful in analyzing the entanglement structure among partners.
For later convenience, we summarize here the properties of \(f_{\psi}\). From the definition in Eq. (47), \(f_{\psi}\) maps the original basis \(\hat{\mathbf{r}}\) to
\[f_{\psi}(\hat{\mathbf{r}})=f_{\psi}(\mathbf{S}\hat{\mathbf{r}}^{\prime})=\mathbf{S }\mathbf{\Omega}_{N}\mathbf{S}^{-1}\hat{\mathbf{r}}=\mathbf{m}\mathbf{\Omega}_{N}\hat{\mathbf{r}}, \tag{58}\]
where we have used \(\mathbf{S}^{-1}=-\mathbf{\Omega}_{N}\mathbf{S}^{T}\mathbf{\Omega}_{N}\). From Eqs. (48) and (49), for any operators \(\hat{O}\) and \(\hat{O}^{\prime}\) given by linear combinations of canonical operators by (65) [22; 23], it can be directly checked that
\[\langle\hat{O}\rangle=0,\quad\langle f_{\psi}(\hat{O})\rangle=0,\quad f_{\psi}(f_{\psi}(\hat{O}))=-\hat{O}, \tag{59}\] \[[\hat{O},f_{\psi}(\hat{O}^{\prime})]=i\langle\{\hat{O},\hat{O}^{\prime}\}\rangle,\quad\langle\{\hat{O},f_{\psi}(\hat{O}^{\prime})\}\rangle=i[\hat{O},\hat{O}^{\prime}], \tag{60}\] \[[f_{\psi}(\hat{O}),f_{\psi}(\hat{O}^{\prime})]=[\hat{O},\hat{O}^{\prime}],\quad\langle\{f_{\psi}(\hat{O}),f_{\psi}(\hat{O}^{\prime})\}\rangle=\langle\{\hat{O},\hat{O}^{\prime}\}\rangle \tag{61}\]
hold.
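For a finite number of oscillator modes, the map \(f_{\psi}\) acts on coefficient vectors as \(\mathbf{w}\mapsto-\mathbf{\Omega}_{N}\mathbf{m}\mathbf{w}\), and the identities (59)–(61) become finite-dimensional matrix statements. A small sketch (Python with NumPy) using a pure two-mode squeezed vacuum as \(|\psi\rangle\) (the squeezing parameter and the coefficient vectors are illustrative choices of ours):

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
Omega = np.kron(np.eye(2), J)                 # Omega_N for N = 2 modes

r = 0.8                                       # illustrative squeezing parameter
c, s = np.cosh(2 * r), np.sinh(2 * r)
m = np.array([[c, 0, s, 0],                   # covariance matrix of a two-mode squeezed vacuum
              [0, c, 0, -s],
              [s, 0, c, 0],
              [0, -s, 0, c]])
assert np.allclose(m @ Omega @ m, Omega)      # purity condition, Eq. (44)

def f_psi(w):
    """Coefficient vector of f_psi(O) for O = w^T r_hat, using f_psi(r_hat) = m Omega r_hat."""
    return -Omega @ m @ w                     # equals (m Omega)^T w

def anticom(w, v):                            # <{O, O'}> = w^T m v
    return w @ m @ v

def com(w, v):                                # [O, O'] = i w^T Omega v
    return 1j * (w @ Omega @ v)

w = np.array([1.0, 0.3, -0.2, 0.5])           # illustrative operator O
v = np.array([0.1, -0.4, 0.7, 0.2])           # illustrative operator O'

print(np.allclose(f_psi(f_psi(w)), -w))                       # f_psi(f_psi(O)) = -O
print(np.isclose(com(w, f_psi(v)), 1j * anticom(w, v)))       # [O, f(O')] = i <{O, O'}>
print(np.isclose(anticom(w, f_psi(v)), 1j * com(w, v)))       # <{O, f(O')}> = i [O, O']
print(np.isclose(com(f_psi(w), f_psi(v)), com(w, v)))         # [f(O), f(O')] = [O, O']
```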
So far, we have reviewed the properties of map \(f_{\psi}\) in a harmonic oscillator system. The analyses can readily be extended to a scalar field by the following procedure. For a scalar field \(\hat{\varphi}\) and its conjugate momentum \(\hat{\pi}\) at a fixed time \(t\), we denote
\[\hat{\mathbf{R}}(x):=\begin{bmatrix}\hat{\varphi}(t,x)\\ \hat{\pi}(t,x)\end{bmatrix}. \tag{62}\]
Here, for notational simplicity, we omit the time variable \(t\) on the left-hand side. The equal-time commutation relations in Eq. (9) are written as
\[[\hat{\mathbf{R}}(x),\hat{\mathbf{R}}^{T}(y)]=i\begin{bmatrix}0&\delta(x- y)\\ -\delta(x-y)&0\end{bmatrix}=i\mathbf{J}\delta(x-y), \tag{63}\]
where \(\mathbf{J}=\begin{bmatrix}0&1\\ -1&0\end{bmatrix}\), while the covariances of the field operators in the state \(\ket{\psi}\) are denoted by
\[\mathbf{M}(x,y):=\langle\{\hat{\mathbf{R}}(x),\hat{\mathbf{R}}^{T}(y)\} \rangle=\begin{bmatrix}M_{11}(x,y)&M_{12}(x,y)\\ M_{21}(x,y)&M_{22}(x,y)\end{bmatrix}=\begin{bmatrix}\langle\{\hat{\varphi}(x), \hat{\varphi}(y)\}\rangle&\langle\{\hat{\varphi}(x),\hat{\pi}(y)\}\rangle\\ \langle\{\hat{\pi}(x),\hat{\varphi}(y)\}\rangle&\langle\{\hat{\pi}(x),\hat{\pi}( y)\}\rangle\end{bmatrix}. \tag{64}\]
We introduce an operator
\[\hat{O}:=\int dx\left[w_{1}(x)\hat{\varphi}(x)+w_{2}(x)\hat{\pi}(x) \right]=\int dx\mathbf{W}^{T}(x)\hat{\mathbf{R}}(x), \tag{65}\]
where
\[\mathbf{W}(x)=\begin{bmatrix}w_{1}(x)\\ w_{2}(x)\end{bmatrix} \tag{66}\]
denotes weighting functions. In the analogy with Eq. (58), we define [23] a map \(f_{\psi}\) by
\[f_{\psi}(\hat{O}) :=\int dxdydz\mathbf{W}^{T}(x)\mathbf{M}(x,y)\mathbf{J}\delta(y-z)\hat{\mathbf{R} }(z)\] \[=\int dx\mathbf{W}^{T}_{f_{\psi}(\hat{O})}(x)\hat{\mathbf{R}}(x), \tag{67}\]
where \(\mathbf{W}_{f_{\psi}(\hat{O})}\) is the window function defining \(f_{\psi}(\hat{O})\), given by
\[\mathbf{W}_{f_{\psi}(\hat{O})}(x):=-\int dy\mathbf{J}\mathbf{M}(x,y)\mathbf{W}(y). \tag{68}\]
When the covariance of the field operator satisfies a purity condition
\[\int dydz\mathbf{M}(x,y)\mathbf{J}\delta(y-z)\mathbf{M}(z,w)=\mathbf{J}\delta(x-w), \tag{69}\]
which corresponds to Eq. (44), it is shown [22; 23] that the map \(f_{\psi}\) satisfies all the properties in Eqs. (59), (60) and (61). Therefore, for operators \(\{O_{i}\}_{i=1}^{n}\) given by linear combinations of field operators, the set of operators
\[\{(\hat{O}_{i},f_{\psi}(\hat{O}_{i}))\}_{i=1}^{n} \tag{70}\]
defines (at most) \(n\) modes in a pure state, provided that the field is in a pure Gaussian state \(|\psi\rangle\). Note that Eq. (69) can be explicitly confirmed for the covariances given in Eqs. (36), (37) and (38).
Based on these results, we derive the partner formula for a single mode in Sec. III.2, which reproduces the formula in [19]. We further generalize it for the partner formula for two modes in Sec. III.3, which we shall use to analyze entanglement monogamy among local modes in a field. See Fig. 3 for the schematic picture of these setups.
### Purification of a single Gaussian mode
As a practical application of the map \(f_{\psi}\), we look for a partner mode that purifies a given single mode A [Fig. 3 (a)]. In particular, we apply the partner formula for a local mode \(\hat{\mathbf{\xi}}_{\rm A}=(\hat{q}_{\rm A},\hat{p}_{\rm A})^{T}\) at a spatial point \(\mathbf{x}_{\rm A}\) defined in the previous section. As we have seen in Sec. III.1, four operators
\[\hat{q}_{\rm A},\hat{p}_{\rm A},f_{\psi}(\hat{q}_{\rm A}),f_{ \psi}(\hat{p}_{\rm A}), \tag{71}\]
Figure 3: (a) Purification of a single mode A. The mode C is a partner of A. (b) Purification of two modes AB. The modes C and D are partners of the bipartite system AB.
define a two-mode system that is in a pure state, provided that the field is in a pure Gaussian state. To identify the partner mode of \(\hat{\mathbf{\xi}}_{\rm A}\), we here construct a mode generated by the operators in Eq. (71), which is orthonormal to the mode A. The covariance matrix for \(\hat{\mathbf{\xi}}_{\rm A}\) is
\[\mathbf{m}_{\rm A}:=\left\langle\{\hat{\mathbf{\xi}}_{\rm A},\hat{\mathbf{\xi}}_{\rm A}^{T}\}\right\rangle=\begin{bmatrix}c_{1}&c_{3}\\ c_{3}&c_{2}\end{bmatrix}. \tag{72}\]
Commutators and covariances between these operators are given by
\[[\hat{\mathbf{\xi}}_{\rm A},\hat{\mathbf{\xi}}_{\rm A}^{T}]=[f_{\psi}( \hat{\mathbf{\xi}}_{\rm A}),f_{\psi}(\hat{\mathbf{\xi}}_{\rm A}^{T})]=i\mathbf{J},\quad[ \hat{\mathbf{\xi}}_{\rm A},f_{\psi}(\hat{\mathbf{\xi}}_{\rm A}^{T})]=i\mathbf{m}_{\rm A}, \tag{73}\]
and
\[\left\langle\{\hat{\mathbf{\xi}}_{\rm A},\hat{\mathbf{\xi}}_{\rm A}^{T} \}\right\rangle=\left\langle\{f_{\psi}(\hat{\mathbf{\xi}}_{\rm A}),f_{\psi}(\hat{ \mathbf{\xi}}_{\rm A}^{T})\}\right\rangle=\mathbf{m}_{\rm A},\quad\left\langle\{\hat{ \mathbf{\xi}}_{\rm A},f_{\psi}(\hat{\mathbf{\xi}}_{\rm A}^{T})\}\right\rangle=-\mathbf{J}. \tag{74}\]
To extract a mode orthogonal to the original mode \(\hat{\mathbf{\xi}}_{\rm A}\) from \(f_{\psi}(\hat{\mathbf{\xi}}_{\rm A})\), we define operators
\[\hat{\mathbf{\zeta}}:=f_{\psi}(\hat{\mathbf{\xi}}_{\rm A})-\mathbf{m}_{\rm A }\mathbf{J}\hat{\mathbf{\xi}}_{\rm A}. \tag{75}\]
They indeed commute with \(\hat{\mathbf{\xi}}_{\rm A}\) as
\[[\hat{\mathbf{\zeta}},\hat{\mathbf{\xi}}_{\rm A}^{T}] =[f_{\psi}(\hat{\mathbf{\xi}}_{\rm A}),\hat{\mathbf{\xi}}_{\rm A}^{T}]-[\mathbf{m}_{\rm A}\mathbf{J}\hat{\mathbf{\xi}}_{\rm A},\hat{\mathbf{\xi}}_{\rm A}^{T}]\] \[=-i\mathbf{m}_{\rm A}-\mathbf{m}_{\rm A}\mathbf{J}[\hat{\mathbf{\xi}}_{\rm A},\hat{\mathbf{\xi}}_{\rm A}^{T}]\] \[=0. \tag{76}\]
Therefore, they define a mode orthogonal to \(\hat{\mathbf{\xi}}_{\rm A}\).
The commutator of \(\hat{\mathbf{\zeta}}\) is calculated as
\[[\hat{\mathbf{\zeta}},\hat{\mathbf{\zeta}}^{T}]=i(\mathbf{J}-\mathbf{m}_{\rm A} \mathbf{J}\mathbf{m}_{\rm A}). \tag{77}\]
If the mode A is in a pure state, then \(\mathbf{m}_{\rm A}\mathbf{J}\mathbf{m}_{\rm A}=\mathbf{J}\) holds, which corresponds to the relation \(\hat{\rho}_{\rm A}^{2}=\hat{\rho}_{\rm A}\) for the density operator, implying that the commutator of \(\hat{\mathbf{\zeta}}\) vanishes. In this case, because \(\hat{\mathbf{\xi}}_{\rm A}\) is in a pure state, its partner mode does not exist. If the mode A is not pure, its partner is characterized by \(\hat{\mathbf{\zeta}}\). We normalize \(\hat{\mathbf{\zeta}}\) to make it a canonical pair of operators. For this purpose, we introduce \(\hat{\mathbf{\xi}}_{\rm C}\) by \(\hat{\mathbf{\zeta}}=\mathbf{A}\,\hat{\mathbf{\xi}}_{\rm C}\) with a matrix \(\mathbf{A}\) such that \([\hat{\mathbf{\xi}}_{\rm C},\hat{\mathbf{\xi}}_{\rm C}^{T}]=i\mathbf{J}\) holds. The condition on the matrix \(\mathbf{A}\) is given by
\[\mathbf{A}\mathbf{J}\mathbf{A}^{T}=\mathbf{J}-\mathbf{m}_{\rm A}\mathbf{J}\mathbf{m}_{\rm A}. \tag{78}\]
To obtain \(\mathbf{A}\), we consider the standard form of the covariance matrix \(\mathbf{m}_{\rm A}\):
\[\mathbf{m}_{\rm A}=\mathbf{S}\begin{bmatrix}a&0\\ 0&a\end{bmatrix}\mathbf{S}^{T}=a\mathbf{S}\mathbf{S}^{T},\quad\mathbf{S}\mathbf{J}\mathbf{S}^{T}=\mathbf{ J}, \tag{79}\]
where \(\mathbf{S}\) represents a symplectic transformation to diagonalize \(\mathbf{m}_{\rm A}\) and \(a\) is the symplectic eigenvalue of \(\mathbf{m}_{\rm A}\).
Although the partner mode C itself is unique, the matrix \(\mathbf{A}\) satisfying Eq. (78) is not uniquely determined because of the remaining freedom in fixing a canonical set of operators for the mode C. In other words, when \(\mathbf{A}\) satisfies Eq. (78), so does \(\mathbf{A}^{\prime}:=\mathbf{S}^{\prime}\mathbf{A}\), where \(\mathbf{S}^{\prime}\) is an arbitrary \(2\times 2\) symplectic matrix. Since Eq. (78) is recast into \(\mathbf{A}\mathbf{J}\mathbf{A}^{T}=(1-a^{2})\mathbf{J}\), we can choose
\[\mathbf{A}=\sqrt{a^{2}-1}\,\mathbf{X}, \tag{80}\]
or equivalently
\[\mathbf{A}^{-1}=\frac{1}{\sqrt{a^{2}-1}}\mathbf{X}, \tag{81}\]
where \(\mathbf{X}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix}\).
In summary, the partner mode C of mode A is obtained as the following formula:
\[\hat{\mathbf{\xi}}_{\text{C}} =\mathbf{A}^{-1}\hat{\mathbf{\zeta}}=\frac{1}{\sqrt{a^{2}-1}}\mathbf{X}(f_{\psi }(\hat{\mathbf{\xi}}_{\text{A}})-\mathbf{m}_{\text{A}}\mathbf{J}\hat{\mathbf{\xi}}_{\text{A}})\] \[=\frac{1}{\sqrt{a^{2}-1}}\begin{bmatrix}c_{2}\hat{q}_{\text{A}}-c_ {3}\hat{p}_{\text{A}}+f_{\psi}(\hat{p}_{\text{A}})\\ c_{3}\hat{q}_{\text{A}}-c_{1}\hat{p}_{\text{A}}+f_{\psi}(\hat{q}_{\text{A}}) \end{bmatrix}=:\begin{bmatrix}\hat{q}_{\text{C}}\\ \hat{p}_{\text{C}}\end{bmatrix}, \tag{82}\]
which is equivalent to the partner formula for a single mode [19] in a Gaussian state. Covariances of operators \((\hat{\mathbf{\xi}}_{\text{A}},\hat{\mathbf{\xi}}_{\text{C}})\) are given by
\[\left\langle\{\hat{\mathbf{\xi}}_{\text{A}},\hat{\mathbf{\xi}}_{\text{A }}^{T}\}\right\rangle =\mathbf{m}_{\text{A}}, \tag{83}\] \[\left\langle\{\hat{\mathbf{\xi}}_{\text{C}},\hat{\mathbf{\xi}}_{\text{A }}^{T}\}\right\rangle =\mathbf{A}^{-1}(\mathbf{J}-\mathbf{m}_{\text{A}}\mathbf{J}\mathbf{m}_{\text{A}})= \sqrt{a^{2}-1}\mathbf{Z},\] (84) \[\left\langle\{\hat{\mathbf{\xi}}_{\text{C}},\hat{\mathbf{\xi}}_{\text{C }}^{T}\}\right\rangle =-\mathbf{A}^{-1}(\mathbf{m}_{\text{A}}+\mathbf{m}_{\text{A}}\mathbf{J}\mathbf{m}_{ \text{A}}\mathbf{J}\mathbf{m}_{\text{A}})(\mathbf{A}^{-1})^{T}=\mathbf{X}\mathbf{m}_{\text{A}}\mathbf{ X}, \tag{85}\]
where \(\mathbf{Z}=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}\). In a matrix form, they are summarized as
\[\mathbf{m}_{\text{AC}}:=\left\langle\{\hat{\mathbf{\xi}}_{\text{AC}},\hat{\mathbf{\xi}}_{ \text{AC}}^{T}\}\right\rangle=\begin{bmatrix}\mathbf{m}_{\text{A}}&\sqrt{a^{2}-1} \mathbf{Z}\\ \sqrt{a^{2}-1}\mathbf{Z}&\mathbf{X}\mathbf{m}_{\text{A}}\mathbf{X}\end{bmatrix},\quad\hat{\mathbf{ \xi}}_{\text{AC}}:=\begin{bmatrix}\hat{\mathbf{\xi}}_{\text{A}}\\ \hat{\mathbf{\xi}}_{\text{C}}\end{bmatrix}. \tag{86}\]
One can explicitly confirm that this covariance matrix satisfies the following purity condition of the state AC:
\[\mathbf{m}_{\text{AC}}\,\mathbf{\Omega}_{2}\,\mathbf{m}_{\text{AC}}=\mathbf{\Omega}_{2},\quad \mathbf{\Omega}_{2}=\bigoplus_{i=1}^{2}\mathbf{J}, \tag{87}\]
implying that it represents a pure two-mode squeezed state characterizing the pair of partners AC. The total state is decomposed as
\[\left|\psi\right\rangle=\left|\psi^{\prime}\right\rangle_{\text{AC}}\otimes \left|\psi^{\prime\prime}\right\rangle_{\overline{\text{AC}}}, \tag{88}\]
where \(\left|\psi^{\prime}\right\rangle_{\text{AC}}\) is the pure Gaussian state defined by \(\mathbf{m}_{\text{AC}}\) and has no correlation with its complement system \(\overline{\text{AC}}\) in another pure state \(\left|\psi^{\prime\prime}\right\rangle_{\overline{\text{AC}}}\).
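Eqs. (82)–(87) amount to a few lines of linear algebra. The following is a sketch (Python with NumPy; the entries of \(\mathbf{m}_{\rm A}\) are illustrative numbers with symplectic eigenvalue \(a>1\), not values computed from the field):

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
Omega2 = np.kron(np.eye(2), J)

# Illustrative single-mode covariance matrix m_A, cf. Eq. (72)
c1, c2, c3 = 2.0, 1.5, 0.4
m_A = np.array([[c1, c3], [c3, c2]])
a = np.sqrt(np.linalg.det(m_A))               # symplectic eigenvalue of the single mode A
b = np.sqrt(a**2 - 1.0)

# Covariance matrix of mode A and its partner C, Eq. (86)
m_AC = np.block([[m_A,   b * Z       ],
                 [b * Z, X @ m_A @ X ]])

print(np.allclose(m_AC @ Omega2 @ m_AC, Omega2))   # purity condition, Eq. (87)

# Negativity between A and C, Eqs. (98)-(99)
nu2_tilde = a - b
print(0.5 * (1.0 / nu2_tilde - 1.0))
```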
_Spatial profile of the partner mode._ Using Eq. (68), the spatial profiles of the partner mode can be visualized. As the window functions of the local mode A, we adopt \(\mathbf{W}_{q_{\text{A}}}=w_{\text{A}}(x)(1,0)^{T}\) and \(\mathbf{W}_{p_{\text{A}}}=w_{\text{A}}(x)(0,1)^{T}\), which we have used to introduce the local modes from the scalar field in (14) and (15). From Eq. (68), we get
\[\mathbf{W}_{\text{f}_{\psi}(q_{\text{A}})}(x) =-\int dy\begin{bmatrix}M_{12}(x,y)&M_{22}(x,y)\\ -M_{11}(x,y)&-M_{12}(x,y)\end{bmatrix}\begin{bmatrix}w_{\text{A}}(y)\\ 0\end{bmatrix}=\int dy\begin{bmatrix}-M_{12}(x,y)\\ M_{11}(x,y)\end{bmatrix}w_{\text{A}}(y), \tag{89}\] \[\mathbf{W}_{\text{f}_{\psi}(p_{\text{A}})}(x) =-\int dy\begin{bmatrix}M_{12}(x,y)&M_{22}(x,y)\\ -M_{11}(x,y)&-M_{12}(x,y)\end{bmatrix}\begin{bmatrix}0\\ w_{\text{A}}(y)\end{bmatrix}=\int dy\begin{bmatrix}-M_{22}(x,y)\\ M_{12}(x,y)\end{bmatrix}w_{\text{A}}(y), \tag{90}\]
and
\[f_{\psi}(\hat{q}_{\text{A}}) =\int dx\mathbf{W}_{f_{\psi}(q_{\text{A}})}^{T}(x)\hat{\mathbf{R}}(x)=\int dxdy \left[-\hat{\varphi}(x)M_{12}(x,y)w_{\text{A}}(y)+\hat{\pi}(x)M_{11}(x,y)w_{ \text{A}}(y)\right], \tag{91}\] \[f_{\psi}(\hat{p}_{\text{A}}) =\int dx\mathbf{W}_{f_{\psi}(p_{\text{A}})}^{T}(x)\hat{\mathbf{R}}(x)= \int dxdy\left[-\hat{\varphi}(x)M_{22}(x,y)w_{\text{A}}(y)+\hat{\pi}(x)M_{12}(x,y )w_{\text{A}}(y)\right]. \tag{92}\]
In other words, the window functions of \(f_{\psi}(\hat{\mathbf{\xi}}_{\rm A})\) are expressed by convolutions of the window function \(w_{\rm A}\) with the covariance matrix \(M_{ij}\) of the field operators, given by
\[\int dyM_{11}(x,y)w_{\rm A}(y) = \frac{4}{\sqrt{\pi(k_{c}-k_{0})}}\int_{k_{0}}^{k_{c}}dk|f_{k}|^{2} \cos k(x-x_{\rm A}) \tag{93}\] \[= \frac{2}{\pi\sqrt{H}}\sqrt{\frac{\delta}{1-\delta}}\int_{\delta} ^{1}\frac{dz}{z}\left(1+\frac{a_{\rm sc}^{2}\,\delta^{2}}{\pi^{2}z^{2}}\right) \cos\left(z\frac{\pi H(x-x_{\rm A})}{\delta}\right),\] \[\int dyM_{22}(x,y)w_{\rm A}(y) = \frac{4}{\sqrt{\pi(k_{c}-k_{0})}}\int_{k_{0}}^{k_{c}}dk|g_{k}|^{ 2}\cos k(x-x_{\rm A})\] (94) \[= \frac{2}{\pi\sqrt{H}}\sqrt{\frac{\delta}{1-\delta}}\left(\frac{ \pi H}{\delta}\right)^{2}\int_{\delta}^{1}dzz\cos\left(z\frac{\pi H(x-x_{\rm A })}{\delta}\right),\] \[\int dyM_{12}(x,y)w_{\rm A}(y) = \frac{2}{\sqrt{\pi(k_{c}-k_{0})}}\int_{k_{0}}^{k_{c}}dk\,i(f_{k} g_{k}^{*}-f_{k}^{*}g_{k})\cos k(x-x_{\rm A})\] (95) \[= -\frac{2}{\pi\sqrt{H}}\sqrt{\frac{\delta}{1-\delta}}\left(a_{\rm sc }H\right)\int_{\delta}^{1}\frac{dz}{z}\cos\left(z\frac{\pi H(x-x_{\rm A})}{ \delta}\right).\]
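These convolutions reduce to one-dimensional quadratures over \(z\). A minimal numerical sketch (Python with NumPy/SciPy; we set \(H=1\), and \(\delta\), \(a_{\rm sc}\) and the sample points are illustrative) transcribing the second lines of Eqs. (93)–(95):

```python
import numpy as np
from scipy.integrate import quad

H = 1.0

def partner_profiles(x, a_sc, delta, x_A=0.0):
    """Convolutions of w_A with M_11, M_22 and M_12, second lines of Eqs. (93)-(95)."""
    pref = (2.0 / (np.pi * np.sqrt(H))) * np.sqrt(delta / (1.0 - delta))
    phase = np.pi * H * (x - x_A) / delta
    m11 = pref * quad(lambda z: (1.0 + (a_sc * delta / (np.pi * z))**2)
                      * np.cos(z * phase) / z, delta, 1.0)[0]
    m22 = pref * (np.pi * H / delta)**2 * quad(lambda z: z * np.cos(z * phase),
                                               delta, 1.0)[0]
    m12 = -pref * a_sc * H * quad(lambda z: np.cos(z * phase) / z, delta, 1.0)[0]
    return m11, m22, m12

a_sc, delta = 10.0, 0.1                       # illustrative parameters (cf. Fig. 4)
for x_p in [0.0, 1.0, 5.0, 10.0]:             # physical coordinate x_p = a_sc * x
    print(x_p, partner_profiles(x_p / a_sc, a_sc, delta))
```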
From these equations, we obtain the spatial profiles of the partner mode C. The upper panel of Fig. 4 shows the window function of the mode A with \(\delta=0.1\) as a function of the physical coordinate \(x_{p}:=a_{\rm sc}\,x\) (we set \(H=1\)). Because of cosmic expansion, the width of the window function (the spatial size of the local mode A) increases from \(0.1H^{-1}\) to \(H^{-1}\). The lower panels show the convolutions of \(w\) with the covariances of the field operators, which appear in the partner formulas (91) and (92). As we can observe from the behavior of the convolutions with \(M_{11},M_{12}\), the amplitudes of these functions grow as the universe expands, and their typical wavelengths become \(\sim 10H^{-1}\), far larger than the width of \(w_{\rm A}\); this behavior of the partner's window functions implies that the information of the original mode A shared with its partner C delocalizes and extends over the superhorizon scale. These facts provide the following intuitive understanding of the mechanism of disentanglement between local modes: because the partner of a local mode A is spread over the superhorizon scale for a large scale factor, it has little overlap with the other local mode B, implying that the modes A and B cannot share much entanglement. This observation is made more quantitative in Sec. IV, where we analyze the disentanglement from the viewpoint of entanglement monogamy.
Figure 4: Upper panel: the window function \(w_{\rm A}\) that represents the spatial profile of mode A at \(a_{\rm sc}=1,10\). We set \(x_{\rm A}=0\). \(x_{p}=a_{\rm sc}\,x\) denotes the physical coordinate. Owing to cosmic expansion, the width of the profile increases as \(a_{\rm sc}\delta\). Lower panels: convolution of \(w_{\rm A}\) with covariances of the field operators. These functions represent spatial profiles of the partner mode C.
_Negativity between AC._ The standard form of the covariance matrix for the mode A is
\[\mathbf{m}_{\rm A}=\begin{bmatrix}a&0\\ 0&a\end{bmatrix}, \tag{96}\]
where \(a\) is the symplectic eigenvalue of \(\mathbf{m}_{\rm A}\). As shown in Eq. (86), the covariance matrix of A and its partner C is given by
\[\mathbf{m}_{\rm AC}=\begin{bmatrix}a&0&\sqrt{a^{2}-1}&0\\ 0&a&0&-\sqrt{a^{2}-1}\\ \sqrt{a^{2}-1}&0&a&0\\ 0&-\sqrt{a^{2}-1}&0&a\end{bmatrix}. \tag{97}\]
The smaller symplectic eigenvalue of its partial transposition of \(\mathbf{m}_{\rm AC}\) is calculated as
\[\widetilde{\nu}_{2}=a-\sqrt{a^{2}-1}=\frac{1}{a+\sqrt{a^{2}-1}}\leq 1. \tag{98}\]
Thus the negativity between the modes A and C is given by
\[N_{\rm A:C}=\frac{1}{2}(a+\sqrt{a^{2}-1}-1)\geq 0. \tag{99}\]
For \(a>1\), the bipartite state AC is entangled. Figure 5 shows the behavior of the symplectic eigenvalue \(a=\sqrt{a_{1}a_{2}-a_{3}^{2}}\) as functions of the scale factor and \(\delta\), where \(a_{1},a_{2},a_{3}\) are components of the covariance matrix (29). The left panel of Fig. 5 shows that for a fixed value of \(\delta\), the symplectic eigenvalue \(a\) and the negativity between AC increase as the scale factor increases. The right panel of Fig. 5 shows that for a fixed value of scale factor \(a_{\rm sc}\), the symplectic eigenvalue \(a\) increases as the size \(\delta\) of the region A decreases. Thus the state AC is more squeezed for a smaller value of \(\delta\). In the limit of \(\delta\to 1\), the negativity \(N_{\rm A:C}\) vanishes since \(a\to 1\), implying that the purity of the state of mode \(A\) approaches one.
In terms of the symplectic eigenvalue \(a\), the entanglement entropy of the mode A is given by
\[S_{\rm A}=\left(\frac{a+1}{2}\right)\log_{2}\left(\frac{a+1}{2}\right)-\left( \frac{a-1}{2}\right)\log_{2}\left(\frac{a-1}{2}\right). \tag{100}\]
As the symplectic eigenvalue \(a\) monotonically increases with the scale factor, the entanglement entropy of the mode A also increases with the scale factor. Thus the information shared between two modes A and C increases because the mixedness of the state A grows. This explains the delocalization of the information stored in A and the spread of the partner's window function as visualized in Fig. 4. In the limit of \(\delta\to 1\), \(S_{\rm A}\) approaches zero and A and C share no information.
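Both quantities are simple functions of the single symplectic eigenvalue \(a\); a short sketch (Python with NumPy; the listed values of \(a\) are illustrative):

```python
import numpy as np

def negativity_AC(a):
    """Eq. (99): negativity between mode A and its partner C."""
    return 0.5 * (a + np.sqrt(a**2 - 1.0) - 1.0)

def entropy_A(a):
    """Eq. (100): entanglement entropy of mode A (and hence of C)."""
    if np.isclose(a, 1.0):
        return 0.0
    return ((a + 1) / 2) * np.log2((a + 1) / 2) - ((a - 1) / 2) * np.log2((a - 1) / 2)

for a in [1.0, 1.5, 3.0, 10.0]:
    print(a, negativity_AC(a), entropy_A(a))   # both grow monotonically with a
```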
Figure 5: Behavior of the symplectic eigenvalue \(a\). Left panel: dependence on the scale factor. The symplectic eigenvalue \(a\) is an increasing function of the scale factor and entanglement between AC grows as the universe expands. Right panel: dependence on the normalized comoving size \(\delta\) of the region A.
### Purification of the bipartite Gaussian mode
In this subsection, we construct partner modes \(\mathrm{CD}\) for the two-mode system \(\hat{\mathbf{\xi}}_{\mathrm{AB}}=(\hat{q}_{\mathrm{A}},\hat{p}_{\mathrm{A}},\hat{q}_{ \mathrm{B}},\hat{p}_{\mathrm{B}})^{T}\). See Fig. 3 (b) for the schematic picture of the setup. To the authors' knowledge, an explicit formula to obtain the partner modes of a two-mode system has not yet appeared in the literature. Therefore, we explain here the derivation, although it is quite similar to the arguments in the previous subsection, i.e., the derivation of the partner formula for a one-mode system.
From the arguments in Sec. III.1, the eight operators \(\hat{\mathbf{\xi}}_{\mathrm{AB}}\) and \(f_{\psi}(\hat{\mathbf{\xi}}_{\mathrm{AB}})\), i.e.,
\[\hat{q}_{\mathrm{A}},\hat{p}_{\mathrm{A}},\hat{q}_{\mathrm{B}}, \hat{p}_{\mathrm{B}},f_{\psi}(\hat{q}_{\mathrm{A}}),f_{\psi}(\hat{p}_{\mathrm{ A}}),f_{\psi}(\hat{q}_{\mathrm{B}}),f_{\psi}(\hat{p}_{\mathrm{B}}) \tag{101}\]
define a system composed of four modes ABCD, which is in a pure state. We aim to construct two modes \(\mathrm{CD}\) orthonormal to modes \(\mathrm{AB}\). Commutators between these operators are given by
\[[\hat{\mathbf{\xi}}_{\mathrm{AB}},\hat{\mathbf{\xi}}_{\mathrm{AB}}^{T}] =i\mathbf{\Omega}_{2},\quad[\hat{\mathbf{\xi}}_{\mathrm{AB}},f_{\psi}( \hat{\mathbf{\xi}}_{\mathrm{AB}}^{T})]=i\mathbf{m}_{\mathrm{AB}},\quad[f_{\psi}(\hat{ \mathbf{\xi}}_{\mathrm{AB}}),\hat{\mathbf{\xi}}_{\mathrm{AB}}^{T}]=-i\mathbf{m}_{\mathrm{AB }},\quad[f_{\psi}(\hat{\mathbf{\xi}}_{\mathrm{AB}}),f_{\psi}(\hat{\mathbf{\xi}}_{ \mathrm{AB}}^{T})]=i\mathbf{\Omega}_{2}, \tag{102}\]
where \(\mathbf{m}_{\mathrm{AB}}\) denotes the covariance matrix for the two mode system \(\hat{\mathbf{\xi}}_{\mathrm{AB}}\):
\[\mathbf{m}_{\mathrm{AB}}:=\left\langle\{\hat{\mathbf{\xi}}_{\mathrm{AB}}, \hat{\mathbf{\xi}}_{\mathrm{AB}}^{T}\}\right\rangle=\left\langle\{f_{\psi}(\hat{ \mathbf{\xi}}_{\mathrm{AB}}),f_{\psi}(\hat{\mathbf{\xi}}_{\mathrm{AB}}^{T})\}\right\rangle,\quad\left\langle\left\{\hat{\mathbf{\xi}}_{\mathrm{AB}},f_{\psi}(\hat{\mathbf{\xi}}_ {\mathrm{AB}}^{T})\right\}\right\rangle=-\mathbf{\Omega}_{2}. \tag{103}\]
To find modes orthogonal to the original mode \(\hat{\mathbf{\xi}}_{\mathrm{AB}}\), we introduce operators
\[\hat{\mathbf{\zeta}}:=f_{\psi}(\hat{\mathbf{\xi}}_{\mathrm{AB}})-\mathbf{m}_ {\mathrm{AB}}\,\mathbf{\Omega}_{2}\,\hat{\mathbf{\xi}}_{\mathrm{AB}}. \tag{104}\]
They satisfy \([\hat{\mathbf{\zeta}},\hat{\mathbf{\xi}}_{\mathrm{AB}}^{T}]=0\) since
\[[\hat{\mathbf{\zeta}},\hat{\mathbf{\xi}}_{\mathrm{AB}}^{T}] =[f_{\psi}(\hat{\mathbf{\xi}}_{\mathrm{AB}}),\hat{\mathbf{\xi}}_{\mathrm{AB}}^{T}]-[\mathbf{m}_{\mathrm{AB}}\,\mathbf{\Omega}_{2}\,\hat{\mathbf{\xi}}_{\mathrm{AB}},\hat{\mathbf{\xi}}_{\mathrm{AB}}^{T}]\] \[=-i\mathbf{m}_{\mathrm{AB}}-\mathbf{m}_{\mathrm{AB}}\,\mathbf{\Omega}_{2}[\hat{\mathbf{\xi}}_{\mathrm{AB}},\hat{\mathbf{\xi}}_{\mathrm{AB}}^{T}]\] \[=0 \tag{105}\]
and therefore define modes orthogonal to the original modes \(\mathrm{AB}\). The commutators and the covariances for \(\hat{\mathbf{\zeta}}\) are calculated as
\[[\hat{\mathbf{\zeta}},\hat{\mathbf{\zeta}}^{T}]=i(\mathbf{\Omega}_{2}-\mathbf{m}_ {\mathrm{AB}}\,\mathbf{\Omega}_{2}\,\mathbf{m}_{\mathrm{AB}}),\quad\left\langle\left\{ \hat{\mathbf{\zeta}},\hat{\mathbf{\zeta}}^{T}\right\}\right\rangle=-(\mathbf{m}_{\mathrm{ AB}}+\mathbf{m}_{\mathrm{AB}}\,\mathbf{\Omega}_{2}\,\mathbf{m}_{\mathrm{AB}}\,\mathbf{\Omega}_{2}\,\mathbf{m}_{ \mathrm{AB}})\,. \tag{106}\]
If \(\mathbf{\Omega}_{2}=\mathbf{m}_{\mathrm{AB}}\,\mathbf{\Omega}_{2}\,\mathbf{m}_{\mathrm{AB}}\), the two-mode system \(\mathrm{AB}\) is in a pure state, implying their partners do not exist. We therefore assume that \(\mathbf{\Omega}_{2}\neq\mathbf{m}_{\mathrm{AB}}\,\mathbf{\Omega}_{2}\,\mathbf{m}_{\mathrm{AB}}\). We normalize \(\hat{\mathbf{\zeta}}\) as \(\hat{\mathbf{\zeta}}=\mathbf{A}\,\hat{\mathbf{\xi}}_{\mathrm{CD}}\) using a matrix \(\mathbf{A}\) so that the standard canonical commutation relation for modes \(\mathrm{C}\) and \(\mathrm{D}\)
\[[\hat{\mathbf{\xi}}_{\mathrm{CD}},\hat{\mathbf{\xi}}_{\mathrm{CD}}^{T}] =i\mathbf{\Omega}_{2} \tag{107}\]
is satisfied. This is equivalent to a constraint on the \(4\times 4\) matrix \(\mathbf{A}\) given by
\[\mathbf{A}\mathbf{\Omega}_{2}\mathbf{A}^{T}=\mathbf{\Omega}_{2}-\mathbf{m}_{\mathrm{AB}}\,\mathbf{ \Omega}_{2}\,\mathbf{m}_{\mathrm{AB}}. \tag{108}\]
By using matrix \(\mathbf{A}\) satisfying this condition, covariances of normalized operators \(\hat{\mathbf{\xi}}_{\mathrm{CD}}\) are expressed as
\[\left\langle\left\{\hat{\mathbf{\xi}}_{\mathrm{CD}},\hat{\mathbf{\xi}}_{\mathrm{CD}}^{T}\right\}\right\rangle=\mathbf{A}^{-1}\left\langle\left\{\hat{\mathbf{\zeta}},\hat{\mathbf{\zeta}}^{T}\right\}\right\rangle(\mathbf{A}^{-1})^{T}=-\mathbf{A}^{-1}(\mathbf{m}_{\mathrm{AB}}+\mathbf{m}_{\mathrm{AB}}\,\mathbf{\Omega}_{2}\,\mathbf{m}_{\mathrm{AB}}\,\mathbf{\Omega}_{2}\,\mathbf{m}_{\mathrm{AB}})(\mathbf{A}^{-1})^{T}=:\mathbf{m}_{\mathrm{CD}}, \tag{109}\] \[\left\langle\left\{\hat{\mathbf{\xi}}_{\mathrm{CD}},\hat{\mathbf{\xi}}_{\mathrm{AB}}^{T}\right\}\right\rangle=\mathbf{A}^{-1}(\mathbf{\Omega}_{2}-\mathbf{m}_{\mathrm{AB}}\,\mathbf{\Omega}_{2}\,\mathbf{m}_{\mathrm{AB}})=:\mathbf{m}_{\mathrm{AB,CD}}^{T}, \tag{110}\] \[\left\langle\left\{\hat{\mathbf{\xi}}_{\mathrm{AB}},\hat{\mathbf{\xi}}_{\mathrm{CD}}^{T}\right\}\right\rangle=\left[\mathbf{A}^{-1}(\mathbf{\Omega}_{2}-\mathbf{m}_{\mathrm{AB}}\,\mathbf{\Omega}_{2}\,\mathbf{m}_{\mathrm{AB}})\right]^{T}=:\mathbf{m}_{\mathrm{AB,CD}}, \tag{111}\] \[\left\langle\left\{\hat{\mathbf{\xi}}_{\mathrm{AB}},\hat{\mathbf{\xi}}_{\mathrm{AB}}^{T}\right\}\right\rangle=:\mathbf{m}_{\mathrm{AB}}. \tag{112}\]
These covariances define a state for the four-mode system ABCD as
\[\mathbf{m}_{\rm ABCD}=\begin{bmatrix}\mathbf{m}_{\rm AB}&\mathbf{m}_{\rm AB,CD}\\ \mathbf{m}_{\rm AB,CD}^{T}&\mathbf{m}_{\rm CD}\end{bmatrix}. \tag{113}\]
Since the four-mode system ABCD is in a pure state, the purity condition is satisfied
\[\mathbf{m}_{\rm ABCD}\,\mathbf{\Omega}_{4}\,\mathbf{m}_{\rm ABCD}=\mathbf{\Omega}_{4}, \tag{114}\]
where \(\mathbf{\Omega}_{4}=\bigoplus_{i=1}^{4}\mathbf{J}\). In this case, the state of the total system is decomposed as
\[\left|\psi\right\rangle=\left|\psi^{\prime}\right\rangle_{\rm ABCD}\otimes \left|\psi^{\prime\prime}\right\rangle_{\overline{\rm ABCD}}, \tag{115}\]
where \(\left|\psi^{\prime}\right\rangle_{\rm ABCD}\) denotes a pure state of the four-mode system ABCD defined by the covariance matrix \(\mathbf{m}_{\rm ABCD}\), while \(\left|\psi^{\prime\prime}\right\rangle_{\overline{\rm ABCD}}\) is a pure state for its complement system \(\overline{\rm ABCD}\). This decomposition implies that there is no correlation between the four-mode system ABCD and its complement \(\overline{\rm ABCD}\). Therefore, all the information on the correlation between AB and its complement is confined to the four-mode system ABCD.
We look for the matrix \(\mathbf{A}\) satisfying Eq. (108) by using the standard form of the covariance matrix of a two-mode Gaussian state [33; 34]. Our aim is to find the partner modes of local modes A and B defined at spatial points \(x_{\rm A}\) and \(x_{\rm B}\). Because of the spatial translation symmetry, the symplectic eigenvalue of the covariance matrix of mode A is equal to that of B. Therefore, without loss of generality, we can assume that the covariance matrix of the bipartite system AB is given by the standard form of symmetric Gaussian state
\[\mathbf{m}_{\rm AB}=\begin{bmatrix}a&0&d_{1}&0\\ 0&a&0&d_{2}\\ d_{1}&0&a&0\\ 0&d_{2}&0&a\end{bmatrix}, \tag{116}\]
after performing a local symplectic transformation on each mode.
Using the standard form of \(\mathbf{m}_{\rm AB}\), the right-hand side of Eq. (108) is expressed as
\[\mathbf{\Omega}_{2}-\mathbf{m}_{\rm AB}\,\mathbf{\Omega}_{2}\,\mathbf{m}_{\rm AB}=\begin{bmatrix} (1-a^{2}-d_{1}d_{2})\mathbf{J}&-a(d_{1}+d_{2})\mathbf{J}\\ *&(1-a^{2}-d_{1}d_{2})\mathbf{J}\end{bmatrix}. \tag{117}\]
As noted in the previous section, the solution \(\mathbf{A}\) of Eq. (108) is not uniquely determined as there is remaining freedom for fixing canonical operators for CD. As an ansatz for the matrix \(\mathbf{A}\), we adopt
\[\mathbf{A}=\begin{bmatrix}g\mathbf{X}&h\mathbf{X}\\ h\mathbf{X}&g\mathbf{X}\end{bmatrix}. \tag{118}\]
Because
\[\mathbf{A}\mathbf{\Omega}_{2}\mathbf{A}=-\begin{bmatrix}(g^{2}+h^{2})\mathbf{J}&2gh\mathbf{J}\\ 2gh\mathbf{J}&(g^{2}+h^{2})\mathbf{J}\end{bmatrix}, \tag{119}\]
the constraint in Eq. (108) is equivalent to
\[g^{2}+h^{2}=a^{2}+d_{1}d_{2}-1,\quad 2gh=a(d_{1}+d_{2}), \tag{120}\]
and the solution is
\[g=\frac{1}{2}(\sqrt{x+y}+\sqrt{x-y}),\quad h=\frac{1}{2}(\sqrt{x+y}-\sqrt{x-y}), \tag{121}\]
where we introduced
\[x=a^{2}+d_{1}d_{2}-1,\quad y=a(d_{1}+d_{2}). \tag{122}\]
Because the inverse of \(\mathbf{A}\) is given by
\[\mathbf{A}^{-1}=\frac{1}{g^{2}-h^{2}}\begin{bmatrix}g\mathbf{X}&-h\mathbf{X}\\ -h\mathbf{X}&g\mathbf{X}\end{bmatrix}, \tag{123}\]
the covariance matrix of the pure state of the four-mode system ABCD is obtained from (109)-(112) as
\[\mathbf{m}_{\rm ABCD}=\begin{bmatrix}\mathbf{m}_{\rm AB}&\mathbf{m}_{\rm AB,CD} \\ \mathbf{m}_{\rm AB,CD}^{T}&\mathbf{m}_{\rm CD}\end{bmatrix}, \tag{124}\] \[\mathbf{m}_{\rm AB}=\begin{bmatrix}a&0&d_{1}&0\\ 0&a&0&d_{2}\\ d_{1}&0&a&0\\ 0&d_{2}&0&a\end{bmatrix},\quad\mathbf{m}_{\rm CD}=\begin{bmatrix}a&0&d_{2}&0\\ 0&a&0&d_{1}\\ d_{2}&0&a&0\\ 0&d_{1}&0&a\end{bmatrix},\quad\mathbf{m}_{\rm AB,CD}=\begin{bmatrix}g&0&h&0\\ 0&-g&0&-h\\ h&0&g&0\\ 0&-h&0&-g\end{bmatrix}. \tag{125}\]
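As an illustration, the following minimal numerical sketch (using numpy) assembles \(\mathbf{m}_{\rm ABCD}\) from the standard-form parameters \((a,d_{1},d_{2})\) and checks the constraint (108) and the purity condition (114); the values \(a=2\), \(d_{1}=1\), \(d_{2}=0.5\) are arbitrary and serve only as an example of a mixed symmetric state.

```python
# Minimal numerical sketch of the partner construction for a symmetric two-mode
# Gaussian state; conventions follow the text (J = [[0,1],[-1,0]], X the 2x2
# exchange matrix, vacuum covariance equal to the identity).
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

def omega(n_modes):                      # Omega_n = direct sum of n copies of J
    return np.kron(np.eye(n_modes), J)

a, d1, d2 = 2.0, 1.0, 0.5                # example parameters of m_AB, Eq. (116)
x, y = a**2 + d1*d2 - 1.0, a*(d1 + d2)   # Eq. (122)
assert y**2 >= 4*a**2*(x - a**2 + 1)     # reality condition, Eq. (132)

g = 0.5*(np.sqrt(x + y) + np.sqrt(x - y))             # Eq. (121)
h = 0.5*(np.sqrt(x + y) - np.sqrt(x - y))

D = np.diag([d1, d2])
m_AB = np.block([[a*I2, D], [D, a*I2]])                                   # Eq. (116)
m_CD = np.block([[a*I2, np.diag([d2, d1])], [np.diag([d2, d1]), a*I2]])   # Eq. (125)
m_AB_CD = np.block([[g*Z, h*Z], [h*Z, g*Z]])                              # Eq. (125)
A = np.block([[g*X, h*X], [h*X, g*X]])                                    # Eq. (118)

O2, O4 = omega(2), omega(4)
print(np.allclose(A @ O2 @ A.T, O2 - m_AB @ O2 @ m_AB))    # constraint (108): True
m_ABCD = np.block([[m_AB, m_AB_CD], [m_AB_CD.T, m_CD]])                   # Eq. (124)
print(np.allclose(m_ABCD @ O4 @ m_ABCD, O4))                # purity (114): True
```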
The canonical operators describing partner modes CD are given by
\[\hat{\mathbf{\xi}}_{\rm CD} =\mathbf{A}^{-1}(f_{\psi}(\hat{\mathbf{\xi}}_{\rm AB})-\mathbf{m}_{\rm AB}\, \mathbf{\Omega}_{2}\,\hat{\mathbf{\xi}}_{\rm AB})\] \[=\frac{1}{g^{2}-h^{2}}\begin{bmatrix}g\mathbf{X}&-h\mathbf{X}\\ -h\mathbf{X}&g\mathbf{X}\end{bmatrix}(f_{\psi}(\hat{\mathbf{\xi}}_{\rm AB})-\mathbf{m}_{\rm AB }\,\mathbf{\Omega}_{2}\,\hat{\mathbf{\xi}}_{\rm AB}), \tag{126}\]
which establishes the partner formula for a two-mode symmetric Gaussian state. Note that as \(f_{\psi}(\hat{\mathbf{\xi}}_{\rm B})\) is obtained by replacing \(x_{\rm A}\to x_{\rm B}\) in \(f_{\psi}(\hat{\mathbf{\xi}}_{\rm A})\), its window functions are obtained by shifting those of \(f_{\psi}(\hat{\mathbf{\xi}}_{\rm A})\) given in Eqs. (91) and (92). Their behavior is therefore that shown in Fig. 4, up to a shift of the centers. Since Eq. (126) expresses the partner modes CD of AB in terms of the operators \(\hat{\mathbf{\xi}}_{\rm AB}\) and \(f_{\psi}(\hat{\mathbf{\xi}}_{\rm AB})\), the window functions of CD are linear combinations of functions localized around \(x_{\rm A}\) and \(x_{\rm B}\), whose tails change depending on the scale factor \(a_{\rm sc}\).
## IV Monogamy and separability
We regard the bipartite system AB as a subsystem embedded in the pure four-mode state ABCD. Then an entanglement measure \(\widetilde{E}\)(A:B) between A and B (internal entanglement) and an entanglement measure \(E\)(AB:CD) between AB and CD (external entanglement) are expected to obey the following monogamy inequality [14; 15; 16; 17; 18]:
\[\widetilde{E}({\rm A}\text{:B})+E({\rm AB}\text{:CD})\leq\widetilde{E}_{\rm max}, \tag{127}\]
where \(\widetilde{E}_{\rm max}\) is the maximum of \(\widetilde{E}\)(A:B). This inequality represents a trade-off relation between internal and external entanglement and has been proven to hold for finite-dimensional Hilbert space cases, including qubit systems. For qubit cases, explicit forms of inequalities are presented in terms of various entanglement measures (concurrence, entanglement of formation, and negativity). In this paper, based on the specific representation of the four-mode Gaussian state (124) and (125) which purifies AB, we show this type of monogamy inequality also holds for Gaussian states.
### Parametrization of the bipartite Gaussian state
In the standard form, the covariance matrix (116) of the bipartite symmetric Gaussian state AB includes three parameters \(a,d_{1},d_{2}\). For later convenience, we parametrize it with \(a,x,y\) where \(x,y\) are defined by (122). By solving Eq. (122) with respect to \(d_{1},d_{2}\), we get
\[d_{1}=\frac{y}{2a}+\sqrt{\frac{y^{2}}{4a^{2}}-(x-a^{2}+1)},\quad d_{2}=\frac{ y}{2a}-\sqrt{\frac{y^{2}}{4a^{2}}-(x-a^{2}+1)}, \tag{128}\]
and \(d_{1}-d_{2}=\sqrt{y^{2}/a^{2}-4(x-a^{2}+1)}\geq 0,\quad d_{1}d_{2}=x-a^{2}+1\). Thus \(d_{1}\) and \(d_{2}\) are expressed using \(a,x,y\). Symplectic eigenvalues of the covariance matrix \(\mathbf{m}_{\rm AB}\) are given by
\[\nu_{1}^{2}=(a+d_{1})(a+d_{2})=x+y+1\geq 1,\quad\nu_{2}^{2}=(a-d_{1})(a-d_{2}) =x-y+1\geq 1, \tag{129}\]
and \(\det\mathbf{m}_{\rm AB}=(a^{2}-d_{1}^{2})(a^{2}-d_{2}^{2})=\nu_{1}^{2}\nu_{2}^{2}= \tilde{\nu}_{1}^{2}\tilde{\nu}_{2}^{2}=(x+1)^{2}-y^{2}\). Symplectic eigenvalues of the partially transposed covariance matrix \(\widetilde{\mathbf{m}}_{\rm AB}\) are expressed as
\[\tilde{\nu}_{1}^{2}=(a+d_{1})(a-d_{2}) =2a^{2}-x-1+\sqrt{y^{2}-4a^{2}(x-a^{2}+1)}, \tag{130}\] \[\tilde{\nu}_{2}^{2}=(a-d_{1})(a+d_{2}) =2a^{2}-x-1-\sqrt{y^{2}-4a^{2}(x-a^{2}+1)}. \tag{131}\]
The sum of these symplectic eigenvalues satisfies \(\nu_{1}^{2}+\nu_{2}^{2}+\tilde{\nu}_{1}^{2}+\tilde{\nu}_{2}^{2}=4a^{2}\). For \(d_{1}\) and \(d_{2}\) to be real, it must hold that
\[y^{2}\geq 4a^{2}(x-a^{2}+1). \tag{132}\]
The modes A and B are entangled if \(0<\tilde{\nu}_{2}<1<\tilde{\nu}_{1}\), or equivalently, \((x+2)^{2}-y^{2}<4a^{2}\). The negativity of the state AB is given by
\[N_{\text{A:B}}=\frac{1}{2}\text{max}\left[\frac{1}{\tilde{\nu}_{2}}-1,0\right]. \tag{133}\]
For a fixed \(a\), the minimum of \(\tilde{\nu}_{2}\) is attained at \(x=y=0\) and given by
\[\tilde{\nu}_{2}|_{\text{min}}=a-\sqrt{a^{2}-1}. \tag{134}\]
In this case, the bipartite state AB is a two-mode squeezed pure state with a squeezing parameter \(r=\cosh^{-1}a\), and its covariance matrix is given by Eq. (116) with \(d_{1}=-d_{2}=\sqrt{a^{2}-1}\). Thus the maximum of \(N_{\text{A:B}}\) is
\[N_{\text{A:B}}|_{\text{max}}=\frac{1}{2}\left(a-1+\sqrt{a^{2}-1}\right). \tag{135}\]
The bipartite state AB becomes separable at \(\tilde{\nu}_{2}=1\), and this condition is equivalent to
\[(x+2)^{2}-y^{2}=4a^{2}. \tag{136}\]
With a fixed value of \(a\), it is possible to draw the parameter region in the \((x,y)\) plane where \(\mathbf{m}_{\text{AB}}\) represents a physical Gaussian state (Fig. 6). The region is bounded by \(x=|y|\), corresponding to \(\nu=1\) (positivity of the state), and by \(y^{2}=4a^{2}(x-a^{2}+1)\), corresponding to the reality condition of \(d_{1,2}\). The region is divided into two parts: one corresponding to entangled states and the other to separable states. A pure state is located at \(x=y=0\), corresponding to a two-mode squeezed pure state. The state becomes separable for \(a=1\).
Symplectic eigenvalues of the four-mode state ABCD are given by
\[\sqrt{(a-d_{1})(a-d_{2})-(g-h)^{2}}=1,\quad\sqrt{(a+d_{1})(a+d_{2})-(g+h)^{2} }=1, \tag{137}\]
which implies the state ABCD is pure. Symplectic eigenvalues of the partially transposed state with bipartition AB:CD are given by
\[(\tilde{\nu}_{2\pm})^{2}=(a-d_{1})(a-d_{2})+(g-h)^{2}\pm 2|g-h|\sqrt{(a-d_{1})(a-d_{2})}=(\sqrt{x-y+1}\pm\sqrt{x-y})^{2}, \tag{138}\] \[(\tilde{\nu}_{1\pm})^{2}=(a+d_{1})(a+d_{2})+(g+h)^{2}\pm 2|g+h|\sqrt{(a+d_{1})(a+d_{2})}=(\sqrt{x+y+1}\pm\sqrt{x+y})^{2}. \tag{139}\]
Figure 6: The parameter region representing bipartite Gaussian states in the \((x,y)\) plane (shaded region). The region \((x+2)^{2}-y^{2}<4a^{2}\) corresponds to entangled states and the region \((x+2)^{2}-y^{2}>4a^{2}\) corresponds to separable states.
Therefore, \(\tilde{\nu}_{2\pm}=\nu_{2}\pm\sqrt{\nu_{2}^{2}-1},\tilde{\nu}_{1\pm}=\nu_{1}\pm \sqrt{\nu_{1}^{2}-1}\), and the negativity for the bipartition AB:CD is calculated as
\[N_{\text{AB:CD}}=\frac{1}{2}\left(\nu_{1}+\sqrt{\nu_{1}^{2}-1}\right)\left(\nu_ {2}+\sqrt{\nu_{2}^{2}-1}\right)-\frac{1}{2}>0. \tag{140}\]
### Monogamy relation for Gaussian states
We examine the monogamy inequality (127) for Gaussian states. For the qubit case treated in [14], as the entanglement measure in this monogamy inequality, \(\widetilde{E}\)(A:B) can be the negativity \(N_{\text{A:B}}\). \(E\)(AB:CD) is a decreasing function of the negativity \(N_{\text{AB:CD}}\), and the explicit form of this function is presented in [14]. In the present analysis with Gaussian states, we also adopt negativity as an entanglement measure to show a monogamy inequality.
As we have already presented, negativities \(N_{\text{A:B}}\) and \(N_{\text{AB:CD}}\) are expressed as functions of \(a,x,y\):
\[N_{\text{A:B}}(x,y,a) =\frac{1}{2}\left(\frac{1}{\tilde{\nu}_{2}}-1\right),\quad\tilde{ \nu}_{2}^{2}=2a^{2}-x-1-\sqrt{y^{2}-4a^{2}(x-a^{2}+1)}, \tag{141}\] \[N_{\text{AB:CD}}(x,y) =\frac{1}{2}\left(\sqrt{x+y+1}+\sqrt{x+y}\right)\left(\sqrt{x-y+ 1}+\sqrt{x-y}\right)-\frac{1}{2}. \tag{142}\]
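For concreteness, a minimal numerical transcription of these two expressions is given below; the sample point \((a,x,y)=(2,0.5,0.3)\) is arbitrary and is only required to satisfy \(x\geq|y|\) and the reality condition (132).

```python
# Minimal numerical sketch of the internal and external negativities as functions
# of the standard-form parameters (x, y) for fixed a.
import numpy as np

def neg_AB(x, y, a):
    """Internal negativity N_{A:B}, Eqs. (133) and (141)."""
    nu2_tilde = np.sqrt(2*a**2 - x - 1 - np.sqrt(y**2 - 4*a**2*(x - a**2 + 1)))
    return 0.5*max(1.0/nu2_tilde - 1.0, 0.0)

def neg_AB_CD(x, y):
    """External negativity N_{AB:CD}, Eq. (142)."""
    return 0.5*(np.sqrt(x + y + 1) + np.sqrt(x + y)) \
              *(np.sqrt(x - y + 1) + np.sqrt(x - y)) - 0.5

a, x, y = 2.0, 0.5, 0.3      # example point inside the physical region of Fig. 6
print(neg_AB(x, y, a), neg_AB_CD(x, y))
```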
To capture qualitative behavior of the monogamy relation between \(N_{\text{A:B}}\) and \(N_{\text{AB:CD}}\), we randomly generate parameters \(x,y\) of bipartite Gaussian states with fixed \(a\). The left panel of Fig. 7 shows the distribution of \((N_{\text{AB:CD}},N_{\text{A:B}})\) for randomly generated bipartite Gaussian states. We observe that all bipartite Gaussian states are confined in a region surrounded by lines \(N_{\text{A:B}}=g_{1}(N_{\text{AB:CD}}),N_{\text{A:B}}=g_{2}(N_{\text{AB:CD}})\), and \(N_{\text{A:B}}=0\), i.e.,
\[\begin{cases}g_{1}(N_{\text{AB:CD}})\leq N_{\text{A:B}}\leq g_{2}(N_{\text{AB: CD}})&\text{for}\quad 0\leq N_{\text{AB:CD}}\leq\alpha,\\ 0\leq N_{\text{A:B}}\leq g_{2}(N_{\text{AB:CD}})&\text{for}\quad\alpha\leq N_{ \text{AB:CD}}\leq\beta,\\ N_{\text{A:B}}=0&\text{for}\quad\beta\leq N_{\text{AB:CD}},\end{cases} \tag{143}\]
where functions \(g_{1}\) and \(g_{2}\) define the relations between \(N_{\text{A:B}}\) and \(N_{\text{AB:CD}}\) on \(|y|=x\) and \(y=0\), respectively. They are monotonically decreasing functions of \(N_{\text{AB:CD}}\). The parameters \(\alpha\) and \(\beta\) are defined by \(g_{1}(\alpha)=0\) and \(g_{2}(\beta)=0\). When \(N_{\text{A:B}}\) attains its maximum for a fixed \(a\), \(N_{\text{AB:CD}}=0\), and hence, the bipartite state AB is pure. The explicit expressions of the function \(g_{2}\) and the parameter \(\beta\) are obtained as
\[g_{2}=\frac{1}{2}\left(-1+\left(a-\sqrt{a^{2}-\frac{(N_{\text{AB: CD}}+1)^{2}}{2N_{\text{AB:CD}}+1}}\right)^{-1}\right), \tag{144}\] \[\beta=-\frac{1}{2}+\left(\sqrt{a-1/2}+\sqrt{a-1}\right)^{2}. \tag{145}\]
For \(\alpha\leq N_{\text{AB:CD}}\leq\beta\) with fixed \(a\), the following inequality holds:
\[N_{\text{A:B}}\leq g_{2}(N_{\text{AB:CD}},a). \tag{146}\]
Thus the function \(g_{2}\) determines the upper bound of \(N_{\text{A:B}}\) for given values of \(N_{\text{AB:CD}}\) and \(a\). Here, \(g_{2}\) is a decreasing function of \(N_{\text{AB:CD}}\) and becomes zero at \(N_{\text{AB:CD}}=\beta\). We rewrite this inequality as
\[N_{\text{A:B}}+\tilde{g}(N_{\text{AB:CD}},a)\leq N_{\text{A:B}}|_{\text{max}}(a), \tag{147}\]
where we introduced
\[\tilde{g}(N_{\text{AB:CD}},a):=\begin{cases}0&(N_{\text{AB:CD}}=0)\\ N_{\text{A:B}}|_{\text{max}}(a)-g_{2}(N_{\text{AB:CD}},a)&(0\leq N_{\text{AB:CD}} \leq\beta)\\ N_{\text{A:B}}|_{\text{max}}(a)&(\beta\leq N_{\text{AB:CD}})\end{cases} \tag{148}\]
and \(N_{\text{A:B}}|_{\text{max}}\) is defined by (135). Note that for fixed \(a\), \(\tilde{g}\) is a non-negative monotonically increasing function of \(N_{\text{AB:CD}}\). Furthermore, it vanishes if \(N_{\text{AB:CD}}=0\). In this sense, the function \(\tilde{g}\) defines an entanglement measure for bipartition AB:CD for each \(a\). Thus for Gaussian states with fixed \(a\), we have obtained the monogamy inequality (147) that represents a trade-off relation between the internal entanglement \(N_{\text{A:B}}\) and the external entanglement \(\tilde{g}(N_{\text{AB:CD}},a)\). When \(\beta\leq N_{\text{AB:CD}}\), \(\tilde{g}(N_{\text{AB:CD}},a)\) attains its maximum, and the negativity between A and B automatically vanishes, i.e., \(N_{\text{A:B}}=0\).
The right panel of Fig. 7 shows the behavior of \(N_{\text{A:B}}\) and \(\tilde{g}(N_{\text{AB:CD}})\) as functions of \(x\) when \(y=0\). This case corresponds to saturation of the inequality (147) and the following equality holds:
\[N_{\text{A:B}}+\tilde{g}(N_{\text{AB:CD}},a)=N_{\text{A:B}}|_{\text{max}}(a). \tag{149}\]
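The following minimal numerical sketch reproduces this kind of random sampling and checks the inequality (147); the fixed value \(a=2\), the sample size, and the random seed are arbitrary choices, and the negativity expressions of Eqs. (141) and (142) are repeated so that the snippet is self-contained.

```python
# Minimal numerical check of the monogamy inequality (147) for randomly sampled
# physical parameters (x, y) at fixed a, using Eqs. (135), (144), (145) and (148).
import numpy as np

rng = np.random.default_rng(0)
a = 2.0
N_max = 0.5*(a - 1 + np.sqrt(a**2 - 1))                      # Eq. (135)
beta = -0.5 + (np.sqrt(a - 0.5) + np.sqrt(a - 1))**2         # Eq. (145)

def neg_AB(x, y):                                            # Eqs. (133), (141)
    nu2_tilde = np.sqrt(2*a**2 - x - 1 - np.sqrt(y**2 - 4*a**2*(x - a**2 + 1)))
    return 0.5*max(1.0/nu2_tilde - 1.0, 0.0)

def neg_AB_CD(x, y):                                         # Eq. (142)
    return 0.5*(np.sqrt(x + y + 1) + np.sqrt(x + y))*(np.sqrt(x - y + 1) + np.sqrt(x - y)) - 0.5

def g_tilde(N):                                              # Eqs. (144), (148)
    if N <= 0:
        return 0.0
    if N >= beta:
        return N_max
    return N_max - 0.5*(-1 + 1.0/(a - np.sqrt(a**2 - (N + 1)**2/(2*N + 1))))

violations = 0
for _ in range(10000):
    x = rng.uniform(0.0, 2*a*(a - 1))       # sample the physical region of Fig. 6
    y = rng.uniform(-x, x)
    if y**2 < 4*a**2*(x - a**2 + 1):        # discard points violating Eq. (132)
        continue
    violations += neg_AB(x, y) + g_tilde(neg_AB_CD(x, y)) > N_max + 1e-9
print("violations of Eq. (147):", violations)   # expected output: 0
```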
### Monogamy for local modes in the de Sitter universe
For the scalar field in the de Sitter universe, components of the covariance matrix of local Gaussian modes are functions of \(a_{\text{sc}}\) and \(\delta\). Figure 8 shows the evolution of \(N_{\text{A:B}}\) and \(N_{\text{AB:CD}}\) with fixed \(\delta\). The left panel shows the relation between \(N_{\text{A:B}}\) and \(N_{\text{AB:CD}}\) for different values of \(\delta\). The state evolves from \(a_{\text{sc}}=0\), which corresponds to the left edge of each line. As we have already observed, \(N_{\text{A:B}}\) becomes zero when the physical size \(\delta\times a_{\text{sc}}\) of local modes exceeds the Hubble horizon scale \(H^{-1}\). On the other hand, \(N_{\text{AB:CD}}\) increases monotonically with the scale factor for a fixed value of \(\delta\); thus, \(N_{\text{A:B}}\) becomes zero as \(N_{\text{AB:CD}}\) reaches some critical value.
Figure 7: Left panel: distribution of \((N_{\text{AB:CD}},N_{\text{A:B}})\) for randomly generated bipartite Gaussian states with fixed \(a\) [11189 sets of parameters \((x,y)\)]. States are located in the region surrounded by the dashed green line (corresponding to \(y=0\)) and the dashed red line (corresponding to \(|y|=x\)). In this case with \(a=2\), for states with \(N_{\text{AB:CD}}>\beta=2+\sqrt{6}\), \(N_{\text{A:B}}=0\); thus the external correlation between AB and CD limits the amount of the internal entanglement between A and B. The same relation also holds for any value of \(a>1\). Right panel: Behavior of \(N_{\text{A:B}}\) and \(\tilde{g}(N_{\text{AB:CD}})\) as functions of \(x\) (with \(y=0\)). A trade-off relation between the internal entanglement \(N_{\text{A:B}}\) and the external entanglement \(N_{\text{AB:CD}}\) can be observed. For \(x\geq 2(a-1)\), which corresponds to \(N_{\text{AB:CD}}\geq\beta\), \(N_{\text{A:B}}=0\).
The right panel of Fig. 8 shows the evolution of negativities and \(\beta\) as functions of the scale factor for \(\delta=0.2\). The behavior of \(N_{\text{A:B}}\) and \(N_{\text{AB:CD}}\) represents a trade-off relation between them. From the argument in the previous subsection, they satisfy the monogamy relation
\[N_{\text{A:B}}+\tilde{g}(N_{\text{AB:CD}},a(a_{\text{sc}},\delta))\leq N_{\text{ A:B}}|_{\text{max}}(a(a_{\text{sc}},\delta)). \tag{150}\]
Note that this inequality is essentially the same as (147), but the parameter \(a\) becomes a function of \(a_{\text{sc}}\) and \(\delta\). It explains the separable behavior of the bipartite system AB as a monogamy relation between internal entanglement and external entanglement; for \(\beta(a(a_{\text{sc}},\delta))\leq N_{\text{AB:CD}}\), the function \(\tilde{g}\) attains its maximum while \(N_{\text{A:B}}\) vanishes. Thus, this inequality provides a sufficient condition for separability of the bipartite system AB. Although \(N_{\text{A:B}}\) becomes zero before \(N_{\text{AB:CD}}\) reaches \(\beta\) (see right panel of Fig. 8), this behavior is consistent with (150). The tightness of the monogamy inequality depends on the parameter \(\delta\) in the present setup. In fact, as \(\delta\) increases, the difference between \(g_{2}(N_{\text{AB:CD}},a(a_{\text{sc}},\delta))\) and \(N_{\text{A:B}}\) decreases. In the limit of \(\delta\to 1\) (pure state limit), \(N_{\text{A:B}}=g_{2}(N_{\text{AB:CD}},a)\) holds because \(N_{\text{A:B}}\to 0\) and \(N_{\text{AB:CD}}\to 0\), which implies that equality in (150) trivially holds.
## V Summary and conclusion
We investigated the emergence of separability for local bipartite modes assigned to two spatial regions in the de Sitter universe. The bipartite mode AB becomes separable after the separation of the two regions exceeds the Hubble horizon scale. To understand the emergence of this separability from the viewpoint of entanglement monogamy, we considered purification of the local mode AB and obtained the pure four-mode state ABCD by applying the partner formula. Then, we found a monogamy inequality between the negativities \(N\)(A:B) and \(N\)(AB:CD) for the four-mode Gaussian state, which is an extension of Camalet's monogamy relation to continuous variable systems. It is demonstrated that the separability of the mode AB can be understood as the monogamy property between the internal and the external entanglement, and the monogamy inequality provides a sufficient condition for the separability of the local mode AB defined from the quantum field. In the stochastic approach to inflation [25], local oscillator modes are defined as long wavelength components of the inflaton field. The introduced local modes are treated as "classical" stochastic variables, and they obey a Langevin equation with a stochastic noise originating from the short wavelength quantum fluctuations. Although the stochastic approach to inflation is a phenomenological treatment of quantum fields in the de Sitter spacetime and is widely employed to investigate the physics related to cosmic inflation, its justification is still missing. The investigation in this paper provides one justification for this method from the viewpoint of quantum information: local modes in the de Sitter universe lose quantum correlation when their separation exceeds the cosmological horizon, and this behavior is related to delocalization of partner modes.
The partner formula adopted in this study may provide a new perspective on information sharing in multipartite quantum systems. Indeed, as shown in Fig. 4, the spatial profiles of partner modes can be visualized and they are helpful in capturing how the information of a system is shared with its partners. The information stored in a system is lost but classical properties of the system appear as a result of decoherence via information sharing with its partners (environments). This direction of investigation is closely related to the concept of "quantum Darwinism" [35]
Figure 8: Left panel: relation between \(N_{\text{A:B}}\) and \(N_{\text{AB:CD}}\) for fixed values of \(\delta\). Right panel: evolution of \(N_{\text{A:B}}\), \(N_{\text{AB:CD}}\) and \(\beta\) as functions of the scale factor. The solid red circle denotes the location at which \(N_{\text{AB:CD}}=\beta\) and the solid black line indicates the corresponding value of the scale factor. To the right of this point, \(\beta<N_{\text{AB:CD}}\) and the monogamy inequality (150) implies \(N_{\text{A:B}}=0\).
which states that the emergence of a classical behavior of a quantum system, such as objectivity, is connected with the amount of information of the system redundantly shared or stored in the environment. Thus spatial profiles of partner modes of the system may help to quantify this redundancy of the information and to understand the quantum to classical transition in the early universe.
###### Acknowledgements.
We thank A. Matsumura for providing his valuable insight on the subject. This research was supported in part by a Grant-in-Aid for Scientific Research No. 19K03866, No. 22H05257 and No. 23H01175 (Y.N.) from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan. K.Y. acknowledges support from the JSPS Overseas Research Fellowships.
## Appendix A Conventional Monogamy Relation
We present the conventional monogamy relation for Gaussian states [10; 11; 12]. For the four-mode pure Gaussian state ABCD with the covariance matrix (124),
\[E(\text{A:B})+E(\text{A:C})+E(\text{A:D})\leq E(\text{A:BCD}), \tag{A1}\]
where \(E\) denotes a suitably chosen entanglement measure and this inequality holds with the square of negativity or square of logarithmic negativity as entanglement measures. We demonstrate it for randomly generated Gaussian states by taking \(E\) as the square of negativity. Negativities are given by
\[N_{\text{A:BCD}}=\frac{1}{2}\left(a+\sqrt{a^{2}-1}-1\right), \tag{A2}\] \[N_{\text{A:C}}=\max\left[\frac{1}{2a-(\sqrt{x+y}+\sqrt{x-y})}-\frac{1}{2},0\right], \tag{A3}\] \[N_{\text{A:D}}=\max\left[\frac{1}{2a-(\sqrt{x+y}-\sqrt{x-y})}-\frac{1}{2},0\right], \tag{A4}\]
and \(N_{\text{A:B}}\) is given by (133). We can observe that the monogamy inequality (A1) indeed holds for this four-mode Gaussian state (Fig. 9) because the generated states are located below the dashed red lines that represent equality in (A1). However, Fig. 9 shows that the states deviate from the dashed red lines as the parameter \(a\) increases. Therefore, the monogamy inequality (A1) does not provide a useful tight constraint on the separability of the bipartite state AB.
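A minimal numerical sketch of this demonstration is given below; it transcribes the negativity expressions above, samples random physical parameters \((x,y)\) at a fixed \(a\) (the value \(a=2\), the sample size, and the seed are arbitrary), and counts how many of the sampled states satisfy the conventional monogamy inequality above with the square of the negativity as the entanglement measure.

```python
# Minimal numerical sketch of the check shown in Fig. 9, using the negativity
# expressions listed above with the square of the negativity as the measure E.
import numpy as np

rng = np.random.default_rng(1)
a = 2.0
N_A_BCD = 0.5*(a + np.sqrt(a**2 - 1) - 1)

count, satisfied = 0, 0
while count < 1500:
    x = rng.uniform(0.0, 2*a*(a - 1))
    y = rng.uniform(-x, x)
    if y**2 < 4*a**2*(x - a**2 + 1):          # reality condition (132)
        continue
    count += 1
    nu2_tilde = np.sqrt(2*a**2 - x - 1 - np.sqrt(y**2 - 4*a**2*(x - a**2 + 1)))
    N_AB = 0.5*max(1.0/nu2_tilde - 1.0, 0.0)
    N_AC = max(1.0/(2*a - (np.sqrt(x + y) + np.sqrt(x - y))) - 0.5, 0.0)
    N_AD = max(1.0/(2*a - (np.sqrt(x + y) - np.sqrt(x - y))) - 0.5, 0.0)
    satisfied += N_AB**2 + N_AC**2 + N_AD**2 <= N_A_BCD**2 + 1e-9
print(satisfied, "of", count, "sampled states satisfy the inequality")   # expected: all
```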
Figure 9: Demonstration of the monogamy inequality (A1) for the four-mode Gaussian state with the covariance matrix (124). The number of randomly generated states is 1500. As an entanglement measure, the square of negativity is adopted. All generated states are located below the dashed red line, which represents the equality of the relation (A1). |
2302.02845 | Audio Representation Learning by Distilling Video as Privileged
Information | Deep audio representation learning using multi-modal audio-visual data often
leads to a better performance compared to uni-modal approaches. However, in
real-world scenarios both modalities are not always available at the time of
inference, leading to performance degradation by models trained for multi-modal
inference. In this work, we propose a novel approach for deep audio
representation learning using audio-visual data when the video modality is
absent at inference. For this purpose, we adopt teacher-student knowledge
distillation under the framework of learning using privileged information
(LUPI). While the previous methods proposed for LUPI use soft-labels generated
by the teacher, in our proposed method we use embeddings learned by the teacher
to train the student network. We integrate our method in two different
settings: sequential data where the features are divided into multiple segments
throughout time, and non-sequential data where the entire features are treated
as one whole segment. In the non-sequential setting both the teacher and
student networks are comprised of an encoder component and a task header. We
use the embeddings produced by the encoder component of the teacher to train
the encoder of the student, while the task header of the student is trained
using ground-truth labels. In the sequential setting, the networks have an
additional aggregation component that is placed between the encoder and task
header. We use two sets of embeddings produced by the encoder and aggregation
component of the teacher to train the student. Similar to the non-sequential
setting, the task header of the student network is trained using ground-truth
labels. We test our framework on two different audio-visual tasks, namely
speaker recognition and speech emotion recognition and show considerable
improvements over sole audio-based recognition as well as prior works that use
LUPI. | Amirhossein Hajavi, Ali Etemad | 2023-02-06T15:09:34Z | http://arxiv.org/abs/2302.02845v1 | # Audio Representation Learning by Distilling Video as Privileged Information
###### Abstract
Deep audio representation learning using multi-modal audio-visual data often leads to a better performance compared to uni-modal approaches. However, in real-world scenarios both modalities are not always available at the time of inference, leading to performance degradation by models trained for multi-modal inference. In this work, we propose a novel approach for deep audio representation learning using audio-visual data when the video modality is absent at inference. For this purpose, we adopt teacher-student knowledge distillation under the framework of learning using privileged information (LUPI). While the previous methods proposed for LUPI use soft-labels generated by the teacher, in our proposed method we use _embeddings_ learned by the teacher to train the student network. We integrate our method in two different settings: sequential data where the features are divided into multiple segments throughout time, and non-sequential data where the entire features are treated as one whole segment. In the non-sequential setting both the teacher and student networks are comprised of an encoder component and a task header. We use the embeddings produced by the encoder component of the teacher to train the encoder of the student, while the task header of the student is trained using ground-truth labels. In the sequential setting, the networks have an additional aggregation component that is placed between the encoder and task header. We use two sets of embeddings produced by the encoder and aggregation component of the teacher to train the student. Similar to the non-sequential setting, the task header of the student network is trained using ground-truth labels. We test our framework on two different audio-visual tasks, namely speaker recognition and speech emotion recognition. Through these experiments we show that by treating the video modality as privileged information for the main goal of audio representation learning, our method results in considerable improvements over sole audio-based recognition as well as prior works that use LUPI.
Deep Learning, Learning Using Privileged Information, Knowledge Distillation, Multi-modal Data, Audio-visual Representation Learning.
## I Introduction
Deep audio representation learning has recently attracted significant interest, especially in applications such as speaker recognition (SR) [9, 10, 43, 31, 11] and speech emotion recognition (SER) [1, 18, 25]. The goal in deep audio representation learning is to learn embeddings from audio or visual signals, which can be used to retrieve information such as identity or the emotional state of the speaker. This goal is generally best achieved when _multi-modal_ audio-visual inputs are used [31, 19, 2] as opposed to when only a single modality of audio or video is used [9, 10, 43, 23, 24, 11]. Nonetheless, in many real-world scenarios, both modalities may not be simultaneously available at _inference_, resulting in the inability of the model to perform effectively. To tackle this, we pose the question: _how can training with both modalities be performed effectively to benefit inference with a single modality?_
The study done by Vapnik et al. [42] defined information only available during training (and not at inference) as "privileged information". They introduced a new learning paradigm called 'learning using privileged information' (LUPI) in which a secondary SVM model trained on the privileged information helped the main SVM perform better on its task by reducing the complexity of the problem through optimization of the slack variables. This paradigm was later adapted to deep neural networks in [28], showing that LUPI can be performed using the knowledge distillation techniques proposed by Hinton et al. in [15]. In their work, a teacher model was trained on privileged information from a textual modality. The teacher, along with the ground-truth labels, was then used to train the student model to perform image classification.
In this study, to perform uni-modal audio representation
Fig. 1: Overview of the proposed method. Embeddings extracted from the video modality by the teacher are used as privileged information in training the student to boost its ability in learning audio representations. At inference, only the audio modality is present and the student model is tasked with generating audio embeddings which are then used for performing SR and SER.
learning while training with multi-modal audio-visual streams, we propose a novel solution by adopting privileged information and considering video as such. An overview of our method is shown in Figure 1. Our model is built based on teacher-student knowledge distillation to allow for the video stream to be learned alongside the main audio modality during training. First, we train a teacher network using the video stream as the privileged information. Next, our student network is trained using audio as input, with the ground-truth labels and the video embeddings obtained from the teacher serving simultaneously as training targets. By doing so, our student model can operate solely on the audio signals during inference, while having been trained with, and benefited from, both the audio and video modalities. We perform extensive experiments using multiple widely used audio-visual datasets to demonstrate the effectiveness of the proposed framework in exploiting the privileged information for audio representation learning.
In summary, we make the following contributions:
* We propose a new solution for deep audio representation learning for SR and SER that utilizes video as privileged information during the training of the network to improve its performance.
* We perform SR and SER experiments on our model and observe considerable improvements versus uni-modal baselines. We also compare our approach to other studies based on privileged information and show that our method performs better for audio representation learning.
* We provide an analysis of the impact of the privileged information by adjusting its influence on the training of the networks.
The rest of the paper is organized as follows. Section II presents the previous work on privileged information and knowledge distillation. In section III, we present a detailed description of our proposed solution. Afterwards, we describe the performed experiments in detail and report the results in section IV. Finally we conclude the work with a summary and discussion on potential future directions.
## II Related Work
Different approaches have been taken in the literature for learning with the help of privileged information [1, 5, 6, 7, 33, 35, 41, 29, 40, 30, 32, 37, 26]. We can categorize these studies into two main groups: (1) Those that rely on knowledge distillation techniques via teacher-student models for deep representation learning; (2) Those that utilize privileged information without the use of knowledge distillation. In the following, we first review the general concept of knowledge distillation given its relevance in LUPI as well as our method. This is followed by a review of LUPI with and without adopting knowledge distillation. While our work lies in the field of audio representation learning, given the low number of works in the area of using privileged information for audio, we expand our discussion to other modalities to provide a more comprehensive review.
### _Knowledge Distillation_
Knowledge distillation was proposed by Hinton et al. [15] to enable smaller machine learning models (referred to as 'student') to learn from larger machine learning models (referred to as 'teacher'). A large number of studies have since explored the use of knowledge distillation for deep representation learning. Seminal works in this area include [36, 17, 20, 47, 13, 44, 8, 14]. These studies demonstrated that the use of knowledge distillation enables student models to achieve performance competitive with their teachers, while reducing the number of learnable parameters, and hence the computational load and required memory.
Knowledge distillation in general is performed via two main approaches. The first approach is to use the '_soft labels_' obtained from the teacher to train the student [45, 46], while the second approach instead relies on the '_embeddings_' learned by the teacher to train the student [36, 17, 20, 47, 13, 44, 8]. In addition to these two approaches, a few other solutions have also been proposed in the literature. For instance, the activation boundaries of the neurons in the teacher were transferred to the student in [14] to reduce the training time. In [22, 34], knowledge distillation was performed by transferring the attention maps generated by the teacher to the student. This technique helped the student to find the salient areas of the input with the help of the transferred attention maps, which in turn boosted the performance of the student without the need for additional training. In [4], instead of relying only on a single embedding, the knowledge from multiple layers of the teacher was used to train the student, boosting the generalizability of the student during inference.
### _LUPI with Knowledge Distillation_
As one of the earliest solutions for LUPI with deep neural networks, the use of teacher-student knowledge distillation was proposed in [28]. In their work, it was proposed that the teacher can receive its input from the privileged information and its output can be used alongside the ground-truth labels to train the student, which receives its input from the main data. A number of recent works have used similar techniques [1, 5, 6, 7, 33, 35, 41, 29, 40, 30, 32]. The main idea behind these methods has been to use a secondary modality for the input of the teacher model as the source of privileged information.
A very limited number of prior works have targeted audio representation learning while considering '_video_' as the secondary modality [1, 30, 32]. In their solutions, the teacher model takes video frames as input and generates soft-labels which are then used as the only source for training the student. The main aim of these studies is to alleviate the need for labeled training data in the main modality (audio) by training the student models using only the soft-labels generated from a secondary modality (video). While these studies successfully achieve their goal of training audio using video as a [self-] supervisory signal, they do not explore the impact of using the secondary modality as privileged information for helping the networks in learning the main modality. However, in this work we aim to take advantage of the secondary modality to
boost the performance of the networks alongside the labeled data from the main modality instead of avoiding the use of output labels altogether.
### _LUPI without Knowledge Distillation_
A number of studies have also taken approaches other than teacher-student frameworks toward utilizing privileged information for training their networks. In the study performed by [37], multi-task learning was used for training the model. Their proposed network performs action recognition while also aiming to reconstruct the privileged information from a different modality. The main modality used in their study was the video frames of individuals performing specific actions while for the privileged information, the positions of skeletal joints of individuals in the videos were used. The use of privileged information in this work boosted the performance compared to several baselines. While their proposed method proved beneficial for secondary modalities with limited dimensionality, its integration with secondary modalities with high dimensionality, for instance video frames, has not been explored. In [26], dropout masks derived from privileged information were used in order to generalize a DNN. The study used privileged information in the form of image segments obtained from the input to generate heuristic dropout masks. These masks were then applied over the learnt representations of the DNN in order to help the model with generalization. While the method was shown to boost the generalizability of models, the dropout masks are generated using a segment of the original input. Hence this method requires the privileged information to be sourced from the same modality as the original input.
## III Method
Our objective in this study is to train a deep neural network to learn audio representations using audio-visual data under the condition that the video modality is not available during inference/testing. Our approach adopts the paradigm of teacher-student knowledge distillation and LUPI to train a network that can handle this condition. In our approach, the student model operates on the audio modality and is trained using two outputs, one from the ground-truth labels and the other from the embeddings obtained by the teacher model, which operates on the secondary modality. In this section we describe the different components of our proposed method.
### _Preliminaries_
The training data used for supervised learning often comes in the form of tuples \((x_{i},y_{i})_{i\in\{0,\ldots,n\}}\) where \(x_{i}\) is a vector representation of the training sample and \(y_{i}\) is the output label. The aim of the model is to find an optimal network \(\mathcal{F}\) that predicts the labels of the test data \(y_{i}=\mathcal{F}(x_{i})\) with the least amount of error. In this type of training the format of data stays the same during training, testing, and deployment of the model. Occasionally, during the training phase we may have access to additional information other than what is available in the test set. Vapnik et al. referred to this kind of information as privileged information and proposed a technique called "LUPI" to take advantage of this information for training machine learning models more effectively [42].
The LUPI paradigm was first introduced in the context of support vector machine (SVM) [42]. The model, namely SVM+, was shown to perform better than the classical SVM, i.e. SVM trained without privileged information. The LUPI paradigm can also be implemented using teacher-student knowledge distillation. In this view of the paradigm, the original knowledge distillation method [15] is expanded by using the privileged information as the input for the teacher model. Given the tuple \((x_{i},x_{i}^{*},y_{i})\), the teacher model is trained using the tuples of privileged information and label \((x_{i}^{*},y_{i})\) at first. In the next step, soft-labels \(s_{i}\) are obtained for each training sample from the teacher as the predicted output probability, using the privileged information. Finally, any layer \(L\) of the student model is trained using the tuples \((x_{i},y_{i})\) and \((x_{i},s_{i})\) concurrently with the final gradient \(\nabla_{s}\) calculated as follows:
\[\nabla_{s}=(1-\alpha)\frac{\nabla\mathcal{L}_{i}(y_{i}^{\prime},y_{i})}{\nabla L}+\alpha\frac{\nabla\mathcal{L}_{i}(y_{i}^{\prime},s_{i})}{\nabla L}. \tag{1}\]
Here the parameter \(\alpha\) is the imitation parameter that determines how much the student model should follow the teacher.
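As an illustration of how this paradigm is typically realised in practice, the following PyTorch sketch combines a hard-label loss with a distillation loss against the teacher's soft-labels through the imitation parameter \(\alpha\); the use of cross-entropy and KL-divergence as the two loss functions, and the default value of \(\alpha\), are assumptions made only for this example.

```python
# Minimal PyTorch sketch of the mixed objective in Eq. (1): the student is trained
# on a convex combination of the hard-label loss and a soft-label distillation loss.
import torch
import torch.nn.functional as F

def lupi_distillation_loss(student_logits, labels, teacher_soft_labels, alpha=0.5):
    hard_loss = F.cross_entropy(student_logits, labels)                  # L(y', y)
    soft_loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                         teacher_soft_labels, reduction="batchmean")     # L(y', s)
    return (1 - alpha)*hard_loss + alpha*soft_loss

# Backpropagating this combined loss yields the mixed gradient of Eq. (1) for every
# trainable layer; the teacher soft-labels are computed on the privileged input x*
# and detached so that no gradient flows into the (frozen) teacher.
```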
### _Our Solution_
As described earlier, our goal is to develop a framework capable of distilling video data into an audio learner, so that training is improved given the availability of both modalities, while audio alone is used at inference. For this purpose we use teacher-student knowledge distillation in which the teacher network operates on video data, while the embeddings extracted from its intermediate layers are used to help train the student network which operates on audio data. An overview of our method was depicted earlier in Figure 1. In the following subsections, the details of each component of our approach are presented. It should be noted that similar to most works on audio representation learning, we use spectrograms extracted from the audio signals in our model. However, these spectrograms can be considered in two different formats: non-sequential and sequential. The non-sequential representation of the audio considers the entire spectrogram as one single image, while in the sequential version, the spectrograms are divided into smaller segments across time. Accordingly, we design our model for both approaches (non-sequential and sequential) and present our teacher-student method for both versions in the following subsections.
#### Iii-B1 Video Learning
Here we describe the process of video representation learning in our model, which is carried out by the teacher network. The process of training the teacher network is done separately and happens prior to the training of the student network. During the training of the student, the teacher network is frozen and its weights are unchanged.
Let us assume the training samples are represented as a set of tuples \((x_{i},x_{i}^{*},y_{i})\), where \(x_{i}\) is the \(i^{th}\) training sample from the audio modality and \(x_{i}^{*}\) is the corresponding sample from the video modality. In the non-sequential approach, the teacher network is comprised of an encoder, \(F^{T_{enc}}\), accompanied by the task header which includes a Fully Connected (FC) layer followed by softmax activation, denoted by \(F^{T_{head}}\). Here \(F^{T_{enc}}\) generates embeddings for each frame of the video from \(x^{*}\) using
\[E_{i_{j}}^{T}=F^{T_{enc}}(x_{i_{j}}^{*}), \tag{2}\]
where \(j\) is the video frame index. The accompanying \(F^{T_{head}}\) then generates the predicted labels of the model. The final output of the teacher model \(O^{T}\) is accordingly calculated as
\[O_{i_{j}}^{T}=F^{T_{head}}(E_{i_{j}}^{T}). \tag{3}\]
The final output \(O_{i_{j}}^{T}\) comes in the form of a vector with a dimensionality equal to the number of classes considered by the model. Each index of \(O_{i_{j}}^{T}\) contains the calculated probability of \(x_{i_{j}}^{*}\) belonging to the class represented by the index.
In general, when learning audio-video representations, several video frames often correspond to a spectrogram spanning over a period of time. To address this, prior works have proposed the selection of a single frame known as _peak frame_[1, 30, 32] for each given input spectrogram. Inspired by this approach, we select a peak frame embedding in our solution as per the following:
\[E_{i_{peak}}^{T}=E_{i_{j}}^{T}\,\Big|_{\,j=Arg\underset{j}{\max}\left(\max\left(O_{i_{j}}^{T}\right)\right)}, \tag{4}\]
where the frame with the highest calculated probability for its predicted class is selected. The predicted class of each frame is determined by performing an \(Arg\max\) on the output \(O_{i_{j}}^{T}\) of the softmax function, and \(\max(O_{i_{j}}^{T})\) is the probability assigned to that class.
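A minimal PyTorch sketch of this selection step is given below; the tensor shapes are assumptions made for illustration.

```python
# Minimal PyTorch sketch of the peak-frame selection in Eq. (4).
import torch

def peak_frame_embedding(frame_embeddings, frame_probs):
    # frame_embeddings: (num_frames, emb_dim) produced by the teacher encoder
    # frame_probs:      (num_frames, num_classes) softmax outputs O^T of the task header
    peak_scores, _ = frame_probs.max(dim=1)   # probability of each frame's predicted class
    j_peak = torch.argmax(peak_scores)        # frame with the highest such probability
    return frame_embeddings[j_peak]           # E^T_{i_peak}
```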
On the other hand, in the sequential approach, we use the entire video as the input for the teacher network. In this version, an aggregation sub-network, \(F_{agg}^{T}\), is added to the teacher network and is placed between the encoder and the task header. Here \(F^{T_{agg}}\) generates an embedding from the output of the encoder using:
\[E_{i}^{T_{agg}}=F^{T_{agg}}(\{E_{i_{j}}^{T}|0\leq j<N_{i}\}), \tag{5}\]
where \(E_{i_{j}}^{T}\) are the encoder embeddings obtained by Eq.2 and \(N_{i}\) is the number of frames in each video.
When taking the sequential approach, we also divide the input \(x_{i}\) of the audio into fixed sized segments \(x_{i_{k}}\) where \(k\) is the index of the audio segment. As mentioned earlier, spectrograms that span across a period of time often cover multiple frames of video. Therefore, each segment of the audio \(x_{i_{k}}\) will be matched with multiple embeddings \(E_{i_{j}}^{T}\). In the sequential version of the teacher network, each frame does not have a probability score for its class and therefore selection of a peak frame is not possible. To address this issue we define the embedding \(E_{i_{k}}^{T}\) as the average, \(Avg\), of all of the embeddings in the segment:
\[E_{i_{k}}^{T}=Avg(\{E_{i_{j}}^{T}|r\times k\leq j<r\times(k+1)\}), \tag{6}\]
where \(r\) is the number of frames per audio segment. The calculated embedding \(E_{i_{k}}^{T}\) is then matched with the audio segment \(x_{i_{k}}\).
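A minimal PyTorch sketch of this segment-wise averaging is shown below; it assumes the frame embeddings are stacked in a single tensor, and frames that do not fill a complete segment are simply discarded.

```python
# Minimal PyTorch sketch of Eq. (6): average the teacher embeddings of the r video
# frames covered by each audio segment, yielding one averaged embedding per segment.
import torch

def segment_teacher_embeddings(frame_embeddings, r):
    # frame_embeddings: (num_frames, emb_dim); r: video frames per audio segment
    num_segments = frame_embeddings.shape[0] // r
    trimmed = frame_embeddings[:num_segments * r]
    return trimmed.view(num_segments, r, -1).mean(dim=1)   # (num_segments, emb_dim)
```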
#### Iii-A2 Audio Learning
In the non-sequential approach the student network takes in the entirety of the audio signal in a single spectrogram. Figure 2 shows the general scheme for the non-sequential version of the proposed method. In this version, the student model is comprised of an encoder, \(F^{S_{enc}}\), and a task header which consists of an FC layer with a softmax activation, denoted by \(F^{S_{head}}\). The encoder extracts embeddings \(E_{i}^{S}\) from the input \(x_{i}\) using
\[E_{i}^{S}=F^{S_{enc}}(x_{i}). \tag{7}\]
The embedding \(E_{i}^{S}\) is then given to \(F^{S_{head}}\) to predict the output \(y_{i}^{\prime}\). We then define a loss function \(\mathcal{L}_{E}(E_{i}^{S},E_{i_{peak}}^{T})\), which calculates the distance between the embeddings of the encoder component of the student network and the embedding calculated using Eq.2. We define a second loss function \(\mathcal{L}_{Y}(y_{i}^{\prime},y_{i})\), which calculates the distance between \(y_{i}^{\prime}\) and the ground-truth labels \(y_{i}\). The output layer of the student model is trained using only \(\mathcal{L}_{Y}\), whereas the encoder is trained by the gradient \(\nabla_{s}\) calculated using
\[\nabla_{s}=(1-\alpha)\frac{\nabla\mathcal{L}_{Y}(y_{i}^{\prime},y_{i})}{\nabla F^{S_{enc}}}+\alpha\frac{\nabla\mathcal{L}_{E}(E_{i}^{S},E_{i_{peak}}^{T})}{\nabla F^{S_{enc}}}, \tag{8}\]
where \(\alpha\) acts as the imitation parameter. The value of \(\alpha\) defines the weight of the gradients while training the encoder component of the student and ranges from \(0\) to \(1\).
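A minimal PyTorch sketch of one such training step is given below. The module and variable names are placeholders, the choice of mean-squared error for \(\mathcal{L}_{E}\) and cross-entropy for \(\mathcal{L}_{Y}\) is an assumption, and, as a simplification, the combined loss is backpropagated through the entire student, so the task header also receives the \((1-\alpha)\)-scaled label gradient.

```python
# Minimal PyTorch sketch of a non-sequential student update following Eqs. (7)-(8).
import torch
import torch.nn.functional as F

def train_step(student_enc, student_head, optimizer, spectrogram, label,
               teacher_peak_embedding, alpha=0.5):
    emb = student_enc(spectrogram)                       # E^S_i, Eq. (7)
    logits = student_head(emb)                           # y'_i
    loss_y = F.cross_entropy(logits, label)              # L_Y(y'_i, y_i)
    loss_e = F.mse_loss(emb, teacher_peak_embedding)     # L_E(E^S_i, E^T_{i_peak})
    loss = (1 - alpha)*loss_y + alpha*loss_e             # mixed gradient as in Eq. (8)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```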
In the sequential approach, the audio input of the student model is divided into fixed-sized segments across time. Here, an aggregator sub-network \(F^{S_{agg}}\) is added to the student network between the encoder and the task header. The encoder \(F^{S_{enc}}\) generates embeddings \(E_{i_{k}}^{S}\) from each audio segment \(x_{i_{k}}\) using
\[E_{i_{k}}^{S}=F^{S_{enc}}(x_{i_{k}}). \tag{9}\]
The final embedding \(E_{i}^{S_{agg}}\) is then calculated using
\[E_{i}^{S_{agg}}=F^{S_{agg}}(\{E_{i_{k}}^{S}|0\leq k<M_{i}\}), \tag{10}\]
where \(M_{i}\) is the number of audio segments in \(x_{i}\). Lastly, the output of the student model \(y_{i}^{\prime}\) is generated using
\[y_{i}^{\prime}=F^{S_{head}}(E_{i}^{S_{agg}}). \tag{11}\]
Figure 3 shows the two possible approaches for the sequential version of the proposed method for training the student networks: (1) distilling video information at the encoder-level
Fig. 2: The proposed framework for learning privileged information through teacher-student distillation, in non-sequential settings. In this setting the entirety of the audio signal is given to the student as a single spectrogram and the teacher network receives a single frame of the video as input. Here, the student and teacher networks are both comprised of an encoder component and a task header. The embeddings generated by the encoder component of the teacher network are used in partially training the encoder component of the student network using gradients generated by \(\mathcal{L}_{E}\). This is while the encoder component and the task header of the student network receive gradients generated from \(\mathcal{L}_{Y}\).
shown in Figure 3 (left); (2) distilling video information at the aggregator-level shown in Figure 3 (right). In the encoder-level distillation we use the loss function \(\mathcal{L}_{E}(E_{i_{k}}^{S},E_{i_{k}}^{T})\) which calculates the loss between the embedding of each audio segment \(E_{i_{k}}^{S}\) and the teacher embedding \(E_{i_{k}}^{T}\) corresponding to that segment. The encoder component of the student model \(F^{S_{enc}}\) is then trained by the gradients calculated using
\[\nabla_{s}=(1-\alpha)\frac{\nabla\mathcal{L}_{Y}(y_{i}^{\prime},y_{i})}{\nabla F^{S_{enc}}}+\alpha\sum_{k=0}^{M_{i}}\frac{\nabla\mathcal{L}_{E}(E_{i_{k}}^{S},E_{i_{k}}^{T})}{\nabla F^{S_{enc}}}, \tag{12}\]
while the aggregator and the output layer of the student network are trained only using gradients calculated by the loss function \(\mathcal{L}_{Y}(y_{i}^{\prime},y_{i})\).
In the aggregator-level distillation we use the loss function \(\mathcal{L}_{AG}(E_{i}^{S_{agg}},E_{i}^{T_{agg}})\), which calculates the distance between the embeddings generated by the aggregator component of the student model and the embedding \(E_{i}^{T_{agg}}\) calculated using Eq. (5). The output layer of the student network is trained using only \(\mathcal{L}_{Y}(y_{i}^{\prime},y_{i})\), while the rest of the pipeline is trained by the gradients calculated using
\[\nabla_{s}=(1-\alpha)\frac{\nabla\mathcal{L}_{Y}(y_{i}^{\prime},y_{i})}{\nabla F^{S_{agg}}}+\alpha\frac{\nabla\mathcal{L}_{AG}(E_{i}^{S_{agg}},E_{i}^{T_{agg}})}{\nabla F^{S_{agg}}}. \tag{13}\]
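The two variants can be sketched in PyTorch as follows; as before, mean-squared error and cross-entropy are assumed loss functions, and scaling the label loss by \((1-\alpha)\) throughout the student is a simplification of Eqs. (12) and (13).

```python
# Minimal PyTorch sketches of the encoder-level (Eq. 12) and aggregator-level (Eq. 13)
# distillation objectives for the sequential student.
import torch
import torch.nn.functional as F

def encoder_level_loss(segment_embs, teacher_segment_embs, logits, label, alpha):
    # one embedding distance per audio segment, summed over segments
    loss_e = sum(F.mse_loss(segment_embs[k], teacher_segment_embs[k])
                 for k in range(len(segment_embs)))
    return (1 - alpha)*F.cross_entropy(logits, label) + alpha*loss_e

def aggregator_level_loss(agg_emb, teacher_agg_emb, logits, label, alpha):
    # a single embedding distance on the aggregated, utterance-level embedding
    loss_ag = F.mse_loss(agg_emb, teacher_agg_emb)
    return (1 - alpha)*F.cross_entropy(logits, label) + alpha*loss_ag
```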
Fig. 3: The proposed framework for learning privileged information through teacher-student distillation. In the sequential setting of our method, the audio signal is divided into multiple same-sized sections throughout time. Each section is then given to the encoder component of the student network. The embeddings generated by the encoder component are then collected and passed onto an aggregator which extracts time dependencies from these embeddings. The teacher network on the other hand generates embedding for each video frame as well as an overall embedding of the entire video. In the encoder-level implementation of our method (left), part of the gradients used for training the encoder of the student network are obtained from \(\mathcal{L}_{E}\), which compares the embeddings generated by the encoder of the student network and embeddings generated by the encoder of the teacher network. In aggregator-level implementation of the proposed method (right), both encoder and aggregator components of the student network receive gradients from privileged information through \(\mathcal{L}_{AG}\). This loss compares the embeddings generated by the aggregator of the student network with the embeddings generated by the aggregator of the teacher network. In both implementations, the other part of the gradients which are also used for the remainder of the student network are generated using ground truth labels.
### _Implementation Details_
The networks in the non-sequential version of our solution consist of an encoder and an output layer. We use 3 architectures based on VGG [38], ResNet [12], and Squeeze-and-Excitation (SE) Networks [16] to implement both the teacher and student networks. For the first benchmark we use a standard VGG16 network for the teacher and a VGG-based network customized for audio in the student. Table I presents the details of the VGG-based student network. In this network each convolutional layer is coupled with a batch-normalization layer. We use a stride length of 1 in the convolutional and batch-normalization layers throughout the network. For the maxpooling layers we use a filter size of (2, 2) and a stride length of 2. We add 2 FC layers with 512 neurons after the last maxpooling layer. The output of the last FC layer is used as the student embedding \(E_{s}\), described earlier in Section III. We use Rectified Linear Unit (ReLU) as the activation function for the convolutional layers and the last FC layer. Lastly, for the task header we use an FC layer with the number of neurons equal to the number of classes in the experiments. We then use a softmax function for this layer.
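A minimal PyTorch sketch of such a VGG-based student is given below. The channel widths, the \((3,3)\) convolution filters, and the global average pooling placed before the FC layers are illustrative assumptions; Table I specifies the exact configuration used in our experiments.

```python
# Minimal PyTorch sketch of the non-sequential VGG-based student: conv + batch-norm
# pairs with stride 1, (2, 2) max-pooling with stride 2, two 512-unit FC layers, and
# an FC task header whose softmax is applied in the loss or at inference.
import torch
import torch.nn as nn

def conv_bn(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class VGGStudent(nn.Module):
    def __init__(self, num_classes, channels=(64, 128, 256, 512)):
        super().__init__()
        blocks, in_ch = [], 1                        # single-channel spectrogram input
        for out_ch in channels:
            blocks += [conv_bn(in_ch, out_ch), nn.MaxPool2d(kernel_size=2, stride=2)]
            in_ch = out_ch
        self.encoder = nn.Sequential(*blocks, nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                     nn.Linear(in_ch, 512), nn.Linear(512, 512), nn.ReLU())
        self.head = nn.Linear(512, num_classes)

    def forward(self, x):
        emb = self.encoder(x)                        # student embedding used by L_E
        return emb, self.head(emb)
```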
For the second benchmark we utilize a standard ResNet34 for the teacher network and a ResNet-based model customized for audio, in the student network. Details of the ResNet-based student model are presented in Table II. The first convolutional layer of the student model uses a filter size of (7, 7) with a stride length of 1. This layer is followed by a maxpooling layer with a filter size of (2, 2) and a stride length of 2. Afterwards, we use _residual blocks_ in the student model. Each block contains 3 sets of coupled convolutional and batch-normalization layers and a shortcut connection that links the input of the block to its output. The first and last convolutional layers in each block have a filter size of (1, 1) and the second convolutional layer has a filter size of (3, 3) with a stride of 1. Each block is then repeated multiple times as shown in Table II. We add a maxpooling layer with a filter size of (2, 2) and a stride of 2 after each block. The last maxpooling layer is then followed by 2 FC layers, each with 512 neurons and an FC layer in the task header with neurons equal to the number of classes. Similar to the first benchmark, we use ReLU as the activation functions for the convolutional layers and the last FC layer in the encoder, while softmax is used for the FC layer in the task header.
In the third benchmark we construct the networks using SE blocks. These blocks use the same layer format of the residual blocks with the difference that an SE module is added to each block. The SE module consists of a global pooling layer, which extracts channel information, 2 FC layers with a ReLU activation function in between, and a sigmoid activation function following the FC layers. We implement the teacher network by replacing the residual blocks from a standard ResNet34 network with SE blocks. The student network is implemented by replacing the residual blocks in the network from the second benchmark with SE blocks.
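A minimal PyTorch sketch of this SE module is shown below; the reduction ratio between the two FC layers is an assumption, as it is not fixed by the description above.

```python
# Minimal PyTorch sketch of the SE module: global pooling for channel statistics,
# two FC layers with a ReLU in between, and a sigmoid gate re-weighting the channels.
import torch
import torch.nn as nn

class SEModule(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction),
                                nn.ReLU(inplace=True),
                                nn.Linear(channels // reduction, channels),
                                nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights
```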
In the sequential version of our solution, the networks comprise an encoder component and an aggregator component. Similar to the non-sequential format, we perform our experiments using 3 benchmark networks. For the encoder component of our teacher and student networks we use the teacher and student networks used in the non-sequential version, respectively. However, the task headers of the student networks are removed and the encoder component generates an embedding vector with a size of 512 for each segment of the input. The aggregator component of the networks includes 2 BiLSTM layers accompanied by an attention module. Table III shows the details of the student networks for the experiments in the sequential version of the proposed method.
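A sketch of this aggregator is given below; the hidden size and the additive attention formulation are assumptions, since the text specifies only two BiLSTM layers followed by an attention module over the 512-dimensional segment embeddings.

```
import torch
import torch.nn as nn

class BiLSTMAggregator(nn.Module):
    """Aggregates per-segment encoder embeddings with two BiLSTM layers and a
    simple attention pooling over the segment axis."""
    def __init__(self, embed_dim=512, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)

    def forward(self, segments):               # (batch, n_segments, embed_dim)
        h, _ = self.lstm(segments)             # (batch, n_segments, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)
        return (weights * h).sum(dim=1)        # aggregated utterance embedding
```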
## IV Experiments
### _Datasets and Data Preparation_
The aim of the experiments is to evaluate the change in performance between learning audio representations alone and learning them with video as privileged information. The experiments are done on two different tasks: SR and SER. We use 3 publicly available datasets, namely **VoxCeleb**[31], Ryerson Audio-Visual Database of Emotional Speech and Song (**RAVDESS**) [27], and **IEMOCAP**[3].
We use the audio-video version of the VoxCeleb dataset [31] for SR. In this task we aim to identify the speaker of a given utterance among a set of known speakers. This version of the VoxCeleb dataset comprises 21,819 audio-visual recordings from 1,211 individuals. We use 70% of the recordings from all the 1,211 individuals for training, 10% for validation, and 20% for testing. We use the spectrogram representations of audio as inputs to the student model. The frequency features are extracted from the audio using Short-term Fourier Transform (STFT) with a window of 25 \(ms\). The process is repeated across the entire utterance with a window overlap of 10 \(ms\). The duration of the utterances is not the same for all of the recordings. In order to rule out the complications caused by the variable length of the inputs, all the recordings are cropped at 5-second durations when training the models, resulting in spectrograms of size \(257\times 500\). It should be noted that the original lengths of the recordings are used for inference. The shorter utterances are padded using repetition to match the desired length. The videos are recorded at a frame-rate of 25 frames per second. Each frame is annotated using automated face detection models, giving the location and boundaries of the face of the speaking person. The sizes of the boundaries are not equal across the dataset; therefore, we crop the images and resize them to a fixed dimension of \(224\times 224\). For evaluation of the sequential implementation of the proposed method, the recordings are divided into 1-second segments. This results in a sequence of 5 smaller spectrograms with a dimensionality of \(257\times 100\) for each utterance and 5 sets of frames for each video, with 25 frames in each set.
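The sketch below illustrates this preprocessing. A 16 kHz sampling rate and an FFT size of 512 (which yields 257 frequency bins) are assumptions chosen to be consistent with the reported \(257\times 500\) spectrogram size, and the 10 \(ms\) value is treated as the analysis hop; only the 25 \(ms\) window, the 5-second crop, and padding by repetition are taken directly from the text.

```
import numpy as np
import librosa

def utterance_to_spectrogram(path, sr=16000, clip_seconds=5):
    """Load an utterance, pad it by repetition if it is too short, crop it to
    a fixed duration, and return its STFT magnitude spectrogram."""
    y, _ = librosa.load(path, sr=sr)
    target = sr * clip_seconds
    if len(y) < target:                            # pad short utterances by repetition
        y = np.tile(y, int(np.ceil(target / len(y))))
    y = y[:target]                                 # crop to 5 seconds for training
    spec = librosa.stft(y, n_fft=512,
                        win_length=int(0.025 * sr),   # 25 ms window
                        hop_length=int(0.010 * sr))   # 10 ms hop
    return np.abs(spec)                            # approximately 257 x 500
```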
For SER we use two datasets, RAVDESS [27] and IEMOCAP [3]. In this task, we aim to identify the emotional state of the speaker of an utterance and classify that state into different discrete emotion categories namely _Sad, Happy, Fearful, Disgusted, Surprised, Angry_, and _Neutral_. The RAVDESS dataset is comprised of recordings from 24 participants. We use cross-validation through leave-one-subject-out for training and validating our method and report the mean accuracy.
Participants are asked to deliver a sentence with 7 different emotions in two forms: normal speech and song. Each actor performs a sentence 60 times in normal speech and 44 times in singing voice, with one actor exempted from singing. The total number of video recordings used for our experiments is 2,452. The video clips are recorded under controlled conditions without any environmental noise. The length of the recordings is fixed at 4 seconds; therefore, no additional padding or cutting is performed on the input. The frame rate of the recorded videos is set to 25 frames per second. In most of the frames, the face of the speaker is located in the center of the frame and covers at most 50% of the frame. Therefore, we crop the frames in the center and resize the resulting image to the fixed dimensions of \(224\times 224\).
Lastly, we also use IEMOCAP for evaluating our method on SER. This dataset contains a total of 6 thousand audio-visual recordings performed by 10 individuals. We use 5-fold cross-validation for evaluation of the proposed method and report the mean accuracy. The recordings contain single improvised or scripted sentences uttered by each actor. Each utterance is annotated by 3 different people and categorized into four emotional categories of _Sad_, _Happy_, _Angry_, and _Neutral_. The lengths of the recordings are not standard throughout the dataset. Therefore, we fix the length of the recordings to 4 seconds by cutting the longer utterances and padding the shorter utterances by repetition.
### _Training Details_
The teacher and student networks are trained for 50 epochs on the same dataset. For the optimizer we use the Adam optimizer [21] with \(\beta_{1}=0.9\) and \(\beta_{2}=0.99\). We use cyclical learning rates [39] to train the networks, with an initial learning rate of \(10^{-4}\). We choose cyclical learning rates in order to decrease the probability of getting trapped in local minima. All networks are trained on a single Nvidia Titan RTX (24 GB vRAM) GPU with a batch size of 32.
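A sketch of this optimization setup is shown below; the cycle bounds and step size passed to the scheduler are assumptions, since the text reports only the initial learning rate and the use of cyclical scheduling.

```
import torch

model = torch.nn.Linear(512, 10)   # placeholder for the student network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.99))
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer, base_lr=1e-5, max_lr=1e-4,      # assumed cycle bounds
    step_size_up=2000, cycle_momentum=False)   # Adam has no momentum to cycle

for batch_idx in range(10):        # per-batch loop; forward/backward omitted
    optimizer.step()
    scheduler.step()
```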
### _Baselines_
We compare our method with two re-implemented baselines: (1) "Soft-label distillation", which uses the soft labels generated by the teacher network from the video modality to train the student network. This approach has been used in knowledge distillation studies such as [1, 30, 32]; in this baseline the parameter \(\alpha\) (see Equation 1) determines how much the student model should follow the teacher. (2) "Multitask Learning", which we described in Section II-C. This approach has been previously used in [37] for LUPI to perform action recognition from videos. In this case, the student model is trained using the ground-truth labels and gradients returning from a secondary decoder component, which is tasked with generating a representation of the training sample in the feature space of the secondary modality. Here the parameter \(\alpha\) represents the weight that is put on the gradients coming from the decoder component.
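For reference, a minimal sketch of the soft-label baseline objective is given below; Equation 1 is not reproduced in this section, so the temperature-free KL formulation is an assumption, with \(\alpha\) weighting the teacher term against the ground-truth cross-entropy.

```
import torch.nn.functional as F

def soft_label_loss(student_logits, teacher_logits, labels, alpha):
    """Weighted combination of cross-entropy on ground-truth labels and a KL
    term that pulls the student towards the teacher's soft labels."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits, dim=-1),
                  F.softmax(teacher_logits, dim=-1),
                  reduction="batchmean")
    return (1 - alpha) * ce + alpha * kl
```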
### _Performance_
**Speaker Recognition.** We use identification accuracy and equal error rate to evaluate the performance of the student model when the privileged information has been integrated into the framework.
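For completeness, the sketch below shows one standard ROC-based way to compute the equal error rate from verification scores; it is a conventional formulation rather than code taken from our implementation.

```
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """EER: the operating point where the false-acceptance rate equals the
    false-rejection rate. `labels` are 1 for target trials, 0 otherwise."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2
```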
We exclude other works such as [26] because their proposed method requires the privileged information and the original training data to be from the same modality. Figure 4 shows the results of our experiments on the performance of the student models after using the proposed method and compares them with the baseline methods for different values of \(\alpha\) (imitation parameter) and different model architectures. The figure shows that our proposed method has a positive impact on the performance of the student network for all values of \(\alpha\). Moreover, we observe that the baseline methods show a negative impact when \(\alpha\) is increased. This shows that while the baseline models exhibit successful performances when video is used to provide
Fig. 4: A comparison between the effect of using our proposed method for LUPI versus using Soft-label and multitask training for speaker identification in relation to different values of \(\alpha\) in: (a) Non-Sequential settings; (b) Sequential settings.
supervision to the audio in the absence of ground-truth labels [30, 1, 32], they do not provide any benefits for the scenario where supervised training is performed with both modalities but inference is performed only on audio.
It can also be observed that our method has the highest impact when \(\alpha\) is either \(0.5\) or \(0.6\). This indicates that the best performance gain is achieved when the weight of the privileged information distillation on training of the networks is almost equal to that of the ground-truth labels.
Table IV shows the results of our experiment on integrating our method into different architectures for speaker identification on the video version of the VoxCeleb dataset. We present the accuracy of the teacher \(Acc_{T}\) for the person identification task using the video modality. This allows us to investigate the impact of our proposed method on the student for different teachers with varying performances. We also show the accuracy of the student network without distillation of privileged information so that we can better observe the performance gain using the proposed method and compare it to other methods. Lastly, we show the accuracy of the student network after distillation of privileged information and compare it with the student model without distillation (\(\Delta Acc_{S}\)). As shown by the results, we observe a substantial performance increase in student networks when using the proposed method. The highest performance gain is obtained when using the sequential networks with VGG-based encoders and distillation of privileged information on the aggregator. This occurs where the gap between teacher and student accuracy is at its largest, i.e., 9.09%. We also observe that the highest performance gain relative to the difference in teacher and student performance occurs with the sequential network with a ResNet-based encoder and privileged information distillation on the aggregator.
Table V presents the results of our experiments for the benchmark networks for speaker verification on the VoxCeleb1[31] standard test set which includes recordings from 40 speakers outside of the training set. We present the error rate of the student model by \(EER_{S}\), before and after distillation, and compare the performance of the student model at these two points by calculating the difference and normalizing it by the EER w/o distillation (\(\Delta EER_{S}\%\)). We observe that the highest decrease in the error rate has been obtained by the ResNet-based encoders when the non-sequential implementation is used, indicating that these encoders often benefit more from being trained by the teacher models compared to other networks.
**Emotion Recognition.** We use unweighted accuracy as the metric to evaluate the performance of the networks trained using our method for SER on RAVDESS. We compare our method to the re-implemented baselines described earlier for all the values of \(\alpha\). Figure 5 shows the results of this experiment. We observe that while our method exhibits a positive impact on the performance of the student networks, the baseline methods have a negative impact. This further shows that while the baseline models that utilize the video
as the only source of supervision for learning audio are successful when the ground-truth labels are not present, they do not improve the performance of deep neural networks when training is done using both modalities but only audio is available at inference.
We also extend our experiments by comparing the performance of the proposed method integrated into different architectures. Table VI shows the results of this evaluation. We include the accuracy of the teacher network along with the accuracy of the student before and after distillation. It can be observed that, using our method, the performance of the student models improves, and the highest increase in performance is achieved when the non-sequential student network using an SEResNet-based architecture is employed. We can also observe a similar behaviour to that of the previous experiment when comparing our method with the baselines.
Lastly, we evaluate our method on the IEMOCAP dataset. In this experiment we intend to show the effect of our method in cases where the accuracy of the teacher network is _lower_ than that of the student network. We compare the proposed method to the baseline methods described earlier for all the values of \(\alpha\). Figure 6 shows the results of this experiment. As shown in the figure, when using the proposed method, the performance of the student networks is not negatively affected by weaker teacher networks for low values of \(\alpha\), whereas in the baseline methods, the negative impact is observed from very small values of \(\alpha\). Table VII shows the performance of our method integrated into different architectures. We observe that the best performance is achieved using the SEResNet architecture equipped with BiLSTM layers, and despite the teacher having a lower performance than the student, the proposed method does not negatively affect the performance of this network.
## V Conclusion and Future Work
We use teacher-student knowledge distillation for LUPI in order to take advantage of both audio and video inputs for training deep neural networks, while only using audio at inference. In our framework, embeddings are first extracted from the video input using a teacher model. The embeddings alongside the ground-truth labels are then used to train the
Fig. 5: A comparison between the effect of using our proposed method for LUPI versus using Soft-label and multitask training for SER on RAVDESS in relation to different values of \(\alpha\) in: (a) Non-Sequential settings; (b) Sequential settings.
student. We integrate our method in two different settings for non-sequential and sequential data. In the non-sequential setting, both the teacher and student networks are constructed using an encoder and a task header. We use the embeddings generated by the encoder of the teacher to train the encoder of the student, while the task header of the student is trained using the ground-truth labels. In the sequential setting, an additional aggregation component is introduced to the teacher and student networks, which is placed between the encoder and task header. We use two sets of embeddings produced by the encoder and aggregation component of the teacher to train the encoder and aggregation component of the student, respectively. Similar to the non-sequential setting, the task header of the student is trained using ground-truth labels. By performing experiments on the two tasks of SR and SER we show that our proposed framework leads to considerable performance gains in the student compared to previous studies. While the benchmark models rely on different aspects of the input for SR and SER, and thus different architectures exhibit different performances for each task, our method consistently improves the performance of the benchmarks. In summary, our approach opens a new path towards integration of LUPI by means of knowledge distillation into deep audio representation learning using audio-visual data, when only audio is available at inference.
Our work also introduces a new set of challenges. An immediate next step would be to study the use of generative models such as Generative Adversarial Networks and Variational Autoencoders for creating the embeddings from the teacher model with better domain adaptation and generalization. More recent and upcoming approaches such as normalizing flows can also be explored in this context.
|
2308.09793 | Towards a Modular Architecture for Science Factories | Advances in robotic automation, high-performance computing (HPC), and
artificial intelligence (AI) encourage us to conceive of science factories:
large, general-purpose computation- and AI-enabled self-driving laboratories
(SDLs) with the generality and scale needed both to tackle large discovery
problems and to support thousands of scientists. Science factories require
modular hardware and software that can be replicated for scale and
(re)configured to support many applications. To this end, we propose a
prototype modular science factory architecture in which reconfigurable modules
encapsulating scientific instruments are linked with manipulators to form
workcells, that can themselves be combined to form larger assemblages, and
linked with distributed computing for simulation, AI model training and
inference, and related tasks. Workflows that perform sets of actions on modules
can be specified, and various applications, comprising workflows plus
associated computational and data manipulation steps, can be run concurrently.
We report on our experiences prototyping this architecture and applying it in
experiments involving 15 different robotic apparatus, five applications (one in
education, two in biology, two in materials), and a variety of workflows,
across four laboratories. We describe the reuse of modules, workcells, and
workflows in different applications, the migration of applications between
workcells, and the use of digital twins, and suggest directions for future work
aimed at yet more generality and scalability. Code and data are available at
https://ad-sdl.github.io/wei2023 and in the Supplementary Information | Rafael Vescovi, Tobias Ginsburg, Kyle Hippe, Doga Ozgulbas, Casey Stone, Abraham Stroka, Rory Butler, Ben Blaiszik, Tom Brettin, Kyle Chard, Mark Hereld, Arvind Ramanathan, Rick Stevens, Aikaterini Vriza, Jie Xu, Qingteng Zhang, Ian Foster | 2023-08-18T19:47:59Z | http://arxiv.org/abs/2308.09793v2 | # Towards a Modular Architecture for Science Factories+
###### Abstract
Advances in robotic automation, high-performance computing (HPC), and artificial intelligence (AI) encourage us to conceive of _science factories_: large, general-purpose computation- and AI-enabled self-driving laboratories (SDLs) with the generality and scale needed both to tackle large discovery problems and to support thousands of scientists. Science factories require modular hardware and software that can be replicated for scale and (re)configured to support many applications. To this end, we propose a prototype modular science factory architecture in which reconfigurable _modules_ encapsulating scientific instruments are linked with manipulators to form _workcells_, that can themselves be combined to form larger assemblages, and linked with distributed computing for simulation, AI model training and inference, and related tasks. _Workflows_ that perform sets of actions on modules can be specified, and various _applications_, comprising workflows plus associated computational and data manipulation steps, can be run concurrently. We report on our experiences prototyping this architecture and applying it in experiments involving 15 different robotic apparatus, five applications (one in education, two in biology, two in materials), and a variety of workflows, across four laboratories. We describe the reuse of modules, workcells, and workflows in different applications, the migration of applications between workcells, and the use of digital twins, and suggest directions for future work aimed at yet more generality and scalability. Code and data are available at [https://ad-sdl.github.io/wei2023](https://ad-sdl.github.io/wei2023) and in the Supplementary Information.
## 1 Introduction
We coin the term _science factory_ to denote a facility in which pervasive automation and parallelism allow for the integrated application of experiment, computational simulation, and AI inference to challenging discovery problems (see Figure 1) without bottlenecks or human-induced delays. Such systems promise greatly accelerated progress in many domains of societal importance, from clean and plentiful energy to pandemic response and climate change [6, 27].
Science factories require scale, generality, and programmability in order to support large scientific campaigns and achieve economies of scale for routine tasks. These are familiar concerns in conventional manufacturing, and also for HPC centers and commercial clouds [10], which may scale to millions of processing cores and support thousands of users. We seek to develop methods for the construction of science factories that are similarly scalable, general-purpose, and programmable.
Large systems of any type are typically constructed from _modules_, simpler subsystems that can be designed and constructed independently and then combined to provide desired functionality [9]. A key concept in modular design is to hide implementation complexities behind simple interfaces [46]. In the science factory context, modules can possess both physical and digital characteristics, and thus their interfaces need to encompass both form factor and programmatic elements.
Our investigations of these issues have led us to develop designs and prototype implementations for several elements of a modular science factory architecture. These include a six-function programmatic module interface; an associated (optional) physical form factor, the cart;
Fig. 1: Accelerated discovery requires integrated simulation, inference, and experiment.
methods for incorporating experimental apparatus into modules and for combining modules into workcells; methods for integrating with other elements of research infrastructure, such as data repositories, computers, and AI models; notations for specifying module and workcell configurations; methods for defining workflows and applications; and systems software for running applications on different workcells.
In the sections that follow, we describe these various elements of a science factory architecture and the results of experiments in which we employ a prototype implementation to run biology and materials science applications. We first provide some background in Section 2. Then, we introduce the concepts and mechanisms that we have developed to support modular architecture (Section 3); describe our experiences applying these methods in applications in biology and materials science (Section 4); discuss experiences and lessons learned (Section 5); and finally conclude and suggest future directions (Section 6). Additional details and pointers to code are provided in the Supplementary information.
Much of the work reported here has been conducted in Argonne's Rapid Prototyping Lab (RPL) [49], a facility established to enable collaborative work on the design, development, and application of methods and systems for autonomous discovery.
## 2 Background
Automation has long been applied in science [32, 43] to increase throughput, enhance reliability, or reduce human effort. High-throughput experimentation systems are widely used to screen materials [15, 24] and potential drugs [50, 65, 70] for desirable properties. Autonomous discovery systems, in which experiments are planned and executed by decision algorithms without human intervention [2, 6, 12, 35, 36, 41, 56, 57, 58], are potentially the next step in this trajectory. In principle, such systems can enable faster, more reliable, and less costly experimentation, and free human researchers for more creative pursuits. However, the adoption of autonomous platforms in science has thus far been limited, due in part at least to the diversity of tasks, and thus the wide variety of instruments, involved in exploratory research. Success going forward, we believe, requires substantial increases in scale and generality (for economies of scale), autonomy (for sustained hands-off operations), programmability (for flexibility), extensibility (to new instruments), and integration with computing and data resources, as well as resilience, safety, and security. These are all issues that we address in our work.
As illustrated in Figure 2, we can identify a continuum of flexibility in automation approaches. In _integrated_ automation, a specialized device is manufactured to perform a specific task, such as for high-throughput characterization [38]. Such devices are not intended to be repurposed to other tasks. In _fixed_ automation, devices are connected in a fixed configuration; here, retooling for a new application may involve substantial design and engineering. In _flexible_ automation [33, 52, 69], devices in fixed locations are connected by programmable manipulators that can move materials to any device within their reach; thus, retooling for a new application requires only substituting devices and reprogramming manipulators. In _reconfigurable_ automation, reconfiguration is automated. (Reconfiguration, a feature of early computers [37], is also employed in microfluidics [22, 28, 45, 62].) In _mobile_ automation, mobile robots are used to route materials to devices in arbitrary positions; thus, only programming is required to retool for a new application or environment [12]. Finally, in the oxymoronic but sometimes useful _human_ automation case, humans handle movement of materials between robotic stations, an approach used, for example, in Emerald Cloud Lab [51]. In general, flexibility increases from left to right, and speed and reliability from right to left (Figure 2). Such approaches can be combined, as in Amazon's automated warehouses, in which humans are engaged only when robots fail.
Early autonomous discovery systems (e.g., the influential Adam [27]) were typically specialized for a single class of problems. Both economics and the inherent curiosity of scientists now demand multi-purpose systems that can easily be retargeted to different applications--and thus motivate solutions further to the right along the continuum of Figure 2. Given a research goal, knowledge base, and set of appropriately configured devices,
Fig. 2: Automation systems span a continuum of flexibility, speed, and reliability, from the integrated (e.g., as shown here, a microfluidic laboratory) to the fixed (e.g., Adam [27]), flexible (e.g., a bio workcell at Argonne), reconfigurable (e.g., a “cart” in Argonne’s RPL), mobile (e.g., Liverpool’s mobile robotic chemist [12]), and manual.
these systems may work iteratively to: 1) formulate hypotheses relevant to their goal; 2) design experiments to test these hypotheses, ideally based on existing protocols encoded in a reusable form [3, 48, 58, 60]; 3) manage the execution of experiments on available devices; and 4) integrate new data obtained from experiments into their knowledge base.
The third of these tasks involves automated execution of multi-step experimental protocols on multiple devices, with each step typically taking materials and/or data from previous steps as input. To avoid an explosion in the number of inter-device adapters, we want common physical and digital form factors for materials and data, respectively. Also important are uniform software interfaces, to simplify integration of new devices and reuse of code.
Conventional physical form factors are commonly used for handling samples (e.g., test tubes, multi-well plates, microcentrifuge tubes, Petri dishes, cuvettes) and for managing the experimental environment (e.g., microfluidics, Schlenk lines [19], glove boxes, fume hoods). Apparatus that are to interoperate in an SDL must either employ the same conventions or incorporate adapters, e.g., to move samples from one container to another or to move samples in and out of controlled environments.
Digitally, we need methods for specifying the actions to be performed (e.g., transfer sample, open door, turn on heater, take measurement) and for translating an action specification into commands to physical device(s). Various representations for actions have been proposed, with domain of applicability ranging from a single device [16] to classes of experiment: e.g., the chemical description language XDL [58] is an executable language for programming various experimental processes in chemistry, such as synthesis; ChemOS [48, 53] and the Robot Operating System (ROS)-based [47] ARChemist [21] have similar goals. (ROS is a potential common substrate for SDLs, but with limitations as we discuss in Section 5.) Aquarium [63] defines Aquarium Workflow Language and Krill for sequencing steps and granular control of apparatus, respectively. BioStream [60] and BioCoder [3] support the representation of biology protocols. Li et al. [31] describe a notation for materials synthesis.
The execution of a specification requires generating suitable commands for underlying devices: typically via digital communication, but in some cases, via robotic manipulation of physical controls [69]. For example, XDL procedures, which express protocols in terms of reagents, reaction vessels, and steps to be performed (e.g., add, stir, heat), are compiled to instructions for a Chemputer architecture [4, 19, 58]. Depending on the level of specification, general-purpose robotic methods (e.g., path planning) may be relevant at this stage.
An important concern in any robotic system, and certainly in automated laboratories, is monitoring to detect unexpected results: something that humans are often good at, but that can be hard to automate. Reported error rates in materials science experiments, of from 1 per 50 [34] to 1 per 500 [12] samples, show that automated detection and recovery are important. In other contexts, unexpected phenomena may be indicators of new science.
Autonomous discovery systems must also engage with computing and data resources. Vescovi et al. [61] survey and describe methods for implementing computational flows that link scientific instruments with computing, data repositories, and other resources, leveraging Globus cloud-hosted services for reliable and secure execution. The materials acceleration operating system in cloud (MAOSIC) platform [30] hosts analysis procedures in the cloud.
## 3 Towards a modular architecture
Our overarching goal is to create scalable, multi-purpose SDLs. To this end, we require methods that support: the integration of a variety of scientific instruments and other devices, and the reconfiguration of those devices to support different applications (to be _multi-purpose_); the incorporation of AI and related computational components (to be _autonomous_); and expansion of capacity and throughput by replicating components (to be _scalable_).
Modularity of both hardware and software is vital to achieving these capabilities. A modular design defines a set of components, each of which hides complexities behind an abstraction and interface [9, 46]. In principle, modularity can facilitate the integration of new components (by implementing appropriate interfaces), rapid creation of new applications (by reusing existing components), system evolution (by improving components behind their interfaces), reasoning about system behavior (by focusing on abstractions rather than implementation details), and system scaling (by replicating components).
Realizing modularity in SDLs is challenging due to the wide variety of physical and logical resources (instruments, robots, computers, data stores, digital twins, AI agents, etc.) that scientists may wish to employ, and the many experimental protocols that they may want to implement on those resources--all limited only by budget and human, perhaps AI-assisted [59], ingenuity.
\begin{table}
\begin{tabular}{l|l}
**Operation** & **Description** \\ \hline about & Return description of module and its actions \\ action & Perform specified action \\ reset & Reset the module \\ resources & Return current resource levels, if applicable \\ state & Return state: “IDLE,” “BUSY,” or “ERROR” \\ admin & Module-specific actions: e.g., _home_ \\ \end{tabular}
\end{table}
Table 1: Our science factory module interface defines six operations.
Fig. 3: Architecture concepts introduced in Section 3.1. An application can engage workflows and compute and data services. A workflow involves actions on modules, grouped in workcells. A science factory would comprise many workcells, plus other components.
\begin{table}
\begin{tabular}{l|l|l|l}
**Name** & **Area** & **Description** & **Section** \\ \hline Color picker & Education & Mix liquid colors to match a target color & 4.1 \\ PCR & Biology & Polymerase chain reaction & 4.2 \\ Growth assay & Biology & Treatment effects on cellular growth & 4.3 \\ Electrochromic & Materials & Formulation, characterization of new polymer solutions & 4.4 \\ Pendant drop & Materials & Liquid sample acquisition from synchrotron beamline & 4.5 \\ \end{tabular}
\end{table}
Table 4: Five applications that we use to motivate and evaluate the architecture and implementation presented in this article.
\begin{table}
\begin{tabular}{l|l|l}
**Class** & **Module** & **Actions** \\ \hline \multirow{3}{*}{Synthesis} & ot2 & run\_protocol \\ & solo & run\_protocol \\ & chemspeed & open\_lid, close\_lid, run\_program \\ \hline \multirow{2}{*}{Plate prep} & a4s\_sealer & seal \\ & brooks\_peeler & peel \\ \hline \multirow{2}{*}{Heat} & biometra & open\_lid, close\_lid, run\_program \\ & liconic & get\_current\_temp, set\_target\_temp, get\_current\_humidity, get\_target\_humidity, set\_target\_humidity, begin\_shake, end\_shake, load\_plate, unload\_plate \\ \hline \multirow{3}{*}{Measure} & camera & grab\_image \\ & hidex & open\_lid, close\_lid \\ & tecan & measure\_sample \\ \hline \multirow{3}{*}{Manipulate} & platecrane & transfer, remove\_lid, replace\_lid \\ & pf400 & explore\_workcell, transfer, remove\_lid, replace\_lid \\ & ur & transfer, run\_urp\_program \\ \hline Mobility & mir & move, dock \\ \end{tabular}
\end{table}
Table 3: The actions supported by the modules of Table 2. Each action can be invoked via the action operation of the module interface of Table 1.
In this section, we describe our approach to building modular science factories. We first introduce key concepts and mechanisms and five applications that we use in this article to motivate and evaluate our work. Then, we describe in turn how we represent modules, workcells, workflows, and applications, after which we discuss experiments with a common hardware form factor; thoughts on workcell validation, assembly, and support; and preliminary work on digital twins and simulation.
### Concepts
We introduce the central concepts that underpin our science factory architecture: see Figure 3.
**Module**: The module is the basic hardware+software building block from which we construct larger SDLs. A module comprises an internet-accessible service, or _node_, that implements the six-function interface of Table 1, plus a physical device to which the node provides access.
We list in Table 2 the modules employed in the work reported in this article. These modules encompass a considerable diversity of device types and interfaces; some diversity in sample exchange format, including 96-well plates and pipettes; and a variety of methods for transferring samples between modules.
**Workcell**: While instruments in an SDL can in principle be located anywhere that is reachable via a mobile robot, we find it useful to define as an intermediate-level concept the _workcell_, a set of modules, including a manipulator, placed in fixed positions relative to each other. A workcell is defined by its constituent modules plus a set of _stations_ (see next), information that allows for the use of the flexible automation model introduced in Section 2, in which a manipulator moves labware among devices.
**Station**: A station is a location within a workcell at which labware can be placed or retrieved. It is defined by its labware type (e.g., 96-well plate) and a position in 3D space relative to its workcell origin. A camera above a workcell can be used to determine positions and also whether stations are occupied.
**Science factory**: Given a suitable set of modules, a general-purpose, multi-user science factory can be constructed by assembling a variety of workcells, linking them with computational and data services and other required capabilities (e.g., supplies and waste disposal), and scheduling science campaigns onto the resulting system: see Section 3.8.
**Action**: An action is an activity performed by an instrument in response to an external request, such as (for the a4s_sealer module), "seal" (heat seal the sample plate currently located in its shuttle) or (for the platecrane module), "transfer" (move a sample plate from one station to another). An action is invoked by the action operation of the module interface of Table 1, which directs the specified request to the module, monitors execution, and returns a message when the operation is done.
We list in Table 3 the operations supported by the modules used in this work. The actions supported by a module can also be determined via the about operation.
**Workflow**: A workflow is a set of actions to be performed on one or more modules. We present examples of workflows below.
**Service**: A service is an online service that provides access to data or computational capabilities intended for use by applications during experimental campaigns.
**Application**: An application is a Python program that runs one or more workflows and that may also perform other tasks, such as data analysis and publication.
### Motivating applications
We employ the five applications listed in Table 4 to motivate and evaluate the work presented in this article. These applications cover several modalities of scientific experimentation and collectively implement a variety of tasks that underpin many SDLs, including data handling and processing.
Fig. 4: Depiction of steps involved in: deploying a module (#1-#3); creating a workcell configuration that contains the information needed to access a module (#4); and invoking an action on a module from a workflow, by using a module address retrieved from the workcell configuration (#5, #6).
_Color picker_ is a simple closed-loop application in which feedback from analysis of camera images is used to guide the mixing of colored liquids. _PCR_ employs several biology instruments working in tandem to implement the polymerase chain reaction. _Growth assay_ studies how treatments affect cell growth and can involve sample management over long periods without human intervention. _Electrochromic_, a materials science application concerned with discovery of electrochromic polymers, employs chemspeed, tecan, and ur apparatus not used in the first three applications. _Pendant drop_ similarly involves different patterns and different apparatus, including a synchrotron beamline, in this case for study of complex fluids.
We list in Table 5 the specifics of which application uses which module. The applications also make use of Globus services, as described in Section 3.6, to perform data analyses on remote computers during experiments and to publish both experimental results and provenance metadata describing how samples were created and processed. Table 3 shows the expanded list of actions implemented for each of the modules in Table 2.
### Implementation of modules
We now describe how we implement the various concepts introduced above, starting with the module. As noted, a module provides an implementation of the module interface of Table 1. Each module is represented by a _node_, a service to which applications can make requests that should cause the device to respond appropriately to Table 1 commands.
Thus, integrating a new device, such as an a4s_sealer, involves the following steps (#1-3 in Figure 4): 1) Implement the logic required to process commands for the device; 2) Implement the logic required to route messages; 3) Deploy the software from #1 and #2 on a computer attached to the new device, start the node service, and record the address of the new node.
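As an illustration of step #1, the sketch below shows how a node for a REST-style module such as a4s_sealer might expose part of the six-operation interface of Table 1; the request format mirrors the REST example shown below. Flask, the route names, and the seal_plate() driver call are placeholders chosen for this example; the actual node and driver code are device-specific.

```
from flask import Flask, jsonify, request

app = Flask(__name__)
STATE = {"status": "IDLE"}

def seal_plate():
    """Placeholder for the driver call that actually seals the plate."""
    pass

@app.route("/about")
def about():
    return jsonify({"name": "a4s_sealer", "actions": ["seal"]})

@app.route("/state")
def state():
    return jsonify(STATE)

@app.route("/action", methods=["POST"])
def action():
    req = request.get_json(force=True)
    if req.get("action_handle") != "seal":
        return jsonify({"error": "unknown action"}), 400
    STATE["status"] = "BUSY"
    seal_plate()
    STATE["status"] = "IDLE"
    return jsonify({"result": "done"})

if __name__ == "__main__":
    app.run(port=8000)
```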
**Adapters**: A module node service receives requests, maps each request into one or more device-specific instructions, and returns results. Each device-specific instruction is implemented by sending an (operation, arguments) request on the appropriate interface and waiting for a response.
In order to simplify interactions with a variety of devices, which often come configured with specific software, we find it convenient to support a range of methods for handling requests and responses. For example, a4s_sealer has a REST API, so that to request a seal action we need to send it a REST message:
```
POST /action
params = {"action_handle": "seal", "action_vars": {}}
```
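From the application side, a REST client (or adapter) might issue this request as in the sketch below; the node URL and the use of a JSON body rather than query parameters are assumptions made for this example.

```
import requests

resp = requests.post(
    "http://sealer-node:8000/action",                      # placeholder URL
    json={"action_handle": "seal", "action_vars": {}},
    timeout=30,
)
print(resp.json())
```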
On the other hand, platecrane has a ROS interface; thus, to request that it fetch a plate from tower 1, we need to send, via a ROS service call to the platecrane ROS node's action service, the message:
{"action_handle": "get_plate", "action_vars": {"pos": "tower1"} }
Other devices support yet other communication protocols, such as custom TCP protocols or EPICS. To minimize the software changes required to integrate new devices, we allow the integrator to choose from among a number of _adapters_. In our work to date, we have found four classes of such adapters useful, as follows; others can easily be created:
* A **REST adapter** implements operations in terms of instrument-specific HTTP requests. Such adapters are written naturally in Python, using libraries that can handle required authentication and HTTP messaging with the specified REST endpoint.
* A **TCP adapter** maps operations into protobuf messages sent over a TCP socket to a server at a specified IP address and port.
* A **ROS adapter** translates operations into commands to a Robot Operating System (ROS) service [47] associated with the component (for action, about, resources) or that extract information from a ROS topic associated with the component (for state).
* An **EPICS adapter** maps operations into Channel Access operations used by EPICS [20]. It accesses a specified Process Variable and performs read and write operations as necessary to accomplish each operation and action.
**Organization of module software**: For ease of installation and use, we organize module software implementations into four components; the first three are shown in Figure 4:
* **interface**: Device-specific code that implements the module operations of Table 1 and that makes those operations available to remote clients via the module's chosen adapter.
* **adapter**: Adapter-specific code used to handle communications: currently, one of ROS, REST, TCP, or EPICS.
* **driver**: Device-specific code used to handle low-level interactions with the physical device, such as connection, raw command lists, error lists, and error handling.
* **description**: Device-specific CAD files, Universal Robot Definition File (URDF), and related configuration information, for use by simulations and for motion planning.
Given such software and a compatible physical device, a user can instantiate a module by installing the interface, adapter, and driver software on a computer that can interact with the device, and then starting the resulting node. The node is then accessible over the Internet at an address specific to the new module.
### _Specifying workcells_
We use a YAML-based notation to define workcells, as illustrated in Figure 4(b) and in more detail in Supplementary Information A.1. The YAML document lists a workcell's constituent modules and, for each, provides configuration information, including the location of modules and stations relative to the workcell origin.
### _Specifying workflows_
We use a similar YAML notation to specify workflows. As shown in Figure 4(c) and Supplementary Information A.2 and A.3, a workflow names a workcell, a list of modules within that workcell, and a sequence of actions to perform on those modules.
### _Running applications_
**Running workflows**: Given a workflow specification workflow and a running workflow executor associated with a suitable workcell and accessible at a specified wf_address and wf_port, the following Python code will run the workflow with a supplied payload. The workflow executor then handles the details of mapping from high-level workflow specifications to specific operations on workcell modules.
```
from rpl_wei.exp_app import Experiment

experiment = Experiment(wf_address, wf_port, experiment_name)
experiment.run_job(workflow, payload=payload)
```
**Analyzing and publishing data:** An SDL must engage not only with experimental apparatus but also computers, data repositories, and other elements of a distributed scientific ecosystem--so that, for example, experimental results can be stored in an online repository and then employed, perhaps in combination with simulation results, to train a machine learning model used to choose the next experiment.
To support such interactions, we leverage capabilities of the Globus platform, a set of cloud-hosted services that provide for the single sign-on and management of identities and credentials and delegation, and for managed execution of data transfers between storage systems, remote computations, data cataloging and retrieval operations, data analysis pipelines, and other activities. In each case, the Globus cloud service handles details such as monitoring of progress and retries on failure. These services have been used extensively, for example, to automate flows used to analyze data from, and provide on-line feedback to, x-ray source facilities [61]. As an example of the use of Globus services, the color picker application of Section 4.1 (and Supplementary Information A.4) employs Globus Compute to run a data analysis routine and Globus Search to publish experimental results to a cloud-hosted search index.
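As an illustration, the sketch below shows one way such a remote analysis might be dispatched with the Globus Compute SDK; the endpoint UUID is a placeholder and the analysis function body is application-specific.

```
from globus_compute_sdk import Executor

def analyze_plate(image_path, target_rgb):
    """Placeholder for the analysis routine executed on a remote endpoint."""
    ...

with Executor(endpoint_id="00000000-0000-0000-0000-000000000000") as gce:
    future = gce.submit(analyze_plate, "plate_0007.jpg", (120, 45, 200))
    result = future.result()
```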
Other methods could also be used for access to computing and data services; we employ Globus because of its broad adoption, security, and reliability.
**Logging:** An application also logs interesting events that occur during its execution to a logging service. The events include, for example, the start and end of the overall application, the start and end of a workflow, and the execution of a Globus flow. Events are logged both in a file and via publication to a Kafka server [29]; the latter enables tracking of application progress by external entities.
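A sketch of the Kafka side of this logging is shown below, using the kafka-python client; the client library choice, broker address, topic name, and event schema are all assumptions, as the text specifies only that events are published to a Kafka server.

```
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="kafka.example.org:9092",            # placeholder broker
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)
producer.send("experiment_events",                          # placeholder topic
              {"experiment": "color_picker", "event": "WORKFLOW_START"})
producer.flush()
```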
### The cart as optional uniform hardware form factor
We have so far placed no constraints on how workcells are created, other than the practical need to have stations be accessible by manipulator(s). We can thus define highly compact assemblages of devices, such as the bio workcell depicted in Figure 6.
With the goal of simplifying workcell assembly and disassembly, we have experimented with the use of a common hardware form factor, the _cart_: see Figure 7. A cart is built on a rigid chassis with horizontal dimensions 750 mm \(\times\) 750 mm and height of 1020 mm, plus an additional frame for a camera, to which are attached devices to connect the cart securely to neighboring carts or other laboratory components; lockable wheels, so that the cart can be moved and then fixed in place; a built-in computer (e.g., Intel NUC or Raspberry Pi); a downward-looking camera on the top of the chassis; a power supply; identifying markers (currently, QR codes) that also serve as fiducials, i.e., as physical reference objects in known positions; and zero or more modules, such as the a4s_sealer and brooks_peeler seen in Figure 7. Future designs might also include supplies, such as water and gas.
Given a set of carts and other equipment, we can construct a workcell by moving the carts into place and connecting them to each other. For example, we show in Figure 8 the RPL workcell organization that combines eight carts with a Precise Automation PreciseFlex 400 (PF400) on a 2 m linear rail. Our current experiments in reconfiguration are carried out manually: we add each cart to the workcell by rolling it into place, engage registration pins to secure the cart in position, and connect its onboard power distribution strip to power on the laboratory floor. In future work, we intend to perform these assembly tasks automatically by using mobile tractor robots. This level of automation will require methods for providing power and material supplies to the carts without human intervention. Drive-up docking for automatic charging of mobile units has been widely deployed for many applications including home vacuuming robots; batteries and wireless power delivery are two other possibilities. Available industrial solutions for utility coupling could be used for automated secure connection of power, liquids, and gases.
Fig. 6: The bio workcell, shown in real (left) and virtual (right) representations, comprises 1) liconic, 2) solo, 3) platecrane, 4) brooks_peeler, 5) a4s_sealer, and 6) hidex modules.
### Workcell validation, assembly, and supply
Having described how we specify workcells and workflows, and run workflows on a workcell, we now discuss how other operations may be implemented.
**Validation**: Given specifications for a workflow and a workcell, we can verify that they are consistent with each other as follows. First, we check that the modules listed in the workflow are defined in the workcell. Then, for each action in the workflow, we check that it is defined in the workcell, and that the associated variables are consistent (e.g., that names provided for stations exist in the workcell). In a workcell with a mobile camera, such as that shown in Figure 8, we can also check that the physical configuration matches its specification by instructing the camera to take a picture of each module in turn, extracting any QR code(s) in each picture, and then verifying that a QR code is found for each module listed in the specification. Finally, before execution, we can ping all modules to make sure that they are online. During execution, each instrument module validates each action that it receives and rejects any that are invalid.
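A sketch of these static checks is given below, assuming the workcell and workflow YAML files have already been parsed into dictionaries; the field names used here are illustrative rather than the exact keys of our specification format.

```
def validate_workflow(workflow, workcell):
    """Check that every module, action, and station named in a workflow is
    defined in the target workcell; return a list of inconsistencies."""
    errors = []
    known = {m["name"]: m for m in workcell["modules"]}
    for name in workflow["modules"]:
        if name not in known:
            errors.append(f"module {name!r} not defined in workcell")
    for step in workflow["actions"]:
        module = known.get(step["module"])
        if module is None:
            continue                      # missing module already reported
        if step["action"] not in module.get("actions", []):
            errors.append(f"{step['module']} does not support {step['action']!r}")
        for station in step.get("stations", []):
            if station not in workcell.get("stations", {}):
                errors.append(f"unknown station {station!r}")
    return errors
```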
**Assembling a workcell**: Our workcells implement the flexible automation concept introduced in Section 2, combining one or more manipulators and a set of instruments, all in fixed positions and organized so that the manipulator can move labware among instruments. Thus it is natural to think of distinct assembly and operations steps, with an assembly step placing modules in desired locations to create a workcell, and an operations step running applications on the assembled workcell. In a scalable, multi-purpose SDL, we will likely want also to automate assembly steps. If using the carts of Section 3.7, we can do this by employing tractor robots to relocate carts, automated locking mechanisms to attach carts to each other, and camera detection of fiducial markers [66, 67] (or force feedback on physical fiducials [12]) to refine module positions.
**Linking workcells**: For applications that require the use of modules in multiple workcells, mobile robots can be employed to move labware from a station in one workcell to a station in a second workcell: see Figure 9. To this end, we will want mechanisms for determining both the locations and states of different workcells, and for planning the necessary transfers.
**Supplies**: Mobile robots can also be used to replenish supplies and to remove waste: special cases of workcell linking.
**Linking with fixed instruments**: An SDL may also include devices that are too large or sensitive to relocate, such as x-ray machines, MRI machines, and microscopes. The functionality of these devices can be accessed by appropriate mobile robotics.
### Digital twins and simulation
A digital twin of an SDL mimics the state and operations of the lab in a simulated environment. This simulated implementation can then be used for purposes such as workflow testing and debugging, scaling studies, algorithm development (e.g., via reinforcement learning), and training.
Our workcell specification format includes model information for modules that can then be mapped to 3D models of the associated physical components: the description component noted in Section 3.3. Our specifications also include location information that can enable both placement of modules within a workcell and the placement of workcells in space. As discussed in Section 3.8, this location information can be obtained automatically when assembling workcells. Building on this information, we have employed NVIDIA's Omniverse platform to construct 3D models and visualizations of our workcells, as shown in Figure 6 and Figure 10. Using our workcell specification, such visualizations can be set up with ease, as many manufacturers will provide 3D models for their instruments and workcell location information can be used to automatically arrange instruments in the scene.
We use NVIDIA's Isaac Sim [42] application for simulation and digital twins, permitting exploration of both new equipment and workflows without requiring a physical deployment or the use of scarce resources. In Figure 10 we show our simulation acting as a digital twin, mimicking the actions and physics of the real laboratory as the sciclops stacker lifts a 96-well plate. The digital twin is useful as a visualization and comparison tool to verify that the laboratory is operating as expected. In the future, we plan to use digital twins to predict the results of actions before they happen in the real laboratory and thus to identify unexpected situations such as robot collisions.
These simulation tools can also be used to train vision algorithms for flexible real-world error detection, a technique known as sim-to-real transfer [72]. Many general-purpose sensors, such as cameras, cannot detect important situations out-of-the-box, and thus require training for specific situations that may arise in practice. Some such situations may be rare or difficult to replicate reliably, making the capture of real-world data impractical. Omniverse Replicator allows for the placement of (virtual) sensors in a digital twin, and thus the capture of data from a wide variety of custom-designed and randomized situations. State-of-the-art ray tracing and physics simulation packages built into Omniverse ensure that these randomized situations look and act like real-world environments, so that training data are as realistic as possible.
Fig. 7: The Mk 1 cart, showing its chassis, wheels, and camera, plus two mounted modules, a4s_sealer (left) and brooks_peeler (right). Other elements (e.g., computer, power supply strip, networking) are attached to the back supports, occluded by the instrument table.
Fig. 8: A photo (left) and schematic (right) of the RPL workcell, comprising eight carts, #0–#7, plus a central pf400 plate mover for transferring sample trays among carts. Modules are labeled with the nicknames in Table 2. Cart #5 is empty, and carts #2 and #6 each contain two modules.
Fig. 9: _Left_: An SDL with three workcells (two with modules required for PCR experiments and one with modules required for growth assay experiments), plus a mobile robot that can refresh supplies and move samples between workcells, a fixed instrument, and a disposal station. _Right_: Conceptual layout for a larger science factory in which tractor robots reconfigure modules.
## 4 Example applications
We provide implementation details, and in some cases also report results, for each of the five applications of Table 4.
### Color picker
This simple demonstration application, inspired by Roch et al. [48] and described in more detail by Baird and Sparks [7, 8] and Ginsburg et al. [23], seeks to find a mix of provided input colors that matches a specified target color. It proceeds by repeatedly creating a batch of \(B\) samples by combining different proportions of the input colors; taking a photo of the new samples; and comparing the photos with the target. The samples in the first batch are chosen at random, and then an optimization method is used to choose the samples in subsequent batches. In the study reported here, we fix the target color and the total number of samples (\(N\)=128), while varying the batch size \(B\) from 1 to 128, by powers of two.
Figure 11 depicts an implementation of the application that targets four of the modules listed in Table 2: sciclops, ot2, pf400, and camera. We present a somewhat simplified version of this application in Supplementary Information A.4. In brief, the Python program color_picker_app.py operates as follows; a condensed sketch of this control flow appears after the list.
1. It runs a first workflow, cp_wf_new_plate.yaml, which obtains a new plate from sciclops and places it at camera.
2. It then repeatedly: 1. calls a second workflow, cp_wf_mixcolor.yaml (with specification presented in Supplementary Information A.3) which transfers the plate from camera to ot2 and runs the ot2 protocol specified in the file combined_protocol.yaml to combine specified amounts of pigment from specified source wells to create \(B\) specified pigment mixtures; transfers the plate back to camera, and photographs the plate; 2. publishes the resulting data, by using Globus Search functions (see Figure 12); and 3. invokes an analysis program, by using Globus Compute, to evaluate the latest data and (if the termination criteria are not satisfied) chooses the next set of colors to evaluate.
3. Finally, it calls a third workflow, cp_wf_trashplate.yaml, to discard the plate.
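The listing below gives a minimal, self-contained sketch of this control loop. It is illustrative only: run_workflow, publish, and choose_next_batch are hypothetical stand-ins for the workflow submission, Globus Search publication, and Globus Compute analysis calls used by the real color_picker_app.py, and the camera result is faked with random numbers.

```python
"""Illustrative sketch of the color-picker control loop (not the real application code)."""
import numpy as np

def run_workflow(name, payload=None):
    """Hypothetical stand-in for submitting a workflow; returns fake camera RGB data."""
    print(f"running workflow {name}")
    return np.random.rand(8, 3)

def publish(history):
    """Hypothetical stand-in for Globus Search publication."""
    print(f"published {len(history)} samples")

def choose_next_batch(history, target, n):
    """Hypothetical stand-in for the solver invoked via Globus Compute."""
    best_mix = min(history, key=lambda h: np.linalg.norm(h[1] - target))[0]
    return np.clip(best_mix + 0.05 * np.random.randn(n, 3), 0.0, 1.0)

def color_picker(target, batch=8, total=128):
    run_workflow("cp_wf_new_plate.yaml")                      # new plate -> camera
    history = []
    while len(history) < total:
        mixes = (np.random.rand(batch, 3) if not history      # first batch: random
                 else choose_next_batch(history, target, batch))
        measured = run_workflow("cp_wf_mixcolor.yaml", {"mixtures": mixes.tolist()})
        history += list(zip(mixes, measured))
        publish(history)
    run_workflow("cp_wf_trashplate.yaml")                     # discard plate
    return min(history, key=lambda h: np.linalg.norm(h[1] - target))

if __name__ == "__main__":
    best = color_picker(np.array([120, 120, 120]) / 255)
    print("best mixture:", np.round(best[0], 3))
```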
We show in Figure 13 results from running this application with different values for the batch size, \(B\), using in each case a simple evolutionary solver. (The solver algorithm is interchangeable, allowing us to test the relative performance of different approaches; we are currently exploring the performance of alternatives.) To illustrate the use of data publication capabilities, we show in Figure 12 two screenshots from the data portal hosted at the Argonne Community Data Coop (ACDC) repository.
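The batch-size trade-off visible in Figure 13 can be reproduced qualitatively with a purely in-silico stand-in for the plate and camera. The toy model below is an assumption, not the published solver or timing data: it charges a fixed overhead per batch to mimic plate handling and mutates the best mixture seen so far, which is enough to show that small batches tend to reach lower distances while consuming more wall-clock time.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([120, 120, 120]) / 255.0          # target colour of Fig. 13

def measure(mix):
    """Toy 'camera': the requested mixture plus a little measurement noise."""
    return np.clip(mix + rng.normal(0.0, 0.01, 3), 0.0, 1.0)

def run_experiment(batch_size, total=128, per_batch_overhead=10.0):
    """Return (elapsed time in arbitrary units, best distance) after `total` samples."""
    history, elapsed, best = [], 0.0, np.inf
    for _ in range(total // batch_size):
        if not history:
            batch = rng.random((batch_size, 3))                      # random first batch
        else:                                                        # mutate best-so-far
            parent = min(history, key=lambda h: h[1])[0]
            batch = np.clip(parent + rng.normal(0.0, 0.08, (batch_size, 3)), 0.0, 1.0)
        for mix in batch:
            dist = np.linalg.norm(measure(mix) - TARGET)
            history.append((mix, dist))
            best = min(best, dist)
        elapsed += per_batch_overhead                                # plate moves + photo
    return elapsed, best

for b in (1, 2, 4, 8, 16, 32, 64):
    t, d = run_experiment(b)
    print(f"batch={b:3d}  time={t:6.0f}  best distance={d:.3f}")
```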
This application can easily be adapted to target different apparatus (e.g., different color mixing equipment, or Baird and Sparks' closed-loop spectroscopy lab [8]). It could also be modified to target multiple ot2s so as to speed up execution.
### Polymerase chain reaction
Polymerase chain reaction (PCR) [11], a technique used to amplify small segments of DNA, is important for many biological applications. Our PCR application uses six of the modules of Table 2: ot2, biometra, a4s_sealer, brooks_peeler, pf400, and sciclops. As shown in Figure 14, it is implemented by a Python program that runs a workflow that retrieves a PCR plate from the sciclops plate stack; moves that plate to an ot2, where it runs a protocol that mixes the enzymes and DNA samples in the plate; moves the plate from ot2 to a4s_sealer, where it seals the plate; moves the sealed plate to biometra, where it runs a program that heats and cools the reagents in sequence to facilitate the PCR reactions; moves the plate from biometra to brooks_peeler, where it peels the plate; moves the plate to camera, where it takes a picture; and finally transfers the plate to an exchange location where it can be used in further workflows or, after re-sealing, transported to cold storage for later use. We present this workflow's specification in Supplementary Information A.2.
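The step sequence just described maps one-to-one onto a workflow specification (the real one is the YAML file in Supplementary Information A.2). The Python rendering below is purely illustrative; the action names and arguments are assumptions, not the actual module interfaces.

```python
# Illustrative rendering of the PCR workflow as (module, action, arguments) steps.
pcr_workflow = [
    ("sciclops",      "get_plate",    {"stack": "tower1"}),
    ("pf400",         "transfer",     {"source": "sciclops", "target": "ot2"}),
    ("ot2",           "run_protocol", {"config": "pcr_prep.yaml"}),   # mix enzymes and DNA
    ("pf400",         "transfer",     {"source": "ot2", "target": "a4s_sealer"}),
    ("a4s_sealer",    "seal",         {}),
    ("pf400",         "transfer",     {"source": "a4s_sealer", "target": "biometra"}),
    ("biometra",      "run_program",  {"program": "pcr_cycle"}),      # thermal cycling
    ("pf400",         "transfer",     {"source": "biometra", "target": "brooks_peeler"}),
    ("brooks_peeler", "peel",         {}),
    ("pf400",         "transfer",     {"source": "brooks_peeler", "target": "camera"}),
    ("camera",        "take_picture", {"file_name": "pcr_plate.jpg"}),
    ("pf400",         "transfer",     {"source": "camera", "target": "exchange"}),
]

for module, action, args in pcr_workflow:
    print(f"{module:>14} -> {action}({args})")
```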
### Growth assays for bacterial treatments
This application performs automated experiments to generate dose-response curves. These dose-response curves are useful for many microbiology research objectives, including cancer therapeutic development and antibiotic discovery. Our work in predicting antimicrobial response [39, 40] and tumor response to small molecules [68], coupled with laboratory screening, provides an ideal use case for automation that moves towards fully autonomous discovery.
Our growth assay application employs six modules of Table 2: solo, platecrane, a4s_sealer, brooks_peeler, liconic, and hidex. As shown in Figure 15, it is implemented by a Python program that runs two workflows per assay plate created. The first workflow contains all steps required to create the assay plate, including liquid handling actions as well as steps to take the initial absorbance readings on the assay plate, while the second runs after a timed wait for incubation and contains all steps required to take the final absorbance readings of the assay plate.
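Downstream of these two workflows, the plate-reader data are reduced to dose-response curves such as those in Figure 15. The sketch below is an illustrative reduction only, with made-up absorbance numbers and a simple interpolated IC50 standing in for whatever analysis the application actually performs.

```python
import numpy as np

# Illustrative reduction of plate-reader data (made-up numbers) to a dose-response summary:
# blank-adjusted growth over 12 h and an interpolated IC50.
conc     = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])      # ug/mL, assumed doses
od_blank = 0.04                                                  # blank-well absorbance at 590 nm
od_t0    = np.full(conc.size, 0.10)                              # initial readings (T0)
od_t12   = np.array([0.95, 0.90, 0.78, 0.55, 0.30, 0.15, 0.11])  # readings after 12 h (T12)
od_ctrl  = 0.98                                                  # untreated control at T12

growth     = (od_t12 - od_blank) - (od_t0 - od_blank)            # net growth per well
control    = (od_ctrl - od_blank) - (od_t0[0] - od_blank)
inhibition = 1.0 - growth / control                              # fraction of growth suppressed

# Interpolate the concentration giving 50% inhibition (inhibition increases with dose here).
ic50 = np.interp(0.5, inhibition, conc)
print("inhibition:", np.round(inhibition, 2))
print(f"IC50 ~ {ic50:.1f} ug/mL")
```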
### Autonomous synthesis of electrochromic polymers
Jie Xu and her team have developed an SDL [64] for the autonomous synthesis of electrochromic polymers (ECPs), a type of polymer material employed in applications such as smart windows, displays, and energy-efficient devices [1]. The use of polymer materials for such applications offers chemical diversity and ease of synthesis via simple synthetic steps. However, the interplay between multiple parameters, including the physicochemical properties of the monomers and their formulations, makes it difficult to predict intuitively the performance of these systems. Thus, researchers must develop and characterize a wide range of formulation candidates through time-consuming experimentation. To overcome these limitations, they built a self-driving laboratory to synthesize ECPs by combining different monomers in certain ratios and lengths so as to modulate the color.
This SDL employs chemspeed, ur, and tecan modules. The polymer synthesis process is coordinated by a Python application that executes a single workflow: see Figure 16. The workflow first retrieves the plate with the synthesized polymers from chemspeed. It then transfers the plate to tecan, which implements a protocol to measure the absorption spectra with a UV-Vis measurement device. After the completion of measurements, the plate is transferred from tecan back to chemspeed. The collected data from tecan are analyzed to determine the color coordinates of the samples. This information is provided to a neural network to obtain recommendations for the next batch of materials.
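A rough sketch of the spectrum-to-colour step is shown below. The Gaussian lobes are crude stand-ins for the tabulated CIE 1931 colour-matching functions, and the synthetic absorption band is invented; a real pipeline would use the measured spectra, proper CMF tables, and the illuminant spectrum.

```python
import numpy as np

# Sketch: absorbance spectrum -> CIE xy chromaticity, using crude Gaussian stand-ins
# for the CIE 1931 colour-matching functions (illustration only).
wl = np.arange(380.0, 781.0, 1.0)                                   # wavelength grid, nm
g = lambda mu, sigma: np.exp(-0.5 * ((wl - mu) / sigma) ** 2)
xbar = 1.06 * g(599.0, 38.0) + 0.36 * g(446.0, 19.0)                # rough two-lobe fit
ybar = 1.00 * g(556.0, 46.0)
zbar = 1.78 * g(449.0, 22.0)

def chromaticity(absorbance):
    """Return CIE (x, y) for an absorbance spectrum sampled on the 1 nm grid above."""
    transmittance = 10.0 ** (-absorbance)                           # Beer-Lambert law
    X = float(np.sum(transmittance * xbar))                         # 1 nm grid -> sum ~ integral
    Y = float(np.sum(transmittance * ybar))
    Z = float(np.sum(transmittance * zbar))
    s = X + Y + Z
    return X / s, Y / s

A = 1.5 * np.exp(-0.5 * ((wl - 580.0) / 40.0) ** 2)                 # synthetic absorption band
print("CIE x, y =", chromaticity(A))
```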
### Pendant drop for study of complex fluids
The robotic pendant drop provides an end-to-end automated, \(\mu\)s-resolved XPCS workflow for studying the dynamics and structures of complex fluids. Ozgulbas _et al._ [44] recently demonstrated that the Brownian dynamics of a nanoparticle colloid in a pendant drop are consistent with measurements in reference setups such as thin-walled quartz capillaries. Furthermore, the pendant drop setup can be integrated with a robotic arm (UR3e) to fully automate sample preparation, characterization, and disposal. This approach addresses limitations associated with manual sample changes at the fourth-generation coherent synchrotron x-ray sources that are being constructed and commissioned around the world.
Fig. 11: The color-picker application. The Python code, color_picker_app.py, implements logic that runs three distinct workflows, with the second (plus associated publish and compute steps) called repeatedly until termination criteria are satisfied. The orange box below the ot2 run_protocol action gives the name of the protocol file. Module names are as in Table 2.

Fig. 10: Other views of the RPL workcell of Figure 8, shown in real (left) and virtual (right) representations. Modules, from front to back: 1) plate stack; 2) sciclops stacker; 3) a4s_sealer; 4) brooks_peeler; 5) ot2; and 6) pf400. Also visible are 7) 96-well plates: one held by the sciclops in each image, and a second on the table in the real case.

Fig. 12: Two views of a Globus Search portal for data generated by the color-picker application of Section 4.1, at [https://acdc.alcf.anl.gov](https://acdc.alcf.anl.gov). _Left_: Summary view for an experiment performed on April 27, 2023, involving 20 runs each with 8 samples, for a total of 160 experiments. The images are those taken by the camera. _Right_: Detailed data from run #16.

Fig. 13: Results of seven experiments, in each of which the color picker application creates and evaluates 128 samples, in batches of an experiment-specific size \(B=\) 1, 2, 4, 8, 16, 32, and 64. In each experiment, the target color is RGB=(120,120,120), the first sample(s) are chosen at random, and later samples are chosen by applying a solver algorithm to camera images. Each dot has as x-value the elapsed time in the experiment and as y-value the Euclidean distance in three-dimensional color space between the target color and the best color seen so far in the experiment. The numbers in the graph represent selected sample sequence numbers. Results depend significantly on the original random guesses, but overall, as we might expect, the experiments with smaller batch sizes achieve lower scores but take longer to run.

In a robotic pendant drop setup, the use of an electronic pipette enables the dispensing and withdrawal of the pendant drop into a pipette tip. The electronic pipette is mounted on a robotic arm that can readily access vials of the stock liquid samples and a 96-well PCR plate for precise and repeatable generation of complex fluid samples with tailored composition profiles. The end-to-end automation of the complex fluid X-ray scattering workflow also enables science that requires sample handling in non-ambient environments (e.g., high/low temperature, anoxic). Finally, the robotic pendant drop is programmed with workflows, which provides a modular approach that not only improves the reusability of the robotic code but also facilitates AI-driven, physics-aware self-programming robots at the Advanced Photon Source of Argonne National Laboratory in the near future.
The experiments just described used the physical apparatus and application depicted in Figure 17. The application uses a single module, a UR3e arm, to perform the following steps. The arm, initially positioned at the home base, picks up the pipette from the docking location by activating the locking mechanism of the tool changer, and attaches a tip to the pipette from the tip bin. Then, it prepares the sample on the 96-well plate by driving the pipette. Next, to obtain the measurements with the prepared sample, the pipette is placed on the docking location and a droplet is formed by dispensing the sample, with an optical microscope used to monitor the optical appearance of the drop during alignment and the SA-XPCS measurement. Lastly, the pipette is picked up from the docking location, the tip is ejected to the trash bin, and the pipette is placed back at the docking location.
The robotic pendant drop provides an end-to-end automated solution for studies of the dynamics and structures of complex fluids using light-scattering techniques such as dynamic light scattering, x-ray/neutron scattering, and XPCS. This automated experimental protocol can be combined with the data management workflow [71], high-throughput analysis [26], open-source graphical user interface (GUI) [17], and AI-assisted data interpretation [25] to provide a self-driving experimental station at future user facilities, paving the way to autonomous material discovery driven by domain-science-specific questions from facility users.
## 5 Discussion
**Ability to integrate different instruments**: As noted in Section 3.3, the integration of a new device into our architecture requires implementing the four software components that realize the operations in Table 1. To date, we have performed this integration for 15 instruments of quite different types. We found that the ease or difficulty of this integration varies a great deal across devices. The easiest are those, like ot2, that provide Python libraries for interacting directly with the device. Somewhat more difficult are those, like sciclops, that expose a serial port and document a pre-defined list of commands that we can send with Python serial libraries. The most difficult are those that use custom communication protocols. For example, hidex uses a custom .NET-specific connection that we can access only through C#-based connection objects from a specific .NET version. In future work, we are also interested in automating the process by which instruments are integrated into the system. This problem is arguably akin to automated interface discovery, for which fuzzing [18] could be employed. Large language models may also have promise [54, 59].
**Suitability of ROS**: We initially planned to use ROS to control and monitor all experimental apparatus. However, we found that while ROS was useful in some contexts (e.g., for controlling mobile robots like mir, for which ROS path planning libraries were helpful), it introduced unhelpful complexity in others, such as for instruments that run Windows or that produce large quantities of data. Furthermore, the generality of ROS is not needed in most cases: many of our instruments are not general-purpose robots but rather devices that each perform just a few relatively simple operations. Thus, we arrived at our architecture based on the interface of Table 1.
**Ability to retarget applications**: An important goal of our work is to enable porting of applications between workcells with different configurations, with few or no changes to application logic. As an example of successful transfer, the growth assay application was initially developed, as described in Section 4.3, on the RPL workcell of Figure 10. Once working there, we transferred it to the Bio workcell of Figure 6 in another lab at Argonne, with different equipment (platecrane rather than pf400 for transfer actions, solo rather than ot2 for liquid handling). _Only the module names in the workflow needed to be changed to retarget the workflow to different hardware in a different configuration_.
**Ability to reuse workflows**: Another important goal is to enable reuse of workflows across applications. As an example of reuse, the workflow used in the growth assay application shares many steps with the workflow used in the PCR application.
Fig. 14: PCR application. Module names are as in Table 2.

Fig. 15: Growth assay application. _Upper left_: A list of datasets, one per experiment, on the data portal. _Lower left_: Results from a single experiment in which tetracycline solution at varying concentrations was added to _E. coli_. Y-axis gives blank-adjusted optical density at 590 nm at the start of the experiment (T0) and 12 hours after start (T12). Results show mean plus error bars from four identical runs. _Right_: The application, without data analysis and publication steps.

Fig. 16: _Left_: Elements of the electrochromic polymer discovery experiment. 1) The chemspeed system for polymer synthesis; 2) The ur arm for polymer sample transfer and loading; 3) The tecan for UV-Vis spectrum characterization. _Right_: The electrochromic polymer discovery application runs a single workflow.

Fig. 17: Elements of the pendant drop experiment, shown in virtual (left) and real (middle) representations. 1) The UR3e arm 2) picks up a pipette from 3) the docking location by activating 4) the locking mechanism of the tool changer. Also shown: 5) the 45\({}^{\circ}\) reflective mirror with a 1 mm-diameter through-hole located upstream of the sample and 6) the optical microscope used to view the reflection of the pendant drop. (Center inset shows a pendant drop.) _Right_: The pendant drop application runs a single workflow, demo, which performs a series of ur actions.

**Notation**: We have chosen in this work to represent workcells and workflows as YAML documents and applications as Python programs, with the goal of simplifying the configuration (and analysis: see Section 3.8) of the first two entity types without sacrificing the generality offered by a programming language. We have found this approach to work well for our target applications, but other approaches (e.g., a programming language for workflows, or static configurations for complete applications) may prove advantageous in other contexts.
**Education and training**: Hands-on laboratory work has long been an important element of experimental science education. Yet the role of researchers working with SDLs is not to perform experiments themselves, but to plan, monitor, and guide SDL activities--tasks that require new skills, and thus new approaches to education and training [55]. We may also wonder whether hands-on experimental skills become less important--and, if not, how those skills are to be taught if science factories or other remote SDLs reduce opportunities for hands-on access.
**Concurrency**: Our current infrastructure does not support concurrent execution of workflow steps, as would be required, for example, to drive multiple OT2s in the color-picker experiment. Providing such support will not be difficult. One approach would be to allow users to launch multiple workflows at once, and then schedule execution of individual steps within each workflow subject to appropriate constraints. For example, we might want to ensure that (a) each workflow step is scheduled only after the preceding step in the workflow has completed, and (b) a transfer step that is to retrieve a sample holder from station \(a\) and deposit it at station \(b\) is scheduled only when \(a\) is occupied and \(b\) is empty.
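A minimal sketch of such a constraint check is given below; the step and occupancy representations are assumptions made for illustration and are not part of the current codebase.

```python
# Sketch of the scheduling constraints (a) and (b) discussed above.
# Step = (workflow_id, step_index, source_location, target_location); locations may be None.
occupied = {"camera": True, "ot2": False, "sealer": False}      # assumed occupancy map

def ready(step, completed, occupied):
    wf, idx, src, dst = step
    # (a) all earlier steps of the same workflow must have completed
    if any((wf, i) not in completed for i in range(idx)):
        return False
    # (b) a transfer may run only if the source holds a plate and the target is free
    if src is not None and not occupied.get(src, False):
        return False
    if dst is not None and occupied.get(dst, False):
        return False
    return True

completed = {("cp", 0)}
print(ready(("cp", 1, "camera", "ot2"), completed, occupied))   # True: camera full, ot2 empty
print(ready(("cp", 2, "ot2", "camera"), completed, occupied))   # False: step 1 not done yet
```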
**Failures**: The abilities first to detect errors and then to respond to them without human intervention are crucial requirements for any autonomous discovery system. We find it useful to distinguish among three types of error based on how the error evidences itself during an experiment: 1) A _software error_ is detected and reported by an instrument or its control software in a way that allows high-level software to respond programmatically. For example, a response to an action command indicating that an instrument is offline can allow the workflow executor to reset the instrument or request human assistance to restart it. 2) An _operational error_ is one that prevents a workflow from proceeding but that is not detected and reported as a software error. For example, a misaligned manipulator might drop rather than deposit a sample during a transfer command, but report correct completion. One approach to detecting such errors is monitoring, out of band from the instrument, with cameras or other sensors. Monitoring results can then be used to diagnose errors and perhaps even to drive remedial actions. 3) An _experiment error_ occurs when a workflow performs its actions completely and correctly, but produces an unexpected result: e.g., cells do not grow or PCR does not take place. Such occurrences may require changes to the experimental workflow or may represent new knowledge.
Ideally, all erroneous conditions would be detected and reported as software errors or operational errors, so that only true experiment errors are reported as such. To this end, we continue to review operational errors and, wherever possible either eliminate them (e.g., by fixing race conditions in device interfaces) or transform them into software errors (e.g., by adding checks for exhausted reagents).
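One way to implement this conversion is to wrap action execution so that a detectable precondition failure raises a programmatic error before the instrument ever runs; the exception name and the reagent check below are illustrative assumptions, not the interface of the actual executor.

```python
# Illustrative: turning a would-be operational error (exhausted reagent, which the
# instrument would not report) into a software error the executor can catch. Names are assumed.
class SoftwareError(Exception):
    """Detected and reported programmatically; the executor may retry or request help."""

def run_action(module, action, args, inventory):
    reagent = args.get("reagent")
    needed = args.get("reagent_volume_ul", 0)
    if inventory.get(reagent, 0) < needed:
        raise SoftwareError(f"{module}: insufficient {reagent} for {action}")
    inventory[reagent] -= needed
    return "ok"

inventory = {"master_mix": 50}
try:
    run_action("ot2", "run_protocol", {"reagent": "master_mix", "reagent_volume_ul": 80}, inventory)
except SoftwareError as err:
    print("handled:", err)
```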
**Continuous operation**: Large-scale, long-term SDL operation requires the automation of support functions (e.g., replenishing consumables, disposing of waste, correcting operational errors) that in simpler settings might be handled by humans. We propose time-without-human-intervention as a useful metric for quantifying the level of automation achieved for both individual applications and a complete science factory running a mixed workload.
## 6 Summary and conclusions
We have reported on concepts and mechanisms for the construction and operation of science factories: large-scale, general-purpose, simulation- and AI-enabled self-driving laboratories. We presented methods for defining individual modules, grouping modules to form workcells, and running applications on workcells. We described how a variety of instruments and other devices can work with these methods, and how modules can be linked with AI models, data repositories, and other computational capabilities. We also demonstrated the ability to reuse modules and workcells for different applications, to migrate applications between workcells, and to reuse workflows within applications for different purposes.
We are working to expand the range of devices, workflows, environments, and applications supported by our science factory architecture; link multiple workcells with mobile robots; incorporate support functions such as supply and waste disposal; run increasingly ambitious science studies; and evaluate performance and resilience. We are also working to expand our simulation capabilities to enable investigation of scaling issues and ultimately the design and validation of science factories in which hundreds or thousands of workcells support many concurrent experiments.
The science factory architecture that we present here is a work in progress. Its modularity makes it easy to extend with new instruments, AI and other computational methods, and new workflows and applications, and its simplicity enables rapid deployment in new settings. We welcome collaboration on any aspect of its implementation and application.
## Data and code availability
Data and code associated with this article are at [https://ad-sdl.github.io/wei2023](https://ad-sdl.github.io/wei2023), as described in the Supplementary Information.
## Author contributions
IF, RV, BB, TB, MH, AR, and RS contributed to the conception of the modular autonomous discovery architecture. RV, CS, TG, KH, DO, AS, RB, BB, TB, KC, MH, AR, and IF contributed to the design of the system described. RV, MH, IF, and DO designed the modular carts and table. RV, IF, DO, KH, CS, TB, and AS selected and designed the exemplar workflows. DO and QZ led the development and data collection for the Pendant Drop application. DO and RB designed and produced the Pendant Drop simulation and video. DO, AV, and JX led the development and data collection for the Autonomous Synthesis of Electrochromic Polymers application. RV and TG developed the data portals associated with the experiments. RV and CS managed the design and development team. The developers of each software module, maintained in
github, are: RV, TG, KH for the main module; RV, DO, and TG (camera); DO and RV (pf400, a4s_sealer, brooks_peeler, ur); DO, AS, and RV (platecrane, sciclops); KH, AS, and DO, RV (ot2); AS and DO (biometra); DO and AV (chemspeed, tecan); RB (rpl_omniverse, the virtual reality simulation of the modular workcell). DO designed and implemented the ROS RViz real time visualization. IF led the writing effort with RV, TG, KH, DO, CS, AS, RB, TB, KC, MH, AR, and AV contributing to the writing, editing, and reviewing.
## Conflicts of interest
There are no conflicts of interest to declare.
## Acknowledgements
We are grateful to Argonne colleagues with whom we have worked on SDLs, including Gyorgy Babnigg, Pete Beckman, Max Delferro, Magali Ferrandon, Millie Firestone, Kawtar Hafidi, David Kaplan, Suresh Narayanan, Mike Papka, Young Soo Park, Rick Stevens, and Logan Ward. We thank also Eric Codrea, Yuanjian Liu, Priyanka Setty, and other students for their contributions, and Ryan Chard, Nickolaus Saint, and others in the Globus team for their ongoing support. We have benefited from conversations with many working in this area, including Sterling Baird, Andy Cooper, Lee Cronin, Jason Hattrick-Simpers, Ross King, Phil Maffettone, and Joshua Schreier. This work would not have been possible without much appreciated support from the leadership and staff of Argonne's Leadership Computing Facility and Advanced Photon Source. This work was supported in part by Laboratory Directed Research and Development funds at Argonne National Laboratory from the U.S. Department of Energy under Contract DE-AC02-06CH11357.
|
2305.17551 | Statistical Study of Uncontrolled Geostationary Satellites Near an
Unstable Equilibrium Point | The growth of the population of space debris in the geostationary ring and
the resulting threat to active satellites require insight into the dynamics of
uncontrolled objects in the region. A Monte Carlo simulation analyzed the
sensitivity to initial conditions of the long-term evolution of geostationary
spacecraft near an unstable point of the geopotential, where irregular behavior
(e.g., transitions between long libration and continuous circulation) occurs. A
statistical analysis unveiled sudden transitions from order to disorder,
interspersed with intervals of smooth evolution. There is a periodicity of
approximately half a century in the episodes of disorder, suggesting a
connection with the precession of the orbital plane, due to Earth's oblateness
and lunisolar perturbations. The third-degree harmonics of the geopotential
also play a vital role. They introduce an asymmetry between the unstable
equilibrium points, enabling the long libration mode. The unpredictability
occurs just in a small fraction of the precession cycle, when the inclination
is close to zero. A simplified model, including only gravity harmonics up to
degree 3 and the Earth and Moon in circular coplanar orbits is capable of
reproducing most features of the high-fidelity simulation. | Roberto Flores, Mauro Pontani, Elena Fantino | 2023-05-27T18:58:57Z | http://arxiv.org/abs/2305.17551v1 | # Statistical Study of Uncontrolled Geostationary Satellites Near an Unstable Equilibrium Point1
###### Abstract
The growth of the population of space debris in the geostationary ring and the resulting threat to active satellites require insight into the dynamics of uncontrolled objects in the region. A Monte Carlo simulation analyzed the sensitivity to initial conditions of the long-term evolution of geostationary spacecraft near an unstable point of the geopotential, where irregular behavior (e.g., transitions between long libration and continuous circulation) occurs. A statistical analysis unveiled sudden transitions from order to disorder, interspersed with intervals of smooth evolution. There is a periodicity of approximately half a century in the episodes of disorder, suggesting a connection with the precession of the orbital plane, due to Earth's oblateness and lunisolar perturbations. The third-degree harmonics of the geopotential also play a vital role. They introduce an asymmetry between the unstable equilibrium points, enabling the long libration mode. The unpredictability occurs just in a small fraction of the precession cycle, when the inclination is close to zero. A simplified model, including only gravity harmonics up to degree 3 and the Earth and Moon in circular coplanar orbits is capable of reproducing most features of the high-fidelity simulation.
**Nomenclature**
\(a\) = semi-major axis (km)

\(A\) = cross-sectional area (km\({}^{2}\))

arcsec = arcsecond (\(4.84814\cdot 10^{-6}\) rad)

dpy = degrees per year

\(i\) = orbital inclination (rad)

\(m\) = mass (kg)

\(N\) = degree of expansion of the gravity field

\(S\) = stable equilibrium points

\(t\) = time (s)

\(U\) = unstable equilibrium points

\(\Delta\sigma\) = jump in longitude standard deviation (rad)

\(\Delta t\) = duration of episode of sudden scatter increase (s)

\(\Delta T\) = difference between TT and UT1 time standards (s)

\(\{r,\theta,\lambda\}\) = geocentric spherical coordinates {radius, colatitude, longitude} (km, rad, rad)

\(\sigma\) = standard deviation

\(<x>\) = mean value of \(x\)

_Subscripts_

\(0\) = initial value
**I. Introduction**
The continuous expansion of the space debris population has become a major concern for the scientific community. Over the years, the Inter-Agency Space Debris Coordination Committee has published reports [1] and recommendations [2] to the interested bodies (companies and agencies) to mitigate the proliferation of space debris. The geostationary orbit is one of the most congested regions. While operators schedule disposal to a graveyard orbit for geostationary spacecraft at the end of operations, the maneuver is not always successful. Furthermore, old satellites were simply abandoned at their original station. Given the serious implications for the continued safe operation of active satellites, numerous algorithms for the planning and optimization of station-keeping maneuvers [3, 4], collision avoidance [5, 6], station change for on-orbit servicing [7] and active debris removal [8] have appeared in recent
literature. Moreover, the long-term dynamics of decommissioned spacecraft in the geostationary orbit has become an area of active research (for example, see Refs. [9; 10]).
Two characteristic features of the orbital evolution of geostationary satellites are: (a) precession of the orbital plane and, (b) drift of the geographical longitude. Effect (a) is related to Moon and Sun gravitational perturbations, combined with Earth's oblateness. Early studies of this phenomenon are due to Allan and Cooke [11] and van der Ha [12], who employed double averaging (over the satellite orbital period and the periods of the perturbing bodies). Hechler [13] pointed out the occurrence of precession motion, with a period of 53 years, around an axis inclined 7.3 degrees from Earth's rotation axis. Friesen et al. [14] performed long-term propagations (up to 1000 years) of near-geostationary satellites and observed inclination changes of up to 15\({}^{\circ}\) over a cycle of 53 years. Recently, Proietti et al. derived simple analytical expressions for the evolution of inclination and right ascension of the ascending node of uncontrolled geostationary spacecraft [15].
Effect (b) is related to irregularities of Earth's gravity field. The \(J_{22}\) harmonic of the geopotential, associated with the ellipticity of the terrestrial equator [16], plays a major role, due to resonance between Earth's rotation and orbital motion. It gives rise to two stable and two unstable equilibrium longitudes (for example, see Refs. [16; 17; 18]). The resulting orbital dynamics is either (i) librational or (ii) circulating. Separatrices divide these two qualitatively different behaviors. In case (i) the spacecraft oscillates around one of the stable longitudes, whereas in case (ii) it traverses all longitudes, moving either westward or eastward. With the action of \(J_{22}\) alone, the longitudinal dynamics is completely predictable and depends on the initial conditions in a straightforward way. Lara and Elipe [19] proved the existence of stable periodic orbits emanating from the two unstable equilibria. However, \(J_{22}\) alone is not sufficient to explain the complex longitude evolution of uncontrolled geostationary spacecraft. This was first noticed by Vashkoy'yak and Lidov [20], who pointed out the existence of librational motion spanning both stable points. They ascribed this phenomenon to higher-degree harmonics of the geopotential. Kiladze and Sochilina [21] and Kiladze et al. [22] confirmed this behavior, describing the occurrence of three types of dynamics depending on the initial conditions: simple libration around the closest stable position, long libration encompassing both stable positions, and circulation.
Kuznetsov and Kaizer [23] addressed the orbital motion of geostationary satellites located in the proximity of the separatrices. They tracked the regions where the separatrices migrate due to perturbations, and analyzed the dynamical behavior as a function of initial inclination and surface-to-mass ratio of the spacecraft. In a recent contribution, Celletti and Gales [9] employed Hamiltonian formalism and fast Lyapunov indicators to establish the amplitudes of the
libration islands. Gachet et al. [24] modeled the orbital dynamics including solar radiation pressure and harmonics of the geopotential up to second degree. They focused on forced equilibrium solutions, proving they lie on a 5-dimensional torus. Colombo and Gkolias [10] investigated the long-term stability of the geostationary region for the purpose of designing effective disposal maneuvers.
A previous investigation of the complex longitudinal dynamics of decommissioned satellites [15] showed that initial positions sufficiently close to the unstable points trigger irregular dynamical behavior, with transitions between different types of motion (simple libration, long libration and circulation). This phenomenon requires harmonics of degree 3 of the geopotential, which are responsible for the asymmetry in energy of the two unstable points. The time-dependent third-body gravitational effects introduce a complex modulation of the total potential, allowing sporadic motion across unstable equilibrium points. Spectral analysis confirmed that the main contributors to this irregular behavior are gravity harmonics up to third degree and lunisolar perturbations. An interesting result of [15] was that small changes of the initial conditions can lead to vastly different longitudinal motion patterns. For some combinations of initial longitude and epoch (which influence the solution through the positions of Sun and Moon) the sensitivity to numerical perturbations is so extreme that it was impossible to obtain robust propagations beyond a horizon of 60 years. The observed phenomena are reminiscent of the complex structure underlying the dynamics of the GPS satellites, investigated by Daquin et al. [25] using analytical and semi-analytical techniques. Also, the analysis of the long-term evolution of some Molniya satellites (in 2:1 resonance with terrestrial rotation) has revealed the existence of a hyperbolic tangle in phase space and the lunisolar perturbation plays a key role in its structure [26].
This contribution seeks a deeper understanding of the phenomena observed in the geostationary regime, aiming to a better characterization of the sensitivity to initial conditions. It presents a Monte Carlo simulation of large collections of uncontrolled spacecraft released near one of the unstable equilibrium points. The evolution of the spatial coherence of the satellite cloud is characterized with a statistical analysis of the trajectories, identifying the physical effects responsible for the observed behavior. The main steps are: (a) propagation of the trajectories of a large collection of satellites, with initial positions randomly distributed near an unstable longitude; (b) statistical analysis of the trajectory set to quantify the spatial coherence and its long-term evolution; (c) identification of the relations between the statistical properties and the different perturbation sources; (d) use of simplified physical models to identify the dominant perturbations; (e) comparison of the simple models against high-fidelity numerical results to determine if they retain the qualitative dynamical behavior.
This paper is organized as follows. Section II deals with the physical model and the associated mathematical framework used for orbit propagation. Section III presents the numerical results obtained for a reference set of initial conditions. It focuses on the evolution of statistical parameters characterizing the distributions of geographical longitude and inclination. Section IV investigates the effect of initial conditions, i.e., reference epoch and initial longitude. Section V centers on simplified dynamical modeling, seeking to identify the physical effects that dominate the dynamical behavior. Finally, the main conclusions are drawn in section VI.
## II Physical Model and Orbit Propagation
The trajectory is propagated in Cartesian coordinates using an adaptive embedded Runge-Kutta scheme of order 7(8) derived by Fehlberg [27]. The software has been validated against other propagators, and demonstrated the required level of accuracy [15]. The perturbations considered, following [15], are Moon and Sun third-body effects and solar radiation pressure.
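For orientation, the sketch below shows the skeleton of such a Cartesian propagation, with SciPy's DOP853 integrator standing in for the Fehlberg 7(8) scheme and the perturbing accelerations left as a placeholder; it is not the validated propagator of [15], and the tolerances and constants are round illustrative values.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU_EARTH = 398600.4418                       # km^3/s^2

def perturbations(t, r, v):
    """Placeholder for geopotential harmonics, lunisolar attraction and SRP."""
    return np.zeros(3)

def eom(t, y):
    r, v = y[:3], y[3:]
    a = -MU_EARTH * r / np.linalg.norm(r) ** 3 + perturbations(t, r, v)
    return np.concatenate((v, a))

# Geostationary-like initial state (equatorial, circular) at the nominal 42164 km radius.
r0 = np.array([42164.0, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(MU_EARTH / 42164.0), 0.0])
sol = solve_ivp(eom, (0.0, 86164.1), np.concatenate((r0, v0)),   # ~one stellar day
                method="DOP853", rtol=1e-12, atol=1e-9, dense_output=True)
print("radius after one stellar day [km]:", np.linalg.norm(sol.y[:3, -1]))
```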
While the satellite vector is formulated in the Earth-centered International Celestial Reference Frame (ICRF), the model of the terrestrial gravity field is expressed in the International Terrestrial Reference Frame (ITRF). The transform between the two frames follows the IAU 2000/2006 combined precession-nutation model [28]. Polar motion is neglected, because it cannot be reliably predicted into the future [29]. This introduces an uncertainty on the order of 0.3 arcsec in the calculation [29], but does not change the dynamical properties of the system. The spin of the Earth is computed according to [30], assuming \(\Delta T\) increases 0.38 s per year (the approximate trend during the 2010-2020 period1). Instead of the complete IAU 2000/2006 precession-nutation model, a concise formulation based on Ref. [31] is used to reduce the computational burden while maintaining an accuracy of 1 arcsec.
Footnote 1: [https://www.iers.org/IERS/EN/DataProducts/EarthOrientationData/eop.html](https://www.iers.org/IERS/EN/DataProducts/EarthOrientationData/eop.html)
The acceleration of gravity is obtained from a sum of spherical harmonics using the modified forward row recursion scheme [32]. Hereafter, whenever the expansion degree of the geopotential (\(N\)) is mentioned, it means that the geopotential model is complete to degree and order \(N\) (i.e., it includes all the relevant zonal, tesseral and sectorial harmonics). Gravity calculations conform strictly to the International Earth Rotation and Reference Systems Service (IERS) recommended practice IERS Technical Note No. 36 [33]. It establishes EGM2008 [34] as the geopotential model of choice, and includes corrections for the secular drift of the low-degree zonal harmonics, as well as an improved value of the first zonal harmonic. The maximum degree of harmonic expansion in the calculations is \(N=8\), which yields the highest accuracy achievable with EGM2008 for a geostationary orbit [35].
To minimize rounding errors in the calculation of the tidal forces of the Sun and Moon (third-body perturbations), the well-conditioned expression found in [36] is used. The positions of the celestial bodies are interpolated with cubic splines using tabulated state vectors from JPL's Solar System Dynamics website1.
Footnote 1: [https://ssd.jpl.nasa.gov/horizons.cgi](https://ssd.jpl.nasa.gov/horizons.cgi)
The acceleration due to radiation pressure is computed assuming the satellite is a sphere ("cannonball" model, see [37]) with 100% specular reflectivity. An area-to-mass ratio of 6.67\(\cdot\)10\({}^{-3}\) m\({}^{2}\)/kg, reasonable for a communications satellite, was used for the calculations. The eclipses were modeled with a cylindrical shadow approximation.
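A compact sketch of these two perturbing accelerations follows. It uses the textbook differenced form of the third-body term (the propagator itself uses the better-conditioned expression of [36]) and a cannonball SRP model with a crude cylindrical-shadow test; the reflectivity coefficient and the round-number constants are assumptions for illustration.

```python
import numpy as np

MU_SUN   = 1.32712440018e11     # km^3/s^2
P_SRP    = 4.56e-6              # N/m^2, solar radiation pressure at 1 AU (round value)
CR       = 1.3                  # reflectivity coefficient (assumed; paper models a specular sphere)
A_OVER_M = 6.67e-3              # m^2/kg, area-to-mass ratio used in the paper
R_EARTH  = 6378.137             # km

def third_body(r_sat, r_body, mu_body):
    """Differenced (textbook) form of the third-body tidal acceleration [km/s^2]."""
    d = r_body - r_sat
    return mu_body * (d / np.linalg.norm(d) ** 3 - r_body / np.linalg.norm(r_body) ** 3)

def srp(r_sat, r_sun):
    """Cannonball SRP with a cylindrical Earth-shadow test [km/s^2]."""
    sun_dir = r_sun / np.linalg.norm(r_sun)
    along = np.dot(r_sat, sun_dir)                       # component towards the Sun
    perp = np.linalg.norm(r_sat - along * sun_dir)       # distance from the Earth-Sun axis
    if along < 0 and perp < R_EARTH:                     # inside the cylindrical shadow
        return np.zeros(3)
    a_mag = P_SRP * CR * A_OVER_M / 1000.0               # m/s^2 -> km/s^2
    s = r_sat - r_sun
    return a_mag * s / np.linalg.norm(s)                 # pushes the satellite away from the Sun

r_sat = np.array([42164.0, 0.0, 0.0])
r_sun = np.array([1.496e8, 0.0, 0.0])
print(third_body(r_sat, r_sun, MU_SUN), srp(r_sat, r_sun))
```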
## III Numerical Results from Orbit Propagations
A previous study [15] suggested that the long-term evolution in longitude for uncontrolled satellites released near unstable equilibrium points is very difficult to predict. Some combinations of initial longitude and epoch were so sensitive to numerical perturbations (e.g., rounding errors) that it was not possible to obtain reliable solutions beyond 60 years into the future. While strongly hinting at the complexity of the long-term behavior, the original study was limited to two initial epochs, and widely-spaced (2 degrees apart) initial longitudes. To better characterize the unpredictability of the system, this work presents a detailed sensitivity analysis by means of a Monte Carlo simulation, propagating the trajectories of large clouds of spacecraft with very similar initial conditions. A statistical analysis of the results provides better insight into the long-term evolution towards disorder.
### Choice of Initial Conditions
This work assumes that a spacecraft inside its operational window loses control abruptly. It is common practice among operators to maintain a window of +/- 0.05\({}^{\circ}\) centered on the nominal position. This is more stringent than International Telecommunication Union (ITU) requirements [38], which allow for a window twice as large. Furthermore, it is assumed that leaving the window during standard operations is a rare occurrence, considered a 3\(\sigma\) event. Thus, the standard deviation of the initial angular position (latitude and longitude) would be \(\sigma_{\lambda,\theta}=0.016^{\circ}\) (57.6 arcsec). A binormally-distributed sample of longitudes and latitudes centered on the nominal position is generated using Marsaglia's polar scheme [39]. This sample provides the initial conditions for the trajectory propagation. In reality, the longitude and latitude of an active satellite are not independent random variables. Instead, they are controlled by the station-keeping strategy. However, the goal is to study the dynamical properties of the system irrespective of the peculiarities of any particular operator. In this respect, this approach is appropriate. Furthermore, it illustrates how the complex dynamics affect the normality of the initial distribution.
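A minimal implementation of this sampling step is shown below; it treats longitude and colatitude as independent, as in the text, uses Marsaglia's polar method to generate pairs of standard normal deviates, and adopts the baseline values (nominal slot 165.3\({}^{\circ}\)E on the equator, \(\sigma_{\lambda,\theta}=0.016^{\circ}\)) as defaults.

```python
import math
import random

def marsaglia_pair():
    """One pair of independent standard normal deviates (Marsaglia's polar method)."""
    while True:
        u = 2.0 * random.random() - 1.0
        v = 2.0 * random.random() - 1.0
        s = u * u + v * v
        if 0.0 < s < 1.0:
            f = math.sqrt(-2.0 * math.log(s) / s)
            return u * f, v * f

def initial_cloud(n, lon0=165.3, colat0=90.0, sigma=0.016):
    """n (longitude, colatitude) pairs in degrees, binormally distributed about the slot."""
    cloud = []
    for _ in range(n):
        z1, z2 = marsaglia_pair()
        cloud.append((lon0 + sigma * z1, colat0 + sigma * z2))
    return cloud

print(initial_cloud(3))
```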
In a first approximation, the longitudinal dynamics of geostationary satellites can be described qualitatively by considering only harmonics of the gravitational field up to degree 2. The \(J_{22}\) term, which represents the ellipticity of the equator, gives rise to four equilibrium points in the equator; two stable (75\({}^{\circ}\)E and 108\({}^{\circ}\)W) and two unstable (165\({}^{\circ}\)E and 15\({}^{\circ}\)W) - see [15] for a detailed derivation. Under the influence of \(J_{22}\) alone, the motion of a satellite is a simple libration around the closest stable point. The third-degree sectorial and tesseral harmonics of the gravity field introduce an asymmetry in the potential of the unstable points [21]. Consequently, two additional modes of motion appear: "long" libration encompassing both stable points, and continuous circulation along the equator.
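The degree-2 picture admits a pendulum-like toy model for the longitude, sketched below. The forcing amplitude \(K\) is an assumed order-of-magnitude value (about \(1.7\cdot 10^{-3}\) deg/day\({}^{2}\)), not a fitted constant. Starting at rest 0.1\({}^{\circ}\) on either side of the 165\({}^{\circ}\)E unstable longitude, each cloud falls to a different side of the barrier and librates about a different stable point; reproducing long libration, circulation and transitions between them requires the degree-3 asymmetry and lunisolar forcing discussed in the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 1.7e-3            # deg/day^2, assumed order of magnitude of the J22 drift acceleration
LAM_S = 75.0          # deg E, stable longitude; in this model the unstable points sit 90 deg away

def toy(t, y):
    """Degree-2 'pendulum' model: lambda'' = -K sin(2 (lambda - LAM_S))."""
    lam, lam_dot = y                       # deg, deg/day
    return [lam_dot, -K * np.sin(2.0 * np.radians(lam - LAM_S))]

for lam0 in (164.9, 165.1):                # at rest, 0.1 deg on either side of the unstable point
    sol = solve_ivp(toy, (0.0, 12 * 365.25), [lam0, 0.0],
                    method="DOP853", rtol=1e-10, max_step=1.0, dense_output=True)
    lam = sol.sol(np.arange(0.0, 12 * 365.25, 5.0))[0]
    print(f"start {lam0:5.1f} E: longitude range over 12 yr = "
          f"[{lam.min():7.1f}, {lam.max():7.1f}] deg")
```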
Additionally, Earth's polar flattening (quantified by the \(J_{2}\) zonal harmonic) would cause the orbital plane to precess around the celestial pole. Third-body perturbations, on the other hand, force a precession around the pole of the orbit of the perturbing body. The combined effect of lunisolar perturbations and \(J_{2}\) causes a 53-year precession cycle about the pole of the Laplacian plane, located between the pole of the ecliptic and the axis of rotation of the planet. It is inclined approximately 7\({}^{\circ}\) from Earth's axis of figure [11; 40] (this value is not really constant, due to the precession of the lunar orbit). As a result, the inclination of the spacecraft orbit varies between zero and 14\({}^{\circ}\) over the 53-year cycle. The interaction between the inclination cycle and the longitudinal dynamics (governed by the tesseral harmonics of the gravity field) modulates the potential barrier the spacecraft faces to move across the unstable points, enabling transitions between the different modes of motion [21]. This work focuses on trajectories starting close to the point of maximum instability, which offers the potential for highest complexity, enabling transitions between continuous circulation and long libration [15]. To this effect, 165.3\({}^{\circ}\)E is chosen as nominal initial longitude for the sensitivity analysis. A survey of the literature reveals slightly different values for the position of the unstable point, depending on the model used to determine it (e.g., expansion degree of the gravity field). It is worth noting that, for a geostationary satellite, the order of magnitude of lunisolar perturbations is comparable to the irregularities of the gravity field [41]. Consequently, there is really no fixed equilibrium point, given that the relative orientations of Earth, Moon and Sun change continuously. Therefore, the exact value of the initial longitude is not critical as long as it is sufficiently close to the region of maximum instability.

Due to the irregularities of the gravity field, the altitude where the centrifugal force acting on a geostationary satellite balances the inward acceleration due to gravity is different from the theoretical value for a spherical Earth. Using the theoretical height would induce a longitudinal drift right from the beginning of the computation. An iterative solver finds the appropriate height for each initial position in the cloud to ensure the initial motion is in sync with the rotation of the planet. The corrected starting position is approximately 600 m higher than the theoretical value for a spherical mass distribution.
Due to the importance of lunisolar perturbations, the initial epoch has a marked effect on the evolution of the cloud. The baseline propagation starts on 1 January 2020 at 0:00 UTC (JDN 2458849.5) because it is one of the dates used in Ref. [15]. In later sections, the effect of varying the initial epoch will be analyzed in detail.
### Longitude Evolution for the Baseline Case
The reference solution includes a set of 600 trajectories starting on JDN 2458849.5, with the initial position bi-normally distributed around \(<\lambda>=165.3^{\text{o}}\text{E}\) and \(<\theta>=90^{\text{o}}\), and standard deviation \(\sigma_{\lambda,\theta}=0.016^{\text{o}}\). The length of propagation was set to 120 years, to encompass two complete cycles of precession of the orbital plane. The calculation considers harmonics of the gravity field up to degree and order 8, solar radiation pressure, lunisolar perturbation with precomputed ephemerides, and includes precession and nutation of Earth's axis. As indicated before, under the combined effects of lunisolar perturbations and Earth's polar flattening, the orbital inclination of a geosynchronous satellite will experience a 53-year cycle. Whenever the inclination is different from zero, seen from Earth the spacecraft will describe a figure eight trajectory (analemma) [42]. The North-South motion is due to the change in latitude along the orbit, while the East-West oscillation is caused by variations in the angular rate of motion of the projection of the spacecraft over the equator (the eastward velocity being minimal at the nodes and maximal at the points of extreme latitude). To isolate this diurnal modulation from the secular trend (which is the focus of the study), the trajectories have been sampled at an integer multiple of Earth's stellar period1 (i.e., the rotational period in inertial space). This way, the snapshots of the trajectory are taken roughly at the same point of the diurnal cycle, effectively removing it from the output data. An output interval of 5 stellar days has been chosen, which provides a sufficient sampling of the lunar cycle.
The longitudinal evolution of the satellite cloud (in terms of mean value and standard deviation) is depicted in Fig. 1. Note that, while the graph restricts longitude to the [0\({}^{\rm o}\), 360\({}^{\rm o}\)[ interval, the calculations use a continuous representation (i.e., longitude is allowed to vary from -\(\infty\) to +\(\infty\)) to compute the standard deviation. As far as position is concerned, values of the angular dispersion above 180\({}^{\rm o}\) are not meaningful. Because two points in a circle can never be separated more than 180\({}^{\rm o}\), a large dispersion only indicates that the satellites are scattered all over the equator. However, the interest here is the dynamics of the system. The continuous representation of longitude is used as an indicator of the accumulated East-West angular displacement. In that sense, it is relevant. For example, two spacecraft that move Eastwards 360\({}^{\rm o}\) and 720\({}^{\rm o}\) end up in the same longitude, but their dynamical behaviors are clearly different. The standard deviation of the continuous longitude remains sensitive to differences in drift rate, even when the spacecraft are uniformly distributed around the equator. Because the variations in scatter span six orders of magnitude, Fig. 1 uses a logarithmic representation for the standard deviation. The most striking feature of the chart is the abrupt jump in scatter that takes place approximately half a century after the initial epoch (\(t_{0}\)). Initially, the longitudinal motion is relatively ordered (\(\sigma_{\lambda}\) \(\sim\) 1\({}^{\rm o}\)) but, at the 50-year mark, a sudden transition to disorder takes place. Fifteen years later, the standard deviation reaches 250\({}^{\rm o}\) (an increase of more than two orders of magnitude). This behavior is in stark contrast with the initial expectations of the authors, who anticipated a progressive evolution towards disorder characterized by an instability scale (i.e., exponential behavior). After the 65-year mark, the scatter grows monotonically (approximately 45\({}^{\rm o}\) per year, the trend is very close to linear) until the next transition (see below).
There is a smaller transition 5 years after the initial epoch in Fig. 1 (the standard deviation changes only by a factor of five). Both occurrences (at 5 and 50 years) coincide with reversals of the direction of motion of the cloud. These reversals correspond to transitions between continuous circulation and long libration modes [15, 21], pointing strongly to a connection with the orbital plane precession cycle [40]. In fact, there is another change in trend, also accompanied by reversals, one century after the initial epoch. This is further evidence of a relation with the precession cycle, which has a period close to 50 years. Note that the 100-year transition takes place after the cloud has completely lost its spatial coherence (the standard deviation is above five revolutions). The state prior to this transition is so randomized that interpreting the results becomes difficult. Therefore, moving forward, the analysis shall prioritize the first two transitions (5 and 50 years after the initial epoch).
A detail of the first 16 years of the simulation is shown in Fig. 2, to illustrate subtler features. The dashed horizontal lines in the chart indicate the stable (S) and unstable (U) equilibrium positions, for reference. Besides the jump in scatter from 0.4\({}^{\circ}\) to 2\({}^{\circ}\) after five years, there is a cyclic variation of the standard deviation with a period of 4.2 years. It coincides with the time the cloud takes to circle the equator once.
Figure 1: Longitude evolution for baseline configuration.

Figure 2: Longitude evolution for baseline configuration, detail for first 16 years.

For each cycle, there are two maxima and two minima of the scatter. The maxima coincide with passages through the stable equilibrium longitudes 75\({}^{\circ}\)E/108\({}^{\circ}\)W (\(S_{1}\)/\(S_{2}\) in the graph). When a spacecraft approaches a stable point, its longitude drift rate increases. Therefore, the leading satellites (those that arrive first at the equilibrium position) accelerate relative to those that lag behind. The end result is an increase in the spread of longitudes, giving rise to the maxima in the standard deviation curve. Conversely, when the cloud approaches an unstable point (165\({}^{\circ}\)E/15\({}^{\circ}\)W, \(U_{1}\)/\(U_{2}\)), the leading satellites slow down first, causing the rest to catch up and diminish the scatter. The asymmetry of the potential extrema, caused by the harmonics of the gravity field of degree 3 and higher [40], translates into differences in the magnitude of the peaks and valleys of the standard deviation. The highest maxima correspond to passages through \(S_{1}\), while the deepest minima are associated with \(U_{1}\).
Histograms of longitude help visualize the evolution of the cloud. The green circles with letters in Fig. 2 signal the times selected for plotting the histograms. Points A to E correspond to extrema of the standard deviation of longitude (A coinciding with the initial epoch) which take place when the cloud crosses equilibrium points. Points E to H are evenly spaced at 6-month intervals to highlight the evolution after the direction of motion is reversed.
To improve the resolution of the histograms, they are computed with a cloud of 5000 satellites (using the same parameters for the random distribution of initial conditions). The mean value is subtracted from the longitudes to keep the curves centered on zero. Each histogram contains 300 bins of uniform size.
The histograms for points A-D are displayed in Fig. 3. They illustrate the cycle of expansion-compression due to passages through the stable (curves B and D) and unstable (curves A and C) equilibrium points, respectively. The shape of the distribution does not change substantially, but the spread experiences large variations. The highest curve is clipped to reveal details of the others (the maximum value for histogram A is 1183). For points A to D, the changes in scatter, while substantial, are largely reversible. Thus, the motion of the cloud remains highly ordered.

Fig. 3: **Longitude histograms for points A-D.**

The behavior changes dramatically when the longitudinal drift is reversed (point E), as shown in Fig. 4. Again, curve E is clipped for clarity, because its maximum value is 2145. The compression effect at E is remarkable: it reduces the standard deviation of longitude to 12 arcsec, 80% smaller than the initial value (point A, 58 arcsec). This extreme squeezing does not preserve the order of the cloud; in fact, it causes a severe degradation. Once the motion is reversed (curves F to H), the shape of the distribution changes substantially (it becomes multi-modal) and the scatter experiences a rapid increase (see Fig. 2). This irreversible behavior is very different from that observed in Fig. 3. The reason is that, having started near the point of maximum instability, the spacecraft have enough energy to climb almost to the top of the potential barrier. Therefore, they can linger near the maximum for a comparatively long time before reversing course. However, the satellites do not start with the exact same energy (due to the random distribution of initial conditions). Furthermore, they do not reach the unstable point simultaneously, meaning they face slightly different potential barriers due to the continuously varying lunisolar perturbations. The net result is that those spacecraft that reach closer to the equilibrium condition (which, as explained before, is not even a fixed point in space) will slow down for longer. On the other hand, those with lower energy get deflected earlier and start to accelerate backwards first. Therefore, once all spacecraft have reversed course, the scatter of the cloud is increased substantially.
The transition 50 years after the initial epoch, which completely disrupts the order of the cloud, is driven by the same mechanism. The change in scatter is larger because the spread of the satellite cloud when it reaches the unstable point is significantly wider. Besides a greater range of spacecraft energies due to this spread, the lunar orbit experiences large variations over a 50 year interval, meaning the potential barrier to overcome is different from the one encountered at 5 years. This results in most spacecraft reversing course as before, but some are able to move across the unstable point. The motion of the cloud becomes incoherent, as there are satellites moving in both directions. This behavior is difficult to illustrate with longitude histograms, due to the extreme disorder involved. Fortunately, the statistics of the semi-major axis (Fig. 5) provide a clear, albeit indirect, depiction of the phenomenon.
Satellites higher than the geosynchronous radius have orbital periods longer than one stellar day. This causes Earth rotation to overtake them and, seen from the ground, they drift West. Conversely, spacecraft at lower altitudes spin faster than the planet and move towards the East. Comparing Fig. 5 with Fig. 1, it becomes clear that the periods of eastward drift (at the start of the propagation and after 50 years) indeed coincide with the minima of semi-major axis. It is also noteworthy that the changes in altitude are comparatively small, tens of kilometers at most. Thus, moderate variations of height can translate into longitudinal motion reversals. Fig. 5 shows that, for the first 50 years, the standard deviation of the semi-major axis remains below 1 km, indicating that all spacecraft are moving in the same direction. Then, the scatter rapidly increases to tens of kilometers. This means there is a mix of satellites in high and low orbits, confirming that the direction of drift is no longer unique. These two populations of satellites move apart from each other continuously, causing the linear increase in standard deviation observed after 65 years.
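The connection between altitude and drift follows from Kepler's third law: to first order, the longitude drift rate is \(-\tfrac{3}{2}(\Delta a/a)\) times Earth's rotation rate, i.e., roughly 0.013\({}^{\circ}\)/day of westward drift per kilometre above the geostationary radius. The short check below uses round-number constants; it shows that kilometre- to ten-kilometre-level spreads in semi-major axis already account for relative drifts of tens of degrees per year, of the same order as the scatter growth quoted above.

```python
A_GEO = 42164.0                   # km, nominal geostationary semi-major axis
OMEGA_E = 360.9856                # deg/day, Earth rotation rate (approximate)

def drift_rate(delta_a_km):
    """First-order longitude drift rate [deg/day]; positive values drift eastward."""
    return -1.5 * (delta_a_km / A_GEO) * OMEGA_E

for da in (1.0, 10.0, 30.0):
    rate = drift_rate(da)
    print(f"delta a = {da:5.1f} km -> drift = {rate:+7.3f} deg/day = {rate * 365.25:+8.1f} deg/yr")
```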
Figure 5: Semi-major axis evolution for baseline configuration.
### Inclination Evolution for the Baseline Case
Fig. 6 presents statistics for the orbital plane inclination. Comparison with Fig. 1 shows that the episodes of longitudinal drift reversal and scatter increase take place near the minima of the mean inclination. The evolution of inclination closely follows the simplified analytical models (53-year cycle with maximum of 14\({}^{\text{o}}\), [15]). The scatter is very small, remaining below 2 minutes of arc throughout the 120 years of simulation. The times of minimum inclination coincide with brief anomalies in the standard deviation. These anomalies are largely due to the definition of inclination. When the satellite orbits are very close to the equator, there will be positive and negative spacecraft latitudes at a given instant. However, the inclination of all orbits is positive (it is always inside the interval [0\({}^{\text{o}}\),180\({}^{\text{o}}\)], by definition). This introduces an asymmetry in the distribution of inclinations, affecting the statistics. Nevertheless, it does not imply a change in physical behavior. For example, the initial standard deviation of inclination in Fig. 6 is 0.01\({}^{\text{o}}\), smaller than the initial scatter in latitude (0.016\({}^{\text{o}}\)).
### Normality Evolution for the Baseline Case
Figure 6: Inclination evolution for baseline configuration.

This section explores how the shape of the initial distribution of longitudes and inclinations is affected by the evolution towards disorder. It also examines in more detail the effect of the apparent asymmetry introduced by the definition of inclination, using normality indicators (skewness and excess kurtosis) of the longitude and inclination (Fig. 7 and Fig. 8). Skewness characterizes the asymmetry of the distribution, while kurtosis is a measure of the importance of the tails (it increases with the presence of outliers). The \(b_{1}\) estimator is used for sample skewness and the adjusted Fisher-Pearson standardized moment coefficient \(G_{2}\) for kurtosis [44]. Both skewness and excess kurtosis should be close to zero for a representative sample drawn from a normal distribution. That is indeed the case for the initial longitude, as shown in Fig. 7. On the other hand, the initial skewness of the orbital inclination is 0.98, due to the aforementioned asymmetry introduced by its definition. In fact, the skewness of a half-normal distribution1 is approximately 1 [45], which agrees very well with the observed value. Fig. 8 shows that, over most of the first 70 years, both skewness and excess kurtosis of the inclination remain close to zero. This indicates that the spikes at the start of the simulation and after 50 years are mostly mathematical artifacts caused by the definition of inclination, and do not involve a physical change in the precession of the orbital planes.
Footnote 1: Given a normally-distributed variable of zero mean \(x\), the probability density of \(\left|x\right|\) is the half-normal distribution. That is, a zero-mean normal distribution is folded about the origin.
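A minimal numerical check of this footnote (sample size and seed below are arbitrary choices, not part of the study): folding a normal sample about the origin gives a skewness close to the value 0.98 measured for the initial inclination.

```python
import numpy as np
from scipy import stats

# Skewness of a half-normal distribution, to compare with the initial
# inclination skewness (0.98) discussed in the text.
rng = np.random.default_rng(0)
folded = np.abs(rng.normal(0.0, 1.0, size=200_000))   # |N(0,1)| is half-normal

print("sample skewness :", stats.skew(folded))
# Closed form: sqrt(2)*(4 - pi)/(pi - 2)**1.5 ~ 0.995
print("analytic value  :", np.sqrt(2) * (4 - np.pi) / (np.pi - 2) ** 1.5)
```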
Fig. 7 reveals a strong correlation between the longitudinal drift reversal episodes and the loss of normality of the longitude. There is a moderate spike in skewness and kurtosis 5 years after the initial epoch. This is followed by a period of relative stability where excess kurtosis stabilizes at -0.3 (coherent with the change from a normal to a multi-modal distribution, see Fig. 4) and skewness remains close to 0 (meaning the symmetry of the distribution is less affected). Finally, after 50 years, both parameters experience wild fluctuations and eventually stabilize at values quite different from zero, meaning even the symmetry of the distribution has been lost. Closer examination (right pane of Fig. 7) unveils a periodic modulation of skewness that spans the first 50 years of simulation.
Figure 7: **Normality indicators of longitude with detail for initial 16 years.**
The skewness cycle reflects passages through \(U_{1}\) and \(U_{2}\). When the cloud of spacecraft approaches an unstable point, the leading satellites slow down first, compressing the forward tail of the distribution. Conversely, when the cloud leaves the equilibrium point, the leading spacecraft start to accelerate earlier. Thus, the forward tail stretches while the rear one remains squished. This alternating asymmetry translates into the skewness cycle. The spikes 5 years after the initial epoch are caused by the same compression-expansion effect, but exacerbated by the fact that the cloud comes to a complete stop before reversing direction (i.e., the squeezing is much more pronounced).
## 4 Effect of Initial Conditions
To achieve a good understanding of the system, it is imperative to characterize the influence of the initial conditions and try to eliminate those perturbation sources whose effect on the overall behavior is minor. This section starts with an overview of the effects of the initial longitude and epoch. Then, it removes layers of complexity from the physical model, to obtain the simplest formulation that retains the fundamental characteristics of the original system. This reduced model serves to narrow down the possible causes of the phenomenology observed.
### Effect of Initial Longitude
From the simplified analytical formulations [40], transitions between continuous circulation and long libration (i.e., the pattern observed for the baseline case) are expected for initial longitudes close to the point of maximum instability (163\({}^{\circ}\)E). Moving away from it, the long libration mode should gain preponderance, with the episodes of continuous circulation becoming shorter (because farther from the equilibrium point the initial energy is lower and it becomes more difficult to move across the potential barrier). For spacecraft starting far enough from the unstable point, continuous circulation is no longer possible and the motion becomes permanent long libration. These predictions agree well with the numerical simulations. For initial longitudes in the range 153\({}^{\circ}\)E–171\({}^{\circ}\)E, the behavior is qualitatively similar to the baseline case, with the episodes of sudden disorder becoming less pronounced as the bounds of the interval are approached. For a more comprehensive review of the effect of initial longitude, see Ref. [15]. Note that the interval of longitudes where the complex behavior is possible changes with the positions of Sun and Moon, so it may be slightly different for other initial epochs. As an example, Fig. 9 displays the behavior for an initial longitude of 152\({}^{\circ}\)E, just outside the interval of complex behavior. It shows continuous long libration without sudden increases in scatter. The cyclic variation of standard deviation due to passage across equilibrium points is present, as in the baseline configuration. On top of that cycle, there is a slow secular trend, with the average standard deviation over the cycle increasing by 2.5\({}^{\circ}\) over 120 yr.

Figure 8: **Normality indicators of orbital inclination.**
### Effect of Initial Epoch
To verify if the episodes of disorder are really linked to the orbital plane precession cycle, one can plot the time of occurrence of these episodes against the initial epoch of the simulation. Given that the satellites always start in equatorial orbits, the initial epoch coincides with the beginning of the 53-year inclination cycle. Therefore, the plot should be a straight line with unit slope. To build the plot, a date must be assigned to the transition. This requires defining an arbitrary convention because the episode is not instantaneous. In fact, there can be several consecutive reversals of the direction of motion over a period of a decade or longer.
Figure 9: Longitude evolution: initial position 152°E, initial epoch 1 Jan. 2020.
The convention adopted here delimits each episode using peaks of the standard deviation plot. The initial and final peaks are those that mark a change in the overall trend of the plot. As an example, Fig. 10 shows this criterion applied to the second transition of the baseline dataset. According to the rules stated above, the episode begins at the local maximum before the first jump in scatter (\(t_{0}\)+46 year) and finishes when the linear trend of the standard deviation starts (\(t_{0}\)+60 year). There is some degree of subjectivity in this choice: the point where the scatter starts to stabilize (\(t_{0}\)+56 year) could also be considered the end of the transition. What really matters is that the large variation in dispersion occurs in the neighborhood of 54 years after the initial epoch. In that respect, both choices are acceptable. The computations were repeated for sets of 200 trajectories with the initial epochs spaced 5 years apart over a period of 50 years. The propagations run for 130 years, in order to fully capture three disorder episodes (at approximately 5, 50 and 100 years after the initial epoch). The results are shown in Fig. 11, where the error bars indicate the start and end of the episodes, with the markers placed at the midpoint. The first disorder episode is denoted with a blue cross, the second with a red triangle and the third with a gray square. The linear regression to the simulated data shows, as expected, a very strong correlation between the initial epoch and the occurrence of the transitions. The disorder episodes take place approximately 4.3, 54 and 104 years after the start of the propagation, which agrees very well with the length of the inclination cycle (53 years). Note that the transition at 4.3 years is actually a residual of an episode that would occur at 0 years, but it is suppressed because the initial condition forces the spacecraft to start tightly packed and with zero drift. Therefore, to determine the period between disorder episodes without contamination from the initial condition, one should consider the values 0, 54 and 104 years.
Figure 10: **Convention for measuring duration (\(\Delta\theta\)) and strength (\(\Delta\sigma\)) of longitude jumps.**
Keep in mind also that the episodes are not instantaneous; they span several years (~10), so there is uncertainty in the date. All things considered, the concordance of the period between episodes with the theoretical inclination cycle is extremely good.
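The unit-slope expectation described above can be checked with a one-line regression; the sketch below uses made-up epochs and transition dates purely as placeholders for the simulated dataset.

```python
import numpy as np

# Unit-slope check: if each disorder episode lags the start of the inclination
# cycle by a roughly fixed amount, episode date = initial epoch + constant.
# The arrays below are made-up placeholders, NOT the simulated dataset.
rng = np.random.default_rng(1)
initial_epoch = np.arange(2020.0, 2070.0, 5.0)                        # yr
episode_date = initial_epoch + 54.0 + rng.normal(0.0, 1.5, initial_epoch.size)

slope, intercept = np.polyfit(initial_epoch, episode_date, 1)
print(f"slope    = {slope:.3f}  (expected ~1)")
print(f"mean lag = {np.mean(episode_date - initial_epoch):.1f} yr  (cf. ~54 yr)")
```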
**Fig. 11**: **Time of the transitions vs. initial epoch (initial longitude 165.3°E).**
A rough estimate of the strength of the transitions is obtained by recording the magnitude of the largest jump in scatter that takes place during the transition (see \(\Delta\sigma\) in Fig. 10). It was decided to measure a single jump in standard deviation instead of the total change across the transition because, when two consecutive direction reversals take place, they may partially cancel each other (one squeezes the left tail of the cloud, while the other compresses the right end). Thus, the net change can be smaller than the individual jumps. The authors believe that the magnitude of individual jumps gives a better indication of the strength of the transition. In any case, what really matters is the order of magnitude of the jumps. Exact values are not relevant because, due to the extreme sensitivity to initial conditions, different samples from the same initial distribution yield slightly different statistical parameters. As far as the order of magnitude is concerned, the net change and the individual jumps give comparable estimates. Thus, they are both acceptable strength measures. The strength vs. initial epoch plot is shown in Fig. 12. For reference, whenever a linear rate of increase of the standard deviation develops after the transition (e.g., in the baseline configuration) it has been recorded as "xx dpy", with "dpy" standing for degrees per year. Because the transition strength spans several orders of magnitude, the vertical axis uses a logarithmic scale. This has the added benefit of making the general appearance of the chart more robust. As indicated above, taking a different sample changes the jumps, typically by a factor less than 2 for a sample size of 200 spacecraft. Therefore, represented in logarithmic scale, the differences from sample to
sample are limited. The strength of the first transition is relatively uniform, on the order of 1 degree (with all the values contained between 0.5 and 6.7 degrees). The second transition shows much higher variability (with jumps between 2.4 and 151 degrees). The highest strengths are for initial epochs around 2020, 2045 and 2070, giving rise to a subsequent linear drift in scatter. The third transition displays even higher variability, causing jumps between 2.8 and 327 degrees.
## 5 Simplified Modeling
As explained in a previous study [15], the most relevant harmonics of the gravity field are those of degree below 4. In particular, the degree-3 terms introduce the asymmetry between the two unstable equilibrium points that enables transitions between continuous circulation and long libration. Radiation pressure was found to play a minor role. Thus, a simplified physical model with only lunisolar perturbations and gravitational harmonics up to degree three was prepared. It also ignored Earth's precession and nutation effects for the sake of simplicity. This reduced model is expected to behave very similarly to the baseline setup.
Figure 12: Strength of the transitions vs. initial epoch (initial longitude 165.3°E).
During the first decades, the results of the reduced model (Fig. 13) essentially coincide with the baseline solution (Fig. 1). Differences in mean longitude emerge after 60 years, but at that point the order of the cloud is already destroyed. Removing lunisolar perturbations (Fig. 14) makes the system autonomous in the Earth-fixed frame. Therefore, the transitions between modes of motion disappear, leaving only continuous long libration without sudden increases in scatter. The cyclic variation of standard deviation due to passages across equilibrium points remains. On top, there is a slow secular trend, with the average standard deviation over one cycle increasing by less than 7\({}^{\circ}\) per century. This linear trend is just a reflection of the distribution of initial conditions, and does not involve any complex dynamical behavior.
Figure 14: Longitude evolution for simplified model (N=3 only).
Figure 13: Longitude evolution for simplified model (N=3 + lunisolar perturbations).
Retaining lunisolar perturbations but only gravity harmonics of degree 2 makes the stable and unstable point pairs symmetric. This suppresses the long libration mode (see [15] for more details) forcing continuous circulation (Fig. 15). Due to the lunisolar perturbation, the potential barrier increases twice per century, causing a more pronounced deceleration of the cloud as it crosses the unstable positions. As explained before, the enhanced compression of the cloud results in increased scatter once the spacecraft accelerate. In this case, however, the increase is smooth because the motion is not reversed. The results in Fig. 14 and Fig. 15 indicate that, to preserve the qualitative behavior of the system, lunisolar perturbations and gravity harmonics up to degree 3 are required.
### _Simple Analytical Model for Earth and Moon Orbits_
There is a good agreement between the time between transitions and the theoretical period of orbital precession. This suggests that replacing the precomputed ephemerides with a simple analytical model for the positions of the Moon and Sun should yield a reasonable approximation to the behavior of the system. It would also support the hypothesis that the orbital inclination cycle causes the sudden episodes of disorder. This was tested by assuming coplanar (i.e., the lunar orbit is contained in the ecliptic) circular orbits for both Earth and Moon (with radii of \(150\cdot 10^{6}\) km and \(385\cdot 10^{3}\) km, respectively) and repeating the calculations. The physical model ignores radiation pressure and harmonics of the gravity field above degree 3. The results are presented in Fig. 16.
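A minimal sketch of such a coplanar-circular ephemeris is given below; the geocentric radii are those quoted above, while the orbital periods and the zero initial phases are assumptions introduced only for illustration.

```python
import numpy as np

# Coplanar, circular geocentric orbits for Sun and Moon (both in the ecliptic).
R_SUN  = 150.0e6    # km, Sun-Earth distance used in the text
R_MOON = 385.0e3    # km, Earth-Moon distance used in the text
T_SUN  = 365.25     # days, assumed sidereal year
T_MOON = 27.32      # days, assumed sidereal month

def third_body_positions(t_days, phase_sun=0.0, phase_moon=0.0):
    """Geocentric positions (km) of Sun and Moon in an ecliptic frame."""
    th_s = 2.0 * np.pi * t_days / T_SUN + phase_sun
    th_m = 2.0 * np.pi * t_days / T_MOON + phase_moon
    sun  = R_SUN  * np.array([np.cos(th_s), np.sin(th_s), 0.0])
    moon = R_MOON * np.array([np.cos(th_m), np.sin(th_m), 0.0])
    return sun, moon

print(third_body_positions(100.0))
```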
Fig. 15: **Longitude evolution for simplified model (N=2 + lunisolar perturbations).**
As expected, the simple model is able to reproduce the timing of the changes in scatter very well. Also, the observed transition times agree even better with the theoretical cycle of precession, with the second and third episodes occurring 53 and 106 years after the initial epoch. The transition strength becomes more uniform, with the first jump remaining close to 1\({}^{\circ}\) while the second is on the order of 10\({}^{\circ}\). Also, most of the transitions at the 50-year mark are followed by a linear increase in scatter. The third transition shows more variability but, as mentioned before, this data is harder to interpret because the changes occur after the cloud has become highly disordered.
### _Refined Analytical Model for Earth and Moon Orbits_
Fig. 16 strongly suggests that the changes in transition strength are connected to the variability of the lunar orbit, neglected by the simple analytical model. This hypothesis was tested by maintaining the circular orbits for Sun and Moon, but including the precession of the lunar orbital plane. Nodal precession is expected to be the most important factor, as it affects the inclination of the orbital plane of the Moon relative to the equator. Therefore, it changes the position of the pole of the Laplacian plane, which governs the inclination cycle of the spacecraft. The analytical model for the lunar orbit assumes a constant inclination relative to the ecliptic of 5.15\({}^{\circ}\) and a period of nodal precession of 18.6 years. The rest of the orbital parameters of the Moon were tuned to obtain the best match to the precomputed ephemerides on 1 January 2020. Like in the previous subsection, solar radiation pressure and gravity harmonics above degree 3 are not included. The results of this model are presented in Fig. 17, showing that the variability in strength is enhanced relative to the coplanar model. For reference, the right pane of Fig. 17 includes the inclination of the lunar orbit relative to the equator (dashed black line). The strongest transitions take place around 2020, 2045 and 2065; considering the simplicity of the model, the agreement with the reference solution (Fig. 11 and Fig. 12) is remarkable. Roughly
Fig. 16: Transition time and strength vs. initial epoch for simple model.
speaking, cases starting when the inclination of the lunar orbit is highest tend to show stronger transitions. This aligns with the idea that the orbital inclination of the Moon plays an important role in the strength of the episodes.
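The role of nodal precession can be visualized with the standard spherical-trigonometry relation for the inclination of the lunar orbit relative to the equator; the obliquity value and the relation itself are textbook quantities, and the node phase at the initial epoch is an arbitrary assumption.

```python
import numpy as np

# Inclination of the lunar orbit relative to Earth's equator along one
# 18.6-yr nodal cycle: cos(i_eq) = cos(eps)cos(i) - sin(eps)sin(i)cos(node)
EPS    = np.radians(23.44)   # obliquity of the ecliptic (textbook value)
I_ECL  = np.radians(5.15)    # lunar inclination to the ecliptic (as in the text)
T_NODE = 18.6                # yr, nodal precession period (as in the text)

t = np.linspace(0.0, T_NODE, 7)
node = 2.0 * np.pi * t / T_NODE          # ascending-node longitude (phase assumed)
cos_i = np.cos(EPS) * np.cos(I_ECL) - np.sin(EPS) * np.sin(I_ECL) * np.cos(node)
i_eq = np.degrees(np.arccos(cos_i))

for ti, ii in zip(t, i_eq):
    print(f"t = {ti:5.2f} yr  ->  lunar inclination to equator = {ii:5.2f} deg")
```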
## 6 Conclusions
The long-term evolution of geostationary spacecraft abandoned near the region of maximum instability of the geopotential (165\({}^{\circ}\)E) was studied with a Monte Carlo simulation. The combined effect of lunisolar perturbations and harmonics of the gravity field up to degree 3 makes the system susceptible to sudden episodes of disorder. These occur approximately every half century, at the points of minimum inclination of the precessional cycle of the orbital plane. The disorder is triggered by transitions between continuous circulation and long libration modes, which are extremely sensitive to perturbations, to the point of becoming effectively unpredictable.
While individual trajectories cannot be predicted accurately, the statistical behavior of the ensemble of spacecraft is significantly more robust. This is demonstrated by models of widely different levels of fidelity showing the same qualitative behavior. Therefore, the observed phenomena are easy to reproduce, because they depend weakly on the subtleties of the physical model.
The main conclusion is that the longitudinal motion of satellites drifting from the 165\({}^{\circ}\)E position is unpredictable over scales of 40 years, requiring careful tracking to prevent accidental collisions. On a positive note, the unpredictability is only an issue when the orbital plane is close to the equator. For the rest of the spacecraft inclination cycle (which is itself extremely regular) the motion is easily predictable. This means only a small fraction of uncontrolled satellites require frequent updates of the trajectory analysis, those that at a given point in time have small inclinations (say, below 8\({}^{\circ}\)).
Figure 17: **Transition time and strength vs. initial epoch for enhanced simple model.**
**Funding Sources**
The work of R. Flores and E. Fantino has been supported by Khalifa University of Science and Technology's internal grant CIRA-2021-65 / 8474000413. R. Flores also acknowledges financial support from the Spanish Ministry of Economy and Competitiveness "Severo Ochoa Programme for Centres of Excellence in R&D" (CEX2018-000797-S). In addition, E. Fantino received support from the Spanish Ministry of Science and Innovation under projects PID2020-112576GB-C21 and PID2021-123968NB-100.
|
2303.03872 | Large Time Behavior of Solutions to Hamilton-Jacobi Equations on
Networks | Starting from Namah and Roquejoffre (1999) and Fathi (1998), the large time
asymptotic behavior of solutions to Hamilton--Jacobi equations has been
extensively investigated by many authors, mostly on smooth compact manifolds
and the flat torus. We extend it to the case where the ambient space is a
network. For the well posedness of time dependent problems on networks, the
equation must be coupled with a "flux limiter", that is the choice of
appropriate constants on each vertex of the network. We will investigate the
effects of it on the asymptotic analysis. | Marco Pozza | 2023-03-07T13:17:53Z | http://arxiv.org/abs/2303.03872v2 | # Large Time Behavior of Solutions to Hamilton-Jacobi Equations on Networks
###### Abstract

Starting from Fathi [9] and Namah and Roquejoffre [17], the large time asymptotic behavior of solutions to Hamilton-Jacobi equations has been extensively investigated by many authors, mostly on smooth compact manifolds and the \(N\)-dimensional torus. Following recent developments due to Pozza and Siconolfi [19], we extend this asymptotic analysis to time dependent problems on networks. The main difference between this and more traditional settings is that, for the well posedness of the evolutive problem on networks, the equation must be coupled with a "flux limiter", that is the choice of appropriate constants on each vertex of the network. These constants, among other things, bound from above the time derivatives of any subsolution on the vertices. In this paper we will show how this new condition impacts the asymptotic behavior of the solutions to the Hamilton-Jacobi problem on networks.
Starting from Fathi [9] and Namah and Roquejoffre [17], the large time asymptotic behavior of solutions to Hamilton-Jacobi equations has been extensively investigated by many authors, mostly on smooth compact manifolds and the N-dimensional torus. Following recent development due to Pozza and Siconolfi [19], we extended this asymptotic analysis to time dependent problems on networks. The main difference between this and more traditional settings is that, for the well posedness of the evolutive problem on networks, the equation must be coupled with a "flux limiter", that is the choice of appropriate constants on each vertex of the network. These constants, among other things, bond from above the time derivatives of any subsolution on the vertices. In this paper we will show how this new condition impacts the asymptotic behavior of the solutions to the Hamilton-Jacobi problem on networks.
**2020 Mathematics Subject Classification:** 35B40; 35R02; 49L25; 37J51.
**Keywords:** Hamilton-Jacobi equations; Large time behavior; Aubry set; Embedded networks.
## 1 Introduction
This paper is about the large time behavior of solutions to time dependent Hamilton-Jacobi equations posed on networks.
This subject has been extensively investigated on both smooth compact manifolds and the \(N\)-dimensional torus, first in [9, 17] and subsequently in many other papers, among which we cite [2, 3, 8, 20]. They all show that under suitable assumptions, if \(v\) is a solution to the time dependent Hamilton-Jacobi equation
\[\begin{cases}\partial_{t}v+H(x,Dv)=0,\\ v(x,0)=\phi(x),\end{cases}\]
then, as \(t\) positively diverges, \(v(x,t)+ct\) uniformly converges to a solution \(u\) of the stationary equation
\[H(x,Du)=c,\]
where \(c\) is the so called _critical value_ of the Hamiltonian, i.e. it is the unique value of \(a\) such that \(H(x,Du)=a\) admits a solution.
We consider a connected network \(\Gamma\) embedded in \(\mathds{R}^{N}\) with a finite number of vertices, making up a set denoted by \(\mathbf{V}\), linked by regular simple curves \(\gamma\) parametrized in \([0,1]\), called arcs of \(\Gamma\). In our setting a Hamiltonian on \(\Gamma\) is a collection of Hamiltonians \(H_{\gamma}:[0,1]\times\mathds{R}\to\mathds{R}\) indexed by the arcs, depending on the state and momentum variables, with the crucial feature that Hamiltonians associated to arcs possessing different supports are totally unrelated.
The equations we deal with are accordingly of the form
\[\partial_{t}U(s,t)+H_{\gamma}(s,\partial_{s}U(s,t))=0,\qquad\text{on }(0,1)\times(0,\infty), \tag{1}\]
on each arc \(\gamma\), and a solution on \(\Gamma\) is a continuous function \(v:\Gamma\times[0,\infty)\to\mathds{R}\) such that, for each arc \(\gamma\), \(v(\gamma(s),t)\) solves (1) in the viscosity sense and satisfies suitable additional conditions on the discontinuity interfaces
\[\{(x,t),t\in[0,+\infty)\}\qquad\text{with }x\in\mathbf{V}.\]
It has been established in [12] in the case of junctions and in [22] for general networks that to get existence and uniqueness of solutions, equations (1) must be coupled not only with a continuous initial datum at \(t=0\), but also with a flux limiter, that is a choice of appropriate constants \(c_{x}\) for \(x\) varying in \(\mathbf{V}\). We also report the contribution of [14, 16], where the time dependent problem is studied in junctions, possibly multidimensional, with Kirchhoff type Neumann conditions at vertices.
In [22] flux limiters crucially appear in the conditions a solution must satisfy on the interfaces and, among other things, bound from above the time derivatives of any subsolution on them. Even if an initial datum is fixed, solutions can change according to the choice of flux limiter, so that flux limiters must be taken into account in the analysis of the large time behavior. Recently, a Lax-Oleinik type representation formula for the flux limited solutions to this evolutive problem has been provided in [19], extending the result of [13], where such a formula is given for the problem on a junction.
We prove here that, if \(v\) is a solution to the time dependent problem on \(\Gamma\), then there is a unique constant \(a\), depending on the flux limiter, such that, as \(t\) positively diverges, \(v(x,t)+at\) uniformly converges to a suitable continuous function \(u\) on \(\Gamma\). The main property of this function is that, for every arc \(\gamma\), \(u\circ\gamma\) is a viscosity solution to the local problem
\[H_{\gamma}(s,\partial_{s}U(s))=a,\qquad\text{on }(0,1). \tag{2}\]
While it should be possible to identify \(u\) with the unique solution to some ad hoc stationary problem posed on \(\Gamma\), we prefer to relate it to the eikonal equation studied in [23]. The main advantage of this approach is that we have a complete characterization of the eikonal problem from a dynamical point of view and a Hopf-Lax type representation formula for solutions. The drawback, however, is that in general \(u\) is a solution to the eikonal problem on the whole network except some vertices, which depend on the choice of the flux limiter.
Nevertheless it is possible to retrieve the classical convergence result even in this setting. Indeed, it is shown in [23] that the eikonal problem admits a unique critical value \(c\) for which there are solutions to the corresponding equation, namely suitable continuous functions \(u\) such that \(u\circ\gamma\) solves (2) for each arc \(\gamma\). We prove in Theorem 5.1 that, with a natural choice of the flux limiter, \(v(x,t)+ct\) uniformly converges to a solution \(u\) to the eikonal equation on the whole network.
Another difference with respect to the problem posed in more traditional settings is that the geometry of the network permits, under specific circumstances, convergence in finite time. This could be useful for future applications and numerical analysis.
The large time behavior has been studied, in the quoted literature, either by using dynamical techniques or viscosity solutions methods.
The former approach can be found in [8, 9, 17, 20] and relies on the existence of Lax-Oleinik type representation formulas for solutions to the time dependent problem. As first pointed out in [9], a crucial role is played by the _Aubry set_, which encodes some of the dynamical properties
of the Lax-Oleinik representation formulas in its structure. This type of analysis requires some strong regularity assumptions on the Hamiltonian, like strict convexity and superlinear coercivity in the second variable. It is indeed known, see [2, 8], that simple convexity alone is not enough, in general, to ensure the convergence phenomenon.
In [2, 3] the use of pure PDE methods permits the authors to relax some of these conditions. In particular they were able to prove the convergence even for non-convex Hamiltonians.
Exploiting the Lax-Oleinik type representation formula given in [19] and the dynamic characterization of the eikonal problem and the relative Aubry set in [23], here we will use a dynamic approach. The conditions adopted in this paper are a combination of the ones given in [19, 23] without any addition. To our knowledge there is no previous literature about the large time behavior of solutions to Hamilton-Jacobi equations on networks.
The paper is organized as follows: in section 2 we fix some notation and conventions. In section 3 we provide some basic facts about network and Hamiltonian defined on them, the main assumptions on the model are given as well. Section 4 introduces the eikonal equation and the time dependent problem, together with some results relevant to our analysis. Section 5 is devoted to our main result: there we present and prove the already mentioned convergence of \(v(x,t)+at\) as \(t\to\infty\). In particular we distinguish three main cases based on the values of the flux limiter, the initial datum and the dynamical properties of the Aubry set. In section 6 we briefly discuss a characterization of the critical value involving the large time behavior.
Some auxiliary results needed for our study are given in the appendices. In appendix A we write down some facts on the eikonal problem inferred from [23]. The reparametrization of curves on \(\Gamma\) and its relationship with the dynamics of the problem is the subject of appendix B. Appendix C is about the regularity of the minimal action functional.
## 2 Preliminaries
We fix a dimension \(N\) and \(\mathds{R}^{N}\) as ambient space. We also define
\[\mathds{R}^{+}=[0,+\infty),\qquad\mathcal{Q}=(0,1)\times(0,\infty).\]
Notice that \(\partial\mathcal{Q}=[0,1]\times\{0\}\cup\{0,1\}\times\mathds{R}^{+}\).
If \(E\subset\mathds{R}^{N}\) is a measurable set we denote with \(|E|\) its _Lebesgue measure_. We say that a property holds _almost everywhere_ (_a.e._ for short) if it holds up to a set of measure zero.
For all \(f\in C(E)\), we define \(\|f\|_{\infty}:=\max\limits_{x\in E}|f(x)|\).
Given two real numbers \(a\) and \(b\), we set
\[a\wedge b:=\min\{a,b\},\qquad a\lor b:=\max\{a,b\}.\]
By curve we mean throughout the paper an _absolutely continuous_ curve with support contained in \(\mathds{R}^{N}\) or \(\mathds{R}\). Let \(\xi:[0,T]\to\mathds{R}^{N}\) and \(\xi^{\prime}:[0,T^{\prime}]\to\mathds{R}^{N}\) be two curves such that \(\xi(T)=\xi^{\prime}(0)\). We define their _concatenation_ as the curve \(\xi*\xi^{\prime}:[0,T+T^{\prime}]\to\mathds{R}^{N}\) such that
\[\xi*\xi^{\prime}(t):=\begin{cases}\xi(t),&\text{if }t\in[0,T),\\ \xi^{\prime}(t-T),&\text{if }t\in\left[T,T+T^{\prime}\right].\end{cases}\]
Notice that \(*\) is an associative operation.
Given an open set \(\mathcal{O}\) and a continuous function \(u:\overline{\mathcal{O}}\to\mathds{R}\), we call _supertangents_ (resp. _subtangents_) to \(u\) at \(x\in\mathcal{O}\) the viscosity test functions from above (resp. below). If needed, we take, without explicitly mentioning, \(u\) and test function coinciding at \(x\) and test function strictly greater (resp. less) than \(u\) in a punctured neighborhood of \(x\). We say that a subtangent \(\varphi\) to
\(u\) at \(x\in\partial\mathcal{O}\) is _constrained to \(\overline{\mathcal{O}}\)_ if \(x\) is a minimizer of \(u-\varphi\) in a neighborhood of \(x\) intersected with \(\overline{\mathcal{O}}\).
If \(f\) is a locally Lipschitz continuous function we denote with \(\partial f\) its Clarke's generalized gradient, see [5, 6]. We point out that convex functions are locally Lipschitz continuous.
## 3 Networks
### Basic Definitions
An _embedded network_, or _continuous graph_, is a subset \(\Gamma\subset\mathds{R}^{N}\) of the form
\[\Gamma=\bigcup_{\gamma\in\mathcal{E}}\gamma([0,1])\subset\mathds{R}^{N},\]
where \(\mathcal{E}\) is a finite collection of regular (i.e., \(C^{1}\) with non-vanishing derivative) simple oriented curves, called _arcs_ of the network, that we assume, without any loss of generality, parameterized on \([0,1]\). Note that we are also assuming existence of one-sided derivatives at the endpoints \(0\) and \(1\). We stress out that a regular change of parameter does not affect our results.
Observe that on the support of any arc \(\gamma\), we also consider the inverse parametrization defined as
\[\widetilde{\gamma}(s):=\gamma(1-s),\qquad\text{for }s\in[0,1].\]
We call \(\widetilde{\gamma}\) the _inverse arc_ of \(\gamma\). We assume
\[\gamma((0,1))\cap\gamma^{\prime}((0,1))=\emptyset,\qquad\text{whenever }\gamma^{\prime}\neq\gamma,\widetilde{\gamma}. \tag{3}\]
We call _vertices_ the initial and terminal points of the arcs, and denote by \(\mathbf{V}\) the sets of all such vertices. Note that (3) implies that
\[\gamma((0,1))\cap\mathbf{V}=\emptyset,\qquad\text{for any }\gamma\in\mathcal{E}.\]
We assume that the network is connected, namely given two vertices there is a finite concatenation of arcs linking them. A _loop_ is an arc with initial and final point coinciding. The unique restriction we require on the geometry of the network is
**(A1)**: \(\mathcal{E}\) does not contain loops.
For each \(x\in\mathbf{V}\), we define \(\Gamma_{x}:=\{\gamma\in\mathcal{E}:\gamma(1)=x\}\).
The network \(\Gamma\) inherits a geodesic distance, denoted with \(d_{\Gamma}\), from the Euclidean metric of \(\mathds{R}^{N}\). It is clear that given \(x\), \(y\) in \(\Gamma\) there is at least a geodesic linking them. The geodesic distance is in addition equivalent to the Euclidean one.
We also consider a differential structure on the network by defining the _tangent bundle_ of \(\Gamma\), \(T\Gamma\) in symbols, as the set made up by the \((x,q)\in\Gamma\times\mathds{R}^{N}\) with \(q\) of the form
\[q=\lambda\dot{\gamma}(s),\qquad\text{if }x=\gamma(s),\,s\in[0,1],\text{ with } \lambda\in\mathds{R}.\]
Note that \(\dot{\gamma}(s)\) is univocally determined, up to a sign, if \(x\in\Gamma\setminus\mathbf{V}\) or in other words if \(s\neq 0,1\).
As in [18, 19, 23], our analysis heavily relies on the curves of the network. It is clear that the regularity of curves on \(\Gamma\) is closely linked with the regularity of the arcs and with the way these curves run along the arcs, thus we report here a result from [19] on this topic.
**Lemma 3.1**.: _For any given arc \(\gamma\) and curve \(\eta:[0,T]\to\gamma([0,1])\), the function_
\[\gamma^{-1}\circ\eta:[0,T]\to[0,1]\]
_is absolutely continuous, and_
\[\frac{d}{dt}\gamma^{-1}\circ\eta(t)=\frac{\dot{\gamma}\left(\gamma^{-1}\circ \eta(t)\right)\dot{\eta}(t)}{|\dot{\gamma}(\gamma^{-1}\circ\eta(t))|^{2}}, \qquad\text{for a.e. }t\in[0,T].\]
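For the reader's convenience, here is a heuristic one-line check of the formula (a sketch of ours, at a point where \(s:=\gamma^{-1}\circ\eta\) happens to be differentiable; the lemma itself only requires absolute continuity): writing \(\eta(t)=\gamma(s(t))\) and taking the scalar product with \(\dot{\gamma}(s(t))\),

\[\dot{\eta}(t)=\dot{\gamma}(s(t))\,\dot{s}(t)\quad\Longrightarrow\quad\dot{\gamma}(s(t))\cdot\dot{\eta}(t)=|\dot{\gamma}(s(t))|^{2}\,\dot{s}(t)\quad\Longrightarrow\quad\dot{s}(t)=\frac{\dot{\gamma}(s(t))\cdot\dot{\eta}(t)}{|\dot{\gamma}(s(t))|^{2}}.\]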
_Remark 3.2_.: We observe that any curve \(\xi:[0,T]\to\Gamma\) can be seen, thanks to its continuity and the separability of \(\mathds{R}\), as a concatenation of an at most countable amount of curves with support on a single arc of the network. More precisely we can write
\[\xi=(\gamma_{1}\circ\eta_{1})*(\gamma_{2}\circ\eta_{2})*\cdots,\]
where, for each index \(i\), \(\gamma_{i}\) is an arc of the network and \(\eta_{i}\) is a function from \([0,T_{i}]\) into \([0,1]\). Since \(\xi\) is absolutely continuous Lemma 3.1 shows that each \(\eta_{i}\) is also absolutely continuous.
### Hamiltonians on \(\Gamma\)
A Hamiltonian on \(\Gamma\) is a collection of Hamiltonians \(\mathcal{H}:=\{H_{\gamma}\}_{\gamma\in\mathcal{E}}\), where
\[H_{\gamma}:[0,1]\times\mathds{R} \longrightarrow\mathds{R}\] \[(s,\mu) \longmapsto H_{\gamma}(s,\mu),\]
satisfying
\[H_{\widetilde{\gamma}}(s,\mu)=H_{\gamma}(1-s,-\mu),\qquad\text{for any arc }\gamma. \tag{4}\]
We emphasize that, apart from the above compatibility condition, the Hamiltonians \(H_{\gamma}\) are _unrelated_.
We require any \(H_{\gamma}\) to be:
**(H1)**: continuous in both arguments;
**(H2)**: \(\lim_{|\mu|\to\infty}\inf_{s\in[0,1]}\frac{H_{\gamma}(s,\mu)}{|\mu|}=\infty\) for any \(\gamma\in\mathcal{E}\);
**(H3)**: convex in \(\mu\);
**(H4)**: strictly quasiconvex in \(\mu\), which means that, for any \(\gamma\in\mathcal{E}\), \(s\in[0,1]\), \(\mu,\mu^{\prime}\in\mathds{R}\) with \(\mu\neq\mu^{\prime}\) and \(\rho\in(0,1)\),

\[H_{\gamma}(s,\rho\mu+(1-\rho)\mu^{\prime})<H_{\gamma}(s,\mu)\lor H_{\gamma}(s,\mu^{\prime}),\]

or equivalently, under our assumptions,
\[\operatorname{Int}\{\mu\in\mathds{R}:H_{\gamma}(s,\mu)\leq a\}=\{\mu\in \mathds{R}:H_{\gamma}(s,\mu)<a\},\qquad\text{for any }a\in\mathds{R},\]
where \(\operatorname{Int}\) denotes the interior of a set.
We define the support functions
\[\sigma^{+}_{\gamma,a}(s):=\max\{\mu\in\mathds{R}:H_{\gamma}(s,\mu)=a\},\qquad \sigma^{-}_{\gamma,a}(s):=\min\{\mu\in\mathds{R}:H_{\gamma}(s,\mu)=a\}, \tag{5}\]
with the assumption that when \(\{\mu\in\mathds{R}:H_{\gamma}(s,\mu)=a\}\) is empty \(\sigma^{+}_{\gamma,a}(s)=-\infty\) and \(\sigma^{-}_{\gamma,a}(s)=\infty\). It follows from (4) that
\[\sigma^{+}_{\widetilde{\gamma},a}(s)=-\sigma^{-}_{\gamma,a}(1-s). \tag{6}\]
Notice that \(\{\mu\in\mathds{R}:H_{\gamma}(s,\mu)=a\}\) is not empty if and only if \(a\geq\min_{\mu\in\mathds{R}}H_{\gamma}(s,\mu)\), thus \(\sigma^{+}_{\gamma,a}(s)\neq-\infty\) for any \(s\in[0,1]\) if and only if \(a\geq a_{\gamma}\), where
\[a_{\gamma}:=\max_{s\in[0,1]}\min_{\mu\in\mathds{R}}H_{\gamma}(s,\mu).\]
**Proposition 3.3**.:
* _For each_ \(\gamma\in\mathcal{E}\) _and_ \(s\in[0,1]\) _the function_ \(a\mapsto\sigma^{+}_{\gamma,a}(s)\) _is continuous and increasing in_ \(\left[\min_{\mu\in\mathds{R}}H_{\gamma}(s,\mu),\infty\right)\)_;_
* _for each_ \(\gamma\in\mathcal{E}\) _and_ \(a\geq a_{\gamma}\) _the function_ \(s\mapsto\sigma^{+}_{\gamma,a}(s)\) _is continuous in_ \([0,1]\)_._
Proof.: Preliminarily we fix \(\gamma\in\mathcal{E}\). Since \(\{\mu\in\mathds{R}:H_{\gamma}(s,\mu)=a\}\) is not empty if and only if \(a\geq\min_{\mu\in\mathds{R}}H_{\gamma}(s,\mu)\), **(H4)** implies the continuity and the monotonicity of \(a\mapsto\sigma^{+}_{\gamma,a}(s)\).
Next we fix \(a\geq a_{\gamma}\) and assume by contradiction that \(s\mapsto\sigma^{+}_{\gamma,a}(s)\) is not continuous. Then there exist two sequences \(\{s_{n}\}_{n\in\mathds{N}}\) and \(\{s^{\prime}_{n}\}_{n\in\mathds{N}}\) converging to the same \(s\in[0,1]\) and such that
\[\mu:=\lim_{n\to\infty}\sigma^{+}_{\gamma,a}(s_{n})<\lim_{n\to\infty}\sigma^{+} _{\gamma,a}\left(s^{\prime}_{n}\right)=:\mu^{\prime}. \tag{7}\]
By the continuity of \(H_{\gamma}\) we have that \(H_{\gamma}(s,\mu)=a=H_{\gamma}(s,\mu^{\prime})\), therefore **(H4)** yields
\[\{\mu\in\mathds{R}:H_{\gamma}(s,\mu)\leq a\}=\left[\mu,\mu^{\prime}\right].\]
If \(\mathrm{Int}\{\mu\in\mathds{R}:H_{\gamma}(s,\mu)\leq a\}\neq\emptyset\) we have, thanks to the continuity of \(H_{\gamma}\), that there exist a \(\overline{\mu}\in(\mu,\mu^{\prime})\) and an \(N\in\mathds{N}\) such that,
\[\overline{\mu}>\sigma^{+}_{\gamma,a}(s_{n})\quad\text{and}\quad H_{\gamma}(s _{n},\overline{\mu})<a,\qquad\text{for any }n>N,\]
in contradiction with **(H4)**. If instead \(\mathrm{Int}\{\mu\in\mathds{R}:H_{\gamma}(s,\mu)\leq a\}=\emptyset\) we have that \(\mu=\mu^{\prime}\) in contradiction with (7). This shows that \(s\mapsto\sigma^{+}_{\gamma,a}(s)\) is continuous and **(ii)** follows from the arbitrariness of \(a\) and \(\gamma\).
We further define
\[a_{0}:=\max_{\gamma\in\mathcal{E}}a_{\gamma}\]
and require the additional condition
**(H5)**: for any \(\gamma\in\mathcal{E}\) with \(a_{\gamma}=a_{0}\) the map \(s\mapsto\min_{\mu\in\mathds{R}}H_{\gamma}(s,\mu)\) is constant in \([0,1]\).
Notice that if \(\gamma\in\mathcal{E}\) is such that \(a_{\gamma}=a_{0}\) then assumptions **(H4)** and **(H5)** imply that
\[\sigma^{+}_{\gamma,a_{\gamma}}=\sigma^{-}_{\gamma,a_{\gamma}},\qquad\text{on }[ 0,1]. \tag{8}\]
Under assumptions **(H1)** to **(H3)** it is natural to define, for any \(\gamma\in\mathcal{E}\), the _Lagrangian_ corresponding to \(H_{\gamma}\) as
\[L_{\gamma}(s,\lambda):=\sup_{\mu\in\mathds{R}}(\lambda\mu-H_{\gamma}(s,\mu)),\]
where the supremum is actually achieved thanks to **(H2)**. We have for each \(\lambda\in\mathds{R}\) and \(s\in[0,1]\),
\[L_{\gamma}(s,\lambda)=L_{\widetilde{\gamma}}(1-s,-\lambda). \tag{9}\]
We also derive that the Lagrangians \(L_{\gamma}\) are superlinear.
It follows from the definition that, for any \(\gamma\in\mathcal{E}\),
\[\min_{s\in[0,1]}L_{\gamma}(s,0)=\min_{s\in[0,1]}\max_{\mu\in\mathds{R}}(-H_{ \gamma}(s,\mu))=-\max_{s\in[0,1]}\min_{\mu\in\mathds{R}}H_{\gamma}(s,\mu)=-a _{\gamma}. \tag{10}\]
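As a concrete illustration of the objects introduced so far (this example is ours and is not taken from the quoted references), consider on some arc \(\gamma\) the mechanical Hamiltonian

\[H_{\gamma}(s,\mu)=\frac{1}{2}\mu^{2}-V(s),\qquad V\in C([0,1]),\]

which satisfies **(H1)** to **(H4)**. A direct computation gives

\[L_{\gamma}(s,\lambda)=\sup_{\mu\in\mathds{R}}\Bigl(\lambda\mu-\frac{1}{2}\mu^{2}+V(s)\Bigr)=\frac{1}{2}\lambda^{2}+V(s),\qquad a_{\gamma}=\max_{s\in[0,1]}\bigl(-V(s)\bigr)=-\min_{s\in[0,1]}V(s),\]

and, for \(a\geq a_{\gamma}\), \(\sigma^{\pm}_{\gamma,a}(s)=\pm\sqrt{2(a+V(s))}\); in particular \(\min_{s\in[0,1]}L_{\gamma}(s,0)=\min_{s\in[0,1]}V(s)=-a_{\gamma}\), in agreement with (10).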
## 4 Hamilton-Jacobi Equations on Networks
### Eikonal HJ Equations
Here we are interested in equations of the form
\[\mathcal{H}(x,Du)=a,\qquad\text{on }\Gamma,\] ( \[\mathcal{H}Ja\] )
where \(a\in\mathds{R}\). This notation synthetically indicates the family, for \(\gamma\) varying in \(\mathcal{E}\), of Hamilton-Jacobi equations
\[H_{\gamma}(s,\partial_{s}U)=a,\qquad\text{on }[0,1].\] ( \[HJ_{\gamma}a\] )
This problem is thoroughly analyzed in [23], where the following definition of solution is given.
**Definition 4.1**.: We say that \(w:\Gamma\to\mathds{R}\) is a _viscosity subsolution_ to \((\mathcal{H}Ja)\) if
* it is continuous on \(\Gamma\);
* \(s\mapsto w(\gamma(s))\) is a viscosity subsolution to \((HJ_{\gamma}a)\) in \((0,1)\) for any \(\gamma\in\mathcal{E}\).
We say that \(u\) is a _viscosity solution_ to \((\mathcal{H}Ja)\) if
* it is continuous;
* \(s\mapsto u(\gamma(s))\) is a viscosity solution of \((HJ_{\gamma}a)\) in \((0,1)\) for any \(\gamma\in\mathcal{E}\);
* for every vertex \(x\) there is at least one arc \(\gamma\in\Gamma_{x}\) such that \[H_{\gamma}(1,\partial_{s}\varphi(1))\geq a\] for any \(\varphi\) that is a constrained \(C^{1}\) subtangent to \(s\mapsto u(\gamma(s))\) at \(1\).
In order to provide a representation formula for solution to \((HJ_{\gamma}a)\) we extend the support functions defined in (5) to the tangent bundle \(T\Gamma\) in the following sense: for any \(a\in\mathds{R}\) we set the map \(\sigma_{a}:T\Gamma\to\mathds{R}\) such that
* if \(x=\gamma(s)\) for some \(\gamma\in\mathcal{E}\) and \(s\in(0,1)\) then \[\sigma_{a}(x,q):=\max\left\{\mu\frac{q\dot{\gamma}(s)}{|\dot{\gamma}(s)|^{2}}: \mu\in\mathds{R},H_{\gamma}(s,\mu)=a\right\}.\] It is manifest that if \(\{\mu\in\mathds{R},\,H_{\gamma}(s,\mu)=a\}\neq\emptyset\) then \[\sigma_{a}(x,q)=\left(\sigma_{\gamma,a}^{+}(s)\frac{q\dot{\gamma}(s)}{|\dot{ \gamma}(s)|^{2}}\right)\vee\left(\sigma_{\gamma,a}^{-}(s)\frac{q\dot{\gamma}(s )}{|\dot{\gamma}(s)|^{2}}\right),\] otherwise we assume that \(\sigma_{a}(x,q)=-\infty\).
* If \(x\in\mathbf{V}\) and \(q\neq 0\) then \[\sigma_{a}(x,q):=\min\max\left\{\mu\frac{q\dot{\gamma}(1)}{|\dot{\gamma}(1)|^{ 2}}:\mu\in\mathds{R},\,H_{\gamma}(1,\mu)=a\right\},\] where the minimum is taken over the \(\gamma\in\Gamma_{x}\) with \(\dot{\gamma}(1)\) parallel to \(q\);
* if \(x\in\mathbf{V}\) and \(q=0\) then \[\sigma_{a}(x,q):=0.\]
Note that the case \(x\in\mathbf{V}\), \(q\neq 0\) is more involved because there is a problem to take into account, namely different arcs ending at \(x\) could have parallel tangent vectors, in this case we should have
\[q=\lambda_{1}\dot{\gamma}_{1}(1)=\lambda_{2}\dot{\gamma}_{2}(1),\qquad\text{for arcs $\gamma_{1}\neq\gamma_{2}$ and scalars $\lambda_{1},\lambda_{2}$.}\]
We point out that thanks to (6) \(\sigma_{a}\) is a well defined function in \(T\Gamma\).
We further define, for \(a\geq a_{0}\), the semidistance on \(\Gamma\)
\[S_{a}(y,x):=\min\left\{\int_{0}^{T}\sigma_{a}\left(\xi,\dot{\xi}\right)d\tau: \xi:[0,T]\to\Gamma\text{ is a curve from $y$ to $x$}\right\},\]
whose importance is highlighted by the next Proposition.
**Proposition 4.2**.: _A function \(w:\Gamma\to\mathds{R}\) is a subsolution to (\(\mathcal{HJ}a\)) if and only if_
\[w(x)-w(y)\leq S_{a}(y,x),\qquad\text{for any $x,y\in\Gamma$.}\]
Proof.: This is a consequence of [23].
It is known that there is a unique value \(c\geq a_{0}\), called the _Mane critical value_, such that (\(\mathcal{HJ}c\)) (which is equation (\(\mathcal{HJ}a\)) with \(a=c\)) admits solutions in the sense of Definition 4.1. This critical value is characterized by being the only \(c\geq a_{0}\) such that, for all closed curves \(\xi:[0,T]\to\Gamma\),
\[\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau\geq 0\]
and for at least one closed curve the inequality above is an identity. Hereafter \(c\) will always denote the critical value for our eikonal problem.
The following set, whose definition is deeply related to the critical value \(c\), is crucial for our analysis.
**Definition 4.3**.: We call _Aubry set_ on \(\Gamma\), the closed set \(\mathcal{A}_{\Gamma}\) made up by the \(x\in\Gamma\) incident to a closed curve \(\xi:[0,T]\to\Gamma\) with \(\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau=0\). It follows from our previous discussion on the critical value that the Aubry set is nonempty.
The Aubry set is partitioned into _static classes_, defined as the equivalence classes with respect to the relation
\[\left\{x,y\in\Gamma:x\text{ and $y$ are incident to a closed curve $\xi:[0,T]\to\Gamma$ with }\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau=0\right\}.\]
Static classes provide a nice characterization of subsolutions, namely [23, Theorem 7.5] proves that a subsolution is completely determined on a static class by a single value on it. We restate this result here for reader's convenience.
**Proposition 4.4**.: _Let \(w\) be a subsolution to (\(\mathcal{HJ}c\)) and \(\Gamma^{\prime}\) be a static class of \(\mathcal{A}_{\Gamma}\). Then_
\[w(x)=w(y)+S_{c}(y,x),\qquad\text{for every $x,y\in\Gamma^{\prime}$.}\]
The main contribution of the Aubry set is due to the next Theorem.
**Theorem 4.5**.: _Let \(\Gamma^{\prime}\) be a closed subset of \(\Gamma\), \(w\in C(\Gamma)\) be a subsolution to (\(\mathcal{H}Jc\)) and define_
\[u(x):=\min_{y\in\Gamma^{\prime}}(w(y)+S_{c}(y,x)),\qquad\text{for $x\in\Gamma$.}\]
_Then \(u\) is both a solution in \(\Gamma\setminus(\Gamma^{\prime}\setminus\mathcal{A}_{\Gamma})\) and the maximal subsolution to (\(\mathcal{H}Jc\)) agreeing with \(w\) on \(\Gamma^{\prime}\). In particular, if \(\Gamma^{\prime}\subseteq\mathcal{A}_{\Gamma}\), then \(u\) is a solution on the whole \(\Gamma\). Furthermore, if \(\Gamma^{\prime}\) has nonempty intersection with all static classes of the Aubry set, then \(u\) is the unique solution in \(\Gamma\setminus(\Gamma^{\prime}\setminus\mathcal{A}_{\Gamma})\) agreeing with \(w\) on \(\Gamma^{\prime}\)._
Proof.: It is shown in Theorem A.3 that
\[x\longmapsto\min_{y\in\Gamma^{\prime}\setminus\mathcal{A}_{\Gamma}}(w(y)+S_{c}(y,x))\]
is a solution in \(\Gamma\setminus(\Gamma^{\prime}\setminus\mathcal{A}_{\Gamma})\). Similarly we can prove, using [23, Proposition 7.4], that
\[x\longmapsto\min_{y\in\Gamma^{\prime}\cap\mathcal{A}_{\Gamma}}(w(y)+S_{c}(y,x))\]
is a solution on \(\Gamma\). It follows that \(u\) satisfies the subtangent test in \(\Gamma\setminus(\Gamma^{\prime}\setminus\mathcal{A}_{\Gamma})\) as the minimum of those two solutions. Moreover Proposition 4.2 and Theorem A.3 yield that \(u\) is both a solution in \(\Gamma\setminus(\Gamma^{\prime}\setminus\mathcal{A}_{\Gamma})\) and the maximal subsolution agreeing with \(w\) on \(\Gamma^{\prime}\).
Finally, if \(\Gamma^{\prime}\) has nonempty intersection with all static classes of the Aubry set, then \(u\) is the unique solution in \(\Gamma\setminus(\Gamma^{\prime}\setminus\mathcal{A}_{\Gamma})\) agreeing with \(w\) on \(\Gamma^{\prime}\). Indeed, if \(v\) is another solution agreeing with \(w\) on \(\Gamma^{\prime}\), [23] shows that for any \(x\in\Gamma\) there is a \(y\in\Gamma^{\prime}\) such that
\[v(x)=w(y)+S_{c}(y,x)\geq u(x),\]
therefore, by the maximality of \(u\), \(u=v\) on \(\Gamma\).
An analogue result holds for the supercritical case.
**Theorem 4.6**.: [23, Theorem 7.9(ii)] _Let \(\Gamma^{\prime}\) be a closed subset of \(\Gamma\), \(a>c\), \(w\in C(\Gamma)\) be a subsolution to \((\mathcal{HJ}a)\) and define_
\[u(x):=\min_{y\in\Gamma^{\prime}}(w(y)+S_{a}(y,x)),\qquad\text{for $x\in\Gamma$.}\]
_Then \(u\) is both the unique solution in \(\Gamma\setminus\Gamma^{\prime}\) and the maximal subsolution to \((\mathcal{H}Ja)\) agreeing with \(w\) on \(\Gamma^{\prime}\)._
### Time Dependent HJ Equations
Now let us focus on the following time dependent problem on \(\Gamma\):
\[\partial_{t}v(x,t)+\mathcal{H}(x,Dv)=0,\qquad\text{on $\Gamma\times\mathds{R}^{+} $}.\] ( \[\mathcal{H}JE\] )
This notation synthetically indicates the family (for \(\gamma\) varying in \(\mathcal{E}\)) of Hamilton-Jacobi equations
\[\partial_{t}U(s,t)+H_{\gamma}(s,\partial_{s}U(s,t))=0,\qquad\text{on $ \mathcal{Q}$}.\] ( \[HJ_{\gamma}E\] )
Following [12] we call _flux limiter_ any function \(x\mapsto c_{x}\) from \(\mathbf{V}\) to \(\mathds{R}\) satisfying
\[c_{x}\geq\max_{\gamma\in\Gamma_{x}}a_{\gamma},\qquad\text{for any $x\in\mathbf{V}$.}\]
The definition of (sub/super)solutions to \((\mathcal{H}JE)\) is as follows:
**Definition 4.7**.: We say that \(w:\Gamma\times\mathds{R}^{+}\to\mathds{R}\) is a _viscosity subsolution_ to \((\mathcal{H}JE)\) with flux limiter \(c_{x}\) if
* it is continuous;
* \((s,t)\mapsto w(\gamma(s),t)\) is a viscosity subsolution to \((HJ_{\gamma}E)\) in \(\mathcal{Q}\) for any \(\gamma\in\mathcal{E}\);
* for any \(t\in(0,\infty)\) and vertex \(x\), if \(\phi\) is a \(C^{1}\) supertangent to \(w(x,\cdot)\) at \(t\) then \(\partial_{t}\phi(t)\leq-c_{x}\).
**Definition 4.8**.: We say that \(v\) is a _viscosity supersolution_ to \((\mathcal{HJE})\) if
* it is continuous;
* \((s,t)\mapsto v(\gamma(s),t)\) is a viscosity supersolution of \((\mathcal{HJ}_{\gamma}E)\) in \(\mathcal{Q}\) for any \(\gamma\in\mathcal{E}\);
* for every vertex \(x\) and \(t\in(0,\infty)\), if \(\phi\) is a \(C^{1}\) subtangent to \(v(x,\cdot)\) at \(t\) such that \(\partial_{t}\phi(t)<-c_{x}\), then there is a \(\gamma\in\mathcal{E}\) such that \(\gamma(1)=x\) and \[\partial_{t}\varphi(1,t)+H_{\gamma}(1,\partial_{s}\varphi(1,t))\geq 0\] for any \(\varphi\) that is a constrained \(C^{1}\) subtangent to \((s,t)\mapsto v(\gamma(s),t)\) at \((1,t)\). We stress out that this condition does not require the existence of constrained subtangents.
We say that \(u\) is a _viscosity solution_ to \((\mathcal{HJE})\) if it is both a viscosity subsolution and supersolution.
We also have a results concerning the existence of solutions.
**Theorem 4.9**.: [19, Theorem 7.2] _Given an initial datum \(\phi\in C(\Gamma)\) and a flux limiter \(c_{x}\), \((\mathcal{HJE})\) admits a unique solution \(v\) with flux limiter \(c_{x}\) such that \(v(x,0)=\phi(x)\) for every \(x\in\Gamma\)._
Hereafter we will usually assume that a flux limiter \(c_{x}\) is given for any \(x\in\mathbf{V}\). In view of the previous Theorem we define, for every \(t\in\mathds{R}^{+}\), the nonlinear operator \(\mathcal{S}(t)\) on \(C(\Gamma)\) such that, for each \(\phi\in C(\Gamma)\), \(\mathcal{S}(t)\phi\) is the unique solution at the time \(t\) to \((\mathcal{HJE})\) with initial datum \(\phi\) and flux limiter \(c_{x}\). The family of operators \(\{\mathcal{S}(t)\}_{t\in\mathds{R}^{+}}\) forms a semigroup whose main properties are summarized below.
**Proposition 4.10**.:
* (Semigroup property) _For any_ \(t,t^{\prime}\in\mathds{R}^{+}\) _we have_ \(\mathcal{S}(t+t^{\prime})=\mathcal{S}(t)\circ\mathcal{S}(t^{\prime})\)_;_
* (Monotonicity property) _for every_ \(\phi_{1},\phi_{2}\in C(\Gamma)\) _such that_ \(\phi_{1}\leq\phi_{2}\) _in_ \(\Gamma\)__ \[\mathcal{S}(t)\phi_{1}\leq\mathcal{S}(t)\phi_{2},\qquad\text{for any $t\in \mathds{R}^{+}$};\]
* _for any_ \(\phi\in C(\Gamma)\)_,_ \(t\in\mathds{R}^{+}\) _and_ \(a\in\mathds{R}\)_,_ \(\mathcal{S}(t)(\phi+a)=\mathcal{S}(t)\phi+a\)_._
Proof.: The proof of this Proposition is trivial in view of the formula (11) given below.
We will provide a representation formula for solution to \((\mathcal{HJE})\) using a Lagrangian defined on the whole tangent bundle \(T\Gamma\) of the network, namely the map \(L:T\Gamma\to\mathds{R}\) such that
* if \(x=\gamma(s)\) for some \(\gamma\in\mathcal{E}\) and \(s\in(0,1)\) then \[L(x,q):=L_{\gamma}\left(s,\frac{q\dot{\gamma}(s)}{|\dot{\gamma}(s)|^{2}} \right);\]
* if \(x\in\mathbf{V}\) and \(q\neq 0\) then \[L(x,q):=\min L_{\gamma}\left(1,\frac{q\dot{\gamma}(1)}{|\dot{\gamma}(1)|^{2}} \right),\] where the minimum is taken over the \(\gamma\in\Gamma_{x}\) with \(\dot{\gamma}(1)\) parallel to \(q\);
* if \(x\in\mathbf{V}\) and \(q=0\) then \[L(x,q):=-c_{x}.\]
We notice that thanks to (9) \(L\) is a well defined function in \(T\Gamma\).
Following [19] the operators \(\mathcal{S}(t)\) can then be represented through the integral formula
\[(\mathcal{S}(t)\phi)(x)=\min\left\{\phi(\xi(0))+\int_{0}^{t}L\left(\xi,\dot{\xi }\right)d\tau:\text{$\xi$ is a curve with $\xi(t)=x$}\right\}. \tag{11}\]
We stress out that the minimum in the previous formula implies that there exists an optimal curve for \((\mathcal{S}(t)\phi)(x)\).
Intuitively the distance between the endpoints of an optimal curve must be bounded according to the time in order to minimize the cost. This observation is formalized in the following Lemma.
**Lemma 4.11**.: _Let \(\phi\in C(\Gamma)\) and \(\zeta\) be an optimal curve for \((\mathcal{S}(t)\phi)(x)\), then there is a constant \(C\) depending only on \(L\) and \(\phi\) such that_
\[d_{\Gamma}(\zeta(0),\zeta(t))\leq C(t+1).\]
Proof.: From the coercivity of \(L\) we get that there is a constant \(A\geq 0\) satisfying
\[|q|-A\leq L(y,q),\qquad\text{for any $(y,q)\in T\Gamma$},\]
thus, if \(\xi:[0,t]\to\Gamma\), is any curve connecting \(y\) to \(y^{\prime}\)
\[d_{\Gamma}\left(y,y^{\prime}\right)-tA\leq\int_{0}^{t}\left|\dot{\xi}(\tau) \right|d\tau-tA\leq\int_{0}^{t}L\left(\xi,\dot{\xi}\right)d\tau. \tag{12}\]
Next we notice that by definition there is a positive \(B\) such that
\[-B\leq\phi(y)-\phi\left(y^{\prime}\right),\qquad\text{for any $y,y^{\prime}\in \Gamma$}, \tag{13}\]
while, by the optimality of \(\zeta\),
\[\phi(\zeta(0))+\int_{0}^{t}L\left(\zeta,\dot{\zeta}\right)d\tau\leq\phi(x)+tL (x,0). \tag{14}\]
Finally (12), (13) and (14) yield
\[d_{\Gamma}(\zeta(0),\zeta(t)) \leq\int_{0}^{t}L\left(\zeta,\dot{\zeta}\right)d\tau+tA\leq\int_{ 0}^{t}L\left(\zeta,\dot{\zeta}\right)d\tau+\phi(\zeta(0))-\phi(x)+tA+B\] \[\leq t(L(x,0)+A)+B\leq t\left(\sup_{x\in\Gamma}L(x,0)+A\right)+B,\]
which proves our claim.
Exploiting Lemma 4.11 we can prove a result about the regularity of the solutions to \((\mathcal{H}JE)\).
**Theorem 4.12**.: _Given \(\phi\in C(\Gamma)\) we have that_
\[\Gamma\times\mathds{R}^{+}\ni(x,t)\longmapsto(\mathcal{S}(t)\phi)(x) \tag{15}\]
_is uniformly continuous._
Proof.: It is shown in [19, Proposition 6.6] that (15) is continuous, therefore it is uniformly continuous in \(\Gamma\times[0,2]\) by the Heine-Cantor Theorem. We will also prove that (15) is uniformly continuous in \(\Gamma\times[2,\infty)\), which together with our previous statement will imply our claim.
Let \((x,t),(x^{\prime},t^{\prime})\in\Gamma\times[2,\infty)\) be such that \(|t-t^{\prime}|\leq 1\) and denote with \(\zeta\) the optimal curve for \((x,t)\) in (11). We observe that by Lemma 4.11 there is a constant \(C\) such that \((\zeta(0),x,t)\in A_{2C}\), where \(A_{2C}\) is defined in (76), and
\[d_{\Gamma}\left(x^{\prime},\zeta(0)\right)\leq \,d_{\Gamma}\left(x,x^{\prime}\right)+d_{\Gamma}(x,\zeta(0))\leq \operatorname{diam}\Gamma+C(t+1)\leq(C+\operatorname{diam}\Gamma)\left(t^{ \prime}+2\right)\] \[\leq \,2(C+\operatorname{diam}\Gamma)t^{\prime},\]
thus we have that \((\zeta(0),x,t),(\zeta(0),x^{\prime},t^{\prime})\in A_{2(C+\operatorname{diam} \Gamma)}\). Next we let \(\xi\) be an optimal curve for the action functional (75) in \((\zeta(0),x^{\prime},t^{\prime})\), then Proposition C.3 yields that there is a constant \(\ell\) such that
\[\left(\mathcal{S}\left(t^{\prime}\right)\phi\right)\left(x^{\prime}\right)-( \mathcal{S}(t)\phi)(x)\leq\int_{0}^{t^{\prime}}L\left(\xi,\dot{\xi}\right)d \tau-\int_{0}^{t}L\left(\zeta,\dot{\zeta}\right)d\tau\leq\ell\left(d_{\Gamma} \left(x,x^{\prime}\right)+\left|t-t^{\prime}\right|\right).\]
Interchanging the roles of \((x,t)\) and \((x^{\prime},t^{\prime})\) in the previous analysis we get that the map in (15) is locally \(\ell\)-Lipschitz continuous, and consequently uniformly continuous, in \(\Gamma\times[2,\infty)\).
## 5 Convergence to Steady States
We start this section presenting the main results of this article, their proofs will be given later.
Preliminarily we assume that the flux limiter is minimal, i.e.
\[c_{x}=\max_{\gamma\in\Gamma_{x}}a_{\gamma},\qquad\text{for any }x\in\mathbf{V}. \tag{16}\]
In this case we retrieve the classic convergence result of [9] adapted to our setting.
**Theorem 5.1**.: _Given a flux limiter \(c_{x}\) satisfying (16) and a \(\phi\in C(\Gamma)\), we define_
\[u(x):=\min_{y\in\mathcal{A}_{\Gamma}}\left(\min_{z\in\Gamma}(\phi(z)+S_{c}(z,y ))+S_{c}(y,x)\right),\qquad\text{for }x\in\Gamma. \tag{17}\]
_Then \(\mathcal{S}(t)\phi+ct\) uniformly converges, as \(t\) goes to \(\infty\), to \(u\)._
We point out that, setting
\[w(x):=\min_{y\in\Gamma}(\phi(y)+S_{c}(y,x)),\qquad\text{for }x\in\Gamma, \tag{18}\]
Theorems 4.5 and A.3 yield that \(u\) in (17) is the unique solution in \(\Gamma\) to \((\mathcal{H}Jc)\) agreeing with \(w\) on \(\mathcal{A}_{\Gamma}\).
One of the most important achievements of [23] is the reduction of the eikonal problem \((\mathcal{H}Ja)\) to an appropriate discrete functional equation defined on an abstract graph having the same vertices as \(\Gamma\) and edges corresponding to the arcs. The solution of said discrete functional equation is a discrete function defined on \(\mathbf{V}\) which can be uniquely extended to a solution to \((\mathcal{H}Ja)\) and vice versa. Following this observation it is reasonable to assume that the asymptotic limit in (17) could be inferred from the values of \(\mathcal{S}(t)\phi+ct\) on the vertices, ignoring the behavior of the initial datum outside of them. In general this is not true, as can be seen in the next Example.
**Example 5.2**.: Let \(\Gamma\) be a network with only two vertices and a single arc \(\gamma\) connecting them. If we define
\[H_{\gamma}(\mu)=\mu^{2},\qquad\text{for }\mu\in\mathds{R},\]
it is a simple check that the critical value of the eikonal problem defined by this Hamiltonian is \(0\). Moreover we have that \(\sigma_{\gamma,0}^{+}\equiv 0\equiv\sigma_{\gamma,0}^{-}\), thus \(S_{0}(y,x)=0\) for any \(x,y\in\Gamma\). Then, given \(\phi\in C(\Gamma)\), Theorem 5.1 yields that, as \(t\) goes to \(\infty\), \(\mathcal{S}(t)\phi\) uniformly converges to the minimum of \(\phi\), independently of where this value is attained.
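The convergence in this example can also be observed numerically. The snippet below is a naive grid discretization of the representation formula (11) on the single arc, written only for illustration (it is not the construction used in this paper); since the minimal flux limiter at the two vertices equals \(a_{\gamma}=0=L(x,0)\), the scheme needs no special treatment at the endpoints. All grid and time-step choices are arbitrary.

```python
import numpy as np

# Naive discretization of the Lax-Oleinik formula (11) on a single arc,
# for H(mu) = mu^2, hence L(lambda) = lambda^2 / 4 and critical value c = 0.
ns, nt, dt = 201, 400, 0.05
x = np.linspace(0.0, 1.0, ns)
phi = np.sin(3 * np.pi * x) + x            # continuous initial datum
u = phi.copy()

L = lambda lam: 0.25 * lam ** 2            # Legendre transform of mu -> mu^2

for _ in range(nt):
    # u_new(x_i) = min_j [ u(x_j) + dt * L((x_i - x_j) / dt) ]
    cost = u[None, :] + dt * L((x[:, None] - x[None, :]) / dt)
    u = cost.min(axis=1)

# S(t)phi should flatten towards the constant min(phi) as t = nt*dt grows.
print("max deviation from min(phi):", np.abs(u - phi.min()).max())
```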
The presence of the flux limiter in our evolutive problem, which is a novelty with respect to time dependent Hamilton-Jacobi equations in more traditional environments, introduces a parameter which greatly influences the convergence to the steady states. Assume for instance that
\[c_{x}\leq c,\qquad\text{for any }x\in\mathbf{V}, \tag{19}\]
and define \(\widetilde{\mathbf{V}}:=\{x\in\mathbf{V}\setminus\mathcal{A}_{\Gamma}:c_{x}=c\}\). In this case we have that, asymptotically, the solutions to \((\mathcal{HJE})\) do not distinguish \(\widetilde{\mathbf{V}}\) from the Aubry set, therefore we define the _extended Aubry set_
\[\widetilde{\mathcal{A}}_{\Gamma}:=\mathcal{A}_{\Gamma}\cup\widetilde{\mathbf{ V}}\]
and obtain the following generalization of Theorem 5.1.
**Theorem 5.3**.: _Given a flux limiter \(c_{x}\) satisfying (19) and a \(\phi\in C(\Gamma)\), we define_
\[u(x):=\min_{y\in\widetilde{\mathcal{A}}_{\Gamma}}\left(\min_{z\in\Gamma}(\phi (z)+S_{c}(z,y))+S_{c}(y,x)\right),\qquad\text{for }x\in\Gamma. \tag{20}\]
_Then \(\mathcal{S}(t)\phi+ct\) uniformly converges, as \(t\) goes to \(\infty\), to \(u\)._
If \(w\) is defined by (18), Theorems 4.5 and A.3 yield that, in this case, \(\mathcal{S}(t)\phi+ct\) converges to a function \(u\) which is both the unique solution in \(\Gamma\setminus\widetilde{\mathbf{V}}\) and the maximal subsolution to (\(\mathcal{H}Jc\)) agreeing with \(w\) on \(\widetilde{\mathcal{A}}_{\Gamma}\).
Now assume that there is one \(y\in\mathbf{V}\) such that \(c_{y}>c\). Let \(x\in\Gamma\) and \(\xi:[0,T]\to\Gamma\) be a curve from \(y\) to \(x\), then, for any \(t\geq T\) and \(\phi\in C(\Gamma)\),
\[(\mathcal{S}(t)\phi)(x)+ct\leq\phi(y)+\int_{0}^{T}\left(L\left(\xi,\dot{\xi} \right)+c\right)d\tau+(c-c_{y})(t-T).\]
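The right-hand side above is obtained, for instance, by testing the Lax-Oleinik formula with the competitor curve that rests at \(y\) during \([0,t-T]\) and then follows \(\xi\): since \(L(y,0)=-c_{y}\) at the vertex \(y\) (see the proof of Lemma B.11), the resting portion contributes
\[\int_{0}^{t-T}\left(L(y,0)+c\right)d\tau=(c-c_{y})(t-T).\]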
Since \(c-c_{y}<0\), this shows that
\[\lim_{t\to\infty}(\mathcal{S}(t)\phi)(x)+ct=-\infty,\qquad\text{for any }\phi\in C( \Gamma),\]
i.e. \(\mathcal{S}(t)\phi+ct\) diverges as \(t\) goes to \(\infty\). We can, however, retrieve convergence to supercritical solutions, as proved in the next Theorem.
**Theorem 5.4**.: _Given a flux limiter \(c_{x}\) such that \(c_{x}>c\) for some \(x\in\mathbf{V}\) and a \(\phi\in C(\Gamma)\) we define_
\[a:=\max_{x\in\mathbf{V}}c_{x},\qquad\mathbf{V}_{a}:=\{x\in\mathbf{V}:c_{x}=a\}\]
_and_
\[u(x):=\min_{y\in\mathbf{V}_{a}}\left(\min_{z\in\Gamma}(\phi(z)+S_{a}(z,y))+S_{ a}(y,x)\right),\qquad\text{for }x\in\Gamma.\]
_We have that there exists a time \(T_{\phi}\), depending on \(\phi\) and \(a\), such that \(\mathcal{S}(t)\phi+at\equiv u\) on \(\Gamma\) for any \(t>T_{\phi}\)._
Similarly to what happens in Theorem 5.3, if we set
\[w(x):=\min_{y\in\Gamma}(\phi(y)+S_{a}(y,x)),\qquad\text{for $x\in\Gamma$},\]
Theorems 4.6 and A.3 yield that \(\mathcal{S}(t)\phi+at\) converges to a function \(u\) which is both the unique solution in \(\Gamma\setminus\mathbf{V}_{a}\) and the maximal subsolution to (\(\mathcal{H}Ja\)) agreeing with \(w\) on \(\mathbf{V}_{a}\). Moreover, in this case the convergence occurs in finite time. Roughly speaking, this is due to the fact that
\[L(x,0)+a=0,\qquad\text{for any $x\in\mathbf{V}_{a}$},\]
which is to say that, for a curve, the cost of standing still at a point in \(\mathbf{V}_{a}\) is zero, while the cost of being outside \(\mathbf{V}_{a}\) increases with time.
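Indeed, recalling that \(L(x,0)=-c_{x}\) at every vertex (see the proof of Lemma B.11), for \(x\in\mathbf{V}_{a}\) we get the explicit chain
\[L(x,0)+a=-c_{x}+a=-a+a=0.\]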
Under specific circumstances we have that, when (19) holds, the optimal curves for \(\mathcal{S}(t)\phi\) mimic, in a certain sense, the behavior of the optimal curves for the previous case. We can exploit this to prove a finite time convergence result for \(\mathcal{S}(t)\phi+ct\).
**Proposition 5.5**.: _Assume that \(c>a_{0}\), let \(c_{x}\) be a flux limiter satisfying (19) and \(u\) be defined by (20). If in each static class of \(\mathcal{A}_{\Gamma}\) there is a vertex \(x\) with \(c_{x}=c\), then, for any \(\phi\in C(\Gamma)\), there is a constant \(T_{\phi}\) depending on \(\phi\) and such that \(\mathcal{S}(t)\phi+ct\equiv u\) on \(\Gamma\) whenever \(t\geq T_{\phi}\)._
Finite time convergence can also be achieved for some special initial data, namely subsolutions to (\(\mathcal{H}Jc\)).
**Proposition 5.6**.: _Given a flux limiter \(c_{x}\) satisfying (19) and a subsolution \(w\) to (\(\mathcal{H}Jc\)), we define_
\[u(x):=\min_{y\in\widetilde{\mathcal{A}}_{\Gamma}}(w(y)+S_{c}(y,x)),\qquad \text{for $x\in\Gamma$}.\]
_Then there is a time \(T_{w}\) depending on \(w\) such that_
\[(\mathcal{S}(t)w)(x)+ct=u(x),\qquad\text{for any $x\in\Gamma$},\,t\geq T_{w}. \tag{21}\]
### Convergence in Finite Time
The purpose of this section is to provide the proofs of Propositions 5.5 and 5.6, using some auxiliary results.
**Proposition 5.7**.: _Let \(c_{x}\) be a flux limiter satisfying (19) and \(u\) be a solution to (\(\mathcal{H}Jc\)) in \(\Gamma\setminus\widetilde{\mathbf{V}}\), then \(\mathcal{S}(t)u=u-ct\) on \(\Gamma\times\mathds{R}^{+}\)._
Proof.: First of all we fix \(\gamma\in\mathcal{E}\) and let \(\varphi\) be a \(C^{1}\) supertangent to \(u(\gamma(s))-ct\) at some \((s^{*},t^{*})\in\mathcal{Q}\). It is apparent that \(s\mapsto\varphi(s,t^{*})\) is a supertangent to \(u\circ\gamma\) at \(s^{*}\), therefore, since \(u\) is a subsolution to (\(\mathcal{H}Jc\)), we have
\[H_{\gamma}(s^{*},\partial_{s}\varphi(s^{*},t^{*}))\leq c. \tag{22}\]
Next we notice that for each \(h>0\) small enough
\[\frac{\varphi(s^{*},t^{*}-h)-\varphi(s^{*},t^{*})}{-h}\leq\frac{u(\gamma(s^{* }))-c(t^{*}-h)-u(\gamma(s^{*}))+ct^{*}}{-h}=-c,\]
which shows, together with (22), that
\[\partial_{t}\varphi(s^{*},t^{*})+H_{\gamma}(s^{*},\partial_{s}\varphi(s^{*},t ^{*}))\leq 0.\]
This, thanks to the arbitrariness of \(\varphi\), \(\gamma\) and \((s^{*},t^{*})\), yields that \(u-ct\) satisfies **(ii)** in Definition 4.7. Similarly we can show that \(u-ct\) satisfies **(ii)** in Definition 4.8. Moreover, it follows from (19) that **(iii)** in Definition 4.7 holds true for \(u-ct\). Finally, by definition, whenever \(x\in\mathbf{V}\setminus\widetilde{\mathbf{V}}\), i.e. \(c_{x}<c\), **(iii)** in Definition 4.1 holds true, therefore \(u-ct\) also satisfies **(iii)** in Definition 4.8. This yields that \(u-ct\) is a solution to (\(\mathcal{H}JE\)), which proves our claim.
Fix \(\phi\in C(\Gamma)\), let \(u\) be as in (20) and set \(\alpha:=0\vee\max_{x\in\Gamma}(\phi(x)-u(x))\). Then Propositions 5.7 and 4.10 yield that
\[(\mathcal{S}(t)\phi)(x)\leq u(x)-ct+\alpha,\qquad\text{for any }(x,t)\in\Gamma \times\mathds{R}^{+}. \tag{23}\]
The next Lemma is a consequence of this inequality.
**Lemma 5.8**.: _Given a flux limiter \(c_{x}\) satisfying (19) and a \(\phi\in C(\Gamma)\), there is a \(T_{\phi}>0\) depending only on \(\phi\) such that, for any \(x\in\Gamma\), \(t\geq T_{\phi}\) and optimal curve \(\xi\) for \((\mathcal{S}(t)\phi)(x)\), \(\xi([0,t])\cap\widetilde{\mathcal{A}}_{\Gamma}\neq\emptyset\)._
Proof.: Preliminarily we fix \((x,t)\in\Gamma\times\mathds{R}^{+}\) and an optimal curve \(\xi\) for \((\mathcal{S}(t)\phi)(x)\), then we will show that the identity
\[\xi([0,t))\cap\widetilde{\mathcal{A}}_{\Gamma}=\emptyset \tag{24}\]
can only occur when \(t\leq T_{\phi}\), where \(T_{\phi}\) is a constant depending only on \(\phi\). Thanks to the arbitrariness of \((x,t)\) and \(\xi\) this is enough to prove our claim.
We start by defining the constant
\[l_{0}:=\max\left\{c_{x}:x\in\mathbf{V}\setminus\widetilde{\mathcal{A}}_{ \Gamma}\right\}\vee\max\{a_{\gamma}:\gamma\in\mathcal{E},\,\gamma((0,1))\cap \mathcal{A}_{\Gamma}=\emptyset\},\]
then, by Lemma A.2 and (10),
\[L(x,0)+c\geq c-l_{0}>0,\qquad\text{for any }x\in\Gamma\setminus\widetilde{ \mathcal{A}}_{\Gamma}. \tag{25}\]
Assuming that (24) holds, we have that if
\[L(x,q)\geq-l_{0},\qquad\text{for every }(x,q)\in T\Gamma\text{ with }x\in\Gamma \setminus\widetilde{\mathcal{A}}_{\Gamma}, \tag{26}\]
then (23) yields
\[\min_{x\in\Gamma}\phi(x)+(c-l_{0})t\leq\phi(\xi(0))+\int_{0}^{t}\left(L\left( \xi,\dot{\xi}\right)+c\right)d\tau\leq\max_{x\in\Gamma}u(x)+\alpha,\]
which implies that
\[t\leq\frac{\max_{x\in\Gamma}u(x)-\min_{x\in\Gamma}\phi(x)+\alpha}{c-l_{0}}=:T_ {\phi}.\]
This proves our claim when (26) holds.
Next we assume that \(L(x,q)<-l_{0}\) for some \((x,q)\in T\Gamma\) with \(x\in\Gamma\setminus\widetilde{\mathcal{A}}_{\Gamma}\), then set
\[r_{\delta}:=\min\left\{|q|:(x,q)\in T\Gamma\text{ for some }x\in\Gamma \setminus\widetilde{\mathcal{A}}_{\Gamma},\,L(x,q)\leq-l_{0}-\delta\right\}\]
and \(C_{\delta}:=c-l_{0}-\delta\), where \(\delta>0\) is such that \(r_{\delta}>0\). We point out that the existence of such \(\delta\) is a consequence of (25). It follows that
\[L(x,q)+c>C_{\delta}>0,\qquad\text{for any }(x,q)\in T\Gamma\text{ with }|q|<r_{\delta},x\in\Gamma\setminus\widetilde{\mathcal{A}}_{\Gamma}. \tag{27}\]
We assume that (24) holds true and define the set
\[E:=\left\{\tau\in[0,t]:\left|\dot{\xi}(\tau)\right|<r_{\delta}\right\},\]
then (23), Proposition B.5 and Lemma B.11 show that there exist two positive constants \(A\) and \(B\), independent from \(\xi\), such that
\[A(t-|E|)r_{\delta}-B \leq A\int_{0}^{t}\left|\dot{\xi}(\tau)\right|d\tau-B\leq\int_{0}^{t} \sigma_{c}\left(\xi,\dot{\xi}\right)d\tau\leq\int_{0}^{t}\left(L\left(\xi, \dot{\xi}\right)+c\right)d\tau\] \[\leq\max_{x\in\Gamma}u(x)-\min_{x\in\Gamma}\phi(x)+\alpha.\]
This in turn yields that there is a constant \(C_{1}\) depending on \(\phi\) such that
\[t-C_{1}\leq|E|. \tag{28}\]
Similarly by (23) and (27) we have
\[(t-|E|)\min_{x\in\Gamma\setminus\widetilde{\mathcal{A}}_{\Gamma},|q|\geq r_{ \delta}}(L(x,q)+c)+C_{\delta}|E|\leq\int_{0}^{t}\left(L\left(\xi,\dot{\xi} \right)+c\right)d\tau\leq\max_{x\in\Gamma}u(x)-\min_{x\in\Gamma}\phi(x)+\alpha,\]
thus if we define
\[C_{2}:=0\vee\left(-\min_{x\in\Gamma\setminus\widetilde{\mathcal{A}}_{\Gamma},|q|\geq r_{\delta}}(L(x,q)+c)\right)\]
we get
\[-C_{2}t+(C_{\delta}+C_{2})|E|\leq\max_{x\in\Gamma}u(x)-\min_{x\in\Gamma}\phi(x )+\alpha. \tag{29}\]
Finally we combine (28) with (29) to obtain that
\[C_{\delta}t-(C_{\delta}+C_{2})C_{1}\leq\max_{x\in\Gamma}u(x)-\min_{x\in\Gamma }\phi(x)+\alpha,\]
which proves that there is a constant \(T_{\phi}\), depending only on \(\phi\), such that (24) holds true only if \(t\leq T_{\phi}\).
Lemma 5.8 shows that an optimal curve can stay outside the extended Aubry set only for a finite time. This is the foundation from which we retrieve the aforementioned finite time convergence results.
Proof of Proposition 5.6.: By Lemma 5.8 there is a constant \(T_{w}\) such that, fixed \(x\in\Gamma\), \(t\geq T_{w}\) and an optimal curve \(\xi\) for \((\mathcal{S}(t)w)(x)\), there is a \(t^{\prime}\in[0,t]\) such that \(\xi(t^{\prime})\in\widetilde{\mathcal{A}}_{\Gamma}\). Then it follows from Lemma B.11 and Proposition 4.2 that
\[(\mathcal{S}(t)w)(x)+ct= \,w(\xi(0))+\int_{0}^{t^{\prime}}\left(L\left(\xi,\dot{\xi} \right)+c\right)d\tau+\int_{t^{\prime}}^{t}\left(L\left(\xi,\dot{\xi}\right)+c \right)d\tau\] \[\geq \,w(\xi(0))+S_{c}(\xi(0),\xi(t^{\prime}))+S_{c}(\xi(t^{\prime}), x)\geq w(\xi(t^{\prime}))+S_{c}(\xi(t^{\prime}),x)\] \[\geq \,\min_{y\in\widetilde{\mathcal{A}}_{\Gamma}}(w(y)+S_{c}(y,x))=u (x).\]
Thanks to the arbitrariness of \((x,t)\), this shows that
\[(\mathcal{S}(t)w)(x)+ct\geq u(x),\qquad\text{for any }x\in\Gamma,t\geq T_{w}. \tag{30}\]
Finally, since \(w\leq u\), (23) yields
\[(\mathcal{S}(t)w)(x)+ct\leq u(x),\qquad\text{for any }(x,t)\in\Gamma\times \mathds{R}^{+},\]
which, together with (30), proves (21).
We conclude this section by proving a more general version of Proposition 5.5 using Lagrangian parametrizations, see Definition B.9.
**Proposition 5.9**.: _Assume that every \(\gamma\in\mathcal{E}\) admits a \(c\)-Lagrangian reparametrization, let \(c_{x}\) be a flux limiter satisfying (19) and \(u\) be defined by (20). If in each static class of \(\mathcal{A}_{\Gamma}\) there is a vertex \(x\) with \(c_{x}=c\), then, for any \(\phi\in C(\Gamma)\), there is a constant \(T_{\phi}\) depending on \(\phi\) such that \(\mathcal{S}(t)\phi+ct\equiv u\) on \(\Gamma\) whenever \(t\geq T_{\phi}\)._
Notice that, by Lemma A.2 and Theorem B.13, this Proposition depends on the dynamical properties of the Aubry set as well as on the flux limiter. In particular, if \(c>a_{0}\), then each \(\gamma\in\mathcal{E}\) has a \(c\)-Lagrangian reparametrization, i.e. Proposition 5.9 implies Proposition 5.5.
Proof.: Recall that, by assumption, every \(\gamma\in\mathcal{E}\) admits a \(c\)-Lagrangian reparametrization. Let \(w\) be as in (18), then Theorem A.3 and Proposition 4.10 yield
\[(\mathcal{S}(t)w)(x)+ct\leq(\mathcal{S}(t)\phi)(x)+ct,\qquad\text{for any }(x,t)\in\Gamma\times\mathds{R}^{+}.\]
It follows from Proposition 5.6 that
\[(\mathcal{S}(t)\phi)(x)+ct\geq u(x),\qquad\text{for any }x\in\Gamma,\,t\geq T_{w}, \tag{31}\]
where \(T_{w}\) is a constant depending on \(\phi\). Next we fix \(x\in\Gamma\) and let \(y\in\widetilde{\mathcal{A}}_{\Gamma}\) and \(z\in\Gamma\) be such that
\[u(x)=w(y)+S_{c}(y,x)=\phi(z)+S_{c}(z,y)+S_{c}(y,x), \tag{32}\]
then we have that there exist two simple curves \(\xi_{1}:[0,T_{1}]\to\Gamma\) and \(\xi_{2}:[0,T_{2}]\to\Gamma\) optimal for \(S_{c}(z,y)\) and \(S_{c}(y,x)\), respectively. Exploiting Proposition 4.4 and our assumptions, we assume without loss of generality that \(y\in\mathbf{V}\) and \(c_{y}=c\). Moreover, since every \(\gamma\in\mathcal{E}\) admits a \(c\)-Lagrangian reparametrization, by Remark 3.2, Proposition B.4 and Lemma B.3 we can also assume that \(\xi_{1}\) and \(\xi_{2}\) have \(c\)-Lagrangian parametrizations. If we define, for any \(t\geq T_{1}+T_{2}\),
\[\xi_{t}(r):=\begin{cases}\xi_{1}(r),&\text{if }r\in[0,T_{1}],\\ y,&\text{if }r\in(T_{1},t-T_{2}),\\ \xi_{2}(r-(t-T_{2})),&\text{if }r\in[t-T_{2},t],\end{cases}\]
it is then apparent that
\[\int_{0}^{t}\left(L\left(\xi_{t},\dot{\xi}_{t}\right)+c\right)d\tau=S_{c}(z,y) +S_{c}(y,x),\qquad\text{for any }t\geq T_{1}+T_{2},\]
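Indeed, the constant piece of \(\xi_{t}\) at \(y\) costs nothing because \(c_{y}=c\),
\[\int_{T_{1}}^{t-T_{2}}\left(L(y,0)+c\right)d\tau=(c-c_{y})(t-T_{1}-T_{2})=0,\]
while, since \(\xi_{1}\) and \(\xi_{2}\) have \(c\)-Lagrangian parametrizations and are optimal, the first and third pieces contribute exactly \(S_{c}(z,y)\) and \(S_{c}(y,x)\).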
thus in view of (32) we get
\[(\mathcal{S}(t)\phi)(x)+ct\leq u(x),\qquad\text{for any }t\geq T_{1}+T_{2}. \tag{33}\]
Finally Theorem B.13 yields that
\[T_{\gamma}(c):=\{t>0:\gamma\text{ has a }c\text{-Lagrangian reparametrization }\vartheta:[0,t]\to\Gamma\}\]
is a compact interval for all \(\gamma\in\mathcal{E}\), i.e.
\[T_{\gamma}(c)=\left[\underline{T}_{\gamma}(c),\overline{T}_{\gamma}(c)\right];\]
hence it is a simple check that the constant \(T_{c}:=\sum_{\gamma}\overline{T}_{\gamma}(c)\), which is independent from \(x\) and \(\phi\), is bigger than both \(T_{1}\) and \(T_{2}\). Then, by (33) and the arbitrariness of \(x\),
\[(\mathcal{S}(t)\phi)(x)+ct\leq u(x),\qquad\text{for any }x\in\Gamma,\,t\geq 2T_{ c}, \tag{34}\]
therefore, setting \(T_{\phi}:=2T_{c}\lor T_{w}\), (31) and (34) conclude the proof.
### Convergence in the General Case
Here we will prove Theorem 5.3. In order to do so we introduce the set of uniform limits of \(\mathcal{S}(t)\phi+ct\) and analyze the dynamical properties of the extended Aubry set. This analysis is similar to the one performed in [8].
First of all we observe that the vertices in \(\widetilde{\mathbf{V}}\) and the static classes of \(\mathcal{A}_{\Gamma}\) form a partition of \(\widetilde{\mathcal{A}}_{\Gamma}\), whose elements we will henceforth refer to as _static classes_ of the extended Aubry set.
**Lemma 5.10**.: _If \(\Gamma^{\prime}\) is a static class of \(\widetilde{\mathcal{A}}_{\Gamma}\) and \(w\) is a subsolution to (\(\mathcal{H}\)Jc) then_
\[w(x)=w(y)+S_{c}(y,x),\qquad\text{for any }x,y\in\Gamma^{\prime}. \tag{35}\]
Proof.: If \(\Gamma^{\prime}\subseteq\widetilde{\mathbf{V}}\), i.e. \(\Gamma^{\prime}\) is a singleton, (35) is manifest, otherwise \(\Gamma^{\prime}\subseteq\mathcal{A}_{\Gamma}\) and our claim is a consequence of Proposition 4.4.
The asymptotic character of our analysis requires the use of a special class of curves.
**Definition 5.11**.: We call _critical curve_ any curve \(\zeta:\mathds{R}\to\Gamma\) with support contained in the extended Aubry set and such that
\[\int_{t_{1}}^{t_{2}}\left(L\left(\zeta,\dot{\zeta}\right)+c\right)d\tau=S_{c} (\zeta(t_{1}),\zeta(t_{2})),\qquad\text{for any }t_{2}\geq t_{1}.\]
As a consequence of Lemma B.11 we have that \(\zeta\) has a \(c\)-Lagrangian parametrization.
**Proposition 5.12**.: _Each static class of the extended Aubry set contains a periodic critical curve._
Proof.: Given a static class \(\Gamma^{\prime}\) there is a closed simple curve \(\xi:[0,T]\to\Gamma\) with support contained in it such that
\[\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau=0.\]
If \(c\) is admissible for \(\xi\) (see Definition B.12) we can assume, thanks to Theorem B.13, that \(\xi\) has a \(c\)-Lagrangian parametrization. We then have, thanks to Lemma A.1, that
\[\zeta(t):=\xi\left(t-\left\lfloor\frac{t}{T}\right\rfloor T\right),\qquad \text{for }t\in\mathds{R},\]
is a periodic critical curve contained in \(\Gamma^{\prime}\).
If \(c\) is not admissible there is an \(x\in\Gamma^{\prime}\) such that \(L(x,0)=-c\), thus the curve \(\zeta:\mathds{R}\to\{x\}\) is a periodic critical curve contained in \(\Gamma^{\prime}\).
Now assume that (19) holds, then, given a \(\phi\in C(\Gamma)\), \(u\) defined by (20) and \(w\) as in (18), we have by (23) and Proposition 4.10 that
\[(\mathcal{S}(t)w)(x)+ct\leq(\mathcal{S}(t)\phi)(x)+ct\leq u(x)+\alpha,\qquad \text{for any }(x,t)\in\Gamma\times\mathds{R}^{+}. \tag{36}\]
Thanks to this and Theorem 4.12, the Arzela-Ascoli Theorem yields that for any positive and diverging sequence \(\{t_{n}\}_{n\in\mathds{N}}\), up to subsequences, \(\mathcal{S}(t_{n})\phi+ct_{n}\) converges uniformly to some continuous function \(f\). We denote with \(\omega_{\mathcal{S}}(\phi)\) the set of uniform limits of \(\mathcal{S}(t)\phi+ct\). We point out that by (36) and Proposition 5.6
\[f(x)\geq u(x),\qquad\text{for any }f\in\omega_{\mathcal{S}}(\phi),x\in\Gamma. \tag{37}\]
We further set the semilimit
\[\underline{\phi}(x):=\sup\left\{\limsup_{n\to\infty}(\mathcal{S}(t_{n})\phi)(x_{n} )+ct_{n}\right\}, \tag{38}\]
where the supremum is taken over the sequences \(\{x_{n}\}_{n\in\mathds{N}}\) converging to \(x\) and the positive diverging sequences \(\{t_{n}\}_{n\in\mathds{N}}\). In view of the uniform continuity of \((\mathcal{S}(t)\phi)(x)\) proved in Theorem 4.12, \(\underline{\phi}\) is continuous and the sequences \(\{x_{n}\}\) may be chosen identically equal to \(x\). It follows that
\[\underline{\phi}(x)=\sup\{f(x):f\in\omega_{\mathcal{S}}(\phi)\}. \tag{39}\]
**Proposition 5.13**.: _Given \(\phi\in C(\Gamma)\) and a flux limiter satisfying (19), let \(\underline{\phi}\) be as in (38). Then \(\underline{\phi}\) is a subsolution to (\(\mathcal{H}\)Jc)._
Proof.: To prove our claim it is enough to show that \(\underline{\phi}\circ\gamma\) is a subsolution to \((HJ_{\gamma}c)\) for any \(\gamma\in\mathcal{E}\).
We start by fixing a \(\gamma\in\mathcal{E}\), a supertangent \(\varphi\) to \(\underline{\phi}\circ\gamma\) at a point \(\overline{s}\in(0,1)\), a \(\delta>0\) and a sequence \(\{t_{n}\}_{n\in\mathds{N}}\) such that \(t_{n}>\delta\) for all \(n\in\mathds{N}\) and
\[\lim_{n\to\infty}(\mathcal{S}(t_{n})\phi)(\gamma(\overline{s}))+ct_{n}= \underline{\phi}(\gamma(\overline{s})).\]
We further set for each \(n\in\mathds{N}\)
\[v_{n}:[0,1]\times[-\delta,\delta] \longrightarrow\mathds{R},\] \[(s,t) \longmapsto(\mathcal{S}(t_{n}+t)\phi)(\gamma(s))+c(t_{n}+t),\]
then (36), Theorem 4.12 and the Arzela-Ascoli Theorem yield that, up to subsequences, \(\{v_{n}\}\) uniformly converges to a \(v\in C([0,1]\times[-\delta,\delta])\). It is manifest that each \(v_{n}\) is a viscosity solution to
\[\partial_{t}U(s,t)+H_{\gamma}(s,\partial_{s}U(s,t))=c,\qquad\text{on }(0,1) \times(-\delta,\delta) \tag{40}\]
and standard stability properties of the viscosity solutions (see e.g. [1, Proposition II.2.2]) show that also \(v\) is a viscosity solution to (40). By definition we have that
\[v(\overline{s},0)=\underline{\phi}(\gamma(\overline{s}))\qquad\text{and} \qquad v(s,t)\leq\underline{\phi}(\gamma(s)),\quad\text{for any }(s,t)\in[0,1]\times[-\delta,\delta],\]
therefore \(\varphi\) is a supertangent to \(v\) at \((\overline{s},0)\). Since \(v\) is a viscosity solution to (40) it follows that
\[H_{\gamma}(\overline{s},\partial_{s}\varphi(\overline{s}))\leq c,\]
then the arbitrariness of \(\varphi\) and \(\overline{s}\) proves that \(\underline{\phi}\circ\gamma\) is a subsolution to \((HJ_{\gamma}c)\).
The next results concern the behavior of subsolutions to (\(\mathcal{H}Jc\)) and elements of \(\omega_{\mathcal{S}}\) on critical curves.
**Lemma 5.14**.: _Given \(\phi\in C(\Gamma)\) and a flux limiter satisfying (19), let \(f\in\omega_{\mathcal{S}}(\phi)\) and \(w\) be a subsolution to (\(\mathcal{H}\)Jc). For any periodic critical curve \(\zeta\) the function_
\[t\longmapsto(\mathcal{S}(t)f)(\zeta(t))+ct-w(\zeta(t))\]
_is constant._
Proof.: Let \(\{t_{n}\}_{n\in\mathds{N}}\) and \(\{t^{\prime}_{n}\}_{n\in\mathds{N}}\) be two positive diverging sequences such that \(\lim_{n}\zeta(t_{n})=\zeta(0)\) and \(\lim_{n}\left\|\mathcal{S}(t^{\prime}_{n})\phi+ct^{\prime}_{n}-f\right\|_{ \infty}=0\). We also assume, without loss of generality, that \(\lim_{n}t^{\prime}_{n}-t_{n}=\infty\) and \(\mathcal{S}(t^{\prime}_{n}-t_{n})\phi+c(t^{\prime}_{n}-t_{n})\) uniformly converges to \(g\in\omega_{\mathcal{S}}(\phi)\). It follows from (11) that
\[\left\|\mathcal{S}\left(t^{\prime}_{n}\right)\phi+ct^{\prime}_{n}- \mathcal{S}(t_{n})g-ct_{n}\right\|_{\infty} =\left\|\mathcal{S}\left(t_{n}+t^{\prime}_{n}-t_{n}\right)\phi- \mathcal{S}(t_{n})g+c\left(t^{\prime}_{n}-t_{n}\right)\right\|_{\infty}\] \[\leq\left\|\mathcal{S}\left(t^{\prime}_{n}-t_{n}\right)\phi+c \left(t^{\prime}_{n}-t_{n}\right)-g\right\|_{\infty},\]
which shows that
\[\lim_{n\to\infty}\|\mathcal{S}(t_{n})g+ct_{n}-f\|_{\infty}=0. \tag{41}\]
Next we have by Lemma 5.10 that, for any \(t_{2}\geq t_{1}\geq 0\),
\[(\mathcal{S}(t_{2})g)(\zeta(t_{2}))+ct_{2}-(\mathcal{S}(t_{1})g) (\zeta(t_{1}))-ct_{1} \leq\int_{t_{1}}^{t_{2}}\left(L\left(\zeta,\dot{\zeta}\right)+c \right)d\tau=S_{c}(\zeta(t_{1}),\zeta(t_{2}))\] \[=w(\zeta(t_{2}))-w(\zeta(t_{1}))\]
and consequently that \(t\mapsto(\mathcal{S}(t)g)(\zeta(t))+ct-w(\zeta(t))\) is nonincreasing. This monotonicity and (36) imply the existence of a \(C\in\mathds{R}\) such that
\[\lim_{t\to\infty}(\mathcal{S}(t)g)(\zeta(t))+ct-w(\zeta(t))=C. \tag{42}\]
Finally we have by (41) and (42) that, for any \(t\in\mathds{R}^{+}\),
\[C=\lim_{n\to\infty}(\mathcal{S}(t+t_{n})g)(\zeta(t+t_{n}))+c(t+t_{n})-w(\zeta( t+t_{n}))=(\mathcal{S}(t)f)(\zeta(t))+ct-w(\zeta(t)).\]
**Lemma 5.15**.: _Let \(\zeta\) be a critical curve and define, for each \(\rho\in(0,1)\), \(\zeta_{\rho}(t):=\zeta(\rho t)\). Then_
\[\int_{t_{1}}^{t_{2}}\left(L\left(\zeta_{\rho},\dot{\zeta}_{\rho}\right)+c \right)d\tau\leq S(\zeta_{\rho}(t_{1}),\zeta_{\rho}(t_{2}))+o(1-\rho),\qquad \text{for any }t_{2}\geq t_{1}, \tag{43}\]
_where \(o(\cdot)\) is the Landau symbol._
Proof.: Preliminarily we define the set \(E\) made up by the \(t\in\mathds{R}\) such that \(\zeta\) is differentiable in \(t\), \(\dot{\zeta}(t)\neq 0\) and \(\zeta(t)\notin\mathbf{V}\). If \(t\in E\) there is a \(\gamma\in\mathcal{E}\) and an \(s\in(0,1)\) such that \(\zeta(t)=\gamma(s)\) and
\[L\left(\zeta(t),\dot{\zeta}(t)\right)+c=\sigma_{c}\left(\zeta(t),\dot{\zeta}(t )\right)=\sigma_{\gamma,c}^{+}(s)\frac{\dot{\zeta}(t)\dot{\gamma}(s)}{|\dot{ \gamma}(s)|^{2}}, \tag{44}\]
therefore we have that \(q\mapsto\sigma_{c}(\zeta(t),q)\) is differentiable in \(\dot{\zeta}(t)\) and
\[\dot{\zeta}(t)\partial_{q}\sigma_{c}\left(\zeta(t),\dot{\zeta}(t)\right)= \sigma_{c}\left(\zeta(t),\dot{\zeta}(t)\right).\]
In particular (44) and Lemma B.11 yield that \(q\mapsto\sigma_{c}(\zeta(t),q)\) is a subtangent to \(q\mapsto L(\zeta(t),q)+c\) at \(\dot{\zeta}(t)\) for all \(t\in E\), thus
\[\sigma_{c}\left(\zeta(t),\dot{\zeta}(t)\right)\in\dot{\zeta}(t)\partial_{q}L \left(\zeta(t),\dot{\zeta}(t)\right),\qquad\text{for any }t\in E. \tag{45}\]
We set, for any \(\rho\in(0,1)\), the function \(\ell_{\rho}:\mathds{R}\to\mathds{R}\) such that \(\ell_{\rho}(t)\) is the projection of \(\sigma_{c}\left(\zeta_{\rho}(t),\dot{\zeta}_{\rho}(t)\right)\) on \(\dot{\zeta}_{\rho}(t)\partial_{q}L\left(\zeta_{\rho}(t),\dot{\zeta}_{\rho}(t)\right)\) whenever \(\rho t\in E\) and
\[\ell_{\rho}(t):=\sigma_{c}\left(\zeta_{\rho}(t),\dot{\zeta}_{\rho}(t)\right), \qquad\text{otherwise}.\]
By [15, Theorem 1.3.28] the functions \(\ell_{\rho}\) are measurable, and it follows from Proposition B.10 and [5, Theorem 2.8.1] that
\[\lim_{\rho\to 1^{-}}\ell_{\rho}(t)=\sigma_{c}\left(\zeta(t),\dot{\zeta}(t)\right), \qquad\text{for a.e. }t\in\mathds{R}. \tag{46}\]
Thanks to [5, Proposition 2.4.3] we have that
\[L\left(\zeta(\rho t),\rho\dot{\zeta}(\rho t)\right)-L\left(\zeta(\rho t),\dot {\zeta}(\rho t)\right)\leq\ell_{\rho}(t)(\rho-1),\qquad\text{for any }\rho\in(0,1),\,\rho t\in E,\]
therefore it follows from (44) that, for any \(\rho\in(0,1)\) and \(\rho t\in E\),
\[L\left(\zeta_{\rho}(t),\dot{\zeta}_{\rho}(t)\right)+c\leq\sigma_{c}\left( \zeta(\rho t),\dot{\zeta}(\rho t)\right)+\ell_{\rho}(t)(\rho-1). \tag{47}\]
Next we define \(E_{0}\) as the set made up by the \(t\in\mathds{R}\) such that \(\dot{\zeta}(t)=0\), then it is apparent that, for any \(\rho\in(0,1)\) and a.e. \(\rho t\in E_{0}\),
\[L\left(\zeta_{\rho}(t),\dot{\zeta}_{\rho}(t)\right)+c=L\left(\zeta(\rho t), \dot{\zeta}(\rho t)\right)+c=\sigma_{c}\left(\zeta(\rho t),\dot{\zeta}(\rho t )\right). \tag{48}\]
Notice that \(\mathds{R}\setminus(E\cup E_{0})\) is a set of measure zero, thus (47), (48) and the positive homogeneity of \(\sigma_{c}\) in the second variable yield, for any \(\rho\in(0,1)\) and a.e. \(t\in\mathds{R}\),
\[L\left(\zeta_{\rho}(t),\dot{\zeta}_{\rho}(t)\right)+c\leq\sigma_{c}\left( \zeta_{\rho}(t),\dot{\zeta}_{\rho}(t)\right)+\left(\frac{1}{\rho}\sigma_{c} \left(\zeta_{\rho}(t),\dot{\zeta}_{\rho}(t)\right)-\ell_{\rho}(t)\right)(1- \rho). \tag{49}\]
We point out that by Proposition B.10, [6, Corollary to Proposition 2.2.6] and (45) there is a constant \(M\) independent from \(\rho\) such that
\[\left|\sigma_{c}\left(\zeta(t),\dot{\zeta}(t)\right)\right|\leq M\quad\text{ and}\quad\left|\ell_{\rho}(t)\right|\leq M\qquad\text{for a.e }t\in\mathds{R},\]
therefore (43) follows from (46), (49) and the dominated convergence Theorem.
Thanks to the previous Lemma we can provide an extension of Lemma 5.14.
**Lemma 5.16**.: _Given \(\phi\in C(\Gamma)\) and a flux limiter satisfying (19), let \(f\in\omega_{\mathcal{S}}(\phi)\) and \(w\) be a subsolution to (\(\mathcal{H}Jc\)). For any periodic critical curve \(\zeta\) the function_
\[t\longmapsto f(\zeta(t))-w(\zeta(t)) \tag{50}\]
_is constant._
Proof.: We proceed by contradiction, assuming that (50) is not constant. Since \(\zeta\) is periodic we assume without loss of generality that (50) is decreasing in a neighborhood \(I\) of \(0\). By Lebesgue's Theorem on the differentiability of monotone functions we have that (50) is differentiable a.e. in \(I\), thus we further assume that it is differentiable at \(0\) with derivative \(m:=\frac{d}{dt}\big(f(\zeta(t))-w(\zeta(t))\big)\big|_{t=0}<0\). It follows that
\[f(\zeta(t))-w(\zeta(t))\leq f(\zeta(0))-w(\zeta(0))+mt+o(t),\qquad\text{for any }t\in I. \tag{51}\]
Thanks to Lemma 5.15 we have that, for any \(t\geq 0\) and \(\rho\in(0,1)\),
\[(\mathcal{S}(t)f)(\zeta(t))\leq f(\zeta((1-\rho)t))+\int_{\left(1-\frac{1}{\rho}\right)t}^{ \frac{t}{\rho}}\left(L\left(\zeta_{\rho},\dot{\zeta}_{\rho}\right)+c\right)d\tau\] \[\leq f(\zeta((1-\rho)t))+S(\zeta((1-\rho)t),\zeta(t))+o(1-\rho),\]
hence Lemma 5.10 yields that for any \(t\in\mathds{R}^{+}\) and \(\rho\in(0,1)\),
\[(\mathcal{S}(t)f)(\zeta(t))-w(\zeta(t))\leq f(\zeta((1-\rho)t))-w(\zeta((1-\rho) t))+o(1-\rho). \tag{52}\]
Finally (51) and (52) show that, for fixed \(t>0\) and any \(\rho\) sufficiently near \(1\),
\[(\mathcal{S}(t)f)(\zeta(t))-w(\zeta(t))\leq f(\zeta(0))-w(\zeta(0))+m(1-\rho)t +o(1-\rho),\]
therefore, since \(m<0\), a suitable choice of \(\rho\) proves that
\[(\mathcal{S}(t)f)(\zeta(t))-w(\zeta(t))<f(\zeta(0))-w(\zeta(0)),\]
in contradiction with Lemma 5.14.
**Lemma 5.17**.: _Let \(\phi\in C(\Gamma)\), \(c_{x}\) be a flux limiter satisfying (19) and \(u\) be as in (20). If \(f\in\omega_{\mathcal{S}}(\phi)\), \(\zeta\) is a critical curve and \(\varepsilon>0\), then there is a \(t\in\mathds{R}^{+}\) such that_
\[|f(\zeta(t))-u(\zeta(t))|<\varepsilon.\]
Proof.: By definition there is a \(z\in\Gamma\) such that
\[u(\zeta(0))=\phi(z)+S_{c}(z,\zeta(0)),\]
then we choose an optimal curve \(\xi:[0,T]\to\Gamma\) for \(S_{c}(z,\zeta(0))\). Following Corollary B.14 we also choose a reparametrization \(\xi_{\varepsilon}:[0,T_{\varepsilon}]\to\Gamma\) of \(\xi\) such that
\[\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau+\frac{\varepsilon}{2} \geq\int_{0}^{T_{\varepsilon}}\left(L\left(\xi_{\varepsilon},\dot{\xi}_{ \varepsilon}\right)+c\right)d\tau.\]
This implies that
\[u(\zeta(0))+\frac{\varepsilon}{2}\geq\phi(z)+\int_{0}^{T_{\varepsilon}}\left( L\left(\xi_{\varepsilon},\dot{\xi}_{\varepsilon}\right)+c\right)d\tau\geq( \mathcal{S}(T_{\varepsilon})\phi)(\zeta(0))+cT_{\varepsilon}. \tag{53}\]
Next we let \(\{t_{n}\}_{n\in\mathds{N}}\) be a positive diverging sequence such that \(\mathcal{S}(t_{n})\phi+ct_{n}\) converges uniformly to \(f\), then we have that for any \(n\) big enough
\[\|\mathcal{S}(t_{n})\phi+ct_{n}-f\|_{\infty}<\frac{\varepsilon}{2}\qquad\text {and}\qquad t_{n}>T_{\varepsilon}. \tag{54}\]
We fix a \(t_{n}\) satisfying (54) and set \(t:=t_{n}-T_{\varepsilon}\). We observe that by Lemma 5.10
\[u(\zeta(t))=u(\zeta(0))+S_{c}(\zeta(0),\zeta(t))=u(\zeta(0))+\int_{0}^{t} \left(L\left(\zeta,\dot{\zeta}\right)+c\right)d\tau.\]
This identity, (53) and (54), yield
\[f(\zeta(t))-\frac{\varepsilon}{2}< \left(\mathcal{S}(t_{n})\phi\right)(\zeta(t))+ct_{n}=(\mathcal{S }(t)\mathcal{S}(T_{\varepsilon})\phi)(\zeta(t))+c(t+T_{\varepsilon})\] \[\leq \left(\mathcal{S}(T_{\varepsilon})\phi\right)(\zeta(0))+cT_{ \varepsilon}+\int_{0}^{t}\left(L\left(\zeta,\dot{\zeta}\right)+c\right)d\tau <u(\zeta(t))+\frac{\varepsilon}{2}\]
which proves, together with (37), our claim.
We can finally provide the proof of Theorem 5.3.
Proof of Theorem 5.3.: Let \(\underline{\phi}\) be defined by (38). Proposition 5.13, (37) and (39) show that \(\underline{\phi}\) is a subsolution to \((\mathcal{H}Jc)\) satisfying
\[\underline{\phi}\geq u,\qquad\text{on }\Gamma. \tag{55}\]
Moreover Lemmas 5.16 and 5.17 yield that \(\underline{\phi}=u\) on the support of all periodic critical curves, thus, by Lemma 5.10 and Proposition 5.12, \(\underline{\phi}=u\) on \(\widetilde{\mathcal{A}}_{\Gamma}\). Finally (55) and the maximality of \(u\), see Theorem 4.5, prove that \(\underline{\phi}=u\) on \(\Gamma\).
### The Supercritical Case
This section is devoted to the proof of Theorem 5.4. Our reasoning is similar to the one used in section 5.1 to prove Proposition 5.9, therefore here we will just highlight the main steps.
We start with the following Proposition, whose proof is almost identical to the one given for Proposition 5.7.
**Proposition 5.18**.: _Let \(c_{x}\) be a flux limiter such that \(a:=\max_{x\in\mathbf{V}}c_{x}>c\) and \(u\) be a solution to (\(\mathcal{H}Ja\)) in \(\Gamma\setminus\mathbf{V}_{a}\), then \(\mathcal{S}(t)u=u-at\) on \(\Gamma\times\mathds{R}^{+}\)._
The next result is an analogue of Lemma 5.8 for the supercritical case.
**Lemma 5.19**.: _Given a flux limiter \(c_{x}\) such that \(a:=\max_{x\in\mathbf{V}}c_{x}>c\) and a \(\phi\in C(\Gamma)\), there is a \(T_{\phi}>0\) depending only on \(\phi\) and \(a\) such that, for any \(x\in\Gamma\), \(t\geq T_{\phi}\) and optimal curve \(\xi\) for \((\mathcal{S}(t)\phi)(x)\), \(\xi([0,t])\cap\mathbf{V}_{a}\neq\emptyset\)._
Proof.: The proof of this Lemma is obtained from the proof of Lemma 5.8 with straightforward modifications, using Proposition B.8 instead of Proposition B.5 and Proposition 5.18 instead of Proposition 5.7.
Proceeding as in the proof of Proposition 5.6, replacing Lemma 5.8 with Lemma 5.19, we get a convergence result when the initial datum is a subsolution:
**Proposition 5.20**.: _Given a flux limiter \(c_{x}\) such that \(a:=\max_{x\in\mathbf{V}}c_{x}>c\) and a subsolution \(w\) to (\(\mathcal{H}Ja\)), we define_
\[u(x):=\min_{y\in\mathbf{V}_{a}}(w(y)+S_{a}(y,x)),\qquad\text{for $x\in\Gamma$.}\]
_Then there is a time \(T_{w}\), depending on \(w\) and \(a\), such that_
\[(\mathcal{S}(t)w)(x)+at=u(x),\qquad\text{for any $x\in\Gamma$, $t\geq T_{w}$.}\]
Finally we have:
Proof of Theorem 5.4.: We point out that, since \(a>c\geq a_{0}\), Theorem B.13 yields that every curve on \(\Gamma\) admits an \(a\)-Lagrangian reparametrization. Then, arguing as in the proof of Proposition 5.9 with straightforward modifications, e.g. using Proposition 5.20 instead of Proposition 5.6, we prove our claim.
## 6 Fixed Points of the Semigroup \(\mathcal{S}\)
In this paper we have characterized the critical value \(c\) dynamically, using closed curves on the network \(\Gamma\). Alternatively \(c\) can be seen as the minimum \(a\in\mathds{R}\) such that (\(\mathcal{H}Ja\)) admits subsolutions. Both these characterizations are given in [23]. In more traditional settings additional characterizations are known. In particular, see for instance [10], on compact connected Riemannian manifolds the critical value is the only value \(a\) such that the semigroup \(\phi\mapsto\mathcal{S}(t)\phi+at\) admits fixed points, and these fixed points are the solutions to the respective eikonal equation. In our case, however, the presence of the flux limiters influences this result, as can be inferred from the previous analysis on the large time behavior. More precisely, we have the following:
**Theorem 6.1**.:
**i)**: _Given a flux limiter \(c_{x}\) satisfying (19), the only value \(b\) such that the semigroup \(\phi\mapsto\mathcal{S}(t)\phi+bt\) admits fixed points is the critical value \(c\). These fixed points are the solutions to (\(\mathcal{H}Jc\)) in \(\Gamma\setminus\widetilde{\mathbf{V}}\)._
**ii)**: _Given a flux limiter \(c_{x}\) such that \(a:=\max_{x\in\mathbf{V}}c_{x}>c\), the only value \(b\) such that the semigroup \(\phi\mapsto\mathcal{S}(t)\phi+bt\) admits fixed points is \(a\). These fixed points are the solutions to (\(\mathcal{H}Ja\)) in \(\Gamma\setminus\mathbf{V}_{a}\)._
Proof.: Here we will just prove **(i)**; the proof of **(ii)** can be obtained by arguing in the same way with straightforward modifications.
First of all observe that by Theorem 5.3, given \(\phi\in C(\Gamma)\), \(\mathcal{S}(t)\phi+ct\) converges to a continuous function as \(t\) tends to \(\infty\), thus it is apparent that \(\mathcal{S}(t)\phi+bt\) diverges as \(t\) tends to \(\infty\) whenever \(b\neq c\). This shows that, if \(b\neq c\), then \(\phi\mapsto\mathcal{S}(t)\phi+bt\) does not admit fixed points.
We know from Proposition 5.7 that the solutions to (\(\mathcal{H}Jc\)) in \(\Gamma\setminus\widetilde{\mathbf{V}}\) are fixed points of \(\phi\mapsto\mathcal{S}(t)\phi+ct\), hence to conclude we just have to prove that every fixed point of \(\phi\mapsto\mathcal{S}(t)\phi+ct\) is a solution to (\(\mathcal{H}Jc\)) in \(\Gamma\setminus\widetilde{\mathbf{V}}\). To this end we consider a fixed point \(\phi\) and define \(\mathbf{V}_{c}:=\{x\in\mathbf{V}:c_{x}=c\}\). We stress that \(\mathbf{V}_{c}\supseteq\widetilde{\mathbf{V}}\). We start by showing that \(\phi\circ\gamma\) is a subsolution to (\(\mathcal{H}Jc\)) for any \(\gamma\in\mathcal{E}\). Let \(\gamma\in\mathcal{E}\) and \(\varphi\) be a supertangent to \(\phi\circ\gamma\) at a point \(s\in(0,1)\) and notice that, by definition, \(\phi(x)-ct\) is a solution to (\(\mathcal{H}JE\)) and, for any \(t>0\), \(\varphi-ct\) is a supertangent to \(\phi\circ\gamma-ct\) at \((s,t)\in\mathcal{Q}\). It follows that
\[-c+H_{\gamma}(s,\partial_{s}\varphi)\leq 0,\]
therefore, thanks to the arbitrariness of \(\gamma\), \(s\) and \(\varphi\), we have that \(\phi\circ\gamma\) is a subsolution to (\(\mathcal{H}Jc\)) for each \(\gamma\in\mathcal{E}\), hence \(\phi\) is a subsolution to (\(\mathcal{H}Jc\)). In the same way we can prove that \(\phi\circ\gamma\) is a supersolution to (\(\mathcal{H}Jc\)) for each \(\gamma\in\mathcal{E}\). Using the same technique we also have that **(iii)** in Definition 4.8 implies that \(\phi\) satisfies **(iii)** in Definition 4.1 on any vertex outside \(\mathbf{V}_{c}\), i.e. \(\phi\) is a solution on \(\Gamma\setminus\mathbf{V}_{c}\). Finally Theorem 4.5 yields that the restriction of \(\phi\) to a static class \(\Gamma^{\prime}\) of \(\mathcal{A}_{\Gamma}\) is a solution to (\(\mathcal{H}Jc\)) on \(\Gamma^{\prime}\), i.e. \(\phi|_{\Gamma^{\prime}}\) satisfies **(iii)** in Definition 4.1 on \(\Gamma^{\prime}\). It follows from this that \(\phi\) satisfies **(iii)** in Definition 4.1 on any vertex in \(\mathcal{A}_{\Gamma}\), therefore \(\phi\) is a solution to (\(\mathcal{H}Jc\)) in \(\Gamma\setminus\widetilde{\mathbf{V}}\).
## Appendix A Additional Results for the Eikonal Problem
In this appendix we collect some auxiliary results for the eikonal problem which are needed for our analysis. They can be easily inferred from [23]; however, we report them for the reader's convenience.
**Lemma A.1**.: _Let \(\xi:[0,T]\to\Gamma\) be an optimal curve for \(S_{a}(y,x)\). Then, given two curves \(\xi_{1}:[0,T_{1}]\to\Gamma\) and \(\xi_{2}:[0,T_{2}]\to\Gamma\) such that \(\xi=\xi_{1}*\xi_{2}\), we have that_
\[S_{a}(y,\xi_{1}(T_{1}))=\int_{0}^{T_{1}}\sigma_{a}\left(\xi_{1},\dot{\xi}_{1} \right)d\tau\qquad\text{and}\qquad S_{a}(\xi_{2}(0),x)=\int_{0}^{T_{2}}\sigma_{ a}\left(\xi_{2},\dot{\xi}_{2}\right)d\tau.\]
Proof.: By definition we have that
\[S_{a}(y,\xi_{1}(T_{1}))\leq\int_{0}^{T_{1}}\sigma_{a}\left(\xi_{1},\dot{\xi}_{1 }\right)d\tau,\qquad\text{and}\qquad S_{a}(\xi_{2}(0),x)\leq\int_{0}^{T_{2}} \sigma_{a}\left(\xi_{2},\dot{\xi}_{2}\right)d\tau,\]
thus
\[S_{a}(y,x)\leq S_{a}(y,\xi_{1}(T_{1}))+S_{a}(\xi_{2}(0),x)\leq\int_{0}^{T_{1}} \sigma_{a}\Big{(}\xi_{1},\dot{\xi}_{1}\Big{)}d\tau+\int_{0}^{T_{2}}\sigma_{a} \Big{(}\xi_{2},\dot{\xi}_{2}\Big{)}d\tau=\int_{0}^{T}\sigma_{a}\Big{(}\xi,\dot{ \xi}\Big{)}d\tau.\]
Since \(\xi\) is optimal for \(S_{a}(y,x)\), this concludes the proof.
If \(c=a_{0}\) we can easily identify some of the elements of \(\mathcal{A}_{\Gamma}\). Indeed, if \(\gamma\in\mathcal{E}\) is such that \(a_{\gamma}=c\), then, setting \(\xi:=\gamma*\widetilde{\gamma}\), we have by (8)
\[\int_{0}^{2}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau=\int_{0}^{1}\left(\sigma_ {\gamma,c}^{+}(s)+\sigma_{\widetilde{\gamma},c}^{+}(1-s)\right)ds=\int_{0}^{1 }\left(\sigma_{\gamma,c}^{+}(s)-\sigma_{\gamma,c}^{-}(s)\right)ds=0,\]
which is to say that the support of \(\xi\) is contained in the Aubry set. Formally we have:
**Lemma A.2**.: _If \(\gamma\in\mathcal{E}\) is such that \(a_{\gamma}=c\), the support of \(\gamma\) is contained in the Aubry set._
The following Theorem is a small extension of one of the main results given in [23].
**Theorem A.3**.: _Let \(\Gamma^{\prime}\) be a closed subset of \(\Gamma\), \(\phi:\Gamma^{\prime}\to\mathds{R}\) be any function bounded from below and define_
\[w(x):=\min_{y\in\Gamma^{\prime}}(\phi(y)+S_{a}(y,x)),\qquad\text{for }x\in\Gamma.\]
_If \(a\geq c\) then \(w\) is the maximal subsolution to \((\mathcal{H}Ja)\) not exceeding \(\phi\) on \(\Gamma^{\prime}\) and a solution in \(\Gamma\setminus\Gamma^{\prime}\)._
Proof.: Given \(y\in\Gamma^{\prime}\) it is shown in [23] that \(x\mapsto\phi(y)+S_{a}(y,x)\) is a solution in \(\Gamma\setminus\{y\}\), therefore \(w\) satisfies the subtangent test in \(\Gamma\setminus\Gamma^{\prime}\) as the minimum of those solutions. Then to conclude we just have to show that \(w\) is the maximal subsolution not exceeding \(\phi\) on \(\Gamma^{\prime}\).
That \(w\) is a subsolution to \((\mathcal{H}Ja)\) follows from Proposition 4.2, while that \(w\leq\phi\) on \(\Gamma^{\prime}\) is manifest. Now let us assume by contradiction that \(w\) is not maximal among the subsolutions not exceeding \(\phi\), which is to say that there is a subsolution \(v\) not exceeding \(\phi\) such that \(v(x)>w(x)\) for a fixed \(x\in\Gamma\). Let \(y\in\Gamma^{\prime}\) be such that \(w(x)=\phi(y)+S_{a}(y,x)\), then
\[v(x)-v(y)>w(x)-\phi(y)=S_{a}(y,x),\]
in contradiction with Proposition 4.2.
## Appendix B Reparametrizations of Curves
Solutions to the time dependent problem \((\mathcal{H}JE)\) are given through a Lax-Oleinik type operator, while for the solutions to the eikonal problem \((\mathcal{H}Jc)\) a representation formula is obtained exploiting the weak KAM theory. It is then manifest that in order to perform our asymptotic analysis we need to establish a relationship between these two theories. Following [8, 11], this is done through reparametrizations of the curves on \(\Gamma\).
**Definition B.1**.: Given an absolutely continuous curve \(\xi:[0,T]\to\mathds{R}^{N}\), a curve \(\zeta:[0,T^{\prime}]\to\mathds{R}^{N}\) is called a _reparametrization_ of \(\xi\) if there exists a nondecreasing absolutely continuous function \(\psi\) from \([0,T^{\prime}]\) onto \([0,T]\) with
\[\zeta(t)=\xi\circ\psi(t),\qquad\text{for any }t\in\left[0,T^{\prime}\right].\]
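For instance, with hypothetical data, let \(\xi:[0,1]\to\mathds{R}^{N}\) be any curve and define
\[\psi(t):=\begin{cases}0,&t\in[0,1],\\ t-1,&t\in[1,2];\end{cases}\]
then \(\zeta:=\xi\circ\psi:[0,2]\to\mathds{R}^{N}\) is a reparametrization of \(\xi\) which rests at \(\xi(0)\) for a unit of time, while \(\psi\), being constant on \([0,1]\), is not invertible.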
Note that if \(\zeta\) is a reparametrization of \(\xi\), the converse is in general not true, since \(\psi\) may fail to have a strictly positive derivative at a.e. \(t\); see the Zarecki criterion for an absolutely continuous inverse in [4]. We have that reparametrizations are absolutely continuous:
**Lemma B.2**.: [21, Corollary 4] _Let \(\xi:[0,T]\to\mathds{R}^{N}\) be a curve and \(\psi:[0,T^{\prime}]\to[0,T]\) be absolutely continuous and nondecreasing. Then the reparametrization \(\zeta\equiv\xi\circ\psi\) of \(\xi\) is absolutely continuous and_
\[\frac{d}{dt}\zeta(t)=\dot{\xi}(\psi(t))\dot{\psi}(t),\qquad\text{a.e. in }[0,T^{\prime}].\]
**Lemma B.3**.: _If the curve \(\zeta:[0,T^{\prime}]\to\Gamma\) is a reparametrization of a curve \(\xi:[0,T]\to\Gamma\), then_
\[\int_{0}^{T^{\prime}}\sigma_{a}\left(\zeta,\dot{\zeta}\right)d\tau=\int_{0}^{T} \sigma_{a}\left(\xi,\dot{\xi}\right)d\tau,\qquad\text{for every $a\in\mathds{R}$}.\]
Proof.: It follows from the definition that \((x,q)\mapsto\sigma_{a}(x,q)\) is positively homogeneous in \(q\), thus, if we let \(\psi\) be the nondecreasing absolutely continuous function such that \(\zeta\equiv\xi\circ\psi\) and consider the change of variable \(r=\psi(\tau)\), we get from Lemma B.2 that, for every \(a\in\mathds{R}\),
\[\int_{0}^{T^{\prime}}\sigma_{a}\left(\zeta,\dot{\zeta}\right)d\tau=\int_{0}^{T ^{\prime}}\sigma_{a}\left(\xi\circ\psi,\dot{\xi}\circ\psi\right)\dot{\psi}( \tau)d\tau=\int_{0}^{T}\sigma_{a}\left(\xi,\dot{\xi}\right)dr.\]
The next Proposition comes from classical results of analysis in metric space, see [4] and [7, Lemma 3.11].
**Proposition B.4**.: _Any curve defined on \([0,T]\) is the reparametrization of a curve \(\xi\) with constant speed, namely with \(\left|\dot{\xi}\right|\equiv\mathrm{constant}\) a.e., defined on a bounded interval._
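The construction behind this classical statement is, in a nutshell, the arc-length reparametrization; a sketch, assuming that the given curve \(\eta:[0,T]\to\Gamma\) satisfies \(\ell:=\int_{0}^{T}|\dot{\eta}(\tau)|d\tau>0\), is
\[\psi(t):=\int_{0}^{t}|\dot{\eta}(\tau)|\,d\tau,\qquad\xi(s):=\eta(t)\quad\text{for any }t\in\psi^{-1}(s),\]
where \(\xi:[0,\ell]\to\Gamma\) is well defined because \(\eta\) is constant on every interval on which \(\psi\) is, and one checks that \(\left|\dot{\xi}\right|=1\) a.e. with \(\eta=\xi\circ\psi\); we refer to [4] and [7, Lemma 3.11] for the details.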
**Proposition B.5**.: _Let \(\xi:[0,T]\to\Gamma\) be a curve such that_
\[\xi([0,T])\cap(\mathcal{A}_{\Gamma}\setminus\mathbf{V})=\emptyset. \tag{56}\]
_Then there exist two positive constants \(A\) and \(B\) independent from \(\xi\) such that_
\[A\int_{0}^{T}\left|\dot{\xi}(\tau)\right|d\tau-B\leq\int_{0}^{T}\sigma_{c} \left(\xi,\dot{\xi}\right)d\tau. \tag{57}\]
In order to prove this Proposition we provide some auxiliary results.
**Lemma B.6**.: _Let \(\xi:[0,T]\to\Gamma\) be a curve such that_
\[\xi=(\gamma_{1}\circ\eta_{1})\ast\cdots\ast(\gamma_{k}\circ\eta_{k}), \tag{58}\]
_where \(\gamma_{i}((0,1))\cap\mathcal{A}_{\Gamma}=\emptyset\) and \(\dot{\eta}_{i}=1\) a.e. for any \(i\in\{1,\ldots,k\}\). Then there exist two positive constants \(A\) and \(B\) independent from \(\xi\) such that_
\[AT-B\leq\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau. \tag{59}\]
Proof.: Preliminarily we define the set \(\mathcal{E}^{\prime}\subset\mathcal{E}\) made up by the arcs \(\gamma\) such that \(\gamma((0,1))\cap\mathcal{A}_{\Gamma}=\emptyset\). We also set \(B_{1}\) as the number of elements in \(\mathcal{E}^{\prime}\) and
\[B_{2}:=-\min_{s_{1},s_{2}\in[0,1],\gamma\in\mathcal{E}^{\prime}}\int_{s_{1}}^ {s_{2}}\sigma_{\gamma,c}^{+}(s)ds\geq 0.\]
Next we observe that there is only a finite number of closed curves
\[\zeta_{i}:=\gamma_{i_{1}}\ast\ldots\ast\gamma_{i_{k_{i}}} \tag{60}\]
such that \(\zeta_{i}\) is simple or \(\zeta_{i}=\gamma\ast\widetilde{\gamma}\) and, for all \(l\in\{1,\ldots,k_{i}\}\), \(\gamma_{i_{l}}\in\mathcal{E}^{\prime}\). Then we define
\[A_{1}:=\min_{i}\int_{0}^{k_{i}}\sigma_{c}\left(\zeta_{i},\dot{\zeta}_{i} \right)d\tau. \tag{61}\]
We stress that \(A_{1}>0\) since the supports of the \(\zeta_{i}\) are not contained in the Aubry set.
If
\[\xi=\gamma_{1}*\cdots*\gamma_{k} \tag{62}\]
we define \(j,l\in\{1,\ldots,k\}\) as the smallest indices such that \(j<l\) and \(\gamma_{j}(0)=\gamma_{l}(1)\). We assume that such \(j\), \(l\) exist and, to ease notation, we also assume that \(j>1\) and \(l<k\); the other cases can be treated with straightforward modifications. We set
\[\zeta_{1}^{\prime}:=\gamma_{j}*\cdots*\gamma_{l},\qquad\xi_{1}^{\prime}:= \gamma_{1}*\cdots*\gamma_{j-1},\qquad\xi_{2}^{\prime}:=\gamma_{l+1}*\cdots* \gamma_{k},\]
then \(\zeta_{1}^{\prime}\) is as in (60), \(\xi_{1}^{\prime}\) is simple and \(\xi=\xi_{1}^{\prime}*\zeta_{1}^{\prime}*\xi_{2}^{\prime}\). Iterating the above procedure a finite number of times we get that the support of \(\xi\) is made up by the closed curves \(\{\zeta_{i}^{\prime}\}_{i=1}^{m}\) as in (60) and the non-closed simple curve
\[\overline{\xi}:=\gamma_{1}^{\prime}*\cdots*\gamma_{n}^{\prime},\]
therefore
\[\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau=\sum_{i=1}^{m}\int \sigma_{c}\left(\zeta_{i}^{\prime},\dot{\zeta}_{i}^{\prime}\right)d\tau+\sum_ {j=1}^{n}\int_{0}^{1}\sigma_{c,\gamma_{j}^{\prime}}^{+}(s)ds.\]
It follows that
\[\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau\geq mA_{1}-nB_{2},\]
thus we need to provide estimates for \(m\) and \(n\). To do so we observe that if \(k\geq mB_{1}\) then \(\xi\) contains at least \(m\) closed curves as in (60), therefore
\[m\geq\left\lfloor\frac{k}{B_{1}}\right\rfloor\geq\frac{k}{B_{1}}-1\qquad\text{ and}\qquad n<B_{1}.\]
We conclude this step by writing down the inequality below:
\[\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau\geq A_{1}\frac{k}{B_{1}} -A_{1}-B_{1}B_{2}. \tag{63}\]
Finally let \(\xi\) be as in (58), then it is apparent that
\[\xi:=(\gamma_{1}\circ\eta_{1})*\xi^{\prime}*(\gamma_{k}\circ\eta_{k}), \tag{64}\]
where \(\xi^{\prime}\) is as in (62). Notice that under our current assumptions \(k\geq T\), therefore (63) and (64) yield
\[\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau\geq A_{1}\frac{k-2}{B_{ 1}}-A_{1}-B_{2}(B_{1}+2)\geq A_{1}\frac{T-2}{B_{1}}-A_{1}-B_{2}(B_{1}+2). \tag{65}\]
Setting
\[A:=\frac{A_{1}}{B_{1}}\qquad\text{and}\qquad B:=\frac{2}{B_{1}}+A_{1}+B_{2}(B _{1}+2),\]
(65) proves (59).
**Lemma B.7**.: _Let \(\xi:[0,T]\to\Gamma\) be a curve such that_
\[\xi=(\gamma_{1}\circ\eta_{1})*(\gamma_{2}\circ\eta_{2})*\cdots, \tag{66}\]
_where \(\gamma_{i}((0,1))\cap\mathcal{A}_{\Gamma}=\emptyset\) and \(|\dot{\eta_{i}}|=1\) a.e. for any index \(i\). Then there exist two positive constants \(A\) and \(B\) independent from \(\xi\) such that_
\[AT-B\leq\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau. \tag{67}\]
Proof.: Preliminarily we assume that \(\xi=\gamma\circ\eta\) with \(|\dot{\eta}|=1\) for some \(\gamma\in\mathcal{E}\). We also assume, possibly replacing \(\gamma\) with \(\widetilde{\gamma}\), that
\[\eta(T)\geq\eta(0). \tag{68}\]
If \(\dot{\eta}=1\) a.e. our claim is a consequence of Lemma B.6, therefore we assume that this is not the case. It then follows from (68) that both \(\{t\in[0,T]:\dot{\eta}(t)=1\}\) and \(\{t\in[0,T]:\dot{\eta}(t)=-1\}\) have positive measure, which in turn implies that there exist at least three points \(t_{1},t_{2},t_{3}\in[0,T]\) such that \(t_{1}<t_{3}\), \(t_{2}=\dfrac{t_{3}+t_{1}}{2}\) and
\[\eta(t)=\eta(t_{3}+t_{1}-t),\qquad\text{for any }t\in[t_{1},t_{2}].\]
It is then apparent that, possibly replacing \(\gamma\) with \(\widetilde{\gamma}\),
\[\int_{t_{1}}^{t_{3}}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau= \int_{t_{1}}^{t_{2}}\sigma_{\gamma,c}^{+}(\eta(\tau))d\tau-\int_{ t_{2}}^{t_{3}}\sigma_{\gamma,c}^{-}(\eta(\tau))d\tau\] \[= \int_{t_{1}}^{t_{2}}\left(\sigma_{\gamma,c}^{+}(\eta(\tau))- \sigma_{\gamma,c}^{-}(\eta(t_{3}+t_{1}-\tau))\right)d\tau,\]
hence if we define
\[C:=\min\frac{\left(\sigma_{\gamma,c}^{+}(s)-\sigma_{\gamma,c}^{-}(s)\right)}{ 2},\]
where the minimum is taken over the \(s\in[0,1]\) and \(\gamma\in\mathcal{E}\) such that \(\gamma((0,1))\cap\mathcal{A}_{\Gamma}=\emptyset\), we have, thanks to Lemma A.2, that \(C>0\) and
\[\int_{t_{1}}^{t_{3}}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau\geq C(t_{3}-t_{1 }).\]
Moreover, if \(T^{\prime}:=T-t_{3}+t_{1}>0\), \(\eta^{\prime}:=\eta|_{[0,t_{1}]}*\eta|_{[t_{3},T]}\) and \(\xi^{\prime}:=\gamma\circ\eta^{\prime}\), we obtain that
\[\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau\geq\int_{0}^{T^{\prime} }\sigma_{c}\left(\xi^{\prime},\dot{\xi}^{\prime}\right)d\tau+C(t_{3}-t_{1}).\]
Notice that, by the separability of \(\mathds{R}\) and (68), iterating the previous step an at most countable number of times we get that there exist a \(T_{0}>0\) and, if \(\overline{T}:=T-T_{0}>0\), a curve \(\overline{\eta}:\left[0,\overline{T}\right]\to[0,1]\) such that \(\overline{T}+T_{0}=T\), \(\dot{\overline{\eta}}=1\) a.e. and
\[\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau\geq\int_{0}^{\overline {T}}\sigma_{c}\left(\overline{\xi},\dot{\overline{\xi}}\right)d\tau+CT_{0},\]
where \(\overline{\xi}=\gamma\circ\overline{\eta}\).
Now let \(\xi\) be as in (66). We notice that repeating the previous analysis for each \(\gamma_{i}\circ\eta_{i}\) we get that there exist a time \(T_{0}\) and, if \(T^{\prime}:=T-T_{0}>0\), a curve \(\xi^{\prime}:[0,T^{\prime}]\to\Gamma\) such that
\[\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau\geq\int_{0}^{T^{\prime} }\sigma_{c}\left(\xi^{\prime},\dot{\xi}^{\prime}\right)d\tau+CT_{0}\]
and
\[\xi^{\prime}=\left(\gamma_{1}^{\prime}\circ\eta_{1}^{\prime}\right)*\cdots* \left(\gamma_{k}^{\prime}\circ\eta_{k}^{\prime}\right),\]
where \(\dot{\eta}_{i}=1\) a.e. for all \(i\in\{1,\dots,k\}\) and \(k\) is finite by the absolute continuity of \(\xi\). Finally we apply Lemma B.6 to \(\xi^{\prime}\) and we get that
\[\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau\geq A^{\prime}T^{\prime }-B+CT_{0},\]
for some constants \(A^{\prime}\), \(B\). Setting \(A:=A^{\prime}\wedge C\) this yields (67).
Proof of Proposition B.5.: By Remark 3.2 we have that
\[\xi=(\gamma_{1}\circ\eta_{1})\ast(\gamma_{2}\circ\eta_{2})\ast\cdots\]
where, for each index \(i\), \(\gamma_{i}\in\mathcal{E}\) and \(\eta_{i}\) is a curve from \([0,T_{i}]\) into \([0,1]\). We know from Proposition B.4 that for every index \(i\) there is a curve \(\eta_{i}^{\prime}:[0,T_{i}^{\prime}]\to[0,1]\) with \(|\dot{\eta}_{i}^{\prime}|=1\) a.e. and an absolutely continuous nondecreasing function \(\psi_{i}:[0,T_{i}]\to[0,T_{i}^{\prime}]\) such that \(\eta_{i}=\eta_{i}^{\prime}\circ\psi_{i}\). In particular we have by Lemma B.2 that, for any index \(i\),
\[\int_{0}^{T_{i}}|\dot{\eta}_{i}(\tau)|d\tau=\int_{0}^{T_{i}}\left|\dot{\eta}_{i }^{\prime}(\psi_{i}(\tau))\right|\left|\dot{\psi}_{i}(\tau)\right|d\tau=T_{i} ^{\prime}.\]
Setting \(T^{\prime}:=\sum_{i}T_{i}^{\prime}\) and \(C:=\max_{\gamma\in\mathcal{E},s\in[0,1]}|\dot{\gamma}(s)|\), it then follows that
\[\int_{0}^{T}\left|\dot{\xi}(\tau)\right|d\tau=\sum\nolimits_{i}\int_{0}^{T_{i }}|\dot{\gamma}_{i}(\eta_{i}(\tau))||\dot{\eta}_{i}(\tau)|d\tau\leq CT^{\prime}. \tag{69}\]
Finally the curve \(\xi^{\prime}:[0,T^{\prime}]\to\Gamma\) defined by
\[\xi^{\prime}=\left(\gamma_{1}\circ\eta_{1}^{\prime}\right)\ast\left(\gamma_{2 }\circ\eta_{2}^{\prime}\right)\ast\cdots\]
is a reparametrization of \(\xi\) and satisfies the assumptions of Lemma B.7, thus (69) and Lemma B.3 yield
\[\frac{A^{\prime}}{C}\int_{0}^{T}\left|\dot{\xi}(\tau)\right|d\tau-B\leq A^{\prime}T^{\prime}-B\leq\int_{0}^{T^{\prime}}\sigma_{c}\left(\xi^{\prime},\dot{\xi}^{\prime}\right)d\tau=\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau,\]
where \(A^{\prime}\) and \(B\) are positive constant independent from \(\xi\). This proves (57).
For \(a>c\) it is possible to obtain an analogue of Proposition B.5 with straightforward modifications. The main difference is the presence of the condition (56), which is only used in (61) to obtain the positive constant \(A_{1}\). Such a condition is not needed since, for any \(a>c\),
\[\int_{0}^{T}\sigma_{a}\left(\xi,\dot{\xi}\right)d\tau>0\]
whenever \(\xi:[0,T]\to\Gamma\) is a closed curve. More in details we have:
**Proposition B.8**.: _Given \(a>c\) there exist two positive constants \(A\) and \(B\), depending only on \(a\), such that, for any curve \(\xi:[0,T]\to\Gamma\),_
\[A\int_{0}^{T}\left|\dot{\xi}(\tau)\right|d\tau-B\leq\int_{0}^{T}\sigma_{a} \left(\xi,\dot{\xi}\right)d\tau.\]
**Definition B.9**.: Given a curve \(\xi:[0,T]\to\Gamma\) and an \(a\in\mathds{R}\) we say that \(\xi\) has an \(a\)_-Lagrangian parametrization_ if
\[L\left(\xi(t),\dot{\xi}(t)\right)+a=\sigma_{a}\left(\xi(t),\dot{\xi}(t)\right),\qquad\text{for a.e. }t\in[0,T].\]
We will also say that \(\zeta\) is an \(a\)-Lagrangian reparametrization of \(\xi\) if \(\zeta\) has an \(a\)-Lagrangian parametrization and it is a reparametrization of \(\xi\).
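As a simple illustration, not needed in the sequel, consider the quadratic Hamiltonian \(H_{\gamma}(s,\mu)=\mu^{2}\) of Example 5.2 on an arc with \(|\dot{\gamma}|\equiv 1\) and let \(a>0\); reading, as before, \(\sigma_{\gamma,a}^{\pm}\) as the extremal solutions of \(H_{\gamma}=a\), we have \(\sigma_{\gamma,a}^{\pm}\equiv\pm\sqrt{a}\) and \(L_{\gamma}(s,\lambda)=\lambda^{2}/4\). For a curve traversing \(\gamma\) in the positive direction with scalar speed \(\lambda>0\), the condition in Definition B.9 then reads
\[\frac{\lambda^{2}}{4}+a=\sqrt{a}\,\lambda\quad\Longleftrightarrow\quad\left(\frac{\lambda}{2}-\sqrt{a}\right)^{2}=0\quad\Longleftrightarrow\quad\lambda=2\sqrt{a},\]
that is, an \(a\)-Lagrangian parametrization of the arc moves with the constant speed \(2\sqrt{a}\).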
**Proposition B.10**.: _If \(\xi\) has an \(a\)-Lagrangian parametrization there is a constant \(\kappa_{a}\), depending only on \(a\), such that \(\xi\) is \(\kappa_{a}\)-Lipschitz continuous. Furthermore, if \(a<b\), then \(\kappa_{a}<\kappa_{b}\)._
Proof.: We start by assuming that there exist an arc \(\gamma\) and a curve \(\eta:[0,T]\to[0,1]\) such that \(\xi=\gamma\circ\eta\). We have that for a.e. \(t\in[0,T]\)
\[L_{\gamma}(\eta(t),\dot{\eta}(t))=\mu(t)\dot{\eta}(t)-H_{\gamma}(\eta(t),\mu(t)), \tag{70}\]
where \(\mu(t)\) satisfies \(H_{\gamma}(\eta(t),\mu(t))=a\). It follows that, for all \(t\) satisfying (70), \(\dot{\eta}(t)\in\partial_{\mu}H_{\gamma}(\eta(t),\mu(t))\) and, by the coercivity of \(H_{\gamma}\) in \(\mu\), \(|\mu(t)|\leq M\) for some \(M>0\). Since \(H_{\gamma}(s,\mu)\) is locally Lipschitz continuous in \(\mu\) uniformly with respect to \(s\) and \(\gamma\), see [6, Corollary to Proposition 2.2.6], we find a constant \(C_{M}\) with \(|\dot{\eta}|\leq C_{M}\) a.e.. This yields the existence of a constant \(\kappa_{a}\), depending only on \(a\), such that \(\left|\dot{\xi}\right|\leq\kappa_{a}\) a.e.. Moreover, if \(a<b\), (H4) implies that \(\kappa_{a}<\kappa_{b}\). Finally our claim is a consequence of Remark 3.2.
The next Lemma, together with Lemma B.3, shows that, given an upper bound for the flux limiter, Lagrangian reparametrizations are, in a certain sense, optimal among all the possible reparametrizations.
**Lemma B.11**.: _Assume that \(c_{x}\leq a\) for all \(x\in\mathbf{V}\), then_
\[L(x,q)+a\geq\sigma_{a}(x,q),\qquad\text{for any }(x,q)\in T\Gamma. \tag{71}\]
Proof.: If \(x\in\mathbf{V}\) we have from our assumptions that
\[L(x,0)=-c_{x}\geq-a=\sigma_{a}(x,0)-a. \tag{72}\]
Next we let \((x,q)\in T\Gamma\) with \(q\neq 0\); it then follows that there is an arc \(\gamma\) such that, setting for notational convenience \(s:=\gamma^{-1}(x)\) and \(\lambda:=\dfrac{\dot{\gamma}(s)q}{|\dot{\gamma}(s)|^{2}}\),
\[L(x,q)=L_{\gamma}(s,\lambda)\geq\max\{\mu\lambda-a:\mu\in\mathds{R},\,H_{ \gamma}(s,\mu)=a\}\geq\sigma_{a}(x,q)-a. \tag{73}\]
Finally (72) and (73) yield (71).
**Definition B.12**.: Given a curve \(\xi:[0,T]\to\Gamma\), we set
\[a_{\xi}:=-\min_{t\in[0,T]}L(\xi(t),0),\]
then we say that \(a\) is _admissible_ for \(\xi\) if \(a>a_{\xi}\). Trivially, if \(a>a_{0}\), it is admissible for any curve on \(\Gamma\).
The concept of admissibility is strongly related to Lagrangian reparametrizations, as shown by the next Theorem.
**Theorem B.13**.: _Let \(\xi:[0,T]\to\Gamma\) be a curve with a.e. non-vanishing derivative, \(a_{\xi}\) be as in Definition B.12 and define for each \(a\geq a_{\xi}\) the set_
\[T(a):=\left\{t>0:\xi\text{ has an }a\text{-Lagrangian reparametrization }\zeta:[0,t]\to\Gamma\right\}.\]
_The following facts hold:_
**i)**: _if_ \(a\) _is admissible for_ \(\xi\) _then_ \(T(a)\) _is a compact interval, namely,_
\[T(a)=\left[\underline{T}(a),\overline{T}(a)\right],\qquad\text{for some }\overline{T}(a)\geq\underline{T}(a)>0;\]
**ii)**: _if_ \(a\) _and_ \(b\) _are both admissible for_ \(\xi\) _and_ \(b>a\)_, then_ \(\overline{T}(b)\leq\underline{T}(a)\)
**iii)**: \(\lim_{a\to\infty}\overline{T}(a)=0\) _and, for any admissible_ \(a\)_,_
\[\underline{T}(a)=\lim_{b\to a^{+}}\overline{T}(b),\qquad\overline{T}(a)=\lim_{b \to a^{-}}\underline{T}(b);\]
**iv)**: _if_ \(\underline{T}(a_{\xi}):=\lim_{a\to a_{\xi}^{+}}\overline{T}(a)\) _is finite, then_
\[T(a_{\xi})=[\underline{T}(a_{\xi}),\infty).\]
_In particular, for any \(t\in(0,\infty)\), there exists an \(a\geq a_{\xi}\) such that \(\xi\) has an \(a\)-Lagrangian reparametrization \(\zeta:[0,t]\to\Gamma\)._
Proof.: If there exist an arc \(\gamma\) and a curve \(\eta:[0,T]\to[0,1]\) such that \(\xi=\gamma\circ\eta\), then our claim is a consequence of [7, Proposition 3.13 and Remark 3.17] applied to the curve \(\eta\) and the Lagrangian \(L_{\gamma}\). It is also shown there that for any admissible \(a\) there is a \(C_{a}>0\), independent of \(\eta\), such that \(\overline{T}(a)\leq C_{a}T\). Since there are finitely many arcs, \(C_{a}\) can be chosen independently of the arc \(\gamma\). In the general case we have by Remark 3.2 that
\[\xi:=(\gamma_{1}\circ\eta_{1})*(\gamma_{2}\circ\eta_{2})*\cdots,\]
thus, if we define \(\xi_{i}:=\gamma_{i}\circ\eta_{i}\) for each index \(i\), we have by the previous step that our claim is true for each \(\xi_{i}\) and consequently for \(\xi\).
**Corollary B.14**.: _Let \(\xi:[0,T]\to\Gamma\) be a curve with a.e. non-vanishing derivative and define for each \(t>0\)_
\[[\xi]_{t}:=\{\zeta:[0,t]\to\Gamma:\zeta\text{ is a reparametrization of }\xi\}.\]
_Then_
\[\int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau=\inf\left\{\int_{0}^{t} \left(L\left(\zeta,\dot{\zeta}\right)+c\right)d\tau:\zeta\in[\xi]_{t},t>0 \right\}.\]
Proof.: We let \(\{a_{n}\}_{n\in\mathds{N}}\) be a decreasing sequence converging to \(c\), then
\[\lim_{n\to\infty}\int_{0}^{T}\sigma_{a_{n}}\left(\xi,\dot{\xi}\right)d\tau= \int_{0}^{T}\sigma_{c}\left(\xi,\dot{\xi}\right)d\tau \tag{74}\]
by Proposition 3.3 and the monotone convergence Theorem. Each \(a_{n}\) is bigger than \(c\), thus it is admissible for \(\xi\). Theorem B.13 then implies that, for each \(n\in\mathds{N}\), there is an \(a_{n}\)-Lagrangian reparametrization \(\zeta_{n}:[0,T_{n}]\to\Gamma\) of \(\xi\). It follows from Lemma B.3 that
\[\int_{0}^{T}\sigma_{a_{n}}\left(\xi,\dot{\xi}\right)d\tau= \int_{0}^{T_{n}}\sigma_{a_{n}}\left(\zeta_{n},\dot{\zeta}_{n} \right)d\tau= \int_{0}^{T_{n}}\left(L\left(\zeta_{n},\dot{\zeta}_{n}\right)+a_{n} \right)d\tau\geq \int_{0}^{T_{n}}\left(L\left(\zeta_{n},\dot{\zeta}_{n}\right)+c \right)d\tau,\]
we can then conclude thanks to (74) and Lemma B.11.
## Appendix C Regularity of the Minimal Action
We consider the minimal action
\[h_{T}(y,x)=\min\left\{\int_{0}^{T}L\left(\xi,\dot{\xi}\right)d\tau:\xi\text{ is a curve with }\xi(0)=y,\,\xi(T)=x\right\}, \tag{75}\]
for \((y,x,T)\in\Gamma^{2}\times\mathds{R}^{+}\). In this appendix we will provide a regularity result for the minimal action using Lagrangian parametrizations.
**Lemma C.1**.: _Given \((y,x,T)\in\Gamma^{2}\times\mathds{R}^{+}\) there exist an optimal curve \(\zeta\) for \(h_{T}(y,x)\) and a constant \(a\geq a_{\zeta}\) such that \(\zeta\) has an \(a\)-Lagrangian parametrization._
Proof.: Given \((y,x,T)\in\Gamma^{2}\times\mathds{R}^{+}\) there is an optimal curve \(\xi:[0,T]\to\Gamma\) for \(h_{T}(y,x)\). If \(\left|\dot{\xi}\right|=0\) a.e., i.e. \(y=x\), then, setting \(a:=-L(x,0)\), \(\xi\) has an \(a\)-Lagrangian parametrization. Otherwise, by Proposition B.4, it is a reparametrization of a curve \(\zeta_{0}:[0,T]\to\Gamma\) with \(\left|\dot{\zeta}_{0}\right|\) constant a.e.. Since by our assumptions \(\zeta_{0}\) has a.e. non-vanishing derivative, Theorem B.13 yields the existence of a constant \(a\) and an \(a\)-Lagrangian reparametrization \(\zeta:[0,T]\to\Gamma\) of \(\zeta_{0}\). In particular we have, thanks to Lemmas B.3 and B.11,
\[\int_{0}^{T}\left(L\left(\xi,\dot{\xi}\right)+a\right)d\tau\geq\int_{0}^{T} \sigma_{a}\left(\xi,\dot{\xi}\right)d\tau=\int_{0}^{T}\sigma_{a}\left(\zeta, \dot{\zeta}\right)d\tau=\int_{0}^{T}\left(L\left(\zeta,\dot{\zeta}\right)+a \right)d\tau,\]
which shows that \(\zeta\) is optimal.
**Lemma C.2**.: _Given \(C>0\) and_
\[A_{C}:=\left\{(y,x,T)\in\Gamma^{2}\times\mathds{R}^{+}:d_{\Gamma}(y,x)\leq CT \right\}, \tag{76}\]
_we have that for all \((y,x,T)\in A_{C}\) there is a constant \(\kappa\), depending only on \(C\), such that there exists an optimal curve for \(h_{T}(y,x)\) which is Lipschitz continuous of rank \(\kappa\)._
Proof.: We notice that, if \((y,x,T)\in A_{C}\), there is a curve \(\xi:[0,T]\to\Gamma\) with \(\left|\dot{\xi}\right|\leq C\) such that \(\xi(0)=y\) and \(\xi(T)=x\). Consequently, setting \(M_{1}:=\sup\limits_{x\in\Gamma,\left|q\right|\leq C}L(x,q)\), we get
\[\int_{0}^{T}L\left(\xi,\dot{\xi}\right)d\tau\leq T\sup\limits_{x\in\Gamma, \left|q\right|\leq C}L(x,q)=M_{1}T. \tag{77}\]
Since \(L\) is a superlinearly coercive function we can choose two positive \(A\) and \(B\) such that
\[A|q|-B\leq L(x,q),\qquad\text{for any }(x,q)\in T\Gamma,\]
thus, if \(\zeta:[0,T]\to\Gamma\) is an optimal curve for \(h_{T}(y,x)\), we have by (77)
\[A\int_{0}^{T}\left|\dot{\zeta}(\tau)\right|d\tau-BT\leq\int_{0}^{T}L\left( \zeta,\dot{\zeta}\right)d\tau\leq M_{1}T.\]
Setting \(M_{2}:=\dfrac{M_{1}+B}{A}\), we then have that, whenever \((y,x,T)\in A_{C}\) and \(\zeta\) is an optimal curve for \(h_{T}(y,x)\),
\[\int_{0}^{T}\left|\dot{\zeta}(\tau)\right|d\tau\leq M_{2}T\]
and consequently
\[\left|\left\{t\in[0,T]:\left|\dot{\zeta}(t)\right|\leq 2M_{2}\right\}\right| \geq\dfrac{T}{2}. \tag{78}\]
Given \((y,x,T)\in A_{C}\) we fix, thanks to Lemma C.1, an optimal curve \(\zeta\) for \(h_{T}(y,x)\) and a constant \(a\geq a_{\zeta}\) such that \(\zeta\) has an \(a\)-Lagrangian parametrization. \(\zeta\) is differentiable a.e., therefore (78) yields the existence of a \(t^{\prime}\in[0,T]\), a \(\gamma^{\prime}\in\mathcal{E}\) and an \(s^{\prime}\in(0,1)\) such that \(\zeta\) is differentiable in \(t^{\prime}\), \(\left|\dot{\zeta}(t^{\prime})\right|\leq 2M_{2}\), \(\zeta(t^{\prime})=\gamma^{\prime}(s^{\prime})\) and
\[L\left(\zeta\left(t^{\prime}\right),\dot{\zeta}\left(t^{\prime}\right)\right)+ a=\sigma_{a}\left(\zeta\left(t^{\prime}\right),\dot{\zeta}\left(t^{\prime} \right)\right)=\sigma_{\gamma^{\prime},a}^{+}\left(s^{\prime}\right)\dfrac{ \dot{\zeta}(t^{\prime})\dot{\gamma}^{\prime}(s^{\prime})}{|\dot{\gamma}^{ \prime}(s^{\prime})|^{2}}. \tag{79}\]
Assuming that \(\dot{\zeta}(t^{\prime})\neq 0\) we have that \(q\mapsto\sigma_{a}(\zeta(t^{\prime}),q)\) is differentiable in \(\dot{\zeta}(t^{\prime})\) and
\[\dot{\gamma}^{\prime}\left(s^{\prime}\right)\partial_{q}\sigma_{a}\left(\zeta \left(t^{\prime}\right),\dot{\zeta}\left(t^{\prime}\right)\right)=\sigma_{ \gamma^{\prime},a}^{+}\left(s^{\prime}\right).\]
Moreover we have by (79) and Lemma B.11 that \(q\mapsto\sigma_{a}(\zeta(t^{\prime}),q)\) is a subtangent to \(q\mapsto L(\zeta(t^{\prime}),q)+a\) at \(\dot{\zeta}(t^{\prime})\), therefore
\[\sigma_{\gamma^{\prime},a}^{+}(s)\in\dot{\gamma}^{\prime}\left(s^{\prime} \right)\partial_{q}L\left(\zeta\left(t^{\prime}\right),\dot{\zeta}\left(t^{ \prime}\right)\right).\]
If instead \(\dot{\zeta}(t^{\prime})=0\) then \(a=-L_{\gamma^{\prime}}(s^{\prime},0)=\min_{\mu\in\mathbb{R}}H_{\gamma^{\prime }}(s^{\prime},\mu)\), thus \(\sigma_{\gamma^{\prime},a}^{+}(s^{\prime})=\sigma_{\gamma^{\prime},a_{\gamma^ {\prime}}}^{+}(s^{\prime})\) by **(H5)**. In both cases, since \(\left|\dot{\zeta}(t^{\prime})\right|\leq 2M_{2}\), we further have by [6, Corollary to Proposition 2.2.6] that
\[\left|\sigma_{\gamma^{\prime},a}^{+}\left(s^{\prime}\right)\right|\leq\sup_{ \gamma\in\mathcal{E},s\in[0,1],|q|\leq 2M_{2}+2}\left|\dot{\gamma}(s)\right| \left(\left|L(\gamma(s),q)\right|\vee\left|\sigma_{\gamma,a_{\gamma}}^{+}(s) \right|\right)=:M_{3},\]
which in turn implies that
\[a\leq\max_{\gamma\in\mathcal{E},s\in[0,1],|\mu|\leq M_{3}}H_{\gamma}(s,\mu)= :\overline{a}.\]
It follows from Proposition B.10 that there is a constant \(\kappa\) depending on \(\overline{a}\) such that \(\left|\dot{\zeta}\right|\leq\kappa\) a.e.. We conclude this proof observing that the constant \(\overline{a}\) only depends on \(C\), thus, for each \((y,x,T)\in A_{C}\), it is always possible to select an optimal curve for \(h_{T}(y,x)\) which is also \(\kappa\)-Lipschitz continuous.
Lemma C.2 is crucial for the proof of the next Proposition.
**Proposition C.3**.: _Let \(A_{C}\) be defined by (76), then there is a constant \(\ell\) such that the minimal action in (75) is Lipschitz continuous of rank \(\ell\) on \(A_{C}\)._
Proof.: We fix \((y,x,T)\in A_{C}\), then Lemma C.2 shows that there is a constant \(\kappa\), depending only on \(C\), and an optimal curve \(\xi\) for \(h_{T}(y,x)\) such that \(\xi\) is \(\kappa\)-Lipschitz continuous. We set
\[\ell^{\prime}:=\sup_{(x,q)\in T\Gamma,|q|\leq 2\kappa}L(x,q),\qquad l:=\inf_{(x,q )\in T\Gamma}L(x,q)\qquad\text{and}\qquad\ell^{\prime\prime}:=3\ell^{\prime}-2l.\]
We start proving that, if \((y,x,T^{\prime})\in A_{C}\),
\[\left|h_{T}(y,x)-h_{T^{\prime}}(y,x)\right|\leq\ell^{\prime\prime}\left|T-T^{ \prime}\right|. \tag{80}\]
We temporarily assume that
\[\left|T^{\prime}-T\right|<\frac{T\wedge T^{\prime}}{2}, \tag{81}\]
then we define
\[\overline{\xi}(t):=\begin{cases}\xi(t),&\text{if }t\in\left[0,T-2\left|T-T^{ \prime}\right|\right),\\ \xi\left(2\left|T-T^{\prime}\right|\frac{t-T+2|T-T^{\prime}|}{T^{\prime}-T+2 |T-T^{\prime}|}+T-2\left|T-T^{\prime}\right|\right),&\text{if }t\in\left[T-2\left|T-T^{ \prime}\right|,T^{\prime}\right],\end{cases}\]
which is a \(2\kappa\)-Lipschitz continuous curve connecting \(y\) to \(x\), thus
\[h_{T^{\prime}}(y,x)-h_{T}(y,x) \leq\int_{0}^{T^{\prime}}L\left(\overline{\xi},\dot{\overline{ \xi}}\right)d\tau-\int_{0}^{T}L\left(\xi,\dot{\xi}\right)d\tau\] \[=\int_{T-2|T^{\prime}-T|}^{T^{\prime}}L\left(\overline{\xi},\dot{ \overline{\xi}}\right)d\tau-\int_{T-2|T^{\prime}-T|}^{T}L\left(\xi,\dot{\xi} \right)d\tau\] \[\leq 3\ell^{\prime}\left|T-T^{\prime}\right|-2l\left|T-T^{\prime} \right|\leq\ell^{\prime\prime}\left|T-T^{\prime}\right|.\]
Interchanging the roles of \(T\) and \(T^{\prime}\) we get that (80) holds true whenever (81) is satisfied.
Now assume that (81) does not hold; we may then assume without loss of generality that \(T^{\prime}>T\) and choose an integer \(m\) such that
\[\frac{|T^{\prime}-T|}{m}<\frac{T\wedge T^{\prime}}{2}=\frac{T}{2}.\]
We define the sequence \(\{T_{i}\}_{i=0}^{m}\) such that
\[T_{0}:=T,\qquad T_{i}:=T_{i-1}+\frac{|T^{\prime}-T|}{m}\]
and observe that \(T_{m}=T^{\prime}\). By definition \(T_{i-1}\) and \(T_{i}\) satisfy (81), consequently we have from the previous step that
\[|h_{T}(y,x)-h_{T^{\prime}}(y,x)|\leq\sum_{i=1}^{m}\left|h_{T_{i}}(y,x)-h_{T_{i- 1}}(y,x)\right|\leq\sum_{i=1}^{m}\ell^{\prime\prime}|T_{i}-T_{i-1}|=\ell^{ \prime\prime}\left|T-T^{\prime}\right|.\]
To prove the general case we let \((y^{\prime},x^{\prime},T^{\prime})\) be an element of \(A_{C}\), then we define
\[T_{y}:=\frac{d_{\Gamma}(y,y^{\prime})}{C}\quad\text{and}\quad T_{x}:=\frac{d_{ \Gamma}(x,x^{\prime})}{C}.\]
We point out that by definition \((y,y^{\prime},T_{y}),(x,x^{\prime},T_{x})\in A_{C}\), thus \((y^{\prime},x^{\prime},T+T_{y}+T_{x})\in A_{C}\). In particular we have by Lemma C.2 that there exist two Lipschitz continuous curves \(\xi_{x}\), \(\xi_{y}\) of rank \(\kappa\) connecting, respectively, \(x\) to \(x^{\prime}\) and \(y^{\prime}\) to \(y\). We further define
\[\overline{\xi}(t):=\begin{cases}\xi_{y}(t),&\text{if }t\in[0,T_{y}),\\ \xi(t-T_{y}),&\text{if }t\in[T_{y},T+T_{y}),\\ \xi_{x}\left(t-T-T_{y}\right),&\text{if }t\in[T+T_{y},T+T_{y}+T_{x}].\end{cases}\]
Clearly \(\overline{\xi}:[0,T+T_{y}+T_{x}]\to\Gamma\) is a \(\kappa\)-Lipschitz continuous curve connecting \(y^{\prime}\) to \(x^{\prime}\), therefore, by (80),
\[h_{T^{\prime}}\left(y^{\prime},x^{\prime}\right)-h_{T}(y,x) \leq\left|h_{T^{\prime}}\left(y^{\prime},x^{\prime}\right)-h_{T+T_{y}+T_{x}}\left(y^{\prime},x^{\prime}\right)\right|+h_{T+T_{y}+T_{x}}\left(y^{\prime},x^{\prime}\right)-h_{T}(y,x)\] \[\leq\ell^{\prime\prime}\left|T^{\prime}-T-T_{y}-T_{x}\right|+\int_{0}^{T+T_{y}+T_{x}}L\left(\overline{\xi},\dot{\overline{\xi}}\right)d\tau-\int_{0}^{T}L\left(\xi,\dot{\xi}\right)d\tau\] \[\leq\ell^{\prime\prime}\left(\left|T-T^{\prime}\right|+T_{y}+T_{x}\right)+\ell^{\prime}\left(T_{y}+T_{x}\right)\] \[\leq\ell^{\prime\prime}\left|T-T^{\prime}\right|+\frac{\ell^{\prime}+\ell^{\prime\prime}}{C}\left(d_{\Gamma}(y,y^{\prime})+d_{\Gamma}(x,x^{\prime})\right).\]
Finally, interchanging the roles of \((y,x,T)\) and \((y^{\prime},x^{\prime},T^{\prime})\), we prove our claim.
|
2306.17001 | Universal edge scaling limit of discrete 1d random Schrödinger
operator with vanishing potentials | Consider random Schr\"odinger operators $H_n$ defined on
$[0,n]\cap\mathbb{Z}$ with zero boundary conditions: $$
(H_n\psi)_\ell=\psi_{\ell-1}+\psi_{\ell+1}+\sigma\frac{\mathfrak{a}(\ell)}{n^{\alpha}}\psi_{\ell},\quad
\ell=1,\cdots,n,\quad \quad \psi_{0}=\psi_{n+1}=0, $$ where $\sigma>0$ is a
fixed constant, $\mathfrak{a}(\ell)$, $\ell=1,\cdots,n$, are i.i.d. random
variables with mean $0$, variance $1$ and fast decay. The bulk scaling limit
has been investigated in \cite{kritchevski2011scaling}: at the critical
exponent $\alpha= \frac{1}{2}$, the spectrum of $H_n$, centered at
$E\in(-2,2)\setminus\{0\}$ and rescaled by $n$, converges to the
$\operatorname{Sch}_\tau$ process and does not depend on the distribution of
$\mathfrak{a}(\ell).$
We study the scaling limit at the edge. We show that at the critical value
$\alpha=\frac{3}{2}$, if we center the spectrum at 2 and rescale by $n^2$, then
the spectrum converges to a new random process depending on $\sigma$ but not
the distribution of $\mathfrak{a}(\ell)$. We use two methods to describe this
edge scaling limit. The first uses the method of moments, where we compute the
Laplace transform of the point process, and represent it in terms of integrated
local times of Brownian bridges. Then we show that the rescaled largest
eigenvalues correspond to the lowest eigenvalues of the random Schr\"odinger
operator $-\frac{d^2}{dx^2}+\sigma b_x'$ defined on $[0,1]$ with zero boundary
condition, where $b_x$ is a standard Brownian motion. This allows us to compute
precise left and right tails of the rescaled largest eigenvalue and compare
them to Tracy-Widom beta laws.
We also show if we shift the potential $\mathfrak{a}(\ell)$ by a
state-dependent constant and take $\alpha=\frac{1}{2}$, then for a particularly
chosen state-dependent shift, the rescaled largest eigenvalues converge to the
Tracy-Widom beta distribution. | Yi Han | 2023-06-29T15:00:25Z | http://arxiv.org/abs/2306.17001v2 | # Universal edge scaling limit of discrete 1D random Schrodinger operator with vanishing potentials
###### Abstract.
Consider random Schrodinger operators \(H_{n}\) defined on \([0,n]\cap\mathbb{Z}\) with zero boundary conditions:
\[(H_{n}\psi)_{\ell}=\psi_{\ell-1}+\psi_{\ell+1}+\sigma\frac{\mathfrak{a}(\ell)}{ n^{\alpha}}\psi_{\ell},\quad\ell=1,\cdots,n,\qquad\psi_{0}=\psi_{n+1}=0,\]
where \(\sigma>0\) is a fixed constant, \(\mathfrak{a}(\ell)\), \(\ell=1,\cdots,n\), are i.i.d. random variables with mean \(0\), variance \(1\) and fast decay. The bulk scaling limit has been investigated in [28]: at the critical exponent \(\alpha=\frac{1}{2}\), the spectrum of \(H_{n}\), centered at \(E\in(-2,2)\setminus\{0\}\) and rescaled by \(n\), converges to the \(\operatorname{Sch}_{\tau}\) process and does not depend on the distribution of \(\mathfrak{a}(\ell)\).
We study the scaling limit at the edge. We show that at the critical value \(\alpha=\frac{3}{2}\), if we center the spectrum at \(2\) and rescale by \(n^{2}\), then the spectrum converges to a new random process depending on \(\sigma\) but not the distribution of \(\mathfrak{a}(\ell)\). We use two methods to describe this edge scaling limit. The first uses the method of moments, where we compute the Laplace transform of the point process, and represent it in terms of integrated local times of Brownian bridges. Then we show that the rescaled largest eigenvalues correspond to the lowest eigenvalues of the random Schrodinger operator \(-\frac{d^{2}}{dx^{2}}+\sigma b_{x}^{\prime}\) defined on \([0,1]\) with zero boundary condition, where \(b_{x}\) is a standard Brownian motion. This allows us to compute precise left and right tails of the rescaled largest eigenvalue and compare them to Tracy-Widom beta laws.
We also show if we shift the potential \(\mathfrak{a}(\ell)\) by a state-dependent constant and take \(\alpha=\frac{1}{2}\), then for a particularly chosen state-dependent shift, the rescaled largest eigenvalues converge to the Tracy-Widom beta distribution.
Supported by EPSRC grant EP/W524141/1
## 1. Introduction
Consider the random Schrodinger operator \(H_{n}\) defined on \((0,n]\cap\mathbb{Z}\),
\[(H_{n}\psi)_{\ell}=\psi_{\ell-1}+\psi_{\ell+1}+\sigma\frac{\mathfrak{a}(\ell) }{n^{\alpha}}\psi_{\ell},\quad\ell=1,\cdots,n,\qquad\psi_{0}=\psi_{n+1}=0, \tag{1.1}\]
where \(\sigma>0\), and \(\mathfrak{a}(\ell)\), \(\ell=1,2,\cdots,n,\cdots\) are random variables that satisfy
_Assumption 1.1_.: The random variables \(\mathfrak{a}(\ell)\), \(\ell=1,2,\cdots\), are independent and
1. \(\mathbb{E}[\mathfrak{a}(\ell)]=0\) for each \(\ell\),
2. \(\mathbb{E}[\mathfrak{a}(\ell)^{2}]\)=1 for each \(\ell\), and
3. for some \(C>0\) and \(0<\gamma<2/3\), we have \[\mathbb{E}[|\mathfrak{a}(\ell)|^{k}]\leq C^{k}k^{\gamma k},\quad\text{ for all }\ell,k\in\mathbb{N}_{+}.\]
Denote by \(\Lambda_{n}\) the (random) set of eigenvalues of \(H_{n}\). Throughout this paper we assume the parameter \(\alpha>0\) in the definition of (1.1), so the random potentials are vanishing in the \(n\to\infty\) limit. In this scenario, it is well-known that the spectrum \(\Lambda_{n}\) converges to
\([-2,2]\) almost surely as \(n\to\infty\), and the empirical measure supported on \(\Lambda_{n}\) converges to the arcsine law
\[\rho(E)=\frac{1}{2\pi\sqrt{1-E^{2}/4}}1_{|E|<2} \tag{1.2}\]
on \([-2,2]\). In other words, the macroscopic statistics of \(\Lambda_{n}\) converges to that of the free Laplacian in the large \(n\) limit. Convergence of the integrated density of states towards the free Laplacian also holds for higher dimensional Anderson operators with vanishing potentials, see [8]. It is thus of interest to zoom in at a finer scale and investigate second order fluctuations and obtain nontrivial scaling limits of \(\Lambda_{n}\) around some energy \(E\in[-2,2]\).
For energy in the bulk of the spectrum, that is for \(E\in(-2,2)\), the scaling limit is much better understood. This is closely related to the phenomenon of Anderson localization [1], which, in the context of random Schrodinger operator on \(\mathbb{Z}\) (potentials without decay),
\[(H\psi)_{\ell}=\psi_{\ell-1}+\psi_{\ell+1}+\sigma\mathfrak{a}(\ell)\psi_{\ell },\quad\ell\in\mathbb{Z}, \tag{1.3}\]
predicts that the spectrum of \(H\) is pure point and the eigenfunctions decay exponentially fast at infinity with probability one. Proofs of such results have been obtained via different methods in the past forty years under various assumptions on the distribution of \(\mathfrak{a}\), and also for some multidimensional Anderson models. The literature is extensive so we refer to the review [21] for a comprehensive list of works in this direction. For random potentials with decaying variance, [22] considered the random Schrodinger operator
\[(H\psi)_{\ell}=\psi_{\ell-1}+\psi_{\ell+1}+\sigma\frac{\mathfrak{a}(\ell)}{ \ell^{\alpha}}\psi_{\ell},\quad\ell\in\mathbb{N}_{+}, \tag{1.4}\]
and their result is that for \(\alpha>\frac{1}{2}\), the spectrum of \(H\) is absolutely continuous (hence no localization); for \(\alpha\in(0,\frac{1}{2})\) the spectrum is pure point and eigenfunctions are square integrable; and for \(\alpha=\frac{1}{2}\) a mixed spectrum may occur depending on the value of \(\sigma\).
The bulk scaling limit of the spectrum \(\Lambda_{n}\) of the Anderson operator \(H_{n}\) (1.1), centered at a bulk energy \(E\in(-2,2)\), also depends on the value of \(\alpha\) in an essential way. When the potentials do not decay, i.e. \(\alpha=0\) in (1.1), and assuming that the \(\mathfrak{a}(\ell)\) have a bounded density with respect to Lebesgue measure, \(n(\Lambda_{n}-E)\) converges to an inhomogeneous Poisson process, for any \(E\) such that the density of states function is not flat near \(E\). See Minami [34], and also Germinet and Klopp [16] for more refined results on Poisson statistics. For potentials with fast decay, \(\alpha>\frac{1}{2}\), the rescaled point process \(n(\Lambda_{n}-E)\) converges to a deterministic point process called the clock process, independent of \(\sigma\) and the random variables \(\mathfrak{a}(\ell)\). For potentials with slow decay \(\alpha\in(0,\frac{1}{2})\), Poisson statistics is expected for \(n(\Lambda_{n}-E)\), yet the only rigorous proof in this regime up to now [23] concerns a continuous time model with vanishing coefficients, driven by Brownian motion; see also [14], Remark 2.5. Therefore we may safely conclude that, in the no decay and slow decay regime \(\alpha\in[0,\frac{1}{2})\), a Poisson scaling limit is expected, yet the limit is not universal in that the proofs use the fact that the distribution of \(\mathfrak{a}(\ell)\) is absolutely continuous with respect to the Lebesgue measure, or at least has no atoms, so the proof does not work for discrete probability laws like the Bernoulli distribution. In the regime \(\alpha\in(\frac{1}{2},\infty)\) the limiting distribution is universal but less interesting, as the randomness is hidden from our scaling.
As the reader might expect, universality with respect to probability laws for bulk eigenvalue statistics only takes place at the critical value \(\alpha=\frac{1}{2}\), which is the main result of [28]. For the decaying model (1.1), with the choice \(z=E/2+i\sqrt{1-E^{2}/4}\) and \(\rho=\sqrt{1-E^{2}/4}\), [28] proved that, for \(\Lambda_{n}\) the spectrum of the operator \(H_{n}\), the shifted point process \(\rho n(\Lambda_{n}-E)-\arg(z^{2n+2})-\pi\) converges to the point process (with \(\tau=(\sigma\rho)^{2}\))
\[\operatorname{Sch}_{\tau}:=\{\lambda:\varphi^{\lambda/\tau}(\tau)\in 2\pi \mathbb{Z}\}, \tag{1.5}\]
that is, \(\operatorname{Sch}_{\tau}\) consists of the set of \(\lambda\) such that \(\varphi^{\lambda/\tau}(\tau)\in 2\pi\mathbb{Z}\) where \(\varphi^{\lambda}\) is the solution to the SDE
\[d\varphi^{\lambda}(t)=\lambda dt+d\mathcal{B}+\operatorname{Re}[e^{-i\varphi^ {\lambda}(t)}d\mathcal{W}],\quad\varphi^{\lambda}(0)=0, \tag{1.6}\]
where \(\mathcal{B}\) and \(\mathcal{W}\) are real and complex standard Brownian motions. See also [10] for a different context in which \(\operatorname{Sch}_{\tau}\) arises as the bulk scaling limit. In [28] they also considered the random Schrodinger operator on \((0,n]\cap\mathbb{Z}\)
\[(H_{n}\psi)_{\ell}=\psi_{\ell-1}+\psi_{\ell+1}+\sigma\frac{\mathfrak{a}(\ell )}{\ell^{\frac{1}{2}}}\psi_{\ell},\quad\ell=1,\cdots,n,\qquad\psi_{0}=\psi_{n +1}=0, \tag{1.7}\]
and they identified the point process scaling limit in the bulk as the sine beta process in random matrix theory [42]. One may also consider random potentials with different vanishing profiles and get some other scaling limits [19].
In this paper we are interested in the edge scaling limit of \(H_{n}\) at \(\pm 2\), which is the edge of the arcsine law. The main question we would like to ask is: for what value of \(\alpha>0\) do we expect non-trivial edge scaling limits of \(H_{n}\) at \(\pm 2\) that are universal in terms of the distribution of \(\mathfrak{a}(\ell)\)? How do we rescale the spectrum of \(H_{n}\) (that is, for what value of \(c>0\) do we consider \(n^{c}(\Lambda_{n}-2)\)), and how do we characterize the limiting random point process?
Before stating our main results, we mention two recent papers [8] and [20] where the authors considered \(d\)-dimensional random Schrodinger operators on \(\mathbb{Z}^{d}\) with vanishing single site potentials that are multidimensional generalizations of (1.1), and they showed that when the density of the single site potentials has a particular expression (depending on \(\alpha\)), then the extremal eigenvalues of \(H_{n}\) converge to an inhomogeneous Poisson process. However, the single site potentials in these papers are heavy tailed and do not include any distribution with bounded support. In this paper we take the opposite direction by considering potentials with fast decay, and do not impose further restrictions on the distribution beyond Assumption 1.1.
The main discovery of this paper is that edge scaling limit occurs at \(\alpha=\frac{3}{2}\) and \(c=2\). That is, for \(\alpha=\frac{3}{2}\) in (1.1), the point process \(n^{2}(\Lambda_{n}-2)\) converges to a nontrivial random point process on \(\mathbb{R}\), and the point process depends only on \(\sigma\) but not other properties of the random potential \(\mathfrak{a}\). This point process appears to be new in the literature, and we use two different methods to characterize it: a Brownian local time interpretation via the method of moments in Section 1.1, and a Schrodinger operator interpretation in Section 1.3. The two methods yield complementary results, and the techniques are respectively inspired by [17] and [37], with quite nontrivial modifications made in the present paper. We also discuss the possibility of Tracy-Widom edge fluctuations for a modified model of (1.1) in Section 1.2.
There is very little existing literature on the fluctuations of top eigenvalues of random Schrodinger operators; see [24], [5], [33] and [18] for earlier works in continuous space, with particularly chosen random potentials. Our paper provides the first universality result on fluctuations of top eigenvalues, and the exponent \(\frac{3}{2}\) has never been predicted before in the literature. Although we do not discuss the \(\alpha<\frac{3}{2}\) case in this paper, we believe the edge scaling limit will be a Poisson point process and leave the proof for a future work.
### Universal edge scaling limit
As we assume \(H_{n}\) has zero boundary conditions, the eigenvalues of \(H_{n}\) are exactly the eigenvalues of the tridiagonal matrix
\[H_{n}=\begin{pmatrix}v_{1}&1&&&&\\ 1&v_{2}&1&&&\\ &1&\ddots&\ddots&&\\ &&\ddots&\ddots&1&\\ &&&1&v_{n-1}&1\\ &&&&1&v_{n}\end{pmatrix} \tag{1.8}\]
where \(v_{k}=\sigma\frac{\mathfrak{a}(k)}{n^{\alpha}}\), \(k=1,\cdots,n\). Then we can regard \(H_{n}\) as a random matrix and use tools from random matrix theory to obtain the edge scaling limit. However, the analogy between the random Schrodinger operator \(H_{n}\) and classical random matrix ensembles such as the Gaussian \(\beta\) ensemble holds in the bulk but not at the edge. In the bulk, the typical eigenvalue spacing is \(\frac{1}{n}\) for both the random Schrodinger operator (1.1) and an \(n\times n\) GUE matrix normalized by \(2\sqrt{n}\); but at the edge, an explicit computation of the eigenvalues of \(H_{n}\) (in the deterministic case \(\sigma=0\)) predicts that the eigenvalue spacing has order \(n^{-2}\), whereas the Tracy-Widom law of the GUE matrix shows that the eigenvalue spacing is \(n^{-2/3}\) at the edge.
Although a direct connection to random matrix ensembles does not hold in our setting, some powerful techniques in random matrix theory can still be used. In this section we use the moment method to compute the eigenvalue distribution of \(H_{n}\) at scale \(n^{-2}\). The strategy is that we compute the \(n^{2}\)-th power of \(H_{n}/2\) and try to find a scaling limit. We aim for an \(\alpha>0\) such that the scaling limit is nontrivial and universal in terms of the distribution of \(\mathfrak{a}(\ell)\), and our computation shows that for \(\alpha=\frac{3}{2}\) this is precisely the case. This value is in contrast to the critical value \(\alpha=\frac{1}{2}\) for a universal scaling limit in the bulk. The moment method has been used to establish edge scaling limits for a number of random matrix ensembles including [13],[38], and to derive an alternative description of the stochastic Airy operator in [17]. In our proof we will follow some computations in Gorin and Shkolnikov [17] in the case of Gaussian beta ensembles (thanks to Dumitriu and Edelman [12], the Gaussian beta ensemble admits a tridiagonal matrix representation, which is a starting point in [17] and [37]), yet as explained above, our scaling limit is very different from the Gaussian ensembles.
Now we state the main results. Consider a probability space supporting a Brownian motion \(W\), and for any \(T>0\) define a stochastic kernel \(K(x,y;T)\) on \([0,1]\times[0,1]\) via
\[K(x,y;T)=\frac{1}{\sqrt{2\pi T}}\exp\left(-\frac{(x-y)^{2}}{2T}\right)\mathbb{E}_{B^{x,y}}\left[1_{\forall t:\,B^{x,y}(t)\in[0,1]}\exp\left(\frac{\sigma}{2}\int_{0}^{1}L_{a}(B^{x,y})dW(a)\right)\right] \tag{1.9}\]
where \(B^{x,y}\) denotes a standard Brownian bridge that starts from \(x\) at \(t=0\) and ends at \(y\) at \(t=T\), and is independent of \(W\); \(L_{a}(B^{x,y})\) is the local time of \(B^{x,y}\) at level \(a\) on the time interval \([0,T]\); the expectation \(\mathbb{E}_{B^{x,y}}\) is taken over \(B^{x,y}\). Denote by \(\mathcal{U}(T)\), \(T>0\), the integral operator on \(L^{2}([0,1])\) with respect to the kernel \(K(x,y;T)\). Then the following properties of \(\mathcal{U}(T)\) will be proved in Section 4.1. After writing the first draft, we learned about the recent paper [30], where similar properties for Schrodinger operators with Gaussian potentials are derived in a more general setting. Thus we do not claim any originality for this result but keep the proof for the sake of completeness.
**Proposition 1.2**.:
1. _Given any_ \(T>0\)_,_ \(\mathcal{U}(T)\) _is symmetric, non-negative trace class operator on_ \(L^{2}([0,1])\) _with probability one, that satisfies_ \[\mathrm{Trace}(\mathcal{U}(T))=\int_{0}^{1}K(x,x;T)dx.\] (1.10)
2. _The operators_ \(\mathcal{U}(T),T\geq 0\) _has the semigroup property with probability one: for given_ \(T_{1},T_{2}\geq 0\)_, then_ \(\mathcal{U}(T_{1})\mathcal{U}(T_{2})=\mathcal{U}(T_{1}+T_{2})\) _almost surely._
3. _The semigroup_ \(\mathcal{U}(T),T\geq 0\)_, is strongly continuous in_ \(L^{2}\) _in the sense that given_ \(T\geq 0\)_,_ \(f\in L^{2}([0,1]),\) _we have_ \(\lim_{t\to T}\mathbb{E}[\|\mathcal{U}(T)f-\mathcal{U}(t)f\|^{2}]=0\)_._
4. _We can find a (random) orthogonal basis of vectors_ \(\mathbf{v}^{1},\mathbf{v}^{2},\cdots\in L^{2}([0,1])\)_, as well as stochastic variables_ \(\eta^{1}\geq\eta^{2}\geq\cdots\) _on the same probability space in such a way that for any_ \(T>0\)_, the spectrum of_ \(\mathcal{U}(T)\) _is given by_ \(\exp(T\eta^{i}/2),i\in\mathbb{N}\) _that correspond to eigenvectors_ \(\mathbf{v}^{i},i\in\mathbb{N}.\)__
Let \(H_{n}\) denote the matrix representation (1.8) of the random Schrodinger operator (1.1) and consider the \(n\times n\) matrix
\[\mathcal{M}(T,n)=\frac{1}{2}\left(\left(\frac{H_{n}}{2}\right)^{\lfloor Tn^{2 }\rfloor}+\left(\frac{H_{n}}{2}\right)^{\lfloor Tn^{2}\rfloor-1}\right), \tag{1.11}\]
then we have the main convergence theorem
**Theorem 1.3**.: _Under Assumption (1.1) and assuming \(\alpha=\frac{3}{2}\), we have_
\[\lim_{n\to\infty}\mathcal{M}(T,n)=\mathcal{U}(T),\quad T\geq 0\]
_in the sense that we have_
1. _Convergence in weak sense: for any_ \(f,g\in L^{2}([0,1])\) _and_ \(T>0\)_, denote by_ \(\pi_{n}f\) _the vector in_ \(\mathbb{R}^{n}\) _with components_ \(n^{\frac{1}{2}}\int_{\frac{i-1}{n}}^{\frac{i}{n}}f(x)dx,\)__\(i=1,\cdots,n\)_, and_ \((\pi_{n}f)^{\prime}\) _the transpose of_ \(\pi_{n}f\)_, then_ \[\lim_{n\to\infty}(\pi_{n}f)^{\prime}\mathcal{M}(T,n)(\pi_{n}g)=\int_{0}^{1}( \mathcal{U}(T)f)(x)g(x)dx\] _with the convergence in distribution and in the sense of moments._
2. _Convergence for traces: given any_ \(T>0\)_,_ \[\lim_{n\to\infty}\mathrm{Trace}(\mathcal{M}(T,n))=\mathrm{Trace}(\mathcal{U}( T))\] _in distribution and in moments._
3. _The above convergence holds simultaneously for any finitely many_ \(T\)_'s,_ \(f\)_'s and_ \(g\)_'s._
_Moreover, the Brownian motion \(W\) in the definition of \(\mathcal{U}(T)\) can be realized as the following limit in the Skorokhod topology:_
\[W(a):=\lim_{n\to\infty}\sum_{l=0}^{\lfloor na\rfloor}\frac{\mathfrak{a}(l)}{n ^{1/2}},\quad a\in[0,1]. \tag{1.12}\]
The trace convergence of \(\mathcal{M}(T,n)\) leads to the Laplace transform of rescaled largest eigenvalues of \(H_{n}\).
**Corollary 1.4**.: _Take \(\alpha=\frac{3}{2}\) and assume Assumption 1.1 holds. Denote by \(\eta_{n}^{1}\geq\eta_{n}^{2}\geq\cdots\geq\eta_{n}^{n}\) the eigenvalues of \(n^{2}(H_{n}-2)\), where \(H_{n}\) is the matrix representation (1.8) of the random Schrodinger operator (1.1). Then we have the following convergence result, in terms of the Laplace transform, for the edge scaling limit of the random Schrodinger operator \(H_{n}\):_
\[\sum_{i=1}^{n}e^{T\eta_{n}^{i}/2}\rightarrow_{n\rightarrow\infty}\sum_{i=1}^{ \infty}e^{T\eta^{i}/2}=\operatorname{Trace}(\mathcal{U}(T)), \tag{1.13}\]
_with the convergence holding jointly for finitely many \(T\)'s. In particular, we have convergence_
\[\eta_{n}^{i}\rightarrow_{n\rightarrow\infty}\eta^{i} \tag{1.14}\]
_simultaneously for finitely many \(i\)'s._
Since (1.13) holds for every \(T>0\), this gives the convergence of Laplace transform for rescaled eigenvalues of \(H_{n}\) towards \(\mathcal{U}(T)\). This characterization involves all the eigenvalues of \(H_{n}\), while a characterization to be given in Section 1.3 only involves the finitely many largest eigenvalues. Another interesting feature of this convergence is that it is pathwise, or deterministic in the sense that the Brownian motion \(W\) in the definition of \(\mathcal{U}(T)\) can be realized as a deterministic function of the \(\mathfrak{a}(\cdot)\)'s via (1.12). On the contrary, the interpretation to be given in Section 1.3 is more probabilistic, yielding sharp quantitative estimates but does not appear to have a pathwise interpretation.
In the degenerate case \(\sigma=0\), where no randomness appears and \(H_{n}\) is a deterministic matrix, we deduce the following interesting corollary:
**Corollary 1.5**.: _For each \(T>0\), the following limit holds_
\[\sum_{j=1}^{\infty}e^{-\frac{T}{2}\pi^{2}j^{2}}=\frac{1}{\sqrt{2\pi T}}\int_{0 }^{1}\mathbb{P}_{B^{x,x}}\left[\text{ for any }t\in[0,T],B^{x,x}(t)\in[0,1]\right]dx, \tag{1.15}\]
_where \(B^{x,x}\) is a Brownian bridge connecting \(x\) to \(x\) on \([0,T]\)._
Proof.: Set \(\sigma=0\). Since \(H_{n}\) is a Toeplitz matrix, its eigenvalues are \(2\cos(\frac{k\pi}{n+1})\) for \(k=1,2,\cdots,n\). Then for each \(j\), \(\lim_{n\rightarrow\infty}\eta_{n}^{j}=-j^{2}\pi^{2}\). Then the claim follows from Corollary 1.4 and dominated convergence theorem.
The summation in (1.15) is the Jacobi theta function in the theory of elliptic functions, and it does not have a closed form unless \(T\) takes some particular values. See also [4] or other references for Brownian motion representations of infinite series summations.
### Tracy-Widom fluctuations for shifted means
An interesting related problem is whether we can construct a 1d random Schrodinger operator of the form (1.1) that behaves like a Gaussian random matrix at the edge, that is, whether we can obtain Tracy-Widom fluctuations for (1.1).
Unfortunately, this does not seem possible, in the author's view, if we insist that the random variables \(\mathfrak{a}(\ell)\) are i.i.d. with zero mean and unit variance. An example of a 2d random Schrodinger operator whose lowest eigenvalue has Tracy-Widom fluctuation was constructed in [25] via establishing a connection between the lowest eigenvalues of the random Schrodinger operator and the fluctuation of the free energy of a log-gamma polymer [26]. The polymer model is integrable and its free energy has GUE Tracy-Widom fluctuation, hence the Schrodinger operator has Tracy-Widom fluctuation at the edge.
In this note we observe that, if we add a deterministic, state-dependent shift to the variable \(\mathfrak{a}(\ell)\)'s, then we can obtain Tracy-Widom \((\beta)\)-fluctuations for a series of random Schrodinger operators for any \(\beta>0\). In this case we take \(\alpha=\frac{1}{2}\) in (1.1) and we rescale eigenvalues by \(n^{2/3}\). This scaling is consistent with the one that appears in Gaussian random matrix ensembles.
Recall that for any \(\beta>0\), the Tracy-Widom \((\beta)\)-distribution is defined in the celebrated work [37], as the negative of the lowest eigenvalue of the following random Schrodinger operator on \([0,\infty)\) with zero boundary condition at \(0\):
\[\mathcal{H}_{\beta}=-\frac{d^{2}}{dx^{2}}+x+\frac{2}{\sqrt{\beta}}b^{\prime}, \tag{1.16}\]
where \(b^{\prime}\) is a white noise. The irregularity of \(b^{\prime}\) requires us to interpret \(\mathcal{H}_{\beta}\) via integration by parts, and the details can be found in [37]. This definition generalizes the definition of Tracy-Widom laws in the integrable case \(\beta=1,2,4\) in [39], [40].
**Proposition 1.6**.: _For each \(\beta>0\), consider the random Schrodinger operator_
\[(H_{n}^{\beta}\psi)_{\ell}=\psi_{\ell-1}+\psi_{\ell+1}+\left(\frac{2}{\sqrt{ \beta}}\frac{\mathfrak{a}(\ell)}{n^{1/2}}-\frac{\ell}{n}\right)\psi_{\ell}, \quad\ell=1,\cdots,n,\qquad\psi_{0}=\psi_{n+1}=0, \tag{1.17}\]
_where \(\mathfrak{a}(1),\cdots,\mathfrak{a}(n),\cdots\) are i.i.d. mean \(0\), variance \(1\) random variables and have bounded fourth moments. Denote by \(\lambda_{1,n}^{\beta}>\lambda_{2,n}^{\beta}>\cdots\) the eigenvalues of \(H_{n}^{\beta}\), then for each \(k\in\mathbb{N}_{+}\), we have the joint convergence in distribution of the random vector_
\[\left(n^{2/3}\left(2-\lambda_{1,n}^{\beta}\right),\cdots,n^{2/3}\left(2- \lambda_{k,n}^{\beta}\right)\right)\]
_in the limit \(n\to\infty\) towards the lowest \(k\) eigenvalues of the operator \(\mathcal{H}_{\beta}\) defined in (1.16)._
_More generally, for any sequence \(m_{n}\) of positive integers tending to infinity with \(m_{n}=o(n)\), consider_
\[(H_{n}^{\beta}\psi)_{\ell}=\psi_{\ell-1}+\psi_{\ell+1}+\left(\frac{2}{\sqrt{ \beta}}\frac{\mathfrak{a}(\ell)}{(m_{n})^{3/2}}-\frac{\ell}{(m_{n})^{3}} \right)\psi_{\ell},\quad\ell=1,\cdots,n,\qquad\psi_{0}=\psi_{n+1}=0, \tag{1.18}\]
_then the random vector_
\[\left((m_{n})^{2}\left(2-\lambda_{1,n}^{\beta}\right),\cdots,(m_{n})^{2}\left( 2-\lambda_{k,n}^{\beta}\right)\right)\]
_converges in the \(n\to\infty\) limit to the lowest \(k\) eigenvalues of \(\mathcal{H}_{\beta}\)._
The method to derive Proposition 1.6 is outlined in Section 5.
### A Schrodinger operator representation of the edge scaling limit
In this section we turn back to the random Schrodinger operator (1.1) with potentials of zero mean, variance \(1\) satisfying Assumption 1.1. We have determined, for \(\alpha=\frac{3}{2}\), the (universal) edge scaling limit of \(H_{n}\) via the Laplace transform (see (1.13) and (1.10)). It is interesting to ask if we can find a random Schrodinger operator representation for this scaling limit, just as the operator \(\mathcal{H}_{\beta}\) in (1.16) governs the Tracy-Widom \(\beta\)-distribution. We give a positive answer by considering the following random operator defined on \([0,1]\) with zero boundary conditions:
\[\mathcal{G}_{\sigma}=-\frac{d^{2}}{dx^{2}}+\sigma b^{\prime}_{x} \tag{1.19}\]
More precisely, let \(L^{*}\) be the set of functions \(f:[0,1]\to\mathbb{R}\) with \(f(0)=0,f(1)=0\) and \(\int_{0}^{1}(|f|^{2}+|f^{\prime}|^{2})dx<\infty.\) We say that \((\psi,\lambda)\in L^{*}\times\mathbb{R}\) is a pair of eigenfunction/eigenvalue of \(\mathcal{G}_{\sigma}\) if \(\|\psi\|_{2}=1\) and
\[\psi^{\prime\prime}(x)=\sigma\psi(x)b_{x}^{\prime}-\lambda\psi(x), \tag{1.20}\]
in the sense that the following integration by parts formula holds
\[\psi^{\prime}(x)-\psi^{\prime}(0)=\sigma\psi(x)b_{x}-\int_{0}^{x}\sigma b_{y} \psi^{\prime}(y)dy-\int_{0}^{x}\lambda\psi(y)dy. \tag{1.21}\]
Since the random operator \(\mathcal{G}_{\sigma}\) is defined on \([0,1]\), its eigenvalues are almost surely bounded from below, and we know that, by [15], Section 2 or [35]:
**Proposition 1.7**.: _Almost surely, for any \(k\geq 0\), the set of eigenvalues of \(\mathcal{G}_{\sigma}\) has a well-defined \((k+1)\)-st smallest eigenvalue \(\Lambda_{k}\). The eigenvalues of \(\mathcal{G}_{\sigma}\) are distinct admitting no accumulation point. Moreover, \(\Lambda_{k}\to\infty\) as \(k\to\infty\) with probability 1._
This proposition yields the following variational characterization of the minimal eigenvalue of \(\mathcal{G}_{\sigma}\):
\[\Lambda_{0}=\inf_{f\in L^{*},\|f\|_{2}=1}\left\{\sigma\int_{0}^{1}f^{2}(x)db( x)+\int_{0}^{1}[f^{\prime}(x)]^{2}dx\right\} \tag{1.22}\]
where \(b\) is a standard Brownian motion and the stochastic integral is defined through integration by parts.
Now we state the main result of this section, which claims that the rescaled largest eigenvalues of the random Schrodinger operator \(H_{n}\) (1.1) converge to the eigenvalues of \(\mathcal{G}_{\sigma}\):
**Theorem 1.8**.: _Assume \(\alpha=\frac{3}{2}\) and Assumption 1.1 is satisfied. Let \(\lambda_{1}^{n}\geq\lambda_{2}^{n}\geq\cdots\) denote the eigenvalues of the random Schrodinger operator \(H_{n}\), (1.1). Then for any \(k\in\mathbb{N}_{+}\) the vector_
\[\left(n^{2}(2-\lambda_{\ell}^{n})\right)_{\ell=1,\cdots,k} \tag{1.23}\]
_converges in distribution in the \(n\to\infty\) limit to the random vector \((\Lambda_{0},\Lambda_{1},\cdots,\Lambda_{k-1})\)._
Indeed, as the proof in Section 6.2 will show, the third requirement in Assumption 1.1 is not necessary for the proof of Theorem 1.8; instead a uniform bound on the fourth moment of \(\mathfrak{a}(\ell)\) is sufficient. A possible explanation of this discrepancy is that Theorem 1.8 only concerns the convergence of a finite number of eigenvalues, whereas the moment method in Section 1.1 yields Laplace transform (1.13) that depends on all the eigenvalues.
Our proof technique of a random matrix converging to a random Schrodinger operator also has some similarity to the recent works [29] and [36]. Further localization properties of the limiting operator \(\mathcal{G}_{\sigma}\) have been investigated in [9] and a series of works that follow.
Combined with results from Section 1.1, we can set up the following equivalence between the random operators \(\mathcal{U}(T)\), \(T>0\), and the random operator \(\mathcal{G}_{\sigma}\). The correspondence is detailed in the following corollary, which will be proved in Section 6.3. We learned after writing the first draft that [30], Theorem 2.24 also contains a more general result on the equivalence of operators of this form.
**Corollary 1.9**.: _For any \(T>0\) define \(e^{-\frac{T}{2}\mathcal{G}_{\sigma}}\) the unique operator on \(L^{2}([0,1])\) with respect to the orthogonal basis spanned by eigenvectors of \(\mathcal{G}_{\sigma}\) and having corresponding eigenvalues \(e^{-T\Lambda_{0}/2}\geq e^{-T\Lambda_{1}/2}\geq\cdots\). We take a coupling of \(e^{-\frac{T}{2}\mathcal{G}_{\sigma}}\) with \(\mathcal{U}(T)\) via identifying the
Brownian motions \(W\) that appear in their definitions. Then given any \(T>0\), the operator \(e^{-\frac{T}{2}\mathcal{G}_{\sigma}}\) and \(\mathcal{U}(T)\) coincide almost surely._
It is also very interesting to investigate the left and right tails of the smallest eigenvalue of the random operator \(\mathcal{G}_{\sigma}\) via the variational characterization (1.22). For the Tracy-Widom distribution, a rather detailed asymptotic analysis can be found in [37], Section 4. In our case, we have the following result, which will be proved in Section 7:
**Theorem 1.10**.: _Let \(\mathrm{RSO}_{\sigma}:=-\Lambda_{0}(\sigma)\) where \(\Lambda_{0}(\sigma)\) is the smallest eigenvalue of \(\mathcal{G}_{\sigma}\). Then for \(a\uparrow\infty\) we have_
\[\begin{split}&\mathbb{P}\left(\mathrm{RSO}_{\sigma}>a\right)= \exp\left(-\frac{8}{3\sigma^{2}}a^{3/2}(1+o(1))\right),\quad\text{and}\\ &\mathbb{P}\left(\mathrm{RSO}_{\sigma}<-a\right)=\exp\left(-\frac {1}{2\sigma^{2}}a^{2}(1+o(1))\right).\end{split} \tag{1.24}\]
This is to be compared with the Tracy-Widom (\(\beta\))-distribution, whose tails have asymptotic ([37], Theorem 1.3): as \(a\uparrow\infty\),
\[\begin{split}&\mathbb{P}\left(\mathrm{TW}_{\beta}>a\right)=\exp \left(-\frac{2}{3}\beta a^{3/2}(1+o(1))\right),\quad\text{ and}\\ &\mathbb{P}\left(\mathrm{TW}_{\beta}<-a\right)=\exp\left(-\frac{ 1}{24}\beta a^{3}(1+o(1))\right).\end{split}\]
Some parts of Theorem 1.10 have appeared in the previous literature. For the right tail, a related estimate was first derived in [15], and a more precise probability estimate in the right tail was claimed in [6] that implies our right tail estimate via a simple asymptotic computation. Yet our computation is of independent interest as it is much shorter and simpler than that in [6] or any other available reference. It can also be generalized to other settings without much difficulty.
We observe that (setting \(\sigma=\frac{2}{\sqrt{\beta}}\)), \(\mathrm{RSO}_{\sigma}\) has the same right tail decay as \(\mathrm{TW}_{\beta}\) up to the precision stated in the theorem, yet the left tail of \(\mathrm{RSO}_{\sigma}\) is Gaussian-like and decays slower than the left tail of \(\mathrm{TW}_{\beta}\). It would be interesting to find a better understanding of the mechanism that accounts for this similarity and difference between the tails of \(\mathrm{RSO}_{\sigma}\) and \(\mathrm{TW}_{\beta}\).
Finally, we mention that all the proofs of edge scaling limit of random Schrodinger operators in this paper work only for the discrete model defined on \(\mathbb{Z}\), (1.1), where a random matrix interpretation (1.8) is available. It is very natural to conjecture that for 1d Schrodinger operator in continuous space
\[H_{\alpha,n}=-\frac{d^{2}}{dt^{2}}+n^{-\alpha}F(X),\quad\text{on }L^{2}([0,n]), \tag{1.25}\]
where \(F\) is a smooth function on a manifold \(M\) and \(X\) is the Brownian motion on \(M\), we have the same scaling limit, or at least a nontrivial scaling limit occurs at \(\alpha=\frac{3}{2}\). We also expect that similar edge scaling limits should occur for higher dimensional Anderson operators defined on the lattice with vanishing potentials, such that the limit does not depend on the potential beyond its first few moments (in contrast to [8], [20]). A rigorous proof of these results needs many additional efforts beyond the techniques presented in this paper, so we leave these questions for a future investigation. Yet we note that the case (1.25) can possibly be treated via a method similar to the universality result in [27].
## 2. Proof sketch
We first use the method of moments to show why a universal scaling limit is expected at \(\alpha=\frac{3}{2}\). This section serves as a sketch of proof of results announced in Section 1.1, while the details are presented in Section 3. The ideas presented here are largely inspired by [17].
In the rest of the paper, the symbol \(H_{n}\) stands for the tridiagonal matrix (1.8), which is the matrix representation of the random Schrodinger operator (1.1). We will compute very high powers of the matrix \(H_{n}\), indeed powers of order up to \(n^{2}\). By the definition of the matrix product,
\[(H_{n})^{k}[i,i^{\prime}]=\sum H_{n}[i_{0},i_{1}]H_{n}[i_{1},i_{2}]\cdots H_{n }[i_{k-2},i_{k-1}]H_{n}[i_{k-1},i_{k}], \tag{2.1}\]
with summation over integer sequences \(i_{0},i_{1},\cdots,i_{k}\) in \(\{1,2,\cdots,n\}\) with \(i_{0}=i\), \(i_{k}=i^{\prime}\) and \(|i_{j}-i_{j-1}|\leq 1\) for all \(j=1,2,\cdots,k\).
As an illustration we first consider \(\left(\frac{H_{n}}{2}\right)^{k}\) and \(k=\lfloor Tn^{2}\rfloor\). Assume \(k\) is even and \(|i-i^{\prime}|\) is even, then the corresponding part of the sum **involving no diagonal element** is
\[\frac{1}{2^{k}}\sum_{\begin{subarray}{c}1\leq i_{0},i_{1},\cdots,i_{k}\leq n \\ |i_{j}-i_{j-1}|=1\forall j\\ i_{0}=i,i_{k}=i^{\prime}\end{subarray}}1. \tag{2.2}\]
Note that by the parity constraint on \(k\) and \(|i-i^{\prime}|\), the number of diagonal elements in the sum must be even. The sum involving **precisely two diagonal elements** is given by
\[\frac{1}{2^{k}}\sum_{\begin{subarray}{c}1\leq i_{0},i_{1},\cdots,i_{k-2}\leq n\\ |i_{j}-i_{j-1}|=1\forall j\\ i_{0}=i,i_{k-2}=i^{\prime}\end{subarray}}1^{k-2}\times\left(\frac{1}{n^{2\alpha}}\sum_{0\leq j\leq\ell\leq k-2}\mathfrak{a}(i_{j})\mathfrak{a}(i_{\ell})\right).\]
We rewrite the last factor in terms of \(\frac{1}{n^{2\alpha}}(\sum_{j=0}^{k-2}\mathfrak{a}(i_{j}))^{2}\) and \(\frac{1}{n^{2\alpha}}\sum_{j=0}^{k-2}\mathfrak{a}(i_{j})^{2}\), and we denote them by \(A\) and \(B\) respectively. Recall that \(k\sim Tn^{2}\). If \(\alpha\) is very large, say \(\alpha>1\), then \(B\) vanishes in expectation as \(n\to\infty\), so it suffices to consider only \(A\) in the scaling limit. We may rewrite
\[\sum_{j=0}^{k-2}\mathfrak{a}(i_{j})=\sum_{l=0}^{n}\mathfrak{a}(l)\cdot\left| \{j=1,\cdots,k:i_{j}=l\}\right|. \tag{2.3}\]
A trajectory of a simple random walk connecting \(i_{0}\) to \(i_{k-2}\) in \(k\) steps typically visits on the order of \(k^{1/2}\sim n\) sites, and the occupation time of each visited site typically has order \(k^{1/2}\sim n\). Therefore we may regard the summation \(\frac{1}{n^{\alpha}}\sum_{j=0}^{k-2}\mathfrak{a}(i_{j})\) as a summation of \(n\) independent, mean \(0\), variance \(1\) random variables with coefficient \(\frac{1}{n^{\alpha-1}}\) each. By the central limit theorem, the sum converges to a Gaussian only if \(\alpha=\frac{3}{2}\).
For technical convenience, it would be useful to consider simultaneously the sum involving any odd number of diagonal elements. To cover that case, we consider \((\frac{H_{n}}{2})^{\lfloor Tn^{2}-1\rfloor}\) and do the same expansion. To take both even and odd cases into account, we introduce as in the introduction \(\mathcal{M}(T,n):=\frac{1}{2}\left(\left(\frac{H_{n}}{2}\right)^{\lfloor Tn^{ 2}\rfloor}+\left(\frac{H_{n}}{2}\right)^{\lfloor Tn^{2}-1\rfloor}\right)\) and prove convergence of \(\mathcal{M}(T,n)\) as \(n\) tends to infinity.
The rest of the proof of results announced in Section 1.1 follows from computing summations involving any even and odd number of diagonal elements, and approximating the result as an exponential.
For the proof of results in Section 1.2 and 1.3, we adapt the procedure in [37], yet we have to rewrite most parts of the proof because our limiting random Schrodinger operator \(\mathcal{G}_{\sigma}\) (and its domain) is very different from the stochastic Airy operator \(\mathcal{H}_{\beta}\) considered in [37].
## 3. Proof via the method of moments
### Approximating random walk by Brownian bridges
Given \(x,y\in\mathbb{R}\), \(n\in\mathbb{N}\) and \(\widetilde{T}>0\) such that \(\widetilde{T}n^{2}\) is an integer with the same parity as \(\lfloor nx\rfloor-\lfloor ny\rfloor\), write
\[X^{x,y;n,\widetilde{T}}:=(X^{x,y;n,\widetilde{T}}(0),X^{x,y;n,\widetilde{T}}( n^{-2}),\cdots,X^{x,y;n,\widetilde{T}}(\widetilde{T}))\]
as the simple random walk bridge that links \(\lfloor nx\rfloor\) to \(\lfloor ny\rfloor\) in \(\widetilde{T}n^{2}\) steps of increment size \(\pm 1\), such that the walk does not go below \(0\) or go above \(n\) within time \([0,\widetilde{T}]\). The trajectory of \(X^{x,y;n,\widetilde{T}}\) is chosen uniformly at random among all trajectories with these properties. Define also the occupation times, up to normalization,
\[L_{h}(X^{x,y;n,\widetilde{T}})=n^{-1}\left|\{t\in[0,\widetilde{T}]:X^{x,y;n,\widetilde{T}}(t)=nh\}\right|,\quad h\in\mathbb{R}. \tag{3.1}\]
The following coupling result is crucial throughout the rest of the proof.
**Proposition 3.1**.: _Given \(x,y\in\mathbb{R}\), consider \(T_{n},n\in\mathbb{N}\) a sequence of positive numbers with \(\sup_{n}|T_{n}-T|n^{2}<\infty\) for a given \(T>0\), and that \(T_{n}n^{2}\in\mathbb{N}\). Then we can find a probability space that supports a sequence of random walk bridges \(X^{x,y;n,T_{n}},\)\(n\in\mathbb{N}\), a standard Brownian bridge \(B^{x,y}\) on \([0,T]\) that connects \(x\) to \(y\), and a random variable \(\mathcal{C}\) such that for any \(n\in\mathbb{N}\),_
\[\sup_{h\in\mathbb{R}}|L_{h}(X^{x,y;n,T_{n}})-L_{h}(B^{x,y})|\leq\mathcal{C}n^{-3/16}, \tag{3.2}\]
\[\sup_{0\leq t\leq T_{n}\wedge T}|n^{-1}X^{x,y;n,T_{n}}(t)-B^{x,y}(t)|\leq \mathcal{C}n^{-1}\log n. \tag{3.3}\]
The proof is deferred to Appendix B, and is inspired by [17], Proposition 4.1.
We will also need the following auxiliary convergence result, stating that while the random walk bridge converges, the summation involving local times also converge:
**Proposition 3.2**.: _Given \(x,y\in\mathbb{R}\) and \(T_{n}>0\) with \(\sup_{n}|T_{n}-T|n^{2}<\infty\) for a given \(T>0\), such that \(T_{n}n^{2}\in\mathbb{N}\) for all \(n\) and \(T_{n}n^{2}\) has the same parity as \(\lfloor nx\rfloor-\lfloor ny\rfloor\). Then we have the joint convergence of_
\[\left(n^{-1}X^{x,y;n,T_{n}},\sigma\sum_{h\in n^{-1}(\mathbb{Z}_{\geq 0}+ \frac{1}{2})}L_{h}(X^{x,y;n,T_{n}})\frac{\mathfrak{a}(\lfloor nh\rfloor)}{n^{ 1/2}}\right) \tag{3.4}\]
_as \(n\to\infty\), converges in distribution to_
\[\left(B^{x,y},\sigma\int_{0}^{\infty}L_{a}(B^{x,y})dW_{\mathfrak{a}}(a)\right) \tag{3.5}\]
Proof.: Consider the conditional characteristic function
\[\mathbb{E}_{\mathfrak{a}}\left[\exp\left(i\sigma u\sum_{h\in n^{-1}(\mathbb{Z }_{\geq 0}+\frac{1}{2})}L_{h}(X^{x,y;n,T_{n}})\frac{\mathfrak{a}(\lfloor nh \rfloor)}{n^{1/2}}\right)\right],\quad u\in\mathbb{R} \tag{3.6}\]
with the expectation taken over \(\mathfrak{a}(m),m\in\mathbb{N}\). Define also the conditional characteristic function
\[\mathbb{E}_{W_{\mathfrak{a}}}\left[\exp\left(i\sigma u\int_{0}^{\infty}L_{a}(B^{x,y})dW_{\mathfrak{a}}(a)\right)\right],\quad u\in\mathbb{R} \tag{3.7}\]
we show that the difference between them converges to \(0\) as \(n\to\infty\), under the coupling discussed in Proposition 3.1. For fixed \(B^{x,y}\), the variable \(\sigma\int_{0}^{\infty}L_{a}(B^{x,y})dW_{\mathfrak{a}}(a)\) is a mean zero normal random variable with variance \(\sigma^{2}\int_{0}^{\infty}L_{a}(B^{x,y})^{2}da\); meanwhile for fixed \(X^{x,y;n,T_{n}}\),
\[\sum_{h\in n^{-1}(\mathbb{Z}_{\geq 0}+\frac{1}{2})}L_{h}(X^{x,y;n,T_{n}}) \frac{\mathfrak{a}(\lfloor nh\rfloor)}{n^{1/2}}\]
is a sum of independent variables, so the desired convergence will follow from the central limit theorem once we verify that, in the limit \(n\to\infty\),
\[\sum_{h\in n^{-1}(\mathbb{Z}_{\geq 0}+\frac{1}{2})}\mathbb{E}_{\mathfrak{a}} \left[L_{h}(X^{x,y;n,T_{n}})\frac{\mathfrak{a}(\lfloor nh\rfloor)}{n^{1/2}} \right]\to 0, \tag{3.8}\]
\[\sum_{h\in n^{-1}(\mathbb{Z}_{\geq 0}+\frac{1}{2})}\mathbb{E}_{\mathfrak{a}} \left[\left(L_{h}\left(X^{x,y;n,T_{n}}\right)\sigma\frac{\mathfrak{a}(\lfloor nh \rfloor)}{n^{1/2}}\right)^{2}\right]\to\sigma^{2}\int_{0}^{\infty}\left[L_{a}(B ^{x,y})\right]^{2}da, \tag{3.9}\]
\[\sum_{h\in n^{-1}(\mathbb{Z}_{\geq 0}+\frac{1}{2})}\mathbb{E}_{\mathfrak{a}} \left[\left|L_{h}(X^{x,y;n,T_{n}})\sigma\frac{\mathfrak{a}(\lfloor nh\rfloor) }{n^{1/2}}\right|^{3}\right]\to 0. \tag{3.10}\]
By the coupling in Proposition 3.1, the rescaled local times \(L_{h}(X^{x,y;n,T_{n}})\) are bounded uniformly in \(h\in\mathbb{R}\) and vanish for \(h\) sufficiently large. Thus (3.8) and (3.10) follow from the fact that \(\mathfrak{a}(\cdot)\) has mean zero, the fact that \(L_{h}(X^{x,y;n,T_{n}})\) and \(L_{h}(B^{x,y})\) are bounded uniformly in \(n\) for all given \(h\) thanks to the coupling in Proposition 3.1, and the fact that there are only \(n\) terms in the summation of \(|\mathfrak{a}(\cdot)|^{3}.\) To check (3.9), compute
\[\left|\sum_{h\in n^{-1}(\mathbb{Z}_{\geq 0}+\frac{1}{2})}\mathbb{E}_{\mathfrak{a}}\left[\left(L_{h}(X^{x,y;n,T_{n}})\frac{\mathfrak{a}(\lfloor nh\rfloor)}{n^{1/2}}\right)^{2}\right]-\sum_{h\in n^{-1}(\mathbb{Z}_{\geq 0}+\frac{1}{2})}\frac{L_{h}(B^{x,y})^{2}\mathbb{E}[\mathfrak{a}(\lfloor nh\rfloor)^{2}]}{n}\right|\] \[\leq 2\max_{h\in n^{-1}(\mathbb{Z}_{\geq 0}+\frac{1}{2})}\max\left\{L_{h}(X^{x,y;n,T_{n}}),L_{h}(B^{x,y})\right\}\mathcal{C}n^{-3/16}\cdot\frac{1}{n}\sum_{n^{-1}(\mathbb{Z}_{\geq 0}+\frac{1}{2})\ni h\leq\max(\|X^{x,y;n,T_{n}}\|_{\infty},\|B^{x,y}\|_{\infty})}\mathbb{E}[\mathfrak{a}(\lfloor nh\rfloor)^{2}].\]
The expression almost surely converges to \(0\) thanks to the fact that \(h\to L_{h}(X^{x,y;n,T_{n}})\), \(h\to L_{h}(B^{x,y})\), and \(\|X^{x,y;n,T_{n}}\|_{\infty}\) are bounded uniformly, and that there are only \(n\) terms \(\mathfrak{a}(\cdot)\) in the sum. Finally, since \(a\to L_{a}(B^{x,y})^{2}\) is almost surely uniformly continuous in \(a\), we have the almost
sure convergence
\[\sum_{h\in n^{-1}(\mathbb{Z}_{\geq 0}+\frac{1}{2})}\sigma^{2}\frac{[L_{h}(B^{x,y})]^{2}\,\mathbb{E}[\mathfrak{a}(\lfloor nh\rfloor)^{2}]}{n}\rightarrow\sigma^{2}\int_{0}^{\infty}[L_{a}(B^{x,y})]^{2}da.\]
which implies (3.9) and completes the proof of the whole proposition.
We also need the following version with random initial condition and multiple Brownian bridges. First introduce the following notation: given probability distributions \(\lambda,\mu\) on \([0,1]\) and \(\tilde{T}\in n^{-2}\mathbb{N}\), we denote by \(X^{\lambda,\mu;n,\tilde{T}}\) a random walk bridge whose start and end points \(x,y\) are independently sampled from the image of \(\lambda\), \(\mu\) via the map \(z\rightarrow\left\lfloor nz\right\rfloor\), and that the walk makes \(\widetilde{T}n^{2}\) steps of \(\pm 1\) if \(\widetilde{T}n^{2}\) has the same parity as \(\left\lfloor nx\right\rfloor-\left\lfloor ny\right\rfloor\), and \(\widetilde{T}n^{2}-1\) steps if this is not the case. Write also \(B^{\lambda,\mu}\) the Brownian bridge whose start and end points are independently sampled from \(\lambda\) and \(\mu\). We prove the following generalization:
**Proposition 3.3**.: _Fix \(R\in\mathbb{N}\), probability distributions \(\lambda_{1},\cdots,\lambda_{R}\), \(\mu_{1},\cdots,\mu_{R}\) supported on \([0,1)\), and a sequence of positive reals \(T_{n},n\in\mathbb{N}\), with \(\sup_{n}|T_{n}-T|n^{2}<\infty\) for a given \(T\) and with \(T_{n}n^{2}\in\mathbb{N}\) for each \(n\in\mathbb{N}\). Given \(n\in\mathbb{N}\), let \(X^{\lambda_{1},\mu_{1};n,T_{n}},\cdots,X^{\lambda_{R},\mu_{R};n,T_{n}}\) be \(R\) independent random walk bridges. Then the \(R\)-dimensional random vector_
\[\left(\frac{1}{n}X^{\lambda_{r},\mu_{r};n,T_{n}},\sigma\sum_{n^{-1}(\mathbb{Z} +\frac{1}{2})\ni h\leq 1}L_{h}(X^{\lambda_{r},\mu_{r};n,T_{n}})\frac{\mathfrak{a}( \left\lfloor nh\right\rfloor)}{n^{1/2}}\right)_{r=1}^{R}, \tag{3.11}\]
_converges as \(n\rightarrow\infty\) in law to_
\[\left(B^{\lambda_{r},\mu_{r}},\sigma\int_{0}^{\infty}L_{a}(B^{\lambda_{r},\mu _{r}})dW_{\mathfrak{a}}(a)\right)_{r=1}^{R}, \tag{3.12}\]
_for \(B^{\lambda_{1},\mu_{1}},\cdots,B^{\lambda_{R},\mu_{R}}\) independent Brownian bridges on \([0,T]\), and \(W_{\mathfrak{a}}\) an independent standard Brownian motion. We also have the convergence in Skorokhod topology_
\[\frac{1}{n^{1/2}}\sum_{m=1}^{\left\lfloor an\right\rfloor}\mathfrak{a}(m) \to W_{\mathfrak{a}}(a),\quad n\rightarrow\infty,\quad a\in[0,1]. \tag{3.13}\]
Proof.: In the case \(R=1\), replacing fixed endpoints by random distributions follows from integrating over \(\lambda_{1}\), \(\mu_{1}\). For \(R>1\), one uses the same proof together with the multidimensional version of the central limit theorem. By the same CLT we obtain the convergence (3.13) at the level of finite dimensional marginals. Then we enhance this, via tightness, to convergence at the process level for the Skorokhod topology.
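For readers who wish to experiment with these objects numerically, the following short Python sketch (purely illustrative, not used anywhere in the proofs; the helper names are ours) samples a random walk bridge \(X^{\lambda,\mu;n,\widetilde{T}}\) as just defined: the endpoints are obtained by sampling from \(\lambda\) and \(\mu\) and applying \(z\mapsto\lfloor nz\rfloor\), and one step is dropped when needed to match the parity constraint. No conditioning on staying in \([0,n]\) is performed, in line with the enumeration over all paths used below.

```python
import numpy as np

def sample_bridge(n, T_tilde, sample_lambda, sample_mu, rng):
    """Sample a +/-1 random walk bridge X^{lambda,mu;n,T_tilde} (illustrative only).

    The endpoints are floor(n*x), floor(n*y) with x ~ lambda and y ~ mu, and the walk
    makes T_tilde*n^2 steps, or one step fewer when needed to match the parity of the
    displacement, exactly as in the definition preceding Proposition 3.3.  No attempt
    is made to condition the path to stay in [0, n].
    """
    x0 = int(np.floor(n * sample_lambda(rng)))
    y0 = int(np.floor(n * sample_mu(rng)))
    m = int(round(T_tilde * n**2))
    if (m - (x0 - y0)) % 2 != 0:          # parity mismatch: drop one step
        m -= 1
    n_up = (m + (y0 - x0)) // 2           # number of +1 steps needed to end at y0
    steps = np.array([1] * n_up + [-1] * (m - n_up))
    rng.shuffle(steps)                    # a uniform ordering gives a uniform bridge path
    return np.concatenate(([x0], x0 + np.cumsum(steps)))

rng = np.random.default_rng(0)
path = sample_bridge(n=50, T_tilde=1.0,
                     sample_lambda=lambda r: r.uniform(0.0, 1.0),
                     sample_mu=lambda r: r.uniform(0.0, 1.0), rng=rng)
print(path[0], path[-1], len(path) - 1)   # start point, end point, number of steps
```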
We note that when using the method of moments, we will consider random walks from \(\left\lfloor nx\right\rfloor\) to \(\left\lfloor ny\right\rfloor\) within \(\widetilde{T}n^{2}\) steps which never leave \([0,n]\). The latter structural constraint is not easy to eliminate via combinatorial identities, so we temporarily forget about it and instead enumerate over all possible random walk paths, as in the following lemma. We will take care of this constraint in the limit by conditioning on the event that the Brownian bridge \(B^{x,y}\) never leaves \([0,1]\).
We now deduce the limits of the path counting numbers as follows. Consider
\[\Xi(x,y;n,\widetilde{T}):=\frac{n}{2^{\widetilde{T}n^{2}}}\begin{pmatrix} \widetilde{T}n^{2}\\ \frac{1}{2}(\widetilde{T}n^{2}+\lfloor nx\rfloor-\lfloor ny\rfloor)\end{pmatrix},\]
which gives the number of all trajectories \(X^{x,y;n,\widetilde{T}}\), re-scaled such that it has magnitude \(O(1)\). Then
**Lemma 3.4**.: _Given \(T_{n}\), \(n\in\mathbb{N}\), with \(\sup_{n}|T_{n}-T|n^{2}<\infty\), assume also \(T_{n}n^{2}\in\mathbb{N}\) for all \(n\in\mathbb{N}\). Then the following convergence holds uniformly for \(x,y\) in \([0,1]\) with \(\lfloor nx\rfloor-\lfloor ny\rfloor\) having the same parity as \(T_{n}n^{2}\):_
\[\Xi(x,y;n,T_{n})\to\sqrt{\frac{2}{\pi T}}e^{-(x-y)^{2}/(2T)},\quad n\to\infty. \tag{3.14}\]
_Moreover we have the (finite \(n\)) estimate_
\[\Xi(x,y;n,\widetilde{T})\leq Ce^{-(x-y)^{2}/(C\widetilde{T})} \tag{3.15}\]
_given \(x,y\in\mathbb{R}\), \(n\in\mathbb{N}\), and any \(\widetilde{T}>0\) with \(\widetilde{T}n^{2}\in\mathbb{N}\) having the same parity as \(\lfloor nx\rfloor-\lfloor ny\rfloor\)._
The first claim follows from the de Moivre-Laplace theorem; the second claim needs a bit more reasoning, and its proof is exactly the same as that of [17], Lemma 4.2.
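As a quick numerical sanity check of (3.14) (an illustration only; the helper name is ours), one can evaluate \(\Xi(x,y;n,T)\) through the binomial probability mass function and compare it with the Gaussian limit; the agreement improves as \(n\) grows.

```python
import numpy as np
from scipy.stats import binom

def Xi(x, y, n, T):
    """Rescaled path-counting number Xi(x,y;n,T) = n * 2^{-Tn^2} * C(Tn^2, (Tn^2 + floor(nx) - floor(ny))/2).

    Requires Tn^2 and floor(nx) - floor(ny) to have the same parity.
    """
    m = int(round(T * n**2))
    d = int(np.floor(n * x)) - int(np.floor(n * y))
    assert (m - d) % 2 == 0, "parity mismatch"
    return n * binom.pmf((m + d) // 2, m, 0.5)

T, x, y = 1.0, 0.30, 0.70
limit = np.sqrt(2 / (np.pi * T)) * np.exp(-(x - y) ** 2 / (2 * T))
for n in (20, 100, 500):
    print(n, Xi(x, y, n, T), limit)
```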
### Convergence of leading term contributions
Define the trace process
\[\operatorname{Tr}(n) =\frac{1}{2}\int_{0}^{1}\sum_{j=0}^{\lfloor Tn^{2}\rfloor}\Xi\left(x,x;n,n^{-2}(\lfloor Tn^{2}\rfloor-j-\epsilon_{j,x,x})\right)\] \[\quad\cdot\operatorname{\mathbb{E}}_{X}\left[1_{\forall t:0\leq X(t)\leq n}\frac{1}{(2n^{\alpha})^{j}}\sum_{0\leq i_{1}\leq\cdots\leq i_{j}\leq\lfloor Tn^{2}\rfloor-j-\epsilon_{j,x,x}}\prod_{j^{\prime}=1}^{j}\mathfrak{a}(X(i_{j^{\prime}}n^{-2}))\right]dx,\]
where \(\epsilon_{j,x,y}\in\{0,1\}\) is selected such that \(\lfloor Tn^{2}\rfloor-j-\epsilon_{j,x,y}\) is always an even integer, and \(X\) denotes the trajectory \(X^{x,y;n,n^{-2}(\lfloor Tn^{2}\rfloor-j-\epsilon_{j,x,y})}\).
Define also the scalar product
\[\operatorname{Sc}(n)= \frac{1}{2}\int_{0}^{1}\int_{0}^{1}f(x)g(y)\] \[\quad\cdot\sum_{j=0}^{\lfloor Tn^{2}\rfloor}\Xi\left(x,y;n,( \lfloor Tn^{2}\rfloor-j-\epsilon_{j,x,y})n^{-2}\right)\operatorname{\mathbb{ E}}_{X}\!\left[1_{\forall t:0\leq X(t)\leq n}\right.\] \[\quad\cdot\frac{1}{(2n^{\alpha})^{j}}\sum_{0\leq i_{1}\leq\cdots \leq i_{j}\leq\lfloor Tn^{2}\rfloor-j-\epsilon_{j,x,y}}\prod_{j^{\prime}=1}^{j }\mathfrak{a}(X(i_{j^{\prime}}n^{-2}))\right]\!dxdy.\]
Then we see that \(\operatorname{Sc}(n)=(\pi_{n}f)^{\prime}\mathcal{M}(T,n)(\pi_{n}g)\) and \(\operatorname{Tr}(n)=\operatorname{Trace}(\mathcal{M}(T,n))\), recalling their definitions stated in Theorem 1.3. Therefore, to prove Theorem 1.3, it suffices to prove the \(n\to\infty\) convergence of \(\operatorname{Tr}(n)\) and \(\operatorname{Sc}(n)\).
For technical convenience, define also the truncated \(j\)-th order contribution via
\[\begin{split}\operatorname{Sc}^{(j)}(n;\underline{R},\overline{R})=&\frac{1}{2}\int_{0}^{1}\int_{0}^{1}f(x)g(y)\,\Xi\left(x,y;n,T_{n}\right)\mathbb{E}_{X}\Bigg{[}1_{\forall t:0\leq X(t)\leq n}\\ &\cdot\underline{R}\vee\frac{\left(\sum_{i^{\prime}=0}^{T_{n}n^{2}}\mathfrak{a}(X(i^{\prime}n^{-2}))\right)^{j}}{j!(2n^{\alpha})^{j}}\wedge\overline{R}\Bigg{]}dxdy.\end{split} \tag{3.16}\]
where, in the definition of \(\operatorname{Sc}^{(j)}(n;\underline{R},\overline{R})\), the sequence \(\{T_{n}\}\) satisfies \(\sup_{n}|T_{n}-T|n^{2}<\infty\) for some \(T>0\), and \(T_{n}n^{2}\) is an integer for each \(n\). In the proof of the next proposition we shall take \(T_{n}=(\lfloor Tn^{2}\rfloor-j-\epsilon_{j,x,y})n^{-2}\), but the precise value of \(T_{n}\) is not important, as all such choices lead to the same limit.
The aim of this subsection is to prove the following proposition; we are ultimately interested in the case \(\underline{R}=-\infty\) and \(\overline{R}=\infty\):
**Proposition 3.5**.: _Assume \(\alpha=\frac{3}{2}\) and Assumption 1.1 holds. For a finite collection of \(j\)'s, some \(\underline{R}\in[-\infty,0]\) and \(\overline{R}\in[0,\infty]\), we have in the limit \(n\to\infty\) the following convergence in distribution and in moments_
\[\begin{split}\operatorname{Sc}^{(j)}(n;\underline{R},\overline{R })\to&\frac{1}{\sqrt{2\pi T}}\int_{0}^{1}\int_{0}^{1}f(x)g(y)\exp( -\frac{(x-y)^{2}}{2T})\\ &\cdot\mathbb{E}_{B^{x,y}}[1_{\forall t:B^{x,y}(t)\in[0,1]} \underline{R}\vee\frac{\left(s_{\mathfrak{a}}\int_{0}^{\infty}L_{a}(B^{x,y}) dW_{\mathfrak{a}}(a)\right)^{j}}{2^{j}j!}\wedge\overline{R}]dxdy,\end{split}\]
_and_
\[\operatorname{Tr}^{(j)}(n;\underline{R},\overline{R})\to\frac{1}{\sqrt{2\pi T }}\int_{0}^{1}\mathbb{E}_{B^{x,x}}[1_{\forall t:B^{x,x}(t)\in[0,1]}\underline {R}\vee\frac{\left(s_{\mathfrak{a}}\int_{0}^{\infty}L_{a}(B^{x,x})dW_{ \mathfrak{a}}(a)\right)^{j}}{2^{j}j!}\wedge\overline{R}]dx.\]
_The proof of this Proposition is split into the following steps._
**Lemma 3.6**.: _Assume \(\alpha=\frac{3}{2}\) and that Assumption 1.1 holds. Then Proposition 3.5 holds for any \(\overline{R}\in[0,\infty)\) and \(\underline{R}\in(-\infty,0]\)._
Proof.: It suffices to consider \(\operatorname{Sc}^{(j)}\) as the case for \(\operatorname{Tr}^{(j)}\) is identical. We assume that
\[f\geq 0,g\geq 0,\quad\int_{0}^{1}f(x)dx=\int_{0}^{1}g(y)dy=1,\]
as the general case follows by decomposing and re-scaling \(f\) and \(g\). Let \(\lambda,\mu\) be two probability measures on \([0,1]\) with densities \(f,g\) respectively. Given any \(n\in\mathbb{N}\) and \(R\in\mathbb{N}\), consider independent copies \(X_{r}^{n},r=1,2,\cdots,R\), of the random walk \(X^{\lambda,\mu;n,T_{n}}\). By Fubini's theorem, the \(R\)-th moment of \(\operatorname{Sc}^{(j)}(n;\underline{R},\overline{R})\) is given by the following expression (neglecting the \(\frac{1}{2}\) multiplicative factor and the integration against \(f\) and \(g\), which is absorbed into the random endpoints):
\[\begin{split}\mathbb{E}&\Bigg{[}\prod_{r=1}^{R}\Xi(X_ {r}^{n}(0),X_{r}^{n}(T_{n});n,T_{n})1_{\forall t:X_{r}^{n}(t)\in[0,n]}\\ &\cdot\underline{R}\vee\frac{\left(\sum_{i^{\prime}=0}^{T_{n}n^ {2}}\mathfrak{a}(X_{r}^{n}(i^{\prime}n^{-2}))\right)^{j}}{j!(2n^{\alpha})^{j}} \wedge\overline{R}\Bigg{]}.\end{split} \tag{3.17}\]
Now we choose \(\alpha=\frac{3}{2}\); then we are precisely in the situation of Proposition 3.3. Since all the random variables involved are bounded, the desired convergence in distribution and in moments follows from Proposition 3.3.
**Lemma 3.7**.: _The convergence holds in the sense of moments for any \(\underline{R}\in[-\infty,0]\) and \(\overline{R}\in[0,\infty]\)._
Proof.: The convergence in distribution of the random variables inside the expectation actually holds even if \(\overline{R}=\infty\) or \(\underline{R}=-\infty\), so we are left to verify uniform integrability. The second moment of the random variable is bounded from above, for any \(\overline{R}\) or \(\underline{R}\), by
\[\mathbb{E}\left[\prod_{r=1}^{R}\Xi(X_{r}^{n}(0),X_{r}^{n}(T_{n});n,T_{n})^{2} \frac{\left(\sum_{i^{\prime}=0}^{T_{n}n^{2}}\mathfrak{a}(X_{r}^{n}(i^{\prime}n ^{-2}))\right)^{2j}}{(j!)^{2}(2n^{\alpha})^{2j}}.\right] \tag{3.18}\]
We essentially follow [17], Lemma 4.4 so we only give a sketch. We bound the first factor \(\Xi(X_{r}^{n}(0),X_{r}^{n}(T_{n});n,T_{n})^{2}\) via uniform constants thanks to Lemma 3.4, and for the terms involving \(\mathfrak{a}\) we use the elementary inequality
\[\left(\frac{|z|^{j}}{j!}\right)^{2}\leq e^{2|z|}\leq e^{2z}+e^{-2z},\quad z\in \mathbb{R},j\in\mathbb{N} \tag{3.19}\]
and the expectations involving \(\mathfrak{a}(\cdot)\) can be bounded thanks to Lemma A.1 under the Assumption 1.1. We end up with the upper bound
\[C\mathbb{E}\left[\exp\left(C\sum_{h\in n^{-1}\mathbb{Z}}\left(\frac{\sum_{r=1}^{R}L_{h}(X_{r}^{n})^{2}}{n}+\frac{\sum_{r=1}^{R}L_{h}(X_{r}^{n})^{\gamma^{\prime}}}{n^{\gamma^{\prime}/2}}\right)\right)\right] \tag{3.20}\]
where we have neglected the term \(|\mathbb{E}[\mathfrak{a}(\lfloor nh\rfloor)]|\frac{\sum_{r=1}^{R}L_{h}(X_{r}^ {n})}{n^{1/2}}\) since \(\mathfrak{a}(\cdot)\) has mean zero. Here \(2<\gamma^{\prime}<3\). The desired estimate then follows from the large deviations estimate in Lemma B.2.
**Lemma 3.8**.: _The convergence holds in distribution for any \(\underline{R}\in[-\infty,0]\) and \(\overline{R}\in[0,\infty]\)._
Proof.: The reasoning is exactly the same as [17], Lemma 4.5. For any \(\overline{R}\in[0,\infty)\), the variable \(\mathrm{Sc}^{(j)}(n;\underline{R},\infty)\) stochastically dominates \(\mathrm{Sc}^{(j)}(n;\underline{R},\overline{R})\), so the limit point \(\mathrm{Sc}^{(j)}(\infty;\underline{R},\infty)\) stochastically dominates \(\lim_{n\to\infty}\mathrm{Sc}^{(j)}(n;\underline{R},\overline{R})\). The latter limit tends, as \(\overline{R}\uparrow\infty\), to the corresponding expression for \(\overline{R}=\infty\) by monotone convergence. Hence \(\mathrm{Sc}^{(j)}(\infty;\underline{R},\infty)\) and (3.16) with \(\overline{R}=\infty\) form two non-negative variables with the same moments, one dominating the other, and hence they have the same law.
_Combining all these results, we have established Proposition 3.5._
_Having obtained the contribution from each power \(j\), we now estimate the remainder term. The results are as follows:_
**Lemma 3.9**.: _Assume \(\alpha=\frac{3}{2}\), and denote \(\mathrm{Sc}^{(j)}(n):=\mathrm{Sc}^{(j)}(n;-\infty,\infty)\) and \(\mathrm{Tr}^{(j)}(n):=\mathrm{Tr}^{(j)}(n;-\infty,\infty).\) Assume also Assumption 1.1. Then for any \(R\in\mathbb{N}\) we can find a constant \(C(R)\) such that for each \(n\in\mathbb{N}\),_
\[\mathbb{E}\left[|\,\mathrm{Sc}^{(j)}(n)|^{R}\right]\leq\frac{C(R)}{2^{jR}}, \quad\mathbb{E}\left[|\,\mathrm{Tr}^{(j)}(n)|^{R}\right]\leq\frac{C(R)}{2^{jR}}.\]
Proof.: The proof should be compared to [17], Lemma 4.6; our case is simpler: the domain is bounded and we don't have the \(\zeta(\cdot)\) terms. Expanding the expression of \(\mathbb{E}[|\operatorname{Sc}^{(j)}(n)|^{R}]\) and using Hölder's inequality, we may integrate over \(f\) and \(g\) and discard them from the following computations. The function \(\Xi\) can be uniformly bounded via Lemma 3.4. For the terms involving \(|\mathfrak{a}(\cdot)|^{R}\), we use the inequality (3.19), Assumption 1.1 and Lemma A.1 in exactly the same way as in the proof of Lemma 3.7. Then we are reduced to bounding exponential moments of local times of the following form
\[\sum_{h\in n^{-1}(\mathbb{Z}+\frac{1}{2})}\frac{L_{h}(X_{r}^{n})}{ n^{3/2}},\quad\sum_{h\in n^{-1}(\mathbb{Z}+\frac{1}{2})}\frac{L_{h}(X_{r}^{n})^{2 }}{n},\] \[\sum_{h\in n^{-1}(\mathbb{Z}+\frac{1}{2})}\frac{L_{h}(X_{r}^{n})^ {\gamma^{\prime}}}{n^{\gamma^{\prime}/2}},\quad\sum_{h\in n^{-1}\mathbb{Z}} \frac{L_{h}(X_{r}^{n})}{n^{3/2}}, \tag{3.21}\] \[\sum_{h\in n^{-1}\mathbb{Z}}\frac{L_{h}(X_{r}^{n})^{2}}{n},\quad \sum_{h\in n^{-1}\mathbb{Z}}\frac{L_{h}(X_{r}^{n})^{\gamma^{\prime}}}{n^{ \gamma^{\prime}/2}},\]
The first and fourth terms are exactly \(T_{n}n^{-1/2}\). For the remaining terms we use Proposition C.2, noting that \(\gamma^{\prime}\in(2,3)\). Thus we have obtained exponential moments of these sums of local times that are uniform in \(n\), and the proof of the lemma follows.
We also have the following slight extension:
**Lemma 3.10**.: _Fix \(n\in\mathbb{N}\), and consider a random variable \(\Gamma\) which is a deterministic function of the random walk bridge \(X\) and of the random variables \(\mathfrak{a}(\cdot)\). Let \(\operatorname{Sc}^{(j)}(n;\Gamma)\) denote the modification of \(\operatorname{Sc}^{(j)}(n)\) obtained by adding \(\Gamma\) as an extra factor in (3.16). Assume \(\Gamma\) has \(2R\)-th moment bounded by \(C_{1}<\infty\); then we can find \(C(R)\) such that_
\[\mathbb{E}\left[|\operatorname{Sc}^{(j)}(n;\Gamma)|^{R}\right]\leq\frac{C(R)}{2^{jR}}C_{1}^{1/2},\quad\mathbb{E}\left[|\operatorname{Tr}^{(j)}(n;\Gamma)|^{R}\right]\leq\frac{C(R)}{2^{jR}}C_{1}^{1/2}.\]
Before we enter the proof of the main result, we estimate the error of replacing
\[\sum_{0\leq i_{1}\leq\cdots\leq i_{j}\leq\lfloor Tn^{2}\rfloor-j-\epsilon_{j,x,y}}\prod_{j^{\prime}=1}^{j}\mathfrak{a}(X(i_{j^{\prime}}n^{-2}))\]
by
\[\left(\sum_{i^{\prime}=0}^{\lfloor Tn^{2}\rfloor-j-\epsilon_{j,x,y}}\mathfrak{a}(X(i^{\prime}n^{-2}))\right)^{j}.\]
This corresponds to Lemma 4.8 of [17]. We use the following notations from that work: define
\[h(j;n):=\sum_{0\leq i_{1}\leq\cdots\leq i_{j}\leq T_{n}n^{2}}\prod_{j^{\prime }=1}^{j}\mathfrak{a}(X(i_{j^{\prime}}n^{-2})),\]
\[p(j;n):=\sum_{i^{\prime}=0}^{T_{n}n^{2}}\mathfrak{a}(X(i^{\prime}n^{-2}))^{j}.\]
Expressing \(h(j;n)\) in terms of Newton identities ([32], Chapter 1, Section 2):
\[h(j;n)=[z^{j}]\exp\left(\sum_{j^{\prime}=1}^{\infty}\frac{p(j^{\prime};n)}{j^{\prime}}z^{j^{\prime}}\right) \tag{3.22}\]
with \([z^{j}]\) meaning the coefficient in front of \(z^{j}\) if we expand the exponential as a series. We make a further expansion and get
\[\frac{h(j;n)}{(2n^{3/2})^{j}}=\sum_{\ell=0}^{j}\frac{p(1;n)^{\ell}}{\ell!(2n^{3/2})^{\ell}}\left([z^{j-\ell}]\exp\left(\sum_{j^{\prime}=2}^{\infty}\frac{p(j^{\prime};n)}{j^{\prime}(2n^{3/2})^{j^{\prime}}}z^{j^{\prime}}\right)\right). \tag{3.23}\]
The case \(\ell=j\) gives the leading order contribution, and we will prove that the contribution from the remaining terms is small.
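The identity (3.22) is the classical generating-function form of the Newton relations between complete homogeneous and power-sum symmetric functions. The following symbolic sketch (ours, with four symbolic variables standing in for the values \(\mathfrak{a}(X(in^{-2}))\)) verifies the coefficient extraction for low orders.

```python
import itertools
import sympy as sp

z = sp.symbols('z')
a = sp.symbols('a0:4')     # stand-ins for the values a(X(i n^{-2})), i = 0,...,3
j_max = 3

# power sums p(k) and complete homogeneous sums h(k), computed directly from the definitions
p = {k: sum(ai**k for ai in a) for k in range(1, j_max + 1)}
h = {k: sum(sp.Mul(*t) for t in itertools.combinations_with_replacement(a, k))
     for k in range(1, j_max + 1)}

# coefficient extraction from exp(sum_k p(k) z^k / k), truncated at order j_max
S = sum(p[k] * z**k / sp.Integer(k) for k in range(1, j_max + 1))
E = sp.expand(sum(S**m / sp.factorial(m) for m in range(j_max + 1)))
for k in range(1, j_max + 1):
    assert sp.simplify(E.coeff(z, k) - h[k]) == 0
print("Newton-identity expansion (3.22) verified up to order", j_max)
```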
**Lemma 3.11**.: _For each \(R=1,2,\cdots\) we may find \(\tilde{C}\) and \(\epsilon\) positive, so that for any \(\ell=1,2,\cdots,T_{n}n^{2}\) and \(n\in\mathbb{N}\),_
\[\mathbb{E}\left[\left([z^{\ell}]\exp\left(\sum_{j^{\prime}=2}^{\infty}\frac{|p(j^{\prime};n)|}{j^{\prime}(2n^{3/2})^{j^{\prime}}}z^{j^{\prime}}\right)\right)^{R}\right]\leq(\tilde{C}n^{-\epsilon})^{\ell R} \tag{3.24}\]
Proof.: Define \(\Gamma_{j}:=\sup_{0\leq i_{1}\leq\cdots\leq i_{j}}\mathbb{E}[\prod_{j^{\prime}=1}^{j}|\mathfrak{a}(i_{j^{\prime}})|]=\sup_{i\in\mathbb{N}}\mathbb{E}[|\mathfrak{a}(i)|^{j}]\), where the second equality follows from Hölder's inequality. By assumption we have \(\Gamma_{j}\leq C^{j}j^{j\gamma}\), where \(C>0\) is a universal constant and \(0<\gamma<\frac{3}{4}\). Hence the left hand side of (3.24) is bounded by
\[\frac{1}{(2n^{3/2})^{lR}}\Gamma_{lR}\left([z^{l}]\exp\left(\sum_{j^{\prime}=2} ^{\infty}\frac{k}{j^{\prime}}z^{j^{\prime}}\right)\right)^{R}. \tag{3.25}\]
The rest of the estimate is the same as in the proof of [17], Lemma 4.8, the major difference is we replace any \(N\) there by \(N^{3}\). We omit the combinatorial details, but note that it follows from the combinatorial inequality (for \(l<k\))
\[[z^{l}]\exp\left(\sum_{j^{\prime}=2}^{\infty}\frac{k}{j^{\prime}}z^{j^{\prime} }\right)\leq C^{l}\left(\frac{k\log l}{l}\right)^{\lfloor l/2\rfloor}.\]
and that \(\gamma<3/4\) by Assumption 1.1.
### Completing the proof of edge scaling limit
Now we are in the position to complete the proof of Theorem 1.3. We have already computed \((H_{n})^{k}[i,i^{\prime}]\) for \(k\) even, and the case of \((H_{n})^{k-1}[i,i^{\prime}]\) for \(k-1\) odd is identical, giving rise to an odd number of horizontal segments. Therefore, in order to prove Theorem 1.3, it suffices to show that for any fixed \(T>0\) and \(f,g\in L^{2}([0,1])\), we have
\[\begin{split}\operatorname{Sc}(n)\to\frac{1}{\sqrt{2\pi T}}\int_{0}^{1}\int_{0}^{1}f(x)g(y)&\exp(-\frac{(x-y)^{2}}{2T})\mathbb{E}_{B^{x,y}}\Bigg{[}1_{\forall t:B^{x,y}(t)\in[0,1]}\\ &\exp\left(\frac{\sigma}{2}\int_{0}^{1}L_{a}(B^{x,y})dW(a)\right)\Bigg{]}dxdy\end{split} \tag{3.26}\]
and a similar claim for \(\operatorname{Tr}(n)\). We only prove the convergence of \(\operatorname{Sc}(n)\), as that for \(\operatorname{Tr}(n)\) is essentially the same. We have the expansion
\[\operatorname{Sc}(n)=\sum_{j=0}^{\lfloor Tn^{2}\rfloor}\operatorname{Sc}^{(j)} (n;-\infty,\infty)+\sum_{j=0}^{\lfloor Tn^{2}\rfloor}\sum_{l=0}^{j-1}U(n;j,l) \tag{3.27}\]
where we define
\[U(n;j,l):= \frac{1}{2}\int_{0}^{1}\int_{0}^{1}f(x)g(y)\Xi(x,y;n,T_{n})\mathbb{E}_{X}\Bigg{[}1_{\{\forall t:0\leq X(t)\leq n\}} \tag{3.28}\] \[\frac{\left(\sum_{i^{\prime}=0}^{T_{n}n^{2}}\mathfrak{a}(X(i^{\prime}n^{-2}))\right)^{l}}{l!(2n^{3/2})^{l}}\cdot[z^{j-l}]\exp\left(\sum_{j^{\prime}=2}^{\infty}\frac{\sum_{i^{\prime}=0}^{T_{n}n^{2}}\mathfrak{a}(X(i^{\prime}n^{-2}))^{j^{\prime}}}{j^{\prime}(2n^{3/2})^{j^{\prime}}}z^{j^{\prime}}\right)\Bigg{]}dxdy.\]
Now we take the limit as \(n\to\infty\). The limit of the sum of \(\operatorname{Sc}^{(j)}(n;-\infty,\infty)\) is identified in Proposition 3.5. Together with the moment bound of the remainder term (Lemma 3.9), we deduce that the first term in equation (3.27) converges in the limit \(n\to\infty\) towards
\[\frac{1}{\sqrt{2\pi T}}\int_{0}^{1}\int_{0}^{1}f(x)g(y)\exp(-\frac{(x-y)^{2}}{2T})\mathbb{E}_{B^{x,y}}\Bigg{[}1_{\forall t:B^{x,y}(t)\in[0,1]}\cdot\exp\left(\frac{s_{\mathfrak{a}}}{2}\int_{0}^{\infty}L_{a}(B^{x,y})dW(a)\right)\Bigg{]}dxdy \tag{3.29}\]
with the convergence holding both in moments and in law. The last step is to apply the moment decay estimate for \(U(n;j,l)\), for a fixed \(R\in\mathbb{N}\). We deduce from Lemma 3.11 that
\[\mathbb{E}[U(n;j,l)^{R}]\leq\frac{\tilde{C}_{1}}{2^{lR}}(\tilde{C}_{2}n^{- \epsilon})^{(j-l)R},\quad j,l\in\mathbb{N},j\geq l, \tag{3.30}\]
given constants \(\tilde{C}_{1},\tilde{C}_{2},\epsilon\) depending on \(R\). By Minkowski's inequality for the \(L^{R}\) norm, it follows that the \(R\)-th moment of the second term in equation (3.27) tends to \(0\) in the \(n\to\infty\) limit. Since the choice of \(R\) is arbitrary, we have justified the convergence in distribution and in moments of Theorem 1.3.
## 4. Laplace transform of edge scaling limit
### Properties of the semigroup
In this section we establish Proposition 1.2. The proof is split into a number of technical lemmas. As stated in the introduction, these properties can likely be derived from [30] in a very general setting. Yet we leave the proof here for sake of completeness.
**Lemma 4.1**.: _For any given \(T>0\), \(\mathcal{U}(T)\) is Hilbert-Schmidt on \(L^{2}([0,1])\) almost surely._
Proof.: Fix \(T>0\), it suffices to show the integral kernel \(K(x,y;T)\) satisfies a.s.
\[\int_{0}^{1}\int_{0}^{1}K(x,y;T)^{2}dxdy<\infty. \tag{4.1}\]
We indeed show that this holds in expectation. Expanding the expression of \(K(x,y;T)\), dropping the indicator function and noting that \([0,1]^{2}\) is compact, it suffices to bound
\[\mathbb{E}_{B^{x,y}}\left[\exp\left(\frac{\sigma}{2}\int_{0}^{1}L_{a}(B^{x,y})^{2}da\right)\right], \tag{4.2}\]
uniformly for \(x,y\in[0,1]\). The exponential moment of Brownian bridge local time is finite thanks to Proposition C.2. More precisely, \(\int_{0}^{\infty}L_{a}(B^{x,y})^{2}da\) is the limit of
\[\frac{1}{n}\sum_{h\in n^{-1}(\mathbb{Z}_{\geq 0}+\frac{1}{2})}L_{h}(X^{x,y;n,T_{ n}})^{2},\]
and the latter is stochastically dominated by \(8T(n^{-1}J_{Tn^{2}}+n^{-1}\tilde{J}_{Tn^{2}}+2|x-y|+2n^{-1})\), for \(J,\tilde{J}\) the maxima of two independent simple random walks with \(Tn^{2}\) steps of \(\pm 1\), by the reasoning in Appendix C. Taking the limit, we deduce that \(\int_{0}^{\infty}L_{a}(B^{x,y})^{2}da\) is stochastically dominated by \(8T(J+\tilde{J}+2|x-y|)\), for \(J,\tilde{J}\) the maximum of two independent Brownian motions on \([0,T]\). This upper bound has finite exponential moment.
**Lemma 4.2**.: \(\mathcal{U}(T)\) _has the semigroup property: given any \(T_{1},T_{2}>0\), we have \(\mathcal{U}(T_{1})\mathcal{U}(T_{2})=\mathcal{U}(T_{1}+T_{2})\) almost surely._
Proof.: By Fubini's theorem, it suffices to prove that with probability one
\[\forall x,y\in\mathbb{R}_{\geq 0}:\quad\int_{0}^{1}K(x,z;T_{1})K(z,y;T_{2})dz= K(x,y;T_{1}+T_{2}). \tag{4.3}\]
The rest of proof is the same as [17], Proposition 2.5 which involves elementary manipulations of conditioned Brownian bridges. We omit the details here.
**Lemma 4.3**.: \(\mathcal{U}(T)\)_, \(T>0\) is strongly continuous in \(L^{2}\): for any \(f\in L^{2}([0,1])\),_
\[\lim_{t\to T}\mathbb{E}[\|\mathcal{U}(T)f-\mathcal{U}(t)f\|^{2}]=0.\]
Proof.: The argument is similar to, and simpler than, [17], Proposition 2.6, so we only give a sketch. By the Cauchy-Schwarz inequality and the semigroup property, after some manipulations,
\[\mathbb{E}\|\mathcal{U}(T)f-\mathcal{U}(t)f\|^{p}\leq\mathbb{E}[\mathrm{Tr}(\mathcal{U}(2p(t\wedge T)))]^{1/2}\mathbb{E}[\|\mathcal{U}(|T-t|)f-f\|^{2p}]^{1/2}.\]
The term involving the trace of \(\mathcal{U}(2p(t\wedge T))\) is bounded for \(t\) in a bounded neighborhood, so we only need to prove \(\mathbb{E}[\|\mathcal{U}(T)f-f\|^{2p}]\to 0\) as \(T\to 0\).
We shall apply the inequality \(|e^{S}-1|\leq|S|e^{|S|}\) to
\[S:=\frac{\sigma}{2}\int_{0}^{1}L_{a}(B^{x,y})dW(a),\]
where \(B^{x,y}\) is a Brownian bridge on \([0,T]\) from \(x\) to \(y\), so we need an estimate of
\[\left(\int_{0}^{1}\left(\int_{0}^{1}\frac{1}{\sqrt{2\pi T}}\exp(-\frac{(x-y)^{ 2}}{2T})\cdot\mathbb{E}_{B^{x,y}}[1_{\forall t:B^{x,y}(t)\in[0,1]}\mathbb{E}_ {W}[|S|^{p}e^{p|S|}]^{1/p}]|f(y)|dy\right)^{2}dx\right)^{p/2}. \tag{4.4}\]
We use Cauchy-Schwartz inequality, ignoring the indicator function, noting that the bound on the squared local time has an estimate of the form
\[\mathbb{E}_{B^{x,y}}[\mathbb{E}_{W}[|S|^{p}e^{p|S|}]^{2/p}]^{1/2}\leq CT^{3/4} e^{C|x-y|/\sqrt{T}}\]
thanks to Proposition C.2 and the discussion at the end of the proof of Lemma 4.1 (some further computations are also needed). This upper bound converges to \(0\) as \(T\to 0\), so we finish the proof of the lemma.
**Lemma 4.4**.: _We can find an orthogonal basis \(\mathbf{v}^{1},\mathbf{v}^{2},\cdots\in L^{2}([0,1])\), and \(\eta_{1}\geq\eta_{2}\geq\cdots\) on the same probability space (both are random) such that the spectrum of \(\mathcal{U}(T)\) has eigenvalues \(\exp(T\eta^{i}/2)\) that correspond to eigenvectors \(\mathbf{v}^{i}\), \(i\in\mathbb{N}\)._
Proof.: We follow [17], Proposition 2.4. The symmetry of \(\mathcal{U}(T)\), almost surely, follows from the fact that almost surely,
\[\forall x,y\in\mathbb{R}_{\geq 0}:\quad K(x,y;T)=K(y,x;T), \tag{4.5}\]
which further follows from the expression of \(K(\cdot,\cdot,T)\) and that the time reversed Brownian bridge from \(x\) to \(y\) in time \(T\) is a standard Brownian bridge from \(y\) to \(x\) in time \(T\).
The non-negativity of \(\mathcal{U}(T)\) follows directly from definition. Then we know \(\mathcal{U}(T)\) is a positive symmetric Hilbert-Schmidt operator almost surely, so we can find an orthogonal basis with eigenvalues \(e^{1}(T)\geq e^{2}(T)\geq\cdots\). By the semigroup property \(\mathcal{U}(T)\mathcal{U}(T/2)=\mathcal{U}(T/2)\mathcal{U}(T)\), so we can find an orthogonal basis of eigenfunctions for both \(\mathcal{U}(T)\) and \(\mathcal{U}(T/2)\) in \(L^{2}([0,1])\). Then we can simultaneously diagonalize \(\mathcal{U}(T)\) and \(\mathcal{U}(T/2)\), and this leads to \(\sum_{i=1}^{\infty}e^{i}(T)=\sum_{i=1}^{\infty}e^{i}(T/2)^{2}\). The latter can be written as
\[\int_{0}^{1}\int_{0}^{1}K(x,y;T/2)^{2}dxdy.\]
As this expression is finite almost surely, this implies \(\mathcal{U}(T)\) is trace class.
We can now complete the proof of Proposition 1.2. We have checked that for each \(T>0\), \(\mathcal{U}(T)\) is symmetric and trace class, hence has discrete spectrum. Since the operators \(\mathcal{U}(T)\) almost surely commute, we can find an orthogonal basis \(\mathbf{v}^{1},\mathbf{v}^{2},\cdots\) of \(L^{2}([0,1])\) whose elements are eigenfunctions of all \(\mathcal{U}(T)\) for \(T\in(0,\infty)\cap\mathbb{Q}\). Let \(e^{i}\) denote \(e^{i}(1)\) for each \(i\in\mathbb{N}\); we order the eigenvectors such that \(e^{1}\geq e^{2}\geq\cdots\), and we set \(\eta^{1}\geq\eta^{2}\geq\cdots\) via \(\eta^{i}=2\log e^{i},i\in\mathbb{N}\). We claim that none of the eigenvalues \(e^{i}\) of \(\mathcal{U}(1)\) vanishes. Indeed, if some \(e^{i}\) vanished, then all the operators \(\mathcal{U}(T),T\in(0,\infty)\cap\mathbb{Q}\), would have eigenvalue \(0\) on the corresponding eigenvector, and this contradicts Lemma 4.3 at \(T=0\).
By the semigroup property, \(\mathcal{U}(T)\mathbf{v}^{i}=\exp(T\eta^{i}/2)\mathbf{v}^{i},i\in\mathbb{N}\) for all such \(T\), so that \(\text{Trace}(\mathcal{U}(T))=\sum_{i=1}^{\infty}\exp(T\eta^{i}/2)\). This formula is also true for any \(T\in\mathbb{R}_{+}\) by continuity in \(T\) and dominated convergence theorem.
### Bounds on extreme eigenvalues
In this section we prove Corollary 1.4, which gives the Laplace transform of the \(n\to\infty\) limiting profile of the edge eigenvalue statistics of the random Schrodinger operator (1.1). The reasoning is similar to [17] Lemma 6.1, yet our scaling of eigenvalues is very different from the Gaussian beta ensembles considered in that paper, so we give a sketch of the details.
**Lemma 4.5**.: _Under the assumptions of Corollary 1.4 we have the convergence in distribution_
\[\sum_{i=1}^{n}e^{T\eta_{n}^{i}/2}\to_{n\to\infty}\text{Trace}(\mathcal{U}(T)). \tag{4.6}\]
_simultaneously for finitely many \(T\)'s._
Proof.: Denote by \(\mu_{n}^{1,+}\geq\mu_{n}^{2,+}\geq\cdots\) and \(\mu_{n}^{1,-}\leq\mu_{n}^{2,-}\leq\cdots\) the positive and negative eigenvalues of \(H_{n}\). We work with the rescaled versions
\[\lambda_{n}^{i,+}=n^{2}(\mu_{n}^{i,+}-2),\quad\text{and}\]
\[\lambda_{n}^{i,-}=-n^{2}(\mu_{n}^{i,-}+2),\quad i=1,2,\cdots.\]
Then
\[\begin{split}&\text{Trace}(\mathcal{M}(T,n))\\ &=\frac{1}{2}\sum_{i}\left(1+\frac{\lambda_{n}^{i,+}}{2n^{2}} \right)^{\lfloor Tn^{2}\rfloor}+\frac{(-1)^{\lfloor Tn^{2}\rfloor}}{2}\sum_{i} \left(1+\frac{\lambda_{n}^{i,-}}{2n^{2}}\right)^{\lfloor Tn^{2}\rfloor}\\ &+\frac{1}{2}\sum_{i}\left(1+\frac{\lambda_{n}^{i,+}}{2n^{2}} \right)^{\lfloor Tn^{2}\rfloor-1}\\ &+\frac{(-1)^{\lfloor Tn^{2}\rfloor-1}}{2}\sum_{i}\left(1+\frac{ \lambda_{n}^{i,-}}{2n^{2}}\right)^{\lfloor Tn^{2}\rfloor-1}.\end{split} \tag{4.7}\]
Since we already know the convergence of \(\text{Trace}(\mathcal{M}(T,n))\) towards \(\text{Trace}(\mathcal{U}(T))\), it suffices to show that the difference of the right hand side of (4.7) and
\[\begin{split}\sum_{i=1}^{n}e^{T\lambda_{n}^{i,+}/2}=& \frac{1}{2}\sum_{i=1}^{n}e^{T\lambda_{n}^{i,+}/2}+\frac{1}{2}\sum_{i=1}^{n}e^{T\lambda_{n}^{i,+}/2}+\frac{(-1)^{\lfloor Tn^{2}\rfloor}}{2}\sum_{i=1}^{n}e^{T\lambda_{n}^{i,-}/2}\\ &+\frac{(-1)^{\lfloor Tn^{2}\rfloor-1}}{2}\sum_{i=1}^{n}e^{T\lambda_{n}^{i,-}/2}.\end{split} \tag{4.8}\]
tends to \(0\) as \(n\) tends to infinity.
Now we may choose \(\epsilon=1/100\). We separate the eigenvalues into four different classes: (1) eigenvalues in the bulk: \(\lambda_{n}^{i,+}\)'s and \(\lambda_{n}^{i,-}\)'s less than or equal to \(-n^{\epsilon}\); (2) outlier eigenvalues: \(\lambda_{n}^{i,+}\)'s and \(\lambda_{n}^{i,-}\)'s greater than \(n^{\epsilon}\); (3) eigenvalues at the right edge: \(\lambda_{n}^{i,+}\)'s in \((-n^{\epsilon},n^{\epsilon})\); and (4) eigenvalues at the left edge: \(\lambda_{n}^{i,-}\)'s in \((-n^{\epsilon},n^{\epsilon})\).
Then as \(n\to\infty\), contribution to the sum of the bulk eigenvalues (1) becomes negligible because there are no more than \(n\) of them, with each contributing no more than \(e^{-Tn^{\epsilon}/2}\).
We then check that as \(n\to\infty\), with probability tending to \(1\) there are no outlier eigenvalues (2). Choose a sequence \(T_{n},n\in\mathbb{N}\), of positive numbers with \(T_{n}n^{2}\), \(n\in\mathbb{N}\), even integers and \(\sup_{n}|T_{n}-T|n^{2}<\infty\). We have
\[\text{Trace}\left(\left(\frac{H_{n}}{2}\right)^{T_{n}n^{2}}\right)=\sum_{i} \left(1+\frac{\lambda_{n}^{i,+}}{2n^{2}}\right)^{T_{n}n^{2}}+\sum_{i}\left(1+ \frac{\lambda_{n}^{i,-}}{2n^{2}}\right)^{T_{n}n^{2}}\]
Were there an outlier, then
\[\text{Trace}\left(\left(\frac{H_{n}}{2}\right)^{T_{n}n^{2}}\right)\geq(1+\frac {n^{\epsilon}}{2n^{2}})^{T_{n}n^{2}}. \tag{4.9}\]
We have proved that the left hand side converges in distribution as \(n\to\infty\) to an almost surely finite limit, but the right hand side tends to infinity, leading to a contradiction.
Finally we consider the eigenvalues at the edge. We use the approximations
\[(1+\frac{\lambda_{n}^{i,\pm}}{2n^{2}})^{\lfloor Tn^{2}\rfloor}\sim e^{T\lambda_{n }^{i,\pm}/2},\quad(1+\frac{\lambda_{n}^{i,\pm}}{2n^{2}})^{\lfloor Tn^{2} \rfloor-1}\sim e^{T\lambda_{n}^{i,\pm}/2}.\]
The additive error is upper bounded in norm by
\[(e^{n^{-2+2\epsilon}/2}-1)(1+\frac{\lambda_{n}^{i,\pm}}{2n^{2}})^{\lfloor Tn^{2 }\rfloor},\]
\[(e^{n^{-2+2\epsilon}/2}-1)(1+\frac{\lambda_{n}^{i,\pm}}{2n^{2}})^{\lfloor Tn^{2 }\rfloor-1}.\]
For this purpose it suffices to prove that
\[(e^{n^{-2+2\epsilon}/2}-1)\sum_{i=1}^{n}(1+\frac{\lambda_{n}^{i,\pm}}{2n^{2}})^{\lfloor Tn^{2}\rfloor-\iota},\quad\iota\in\{0,1\}\]
tends to \(0\) in probability, which is known from the previous proofs since \(e^{n^{-2+2\epsilon}/2}-1\) converges to \(0\) and the summation converges to an almost surely finite random variable.
## 5. Tracy-Widom fluctuation for potentials with shifted means
In this section we outline the procedure to derive Proposition 1.6. It essentially follows from an application of the main result of [37], Section 5, but as we will use a similar yet different framework in the forthcoming sections, we choose to give a sketch of the arguments involved for sake of completeness.
Consider a sequence of \(\mathbb{R}^{2}\)-valued discrete-time random processes \(((y_{n,1,k},y_{n,2,k});1\leq k\leq n)\). Let \(m_{n}=o(n)\) be a scaling parameter, which we will take to be \(m_{n}=n^{1/3}\). We build an \(n\times n\) tridiagonal matrix \(\overline{H}_{n}\) for each \(n\in\mathbb{N}_{+}\) as follows.
Let \(T_{n}\) denote the shift operator \((T_{n}v)_{k}=v_{k+1}\) acting on sequences, \(T_{n}^{t}\) its adjoint, and \(R_{n}\) the restriction operator \((R_{n}v)_{k}=v_{k}1_{k\leq n}\). Let \(\Delta_{n}=m_{n}(I-T_{n}^{t})\) be the difference operator, and set
\[\overline{H}_{n}=R_{n}\left(-\Delta_{n}\Delta_{n}^{t}+(\Delta_{n}y_{n,1})_{ \times}+(\Delta_{n}y_{n,2})_{\times}\frac{1}{2}(T_{n}+T_{n}^{t})\right), \tag{5.1}\]
with the symbol \(\times\) standing for element-by-element multiplication by the corresponding vector. The matrix representation of \(\overline{H}_{n}\) is symmetric tridiagonal, with diagonal elements \((2m_{n}^{2}+m_{n}(y_{n,1,k}-y_{n,1,k-1}),k\geq 1)\) and with elements \((-m_{n}^{2}+m_{n}(y_{n,2,k}-y_{n,2,k-1})/2,k\geq 1)\) above and below the diagonal (an illustrative sketch of this construction is given after Assumption 5.2). Denote further \(y_{n,i}(x):=y_{n,i,\lfloor xm_{n}\rfloor}1_{xm_{n}\in[0,n]}\). The following assumptions are made:
_Assumption 5.1_.: We can find a continuous path \(x\to y(x)\) with
\[\begin{split}(y_{n,i}(x);x\geq 0)\quad i=1,2\text{ are tight in law},\\ (y_{n,1}(x)+y_{n,2}(x);x\geq 0)\Rightarrow(y(x);x\geq 0)\text{ in law}, \end{split} \tag{5.2}\]
for the Skorokhod topology.
_Assumption 5.2_.: We have a decomposition
\[y_{n,i,k}=m_{n}^{-1}\sum_{\ell=1}^{k}\eta_{n,i,\ell}+w_{n,i,k}, \tag{5.3}\]
given \(\eta_{n,i,k}\geq 0\), deterministic non-decreasing functions \(\bar{\eta}(x)>0\), \(\zeta(x)\geq 1\), and random constants \(\kappa_{n}(\omega)\geq 1\) on the same probability space that satisfy the following: the random constants \(\kappa_{n}\) are tight in law, and
\[\bar{\eta}(x)/\kappa_{n}-\kappa_{n}\leq\eta_{n,1}(x)+\eta_{n,2}(x)\leq\kappa_{n }(1+\bar{\eta}(x)), \tag{5.4}\]
\[\eta_{n,2}(x)\leq 2m_{n}^{2}, \tag{5.5}\]
\[|w_{n,1}(\xi)-w_{n,1}(x)|^{2}+|w_{n,2}(\xi)-w_{n,2}(x)|^{2}\leq\kappa_{n}(1+\bar{\eta}(x)/\zeta(x)), \tag{5.6}\]
given any \(n\) and \(x,\xi\in[0,n/m_{n}]\) such that \(|x-\xi|\leq 1\).
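Here is a minimal numerical sketch of the construction (5.1) (illustrative only; the function name and the indexing convention for the off-diagonal entries are our own choices), assembling \(\overline{H}_{n}\) directly from the two potential sequences as described above.

```python
import numpy as np

def build_Hbar(y1, y2, m_n):
    """Assemble the symmetric tridiagonal matrix of (5.1) (illustrative sketch).

    y1, y2 are arrays (y_{n,i,0}, ..., y_{n,i,n}); the diagonal entries are
    2*m_n^2 + m_n*(y_{n,1,k} - y_{n,1,k-1}) and the entries above and below the
    diagonal are -m_n^2 + m_n*(y_{n,2,k} - y_{n,2,k-1})/2.
    """
    diag = 2 * m_n**2 + m_n * np.diff(y1)
    off = -m_n**2 + m_n * np.diff(y2)[:-1] / 2.0
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

# smoke test: with zero potentials this is m_n^2 times the discrete Dirichlet Laplacian
H = build_Hbar(np.zeros(7), np.zeros(7), m_n=2.0)
assert np.allclose(H, H.T)
print(np.linalg.eigvalsh(H)[:3])
```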
Consider the random Schrodinger operator
\[H=-\frac{d^{2}}{dx^{2}}+y^{\prime}(x) \tag{5.7}\]
which maps \(H^{1}_{loc}\) to \(\mathcal{D}\), the space of distributions on \([0,\infty)\). More precisely, the operator is defined on the Hilbert space \(L^{*}\subset L^{2}(\mathbb{R}^{+})\) which consists of functions satisfying \(f(0)=0\), and
\[\|f\|_{*}^{2}:=\int_{0}^{\infty}f^{\prime}(x)^{2}+(1+\bar{\eta}(x))f^{2}(x)dx<\infty.\]
We say that \((\lambda,f)\in\mathbb{R}\times L^{*}\setminus\{0\}\) with \(\|f\|_{2}=1\) is an eigenvalue of (5.7) if it satisfies (5.7), or equivalently, for any \(\varphi\in\mathcal{C}_{0}^{\infty}\), we have
\[\int f\varphi^{\prime\prime}dx=\int-\lambda f\varphi-yf^{\prime}\varphi-yf \varphi^{\prime}dx.\]
Then the main convergence result from [37], Chapter 5 is:
**Theorem 5.3**.: _[Theorem 5.1 of [37]] Under Assumptions 5.1 and 5.2, for any fixed \(k\), the smallest \(k\) eigenvalues of \(\overline{H}_{n}\) converge to the smallest \(k\) eigenvalues of the random Schrodinger operator \(H\)._
Denote by \(\Delta y_{n,k}=y_{n,k}-y_{n,k-1}\). Then by Corollary 6.1 of [37], if for some \(a\in\mathbb{R}\) and \(h\in\mathcal{C}^{1}(\mathbb{R}^{+})\) the random process \(y_{n}\) with \(y_{n,0}=0\) has independent increments and satisfies
\[m_{n}\mathbb{E}\Delta y_{n,k}=h^{\prime}(k/m_{n})+o(1),\quad m_{n}\mathbb{E}( \Delta y_{n,k})^{2}=a^{2}+o(1),\quad m_{n}\mathbb{E}(\Delta y_{n,k})^{4}=o(1),\]
uniformly for \(k/m_{n}\) on compacts of \((0,\infty)\), then \(y_{n}(t)=y_{n,\lfloor tm_{n}\rfloor}\) converges in distribution to \(h(t)+ab_{t}\), for \(b\) the standard Brownian motion, with respect to the Skorokhod topology.
For the Gaussian \(\beta\)-ensemble, thanks to the tridiagonal matrix model introduced in [12], in [37] the authors chose random potentials to be
\[y_{n,1,k}=-n^{-1/6}(2/\beta)^{1/2}\sum_{\ell=1}^{k}g_{\ell},\]
\[y_{n,2,k}=n^{-1/6}\sum_{\ell=1}^{k}2(\sqrt{n}-\frac{1}{\sqrt{\beta}}\chi_{ \beta(n-\ell)}).\]
where \(g_{\ell}\) are independent Gaussians with variance \(2\) and \(\chi_{\cdot}\) denotes a chi-distributed random variable with the parameter specified in the subscript. Then by [37], Lemma 6.2, there is convergence in the Skorokhod topology \(y_{n,i}(\cdot)\Rightarrow(2/\beta)^{1/2}b_{x}+\frac{x^{2}}{2}(i-1),i=1,2.\) This proves that the edge fluctuations of the Gaussian \(\beta\)-ensemble are governed by the Tracy-Widom(\(\beta\)) distribution, which for general values of \(\beta\) is also defined in [37].
For our random Schrodinger operator \(H_{n}^{\beta}\) defined in (1.17) of Proposition 1.6, we instead take, in the special case \(m_{n}=n^{1/3}\),
\[y_{n,1,k}=n^{1/3}\sum_{\ell=1}^{k}\left(\frac{\ell}{n}-\frac{2}{\sqrt{\beta}} \frac{1}{\sqrt{n}}\mathfrak{a}(\ell)\right).\]
\[y_{n,2,k}=0.\]
and in the general case \(m_{n}=o(n)\), take
\[y_{n,1,k}=m_{n}\sum_{\ell=1}^{k}\left(\frac{\ell}{(m_{n})^{3}}-\frac{2}{\sqrt{ \beta}}\frac{1}{(m_{n})^{3/2}}\mathfrak{a}(\ell)\right).\]
Then in all these cases
\[y_{n,1}(\cdot)\Rightarrow-\frac{2}{\sqrt{\beta}}b_{x}+\frac{x^{2}}{2}\]
and
\[y_{n,2}(\cdot)\Rightarrow 0.\]
As in the case of \(\beta\)-ensembles, we see that Assumption 5.2 holds with \(\bar{\eta}(x)=x\), and it is not hard to check that the moment condition (5.6) is also satisfied. In our case \(y(x)=\frac{2}{\sqrt{\beta}}b_{x}+\frac{x^{2}}{2}\), so by Theorem 5.3 our limiting object is the random Schrodinger operator \(H\) in (5.7) with integrated potential \(y\), and this is exactly the stochastic Airy operator \(\mathcal{H}_{\beta}\) defined in (1.16). Combining all these discussions, we have proved that the fluctuations of \(H_{n}^{\beta}\) at the top edge are described by the Tracy-Widom \(\beta\)-distribution as \(n\rightarrow\infty\). This completes the proof of Proposition 1.6.
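To see Proposition 1.6 in action numerically, one can feed the potentials above (in the special case \(m_{n}=n^{1/3}\), with \(\mathfrak{a}(\ell)\) replaced by i.i.d. standard Gaussians purely for the purpose of the experiment) into the `build_Hbar` sketch given after Assumption 5.2; under these choices, the smallest eigenvalue of the resulting matrix fluctuates, for large \(n\), approximately like the ground-state energy of \(\mathcal{H}_{\beta}\), that is, like \(-\mathrm{TW}_{\beta}\). The helper name below is ours.

```python
import numpy as np

def y1_shifted_mean(n, beta, rng):
    """Integrated potential y_{n,1,k} from the special case m_n = n^{1/3} above,
    with a(l) replaced by i.i.d. standard Gaussians for the purpose of the experiment."""
    ell = np.arange(1, n + 1)
    a = rng.standard_normal(n)
    increments = ell / n - (2.0 / np.sqrt(beta)) * a / np.sqrt(n)
    return n ** (1.0 / 3.0) * np.concatenate(([0.0], np.cumsum(increments)))

rng = np.random.default_rng(3)
n, beta = 2000, 2.0
Hbar = build_Hbar(y1_shifted_mean(n, beta, rng), np.zeros(n + 1), m_n=n ** (1.0 / 3.0))
print(np.linalg.eigvalsh(Hbar)[0])   # one draw of (approximately) the ground-state energy of H_beta
```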
## 6. Schrodinger operator interpretation of edge scaling limit
In this section we investigate properties of the random Schrodinger operator with zero boundary condition
\[\mathcal{G}_{\sigma}:=-\frac{d^{2}}{dx^{2}}+\sigma b_{x}^{\prime},\quad x\in[0,1], \tag{6.1}\]
and prove that the rescaled largest eigenvalues of the random Schrodinger operator \(H_{n}\) (1.1) converge to that of \(\mathcal{G}_{\sigma}\) as \(n\rightarrow\infty\).
### Riccati transform
In this section we recall some properties of the Riccati transform corresponding to \(\mathcal{G}_{\sigma}\) that will be used later.
Recall that \((\lambda,\varphi)\) is an eigenvalue-eigenvector pair to \(\mathcal{G}_{\sigma}\) if we have
\[\varphi^{\prime\prime}(x)=\sigma\varphi(x)b_{x}^{\prime}-\lambda\varphi(x). \tag{6.2}\]
Now we set \(p(x)=\varphi^{\prime}(x)/\varphi(x)\), so that \(p(x)\) satisfies \(p(0)=\infty\) and solves the SDE
\[p^{\prime}(x)=-\lambda-p^{2}(x)-\sigma b^{\prime}(x). \tag{6.3}\]
To encode the dependence of \(p\) on \(\lambda\), we may also write \(p(x,\lambda)\) for \(p(x)\). To account for the blowup of \(p(x)\) to \(-\infty\), we adopt the convention that every time \(p(x)\) reaches \(-\infty\), it immediately restarts at \(+\infty\). As in [37], we may consider \(p\) taking values in a countable disjoint union of copies of the reals, \(\mathbb{R}_{0},\mathbb{R}_{-1},\mathbb{R}_{-2},\cdots\). We order points \((n,x)\) lexicographically and endow each copy of \(\mathbb{R}\) with the topology of its two-point compactification, gluing the endpoints according to the lexicographic order; that is, we glue \((n,-\infty)\) to \((n-1,+\infty)\) for each \(n\in\mathbb{Z}_{\leq 0}\).
Our argument is based on the following lemma (see also [37],Lemma 3.2):
**Lemma 6.1**.: _For fixed \(\lambda\), denote by \((-n,y)=p(1,\lambda)\). Then the total number \(n\) of blow-ups of \(p(x)\) to \(-\infty\) for \(x\in[0,1]\) equals the number of eigenvalues of \(\mathcal{G}_{\sigma}\) in \((-\infty,\lambda]\)._
Proof.: By definition, \(\lambda\) is an eigenvalue of \(\mathcal{G}_{\sigma}\) if and only if \(p_{\lambda}\) (the solution to the SDE (6.3) with parameter \(\lambda\)) blows up to \(-\infty\) at \(x=1\). Almost surely, for \(\lambda\) sufficiently negative no blowup of \(p_{\lambda}\) occurs on \([0,1]\). As \(\lambda\) increases, continuity and monotonicity push the existing blowups towards the start of the interval, and new blowups emerge near \(x=1\). Each value of \(\lambda\) at which a new blowup of \(p_{\lambda}\) appears corresponds to a new eigenvalue.
The properties of \(\mathcal{G}_{\sigma}\) stated in Proposition 1.7 can be derived with the help of this Riccati transform \(p(x)\). But as the conclusions of Proposition 1.7 have essentially been covered by [15] and [35], we choose to omit the proof.
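As a crude numerical illustration of Lemma 6.1 (ours, not part of the argument), one can integrate the Riccati SDE (6.3) with an Euler scheme, restarting at a large positive value whenever the solution falls below a large negative threshold, and count the restarts; the thresholds, the step size, and the Gaussian driving noise below are arbitrary choices made only for the experiment.

```python
import numpy as np

def riccati_blowup_count(lam, sigma, n_steps=200_000, cap=1.0e4, rng=None):
    """Count blow-ups of the Riccati diffusion p' = -lam - p^2 - sigma*b' on [0,1].

    The solution starts at p(0) = +infinity (truncated at +cap); every time it falls
    below -cap we record a blow-up and restart it at +cap.  By Lemma 6.1, the number
    of blow-ups approximates the number of eigenvalues of G_sigma in (-infinity, lam].
    """
    rng = rng or np.random.default_rng()
    dx = 1.0 / n_steps
    noise = sigma * np.sqrt(dx) * rng.standard_normal(n_steps)
    p, blowups = cap, 0
    for k in range(n_steps):
        p += (-lam - p * p) * dx - noise[k]
        if p < -cap:
            blowups += 1
            p = cap
    return blowups

rng = np.random.default_rng(1)
for lam in [0.0, 50.0, 200.0]:
    print(lam, riccati_blowup_count(lam, sigma=1.0, rng=rng))
```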
### The random operator as edge scaling limit
In this section we prove that the random Schrodinger operator \(\mathcal{G}_{\sigma}\) precisely describes the edge scaling limit of \(H_{n}\) defined in (1.1). Our proof uses the same notation as in Section 5, and is an adaptation of the proof of [37], Section 5. Before going into the details, we explain how our proof is similar to, and different from, [37]: the main idea of [37] is that if the on-diagonal and off-diagonal potentials (of the rescaled and re-centered matrix \(\bar{H}_{n}\) in (5.1)) sum up to the potential of the Schrodinger operator \(\mathcal{H}_{\beta}\), then the eigenvalues of that matrix converge to the eigenvalues of \(\mathcal{H}_{\beta}\). We use the same idea here. However, in [37], where the limiting law is Tracy-Widom, the potentials need to have a deterministic slope (see the function \(\bar{\eta}(x)>0\) in Assumption 5.2) so that the resulting operator \(\mathcal{H}_{\beta}\), defined on \([0,\infty)\), has eigenvalues bounded from below, giving rise to a discrete spectrum. This corresponds to the \(+x\) term in the definition of \(\mathcal{H}_{\beta}\), (1.16). In our case, we don't have this deterministic slope and the potentials have mean \(0\). This does not cause any issue because our Schrodinger operator \(\mathcal{G}_{\sigma}\) is defined on the compact interval \([0,1]\), hence a priori has a discrete spectrum almost surely.
As in Section 5, we consider the rescaled matrix
\[\overline{H}_{n}=R_{n}\left(-\Delta_{n}\Delta_{n}^{t}+(\Delta_{n}y_{n,1})_{ \times}+(\Delta_{n}y_{n,2})_{\times}\frac{1}{2}(T_{n}+T_{n}^{t})\right), \tag{6.4}\]
but now we take \(m_{n}=n\). We take \(y_{n,2,k}=0\) for each \(n\), \(k\) and we take
\[y_{n,1,k}=-n^{-1/2}\sum_{\ell=1}^{k}\sigma\mathfrak{a}(\ell)=w_{n,1,k}. \tag{6.5}\]
With this choice, the matrix \(\overline{H}_{n}\) is tridiagonal, with \(-n^{2}\) above and below the diagonal and \(2n^{2}-n^{1/2}\sigma\mathfrak{a}(\ell)\) on the diagonal. Thus \(\overline{H}_{n}=-n^{2}(H_{n}-2I_{n})\) where \(H_{n}\) is the matrix representation (1.8) of (1.1).
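Concretely, the following sketch (illustrative only; we take \(\mathfrak{a}(\ell)\) to be i.i.d. standard Gaussians just for the experiment, and the helper name is ours) builds \(\overline{H}_{n}\) with exactly these entries and computes a few of its smallest eigenvalues, which by Proposition 6.3 below approximate in law the smallest eigenvalues of \(\mathcal{G}_{\sigma}\); the output can also be compared with the Riccati blow-up counts of the previous sketch.

```python
import numpy as np

def sample_Hbar_n(n, sigma, rng):
    """Tridiagonal matrix described above: -n^2 off the diagonal and
    2*n^2 - n^{1/2}*sigma*a(k) on the diagonal, with a(k) i.i.d. standard Gaussian here."""
    a = rng.standard_normal(n)
    diag = 2.0 * n**2 - np.sqrt(n) * sigma * a
    off = -float(n**2) * np.ones(n - 1)
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

rng = np.random.default_rng(2)
eigs = np.linalg.eigvalsh(sample_Hbar_n(n=1000, sigma=1.0, rng=rng))
print(eigs[:5])   # approximate (in law) the smallest eigenvalues of G_sigma;
                  # for small sigma these sit near the Dirichlet values (k*pi)^2
```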
Now define \(y_{n,1}(x)=y_{n,1,\lfloor xn\rfloor}1_{xn\in[0,n]}\); we then have the convergence
\[y_{n,1}(x),x\in[0,1]\Rightarrow\sigma b(x),x\in[0,1] \tag{6.6}\]
in law, with respect to the Skorokhod topology on paths, where \(b\) is a standard Brownian motion with \(b(0)=0\).
Define also \(w_{n,1}(x)=w_{n,1,\lfloor xn\rfloor}1_{xn\in[0,n]}\); the following estimate is useful:
**Lemma 6.2**.: _We can find a tight sequence of random variables \(\kappa_{n}\) such that_
\[|w_{n,1}(\zeta)-w_{n,1}(0)|^{2}\leq\kappa_{n} \tag{6.7}\]
_for any \(n\in\mathbb{N}_{+}\), and any \(\zeta\in[0,1]\)._
Proof.: To show tightness of
\[\sup_{\ell=0,1,\cdots,n}\left|w_{n,1,\ell}-w_{n,1,0}\right|^{2}\]
we use the \(L^{p}\) maximal inequality of martingales to deduce that
\[\mathbb{E}\sup_{\ell=0,\cdots,n}\left|w_{n,1,\ell}-w_{n,1,0}\right|^{4}\leq 16 \mathbb{E}|w_{n,1,n}-w_{n,1,0}|^{4}<C<\infty,\]
by the moment assumptions on \(\mathfrak{a}(\ell)\) in Assumption 1.1. Indeed, a uniform fourth moment bound on \(\mathfrak{a}(\cdot)\) suffices.
Since the upper bound \(\kappa_{n}\) in Lemma 6.2 is tight, to prove the convergence in Theorem 1.8 it suffices to prove convergence for deterministic coefficients (as in [37], Proposition 5.2). The remaining part of this section is devoted to proving:
**Proposition 6.3** (Deterministic convergence).: _We assume convergence (6.6) holds in a deterministic way and the bound (6.7) holds for a deterministic constant \(\kappa\). Then given any \(k\in\mathbb{N}_{+}\), the smallest \(k\) eigenvalues of \(\overline{H}_{n}\) converge to the lowest \(k\) eigenvalues of \(\mathcal{G}_{\sigma}\)._
To deduce tightness, consider a discrete version of the norm \(\|\cdot\|_{*}\) as follows: for \(v\in\mathbb{R}^{n}\), define a norm \(\|v\|_{2}^{2}:=n^{-1}\sum_{k=1}^{n}v_{k}^{2}\). Recall that we define the difference quotient \(\Delta_{n}=n(I-T_{n}^{t})\), then we set the norm
\[\|v\|_{*n}^{2}:=\|\Delta_{n}v\|_{2}^{2}+\|v\|_{2}^{2}. \tag{6.8}\]
We prove the following bound on \(\overline{H}_{n}\):
**Lemma 6.4**.: _We can find \(c_{11},c_{12},c_{13}>0\) such that for any \(n\) and \(v\) we obtain_
\[c_{11}\|v\|_{*n}^{2}-c_{12}\|v\|_{2}^{2}\leq\langle v,\overline{H}_{n}v\rangle \leq c_{13}\|v\|_{*n}^{2}.\]
Proof.: By definition of \(\overline{H}_{n}\), let \(w_{k}=w_{n,1,k}\), and recall that by definition \(\Delta v_{k}=n(v_{k+1}-v_{k})\); we have
\[n\langle v,\overline{H}_{n}v\rangle=\sum_{k=0}^{n}(\Delta v_{k})^{2}+\sum_{k=0 }^{n}(\Delta w_{k}v_{k}^{2}), \tag{6.9}\]
Write \(A\), \(B\) the two sums. Then \(A=n|\Delta_{n}v|_{2}^{2}\). To estimate \(B\), we rearrange the order of summation by
\[\sum_{k=0}^{n}\Delta w_{k}v_{k}^{2}=n\sum_{k=0}^{n}w_{k}(v_{k-1}^{2}-v_{k}^{2}) \tag{6.10}\]
where \(v_{-1}=0\). Since by assumption \(w_{k}\) is uniformly bounded, we use the elementary inequality
\[2n|v_{k+1}^{2}-v_{k}^{2}|\leq pn^{2}|v_{k+1}-v_{k}|^{2}+\frac{1}{p}|v_{k+1}+v _{k}|^{2}\leq p|\Delta v_{k}|^{2}+\frac{2}{p}(|v_{k}|^{2}+|v_{k+1}|^{2})\]
and choose \(p\) sufficiently small to deduce the claimed estimate, so that in the lower bound the coefficient in front of \(\|v\|_{*n}^{2}\) is positive.
Now we prove the operator convergence. Embed the domain \(\mathbb{R}^{n}\) of \(\overline{H}_{n}\) in \(L^{2}([0,1])\) isometrically by identifying \(v\in\mathbb{R}^{n}\) with the step function \(v(x)=v_{\lfloor nx\rfloor}\) supported on \([0,1]\). Let \(L_{n}^{*}\subset L^{2}([0,1])\) be the space consisting of these step functions, and denote by \(\mathcal{P}_{n}\) the \(L^{2}\) projection from \(L^{2}([0,1])\) onto it. Denote by \((T_{n}f)(x)=f(x+n^{-1})\) the shift operator, and by \(R_{n}(f)=f1_{[0,1]}\) the restriction operator. Then we have the following properties: (i) \(\mathcal{P}_{n}\), \(T_{n}\) and \(\Delta_{n}\) commute; (ii) for \(f\in L^{2}\) we have \(\mathcal{P}_{n}f\to f\) in \(L^{2}\); (iii) if \(f^{\prime}\in L^{2}\) and \(f(0)=0\), then \(\Delta_{n}f\to f^{\prime}\) in \(L^{2}\).
**Lemma 6.5**.: _Suppose \(f_{n}\in L^{*}_{n}\) with \(f_{n}\to f\) in \(L^{2}\) weakly, and moreover \(\Delta_{n}f_{n}\to f^{\prime}\) in \(L^{2}\) weakly. Then given any \(\varphi\in\mathcal{C}^{\infty}_{0}([0,1])\) we have \(\langle\varphi,\overline{H}_{n}f_{n}\rangle\to\langle\varphi,\mathcal{G}_{ \sigma}f\rangle\). Further,_
\[\langle\mathcal{P}_{n}\varphi,\overline{H}_{n}\mathcal{P}_{n}\varphi\rangle= \langle\varphi,\overline{H}_{n}\mathcal{P}_{n}\varphi\rangle\to\langle \varphi,\mathcal{G}_{\sigma}\varphi\rangle. \tag{6.11}\]
Proof.: We adapt the proof of [37], Lemma 5.7. We assume that \(\varphi\) is supported in \((0,1)\), hence drop \(R_{n}\) from \(H_{n}\). The convergence for the free Laplacian part
\[\langle\varphi,\Delta_{n}\Delta_{n}^{t}f\rangle=\langle\Delta_{n}\Delta_{n}^{ t}\varphi,f\rangle\to\langle\varphi^{\prime\prime},f\rangle=\langle\varphi,f^{ \prime\prime}\rangle\]
is self-evident, so we only need to check the potential term. Note that if \(I\subset[0,1]\) is a finite interval, \(g_{n}\to_{L^{2}}g\) strongly and \(h_{n}\to h\) converges weakly and the sequence is bounded in \(L^{2}(I)\) then
\[\langle g_{n},h_{n}1_{I}\rangle\to\langle g,h1_{I}\rangle. \tag{6.12}\]
Now the potential term is
\[\langle\varphi,((\Delta_{n}y_{n,1})_{\times}+(\Delta_{n}y_{n,2})_{\times} \frac{1}{2}(T_{n}+T_{n}^{t}))f\rangle\]
Note that we have no \(y_{n,2}\) terms. We write \(y_{n}=y_{n,1}\) and approximate the right by
\[\langle\varphi,(\Delta_{n}y_{n})_{\times}f_{n}\rangle=\langle\Delta_{n}^{t}f_ {n},\varphi y_{n}\rangle+\langle f_{n},y_{n}\Delta_{n}^{t}\varphi\rangle+n^{-1 }\langle\Delta_{n}^{t}f_{n},y_{n}\Delta_{n}^{t}\varphi\rangle.\]
Now the first two terms converge to the expected limits thanks to (6.12) and (6.6) and the last term converges to \(0\) as \(n\to\infty\).
Since in our case there are no \(y_{n,2}\) terms, we do not need the error estimate of [37], equation (5.16). This completes the proof.
**Lemma 6.6**.: _Given any sequence \(f_{n}\in L^{*}_{n}\) with \(\|f_{n}\|_{*n}\leq c\) and \(\|f_{n}\|_{2}=1\), there exist \(f\in L^{*}\) and a subsequence \(n_{k}\) with \(f_{n_{k}}\to_{L^{2}}f\), and for any \(\varphi\in\mathcal{C}^{\infty}_{0}\) we have \(\langle\varphi,\overline{H}_{n_{k}}f_{n_{k}}\rangle\to\langle\varphi,\mathcal{G}_{\sigma}f\rangle\)._
Proof.: As \(f_{n}\), \(\Delta_{n}f_{n}\) are bounded in \(L^{2}\), upon taking a subsequence we have \(f_{n}\to f\) weakly in \(L^{2}\), and \(\Delta_{n}f_{n}\to\tilde{f}\) weakly in \(L^{2}\). Taking \(\varphi=1_{[0,t]}\) shows \(f\) is differentiable with \(f^{\prime}=\tilde{f}\). By lower semi-continuity \(f\in L^{*}\), and this provides enough tightness for us to deduce that \(f_{n}\to f\) strongly in \(L^{2}\). The rest follows from Lemma 6.5.
We are now in a position to complete the proof of Theorem 1.8. We introduce the following notation, which will only be used in the next two lemmas: let \((\lambda_{n,k},v_{n,k}),k\geq 0\), be the smallest eigenvalues and the corresponding normalized embedded eigenfunctions of \(\overline{H}_{n}\), and let \((\Lambda_{k},f_{k})\) be the same for \(\mathcal{G}_{\sigma}\).
The proof of the following lemma is the same as [37], Lemma 5.9.
**Lemma 6.7**.: _For any \(k\geq 0\) we have \(\underline{\lambda}_{k}=\liminf_{n}\lambda_{n,k}\geq\Lambda_{k}\)._
Proof.: The eigenvalues of \(\overline{H}_{n}\) are uniformly bounded from below, thus along some subsequence they converge to a limit
\[(\lambda_{n,1},\cdots,\lambda_{n,k})\to(\zeta_{1},\cdots,\zeta_{k}=\underline{\lambda}_{k}).\]
The eigenfunctions corresponding to these eigenvalues have bounded \(L^{*}_{n}\) norm thanks to Lemma 6.4, so by Lemma 6.6, along a further subsequence, these eigenfunctions converge in \(L^{2}\). Moreover, the limiting eigenfunctions must be orthogonal as well, so they correspond to \(k\) distinct states.
The last remaining step of the proof of Theorem 1.8 is the following lemma:
**Lemma 6.8**.: _For each \(k\geq 0\) the convergence \(\lambda_{n,k}\to\Lambda_{k}\) and \(v_{n,k}\to_{L^{2}}f_{k}\) holds._
Proof.: The proof is the same as [37], Lemma 5.10. Assume by induction the claim is verified up to \(k-1\). Choose a \(f_{k}^{\epsilon}\in\mathcal{C}_{0}^{\infty}([0,1])\) that is \(\epsilon\)-close to \(f_{k}\) in \(L^{*}\). Define a vector
\[f_{n,k}=\mathcal{P}_{n}f_{k}^{\epsilon}-\sum_{\ell=1}^{k-1}\langle v_{n,\ell}, \mathcal{P}_{n}f_{k}^{\epsilon}\rangle v_{n,\ell}. \tag{6.13}\]
A uniform bound for the \(L^{*}_{n}\) norm of \(v_{n,\ell}\) follows from Lemma 6.4, and \(|\langle v_{n,\ell},\mathcal{P}_{n}f_{k}^{\epsilon}\rangle|\leq 2\epsilon\) for \(n\) large, so that the \(L^{*}_{n}\) norm of the summation is no more than \(c\epsilon\). We have
\[\lim\sup_{n\to\infty}\lambda_{n,k}\leq\lim\sup_{n\to\infty}\frac{\langle f_{n, k},\overline{H}_{n}f_{n,k}\rangle}{\langle f_{n,k},f_{n,k}\rangle}=\lim\sup_{n \to\infty}\frac{\langle\mathcal{P}_{n}f_{k}^{\epsilon},\overline{H}_{n} \mathcal{P}_{n}f_{k}^{\epsilon}\rangle}{\langle\mathcal{P}_{n}f_{k}^{\epsilon },\mathcal{P}_{n}f_{k}^{\epsilon}\rangle}+o_{\epsilon}(1). \tag{6.14}\]
By (6.11), \(\lim_{n\to\infty}\langle\mathcal{P}_{n}f_{k}^{\epsilon},\overline{H}_{n} \mathcal{P}_{n}f_{k}^{\epsilon}\rangle=\langle f_{k}^{\epsilon},\mathcal{G}_{ \sigma}f_{k}^{\epsilon}\rangle\), so that the right hand side equals
\[\frac{\langle f_{k}^{\epsilon},\mathcal{G}_{\sigma}f_{k}^{\epsilon}\rangle}{ \langle f_{k}^{\epsilon},f_{k}^{\epsilon}\rangle}+o_{\epsilon}(1)=\frac{ \langle f_{k},\mathcal{G}_{\sigma}f_{k}\rangle}{\langle f_{k},f_{k}\rangle}+ o_{\epsilon}(1).\]
Finally, in the \(\epsilon\to 0\) limit, the right-hand side converges to \(\langle f_{k},\mathcal{G}_{\sigma}f_{k}\rangle/\langle f_{k},f_{k}\rangle=\Lambda_{k}\). Further, for a subsequence of \(v_{n,k}\) we find a further subsequence converging strongly in \(L^{2}\) to some \(g\in L^{*}\). The limit \(g\) satisfies \(\mathcal{G}_{\sigma}g=\Lambda_{k}g\) in the sense of distributions, so \(g=f_{k}\) and \(v_{n,k}\to_{L^{2}}f_{k}\). This completes the proof.
### Equivalence of operators
In this section we prove Corollary 1.9, that is, the equivalence of \(e^{-\frac{T}{2}\mathcal{G}_{\sigma}}\) and \(\mathcal{U}(T)\) under the given coupling of Brownian motion \(W\). We do not claim any originality for this result as it seems to be covered by the very recent paper [30], yet we keep the proof as it is very short.
Proof.: We follow the proof of [17], Corollary 2.2. As detailed in Section 6.2, we identify the matrix \(H_{n}\) as an operator on \(L^{2}([0,1])\) by interpreting \(H_{n}(\pi_{n}f)\) as a piecewise constant function on the intervals \([0,n^{-1}),[n^{-1},2n^{-1}),\cdots,[1-n^{-1},1)\), which takes values as \(n^{1/2}\) multiples of the value of \(H_{n}(\pi_{n}f).\) We have detailed in Section 6.2 a coupling of \(H_{n},n\in\mathbb{N}\) whose scaled largest eigenvalues and eigenvectors converge to that of \(-\frac{1}{2}\mathcal{G}_{\sigma}\) almost surely, with the Brownian motion \(W\) arising as we take the limit (1.12).
Under that coupling, the largest eigenvalues and associated eigenvectors of \(\mathcal{M}(T,n),n\in\mathbb{N}\), converge to those of \(e^{-\frac{T}{2}\mathcal{G}_{\sigma}}\) with probability one. For the eigenvectors, this is because \(\mathcal{M}(T,n)\) has the same eigenvectors as \(H_{n}\), while \(e^{-\frac{T}{2}\mathcal{G}_{\sigma}}\) has the same eigenvectors as \(-\frac{1}{2}\mathcal{G}_{\sigma}\); the convergence of eigenvalues follows by approximating the exponentials of eigenvalues of \(\mathcal{G}_{\sigma}\) by high powers of eigenvalues of \(H_{n}\), just as we did in Section 4.2, using the convergence results of that section.
Since the eigenvalues of \(-\frac{1}{2}\mathcal{G}_{\sigma}\) converge almost surely to \(-\infty\) thanks to Proposition 1.7, we have the a.s. strong convergence of the matrices \(\mathcal{M}(T,n),n\in\mathbb{N}\) (as operators on \(L^{2}([0,1])\)) to \(e^{-\frac{T}{2}\mathcal{G}_{\sigma}}\). This implies weak convergence in terms of finite dimensional distributions, so that the laws of \(\int_{0}^{1}f(x)(e^{-\frac{T}{2}\mathcal{G}_{\sigma}}g)(x)dx\), \(f,g\in L^{2}([0,1])\), and of \(\int_{0}^{1}f(x)(\mathcal{U}(T)g)(x)dx\), \(f,g\in L^{2}([0,1])\), are identical, and their joint distributions, coupled with the law of the Brownian motion \(W\) that appears in the definition of \(\mathcal{G}_{\sigma}\) and \(\mathcal{U}(T)\), are equal. That is, the pairs of laws \((W,e^{-\frac{T}{2}\mathcal{G}_{\sigma}})\) and \((W,\mathcal{U}(T))\) are identical, so that we can find a unique, up to a set of measure 0, deterministic function \(F\) for which \((W,F(W))\) has the same law as \((W,e^{-\frac{T}{2}\mathcal{G}_{\sigma}})\) and \((W,\mathcal{U}(T))\). Thus, via the appropriate coupling of \(W\), we identify \(e^{-\frac{T}{2}\mathcal{G}_{\sigma}}\) and \(\mathcal{U}(T)\).
## 7. Tail estimates of top eigenvalue
In this section we prove Theorem 1.10. With \(\text{RSO}_{\sigma}=-\Lambda_{0}(\sigma)\), where \(\Lambda_{0}(\sigma)\) is the smallest eigenvalue of \(\mathcal{G}_{\sigma}\), we can use the variational characterization of \(\Lambda_{0}(\sigma)\) and the Riccati transform \(p(x)\) of (6.3) to deduce upper and lower tail estimates for \(\text{RSO}_{\sigma}\). We follow arguments similar to those of [37], Section 4 to derive left and right tail estimates to first order. Indeed, in our case \(\text{RSO}_{\sigma}\) has the same type of right tail asymptotic as \(\text{TW}_{\beta}\), yet the left tail asymptotic of \(\text{RSO}_{\sigma}\) is very different from that of \(\text{TW}_{\beta}\).
**Right tail, lower bound** Observe that
\[\begin{split}\mathbb{P}(\text{RSO}_{\sigma}>a)=\mathbb{P}(\Lambda_{0}(\sigma)<-a)&\geq\mathbb{P}(\langle f,\mathcal{G}_{\sigma}f\rangle<-a\langle f,f\rangle)\\ &=\mathbb{P}(\sigma\|f\|_{4}^{2}\mathfrak{g}<-a\|f\|_{2}^{2}-\|f^{\prime}\|_{2}^{2})\end{split} \tag{7.1}\]
for any \(f\in L^{*}\), where \(\mathfrak{g}\) is a standard Gaussian. Now we take \(f(x)=\text{sech}(\sqrt{a}(x-\frac{1}{2}))\). Then, with \(\sim\) denoting the \(a\uparrow\infty\) asymptotics, we have the following (with the norms \(\|\cdot\|\) taken over \(\mathbb{R}\)):
\[a\|f\|_{2}^{2}\sim 2\sqrt{a},\quad\|f^{\prime}\|_{2}^{2}\sim\frac{2}{3}\sqrt{ a},\quad\|f\|_{4}^{4}\sim\frac{4}{3\sqrt{a}}. \tag{7.2}\]
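The asymptotics (7.2) reduce, after the substitution \(u=\sqrt{a}(x-\frac{1}{2})\), to three elementary integrals of hyperbolic functions; the following numerical sketch (ours, purely for verification) confirms their values.

```python
import numpy as np
from scipy.integrate import quad

sech = lambda u: 1.0 / np.cosh(u)

# elementary integrals over the whole line, after substituting u = sqrt(a)(x - 1/2)
I2, _ = quad(lambda u: sech(u) ** 2, -np.inf, np.inf)                  # = 2
I4, _ = quad(lambda u: sech(u) ** 4, -np.inf, np.inf)                  # = 4/3
Id, _ = quad(lambda u: (sech(u) * np.tanh(u)) ** 2, -np.inf, np.inf)   # = 2/3

# hence a*||f||_2^2 ~ I2*sqrt(a), ||f'||_2^2 ~ Id*sqrt(a), ||f||_4^4 ~ I4/sqrt(a)
print(I2, I4, Id)
```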
While \(f\) does not satisfy the boundary condition \(f(0)=f(1)=0\), its boundary values decay exponentially fast as \(a\) increases, and outside \([0,1]\) \(f\) can be bounded by an exponentially vanishing function whose \(L^{2}\) and \(L^{4}\) norms, together with the \(L^{2}\) norm of its derivative (all norms taken over \(\mathbb{R}\setminus[0,1]\)), are negligible compared to the estimate (7.2). So, upon a slight modification of \(f\), we may assume that \(f\) satisfies the boundary conditions \(f(0)=f(1)=0\) and the estimate (7.2) with all norms taken over \([0,1]\).
Therefore
\[\mathbb{P}(\text{RSO}_{\sigma}>a)\geq\mathbb{P}\left(\sigma\times\frac{2}{ \sqrt{3}}a^{-1/4}\mathfrak{g}<-a^{1/2}(2+\frac{2}{3}+o(1))\right),\]
so by the Gaussian tail estimate \(\mathbb{P}(\mathfrak{g}>c)=e^{-c^{2}(\frac{1}{2}+o(1))}\) as \(c\uparrow\infty\), we deduce that for \(a\uparrow\infty\),
\[\mathbb{P}(\text{RSO}_{\sigma}>a)\geq e^{-\frac{8}{3\sigma^{2}}a^{3/2}(1+o(1) )}.\]
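As a quick numerical sanity check of the asymptotics (7.2) (an illustration only, not part of the proof), one can evaluate the three norms for a moderately large \(a\); the sketch below uses the unshifted test function \(\operatorname{sech}(\sqrt{a}x)\) on a window of \(\mathbb{R}\) that carries essentially all of its mass.

```python
import numpy as np

# Numerical sanity check (illustration only, not part of the proof) of (7.2),
# using f(x) = sech(sqrt(a) x) on a finite window of R containing nearly all of
# its mass; the grid is fine enough to resolve the peak of width ~ a^{-1/2}.
a = 400.0
x = np.linspace(-2.0, 2.0, 400001)
f = 1.0 / np.cosh(np.sqrt(a) * x)
fp = np.gradient(f, x)                                   # numerical derivative

print(a * np.trapz(f ** 2, x), 2 * np.sqrt(a))           # a*||f||_2^2 vs 2*sqrt(a)
print(np.trapz(fp ** 2, x), (2 / 3) * np.sqrt(a))        # ||f'||_2^2 vs (2/3)*sqrt(a)
print(np.trapz(f ** 4, x), 4 / (3 * np.sqrt(a)))         # ||f||_4^4 vs 4/(3*sqrt(a))
```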
**Left tail, upper bound** We use the same idea but this time choose, for sufficiently large \(a\),
\[f_{a}(x)=\begin{cases}a^{0.1}x,\quad x\in[0,a^{-0.1}],\\ 1,\quad x\in[a^{-0.1},1-a^{-0.1}],\\ a^{0.1}(1-x),\quad x\in[1-a^{-0.1},1].\end{cases}\]
Then we have the \(a\uparrow\infty\) asymptotic
\[a\|f\|_{2}^{2}\sim a,\quad\|f^{\prime}\|_{2}^{2}=o(a^{0.5}),\quad\|f\|_{4}^{4}\sim 1.\]
Therefore
\[\mathbb{P}(\text{RSO}_{\sigma}<-a)\leq\mathbb{P}\left(\sigma\mathfrak{g}>a \right)=e^{-\frac{a^{2}}{2\sigma^{2}}(1+o(1))}.\]
**Left tail, lower bound** By the diffusion description,
\[\mathbb{P}(\text{RSO}_{\sigma}<-a)=\mathbb{P}_{\infty}(p_{a}\text{ does not explode on [0,1]})\]
where \(p_{a}\) solves the SDE
\[p^{\prime}(x)=-a-p^{2}(x)-\sigma b^{\prime}(x)\]
with \(p_{a}(0)=+\infty\). By monotonicity, this probability is larger than the probability that \(p_{a}\), solving the SDE with initial condition \(p_{a}(0)=1\), satisfies \(p_{a}(x)\in[0,2]\) for all \(x\in[0,1]\). In the following, for each \(s\in\mathbb{R}\cup\{\infty\}\) we denote by \(\mathbb{P}_{s}\) the probability distribution of the diffusion process \(p(x)\) with initial value \(p(0)=s\). To estimate this probability we use the Cameron-Martin-Girsanov transform
\[\mathbb{P}_{1}(p_{a}(x)\in[0,2]\text{ for all }x\in[0,1])\] \[=\mathbb{E}_{1}\left[\exp\left(-\frac{1}{\sigma}\int_{0}^{1}(-a-b_{x}^{2})db_{x}-\frac{1}{2\sigma^{2}}\int_{0}^{1}(-a-b_{x}^{2})^{2}dx\right);\,b_{x}\in[0,2]\ \forall x\in[0,1]\right]\]
for a standard Brownian motion \(b\). On the conditioned event we have
\[\frac{1}{2\sigma^{2}}\int_{0}^{1}(-a-b_{x}^{2})^{2}dx=\frac{a^{2}}{2\sigma^{2 }}+O(a),\]
and by Ito's formula, on the conditioned event,
\[\int_{0}^{1}(-a-b_{x}^{2})db_{x}=a(b_{0}-b_{1})-\frac{1}{3}(b_{1}^{3}-b_{0}^{3 })+\sigma^{2}\int_{0}^{1}b_{x}dx=O(a).\]
Therefore we conclude that
\[\mathbb{P}(\text{RSO}_{\sigma}<-a)\geq e^{-\frac{a^{2}}{2\sigma^{2}}(1+o(1))}.\]
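To make the diffusion description above concrete, the following Monte Carlo sketch (an illustration only, not used in the proof) simulates the Riccati SDE with an Euler scheme and estimates a non-explosion probability. Starting from \(p(0)=1\) instead of \(+\infty\) gives a lower bound by the monotonicity noted above, and "explosion" is declared once a path falls below a large negative threshold; the step size and threshold are arbitrary choices.

```python
import numpy as np

# Monte Carlo sketch (illustration only): estimate the probability that the
# Riccati diffusion dp = (-a - p^2) dx + sigma dW, started from p(0) = 1,
# does not explode to -infinity on [0, 1].  By monotonicity in the initial
# condition, this lower-bounds the non-explosion probability started from +infinity.
def non_explosion_prob(a, sigma, n_paths=2000, n_steps=20000, threshold=-1e3, seed=0):
    rng = np.random.default_rng(seed)
    dx = 1.0 / n_steps
    p = np.full(n_paths, 1.0)
    alive = np.ones(n_paths, dtype=bool)           # paths that have not exploded yet
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dx), size=n_paths)
        p[alive] += (-a - p[alive] ** 2) * dx + sigma * dW[alive]
        alive &= p > threshold                      # mark newly exploded paths as dead
    return alive.mean()

if __name__ == "__main__":
    for a in (1.0, 4.0, 9.0):
        print(a, non_explosion_prob(a, sigma=1.0))
```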
**Right tail, upper bound** We follow the ideas in [37], Section 4 yet the computation in our setting is slightly simpler. Now we fix a value \(a>0\), and for any \(r\in\mathbb{R}\) denote by \(\mathfrak{m}_{r}\) the passage time to the specified level \(r\) of the diffusion process
\[dp(x)=\sigma db_{x}+(a-p^{2}(x))dx,\quad p(0)=s,\quad x\in[0,1].\]
Then we have, for any \(a\gg 1\) and \(c>0\),
\[\mathbb{P}(\text{RSO}_{\sigma}>a)=\mathbb{P}_{\infty}(\mathfrak{m}_{-\infty} \leq 1)\leq\mathbb{P}_{\sqrt{a}-c}(\mathfrak{m}_{-\sqrt{a}}\leq 1). \tag{7.3}\]
For fixed \(a>0\) denote by \(\mathfrak{m}_{\pm}=\mathfrak{m}_{\pm\sqrt{a}}\), and consider the event \(\mathcal{A}=\{\mathfrak{m}_{+}>\mathfrak{m}_{-}\}\), then for any \(c>0\),
\[\mathbb{P}_{\sqrt{a}-c}(\mathfrak{m}_{-}\leq 1) =\mathbb{P}_{\sqrt{a}-c}(\mathfrak{m}_{-}\leq 1,\mathcal{A})+ \mathbb{P}_{\sqrt{a}-c}(\mathfrak{m}_{-}\leq 1,\mathcal{A}^{c})\] \[\leq\mathbb{P}_{\sqrt{a}-c}(\mathfrak{m}_{-}\leq 1,\mathcal{A})+ \mathbb{P}_{\sqrt{a}}(\mathfrak{m}_{-}\leq 1)\] \[\leq\mathbb{P}_{\sqrt{a}-c}(\mathfrak{m}_{-}\leq 1,\mathcal{A})+ \mathbb{P}_{\sqrt{a}}(\mathfrak{m}_{\sqrt{a}-c}\leq 1)\mathbb{P}_{\sqrt{a}-c}( \mathfrak{m}_{-}\leq 1),\]
with the inequalities following from the fact that the probability of hitting any level below the initial value decreases as the initial value increases, and increases when the diffusion process \(p_{a}\) runs on a longer time interval. We claim the following:
_Claim 7.1_.: We can find a sufficiently large \(c>0\) so that \(\mathbb{P}_{\sqrt{a}}(\mathfrak{m}_{\sqrt{a}-c}>1)\) is bounded away from zero, uniformly in \(a\gg c\).
Once this claim is justified, we can deduce that there is a numerical constant \(c^{\prime}\) such that
\[\mathbb{P}_{\sqrt{a}-c}(\mathfrak{m}_{-\sqrt{a}}\leq 1)\leq c^{\prime} \mathbb{P}_{\sqrt{a}-c}\left(\mathfrak{m}_{-\sqrt{a}}\leq 1,\mathfrak{m}_{ \sqrt{a}}>\mathfrak{m}_{-\sqrt{a}}\right). \tag{7.4}\]
Now we can complete the proof via Girsanov transform:
\[\mathbb{P}_{\sqrt{a}-c}(\mathfrak{m}_{-}\leq 1,\mathcal{A})=\mathbb{E}_{\sqrt{a}-c }[R(q),\mathfrak{m}_{+}>\mathfrak{m}_{-},\mathfrak{m}_{-}\leq 1], \tag{7.5}\]
for \(q\) the diffusion with sign-reversed drift
\[dq(x)=\sigma db(x)+(q^{2}(x)-a)dx,\]
and the Girsanov change of measure factor \(R(q)\) satisfies1
Footnote 1: The Girsanov density has this form only if \(q\) has the sign-reversed drift, and an intuitive explanation showing why choosing \(q\) with this sign reversed drift leads to an almost optimal estimate can be found in [11], Section 2.2. Namely, \(q\) gives a very fast trajectory from \(\sqrt{a}\) to \(-\sqrt{a}\) but does not go below \(-\sqrt{a}\) with high probability.
\[\log R(q)=\frac{2}{\sigma^{2}}\int_{0}^{1\wedge\mathfrak{m}_{-}}(a-q^{2}(x))dq (x).\]
By Ito's lemma, for any \(z>0\) we have
\[\int_{0}^{z}(a-q^{2}(x))dq(x)=a(q(z)-q(0))-\frac{1}{3}(q^{3}(z)-q^{3}(0))+ \sigma^{2}\int_{0}^{z}q(x)dx. \tag{7.6}\]
Whenever \(z\leq\mathfrak{m}_{-}\wedge\mathfrak{m}_{+}\) we have \(|q(x)|\leq\sqrt{a}\) for \(x\in[0,z]\), so the last term on the right is bounded by a term of order \(O(a)\). For the first two terms on the right hand side, note that when \(z=\mathfrak{m}_{-}\), since \(q(0)=\sqrt{a}-c\), their sum equals \(-(4/3)a^{3/2}+O(a)\). Combining everything, we conclude that
\[\mathbb{P}(\text{RSO}_{\sigma}>a)\leq c^{\prime}e^{-\frac{8}{3\sigma^{2}}a^{3 /2}(1+o(1))}. \tag{7.7}\]
The last remaining step is to prove Claim 7.1. Indeed, the probability in question is bounded from below by the probability that its (reflected downward once reaching \(\sqrt{a}\)) version never reaches \(\sqrt{a}-c\). Under these constraints, i.e. \(p(x)\in[\sqrt{a}-c,\sqrt{a}]\) for \(x\in[0,1]\), the drift of \(p\) is bounded from below by \(0\). Thus with positive probability \(p(x)\) will never reach \(\sqrt{a}-c\) on the time interval \([0,1]\), and this probability is uniform for all \(a\) sufficiently large.
This completes the proof of Theorem 1.10.
## Acknowledgements
The author thanks Professor James Norris for suggesting the investigation of edge scaling limits of random Schrödinger operators.
## Statements and Declarations
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
## Appendix A Exponential moments of random variables
We quote the following lemma from [17], Lemma 4.1 that is very useful for exponential moment bounds under Assumption 1.1:
**Lemma A.1**.: _Assume that for a given random variable \(\zeta\), we can find \(C>0\), \(0<\gamma<2/3\) so that \(\mathbb{E}[|\zeta|^{\ell}]\leq C^{\ell}\ell^{\gamma\ell}\) for all \(\ell\in\mathbb{N}\). Then we can find \(C^{\prime}>0\), \(2<\gamma^{\prime}<3\) with_
\[\mathbb{E}[e^{v\zeta}]\leq\exp\left(v\mathbb{E}[\zeta]+C^{\prime}(v^{2}+|v|^{\gamma^{\prime}})\right),\quad v\in\mathbb{R},\] (A.1)
\[\mathbb{E}[|1+v\zeta|^{\ell}]\leq\exp(|v|\ell|\mathbb{E}[\zeta]|+C^{\prime}(v ^{2}\ell^{2}+|v|^{\gamma^{\prime}}\ell^{\gamma^{\prime}})),\quad v\in\mathbb{ R},\ell\in\mathbb{N}.\] (A.2)
## Appendix B Brownian bridge approximation of random walk
In this section we outline technical lemmas that guarantee that a random walk bridge can be well approximated by a Brownian bridge. We then give the proof of Proposition 3.1, which is essentially an adaptation of the proof in [17], Appendix B (note that Appendix B of [17] is contained in its arXiv version and in the supplementary material downloadable from the Project Euclid website; we follow the numbering of the arXiv version here).
We begin with a technical proposition from [31], Theorem 6.4:
**Proposition B.1**.: _Given any \(C>0\), we can find \(\tilde{C}>0\) such that for any \(n\in\mathbb{N}\), there is a probability space that supports a random walk bridge \(X^{x,y;n,T_{n}}\) and a Brownian bridge \(\widetilde{B}^{x,y;n}\) connecting \(\lfloor nx\rfloor\) to \(\lfloor ny\rfloor\) in time \(T_{n}n^{2}\), such that_
\[\mathbb{P}\left(\sup_{0\leq t\leq T_{n}}|X^{x,y;n,T_{n}}(t)- \widetilde{B}^{x,y;n}(tn^{2})|\geq\tilde{C}\log n\right)\leq\tilde{C}n^{-C}.\] (B.1)
_Note that by Brownian scaling, for each \(n\in\mathbb{N}\), the process \(B^{x,y;n}(t):=n^{-1}\tilde{B}^{x,y;n}(tn^{2}),\ t\in[0,T_{n}]\) is a standard Brownian bridge that connects \(n^{-1}\lfloor nx\rfloor\) to \(n^{-1}\lfloor ny\rfloor\). By the above Proposition,_
\[\mathbb{P}\left(\sup_{0\leq t\leq T_{n}}|n^{-1}(X^{x,y;n,T_{n}}(t )-B^{x,y;n}(t))|\geq\tilde{C}n^{-1}\log n\right)\leq\tilde{C}n^{-C}.\] (B.2)
Consider a probability space that supports a Brownian bridge \(B^{x,y}\) connecting \(x\) to \(y\) within time \(T\), then by Brownian scaling, for each \(n\in\mathbb{N}\) the process
\[\begin{split}&\left(B^{x,y}(t\frac{T}{T_{n}})-(1-\frac{t}{T_{n}})x -\frac{t}{T_{n}}y\right)(\frac{T}{T_{n}})^{1/2}\\ &+(1-\frac{t}{T_{n}})\left(n^{-1}\lfloor nx\rfloor\right)+\frac{ t}{T_{n}}\left(n^{-1}\lfloor ny\rfloor\right),\quad t\in[0,T_{n}],\end{split}\] (B.3)
has the same law as the Brownian bridge \(B^{x,y;n}\) described just above. Therefore, we start with a probability space on which \(B^{x,y}\) is defined, and then enlarge the probability space by using the conditional distributions of \(X^{x,y;n,T_{n}}\) given \(B^{x,y;n}\) to obtain copies of random walk bridges \(X^{x,y;n,T_{n}},n\in\mathbb{N}\). Then the property (3.3) follows from the coupling (B.2), the Borel-Cantelli lemma, and Lévy's modulus of continuity:
\[\mathbb{P}\left(\limsup_{\epsilon\to 0}\frac{\sup_{0\leq t_{1}\leq t_{2} \leq t,t_{2}-t_{1}\leq\epsilon}|B^{x,y}(t_{2})-B^{x,y}(t_{1})|}{\sqrt{2 \epsilon\log(1/\epsilon)}}=1\right)=1.\] (B.4)
The property (3.2) shall be verified as follows: we quote [17], Lemma B2:
**Lemma B.2**.: _Let \(f_{1}\), \(f_{2}\) be measurable functions on \([0,T]\) possessing local times. Then for any \(\epsilon>0\),_
\[\sup_{h\in\mathbb{R}}|L_{h}(f_{1})-L_{h}(f_{2})|\leq\frac{T}{ \epsilon^{2}}\sup_{0\leq t\leq T}|f_{1}(t)-f_{2}(t)|+\sum_{i=1}^{2}\sup_{h_{1},h_{2}\in\mathbb{R},|h_{1}-h_{2}|\leq\epsilon}|L_{h_{1}}(f_{i})-L_{h_{2}}(f_{ i})|.\] (B.5)
Now for any fixed \(n\in\mathbb{N}\), we apply the lemma for \(f_{1}:=B^{x,y}\), \(f_{2}:=X^{x,y;n,T_{n}}\), and \(\epsilon=n^{-2/5}\). We use the following estimate
\[\sup_{h_{1},h_{2}\in\mathbb{R},|h_{1}-h_{2}|\leq n^{-2/5}}|L_{h_{1}}(B^{x,y})-L_{h_{2}}(B^{x,y})|\leq\mathcal{C}n^{-1/5}(\log n)^{1/2},\quad n\in\mathbb{N},\] (B.6)
almost surely, via the following reasoning: first, the laws of \(B^{x,y}(t)-x,t\in[0,T/2]\) and \(B^{x,y}(T-t)-y,t\in[0,T/2]\) are mutually absolutely continuous with respect to that of standard Brownian motion on \([0,T/2]\). Then one applies a version of estimate (B.6) for standard Brownian motion, which can be found in [41], estimate 2.1.
We also need the following estimate:
**Lemma B.3**.: _Given any \(\tilde{\epsilon}>0\), we can find random variable \(\mathcal{C}_{\tilde{\epsilon}}\) so that almost surely,_
\[\sup_{h_{1},h_{2}\in\mathbb{R},|h_{1}-h_{2}|\leq n^{-2/5}}|L_{h_{1}}(X^{x,y;n,T_{n}})-L_{h_{2}}(X^{x,y;n,T_{n}})|\leq\mathcal{C}_{\tilde{\epsilon}}n^{-1/5 +\tilde{\epsilon}},\quad n\in\mathbb{N}.\] (B.7)
Once this lemma is established, the estimate (3.2) follows from (3.3), (B.6), (B.5), and choosing \(\tilde{\epsilon}>0\) sufficiently small. The proof of Lemma B.3 is given as follows:
Proof.: We follow [17], Lemma B3 and only give a sketch. We work with even \(\lfloor nx\rfloor\), \(\lfloor ny\rfloor\), and \(T_{n}n^{2}\), with the other cases being similar. First assume \(h_{1},h_{2}\) take the form \(n^{-1}(2a)\) for some \(a\in\mathbb{Z}\), so we consider a simple symmetric random walk with \(T_{n}n^{2}\) steps from \(\lfloor nx\rfloor\), restricted to even times, and conditioned to end at \(\lfloor ny\rfloor\). By [3], Proposition 3.1, we get the estimate
\[\begin{split}&\mathbb{P}\left(\sup_{h_{1},h_{2}\in n^{-1}(2\mathbb{Z}),|h_{1}-h_{2}|\leq n^{-2/5}}|L_{h_{1}}(X^{x,y;n,T_{n}})-L_{h_{2}}(X^{x,y;n,T_{n}})|\geq n^{-(1-\tilde{\epsilon})/5}\lambda\right)\\ &\leq\mathcal{C}n(e^{-\lambda/C}+n^{-14}).\end{split}\] (B.8)
The multiplicative factor \(n\) in the resulting estimate follows from conditioning on the walk ending at \(\lfloor ny\rfloor\). Now taking \(\lambda=n^{\tilde{\epsilon}/2}\) and using the Borel-Cantelli lemma, we have verified estimate (B.7) for \(h_{1},h_{2}\in n^{-1}(2\mathbb{Z})\).
Then we remove the constraints on \(h_{1},h_{2}\). In the first step assume \(h_{1},h_{2}\) have the form \(n^{-1}(2a+e)\) for some \(a\in\mathbb{Z}\) and \(e\in(-1,1)\setminus\{0\}\). Taking \(e=\frac{1}{2}\) and using a tail bound for the binomial distribution, the conditional probability
\[\mathbb{P}\left(|L_{n^{-1}(2a+e)}(X^{x,y;n,T_{n}})-u|\geq n^{-1/5}u^{1/2}\mid L _{n^{-1}(2a)}(X^{x,y;n,T_{n}})=u\right)\] (B.9)
vanishes faster than any polynomial in the \(n\to\infty\) limit. As we have already verified that \(L_{n^{-1}(2a)}(X^{x,y;n,T_{n}})=O(1)\) as \(n\to\infty\) for any \(a\in\mathbb{Z}\) almost surely, we now condition on this almost sure event and apply the Borel-Cantelli lemma to deduce that (B.7) holds for \(h_{1},h_{2}\) of the form \(n^{-1}(2a+e)\).
It remains to consider \(h_{1},h_{2}\) having the form \(n^{-1}(2a+1)\) given \(a\in\mathbb{Z}\). This follows from the combinatorial identity
\[\left|L_{n^{-1}(2a+1)}(X^{x,y;n,T_{n}})-\frac{L_{n^{-1}(2a+\frac{1}{2})}(X^{x, y;n,T_{n}})+L_{n^{-1}(2a+\frac{3}{2})}(X^{x,y;n,T_{n}})}{2}\right|\leq n^{-1}.\]
## Appendix C Exponential moment estimates of local times
In this paper we frequently need exponential moment estimates of averages of local times of the random walk bridge. We collect the relevant technical lemmas in this section and give a sketch of the proof of each of them. These results are adapted from Propositions 4.2 and 4.3 of [17].
We first establish uniform exponential moment bounds for rescaled random walk bridges.
**Proposition C.1**.: _Given \(T_{0}>0\) and \(\theta\in\mathbb{R}\), then uniformly for \(x,y\in[0,1]\),_
\[\sup_{n\in\mathbb{N}}\sup_{\tilde{T}\in\mathcal{T}(x,y;n,T_{0})} \mathbb{E}\left[\exp\left(\theta n^{-2}\sum_{i=0}^{\tilde{T}n^{2}}n^{-1}X^{x,y; n,\tilde{T}}(in^{-2})\right)\right]<\infty,\] (C.1)
_where \(\mathcal{T}(x,y;n,T_{0})\) denotes the set of \(\tilde{T}\in[0,T_{0})\) such that \(\tilde{T}n^{2}\) is an integer that has the same parity as \(\lfloor nx\rfloor-\lfloor ny\rfloor\)._
Proof.: Consider the random walk bridge
\[\tilde{X}^{x,y;n,\tilde{T}}(t):=n^{-1}X^{x,y;n,\tilde{T}}(t),\quad t\in[0,\tilde{T}]\]
with endpoints \(x_{n}\), \(y_{n}\) such that \(|x_{n}-x|\leq n^{-1}\) and \(|y_{n}-y|\leq n^{-1}\). Assume for simplicity \(\theta>0\) and \(x_{n}\leq y_{n}\). The random variable in the expectation is upper bounded by \(e^{\theta(T_{0}+n^{-2})\tilde{M}(n,\tilde{T})}\), where \(\tilde{M}(n,\tilde{T}):=\max_{t\in[0,\tilde{T}]}\tilde{X}^{x,y;n,\tilde{T}}(t)\). Define a Markov process \(Y\) such that \(Y(0)=y_{n}\) and \(Y\) moves if and only if \(\tilde{X}^{x,y;n,\tilde{T}}(t)\geq y_{n}\); in that case \(Y\) moves up (or down) if and only if \(\tilde{X}^{x,y;n,\tilde{T}}\) moves up (or down). Then \(Y(t)\geq\tilde{X}^{x,y;n,\tilde{T}}(t)\) for \(t=0,n^{-2},2n^{-2},\cdots,\tilde{T}\). The moving increments of \(Y\) form a simple symmetric random walk, and the maximum \(J_{\tilde{T}n^{2}}\) of a simple random walk with \(\tilde{T}n^{2}\) steps satisfies, by [7], Theorem 6.2.1 and inequality 6.2.3,
\[\mathbb{E}[(J_{\tilde{T}n^{2}})^{m}]\leq\sqrt{m!}(C\tilde{T}^{1/2}n)^{m},\quad m\in\mathbb{N},\ \tilde{T}\in\mathcal{T}(x,y;n,\infty),\ n\in\mathbb{N}\] (C.2)
for some \(C<\infty\). The exponential moment uniform bound now follows.
We also need the following Proposition, adapted from [17], Proposition 4.3:
**Proposition C.2**.: _Given any \(T_{0}>0\), \(1\leq p<3\) and \(\theta>0\), we have the estimate of uniform integrability_
\[\sup_{n\in\mathbb{N}}\sup_{\tilde{T}\in\mathcal{T}(x,y;n,T_{0})} \mathbb{E}\left[\exp\left(\theta n^{-1}\sum_{h\in n^{-1}\mathbb{Z}}L_{h}(X^{x, y;n,\tilde{T}})^{p}\right)\right]<\infty.\] (C.3)
Before giving the proof we introduce some useful terminology. For a random walk bridge \(X^{x,y;n,\tilde{T}}\), we introduce the following two transforms. The quantile transform \(Q^{n,\tilde{T}}\) of \(X^{x,y;n,\tilde{T}}\) (introduced in [2]) is defined as follows: first find the unique permutation \(\kappa\) of \(\{1,2,\cdots,\tilde{T}n^{2}\}\) such that
\[l\to X^{x,y;n,\tilde{T}}(\kappa(l)n^{-2})\]
is increasing in \(l\), and that if \(l_{1}\), \(l_{2}\) map to the same value then \(\kappa(l_{1})<\kappa(l_{2})\) whenever \(l_{1}<l_{2}\). In this way, the quantile transform \(Q^{n,\tilde{T}}\) satisfies \(Q^{n,\tilde{T}}(0)=0\), and
\[Q^{n,\tilde{T}}(ln^{-2})=\sum_{l_{1}=1}^{l}\left(X^{x,y;n,\tilde{T}}(\kappa(l_{1})n^{-2})-X^{x,y;n,\tilde{T}}((\kappa(l_{1})-1)n^{-2})\right),\quad l=1,2,\cdots,\tilde{T}n^{2}.\] (C.4)
Define also the Vervaat transform (introduced in [43]) \(V^{n,\tilde{T}}\), obtained from \(X^{x,y;n,\tilde{T}}\) by splitting the path at the first point where it achieves its global minimum, attaching the first part to the end of the second part, and then shifting the resulting path by a constant so that it starts at zero. Now we give the proof of Proposition C.2:
Proof.: We essentially follow the proof of Proposition 4.3 in [17]. Given any \(h\in n^{-1}\mathbb{Z}\), denote by \(u_{h}^{n,\tilde{T}}\), \(d_{h}^{n,\tilde{T}}\) the numbers of upward and downward steps of \(X^{x,y;n,\tilde{T}}\) whose previous step is \(n-nh\), and consider
\[t_{h}^{n,\tilde{T}}=\sum_{n^{-1}\mathbb{Z}\ni h_{1}>h}\left(u_{h_{1}}^{n, \tilde{T}}+d_{h_{1}}^{n,\tilde{T}}\right),\] (C.5)
from the classical inequality \((a+b)^{c}\leq 2^{c-1}(a^{c}+b^{c})\) for positive \(a,b\) and \(c\geq 1\), we deduce that
\[\begin{split}&\sum_{h\in n^{-1}\mathbb{Z}}L_{h}(X^{x,y;n,\tilde{T}})^ {p}=n^{-p}\sum_{h\in n^{-1}\mathbb{Z}}\left(u_{h}^{n,\tilde{T}}+d_{h}^{n, \tilde{T}}\right)^{p}\\ &\leq n^{-p}2^{p-1}\sum_{h\in n^{-1}\mathbb{Z}}\left((u_{h}^{n, \tilde{T}}+d_{h}^{n,\tilde{T}})(u_{h}^{n,\tilde{T}})^{p-1}+(u_{h}^{n,\tilde{T }}+d_{h}^{n,\tilde{T}})(d_{h}^{n,\tilde{T}})^{p-1}\right).\end{split}\] (C.6)
By the combinatorial identity in [2] (5.3),
\[Q^{n,\tilde{T}}(t_{h-n^{-1}}^{n,\tilde{T}}n^{-2})=u_{h}^{n,\tilde{T}}+(n-nh- \lfloor nx\rfloor)_{+}-(n-nh-\lfloor ny\rfloor)_{+},\quad h\in n^{-1}\mathbb{Z},\]
which leads to the estimate
\[u_{h}^{n,\tilde{T}}\leq Q^{n,\tilde{T}}(t_{h-n^{-1}}^{n,\tilde{T}}n^{-2})+n \lvert x-y\rvert+1,\quad h\in n^{-1}\mathbb{Z}.\] (C.7)
Since we have the restriction \(\lvert d_{h-n^{-1}}^{n,\tilde{T}}-u_{h}^{n,\tilde{T}}\rvert\leq 1\) for all \(h\in n^{-1}\mathbb{Z}\), we have
\[d_{h}^{n,\tilde{T}}\leq Q^{n,\tilde{T}}(t_{h}^{n,\tilde{T}}n^{-2})+n\lvert x- y\rvert+2,\quad h\in n^{-1}\mathbb{Z}.\] (C.8)
Combining this with \(u_{h}^{n,\tilde{T}}+d_{h}^{n,\tilde{T}}=t_{h-n^{-1}}^{n,\tilde{T}}-t_{h}^{n, \tilde{T}},h\in n^{-1}\mathbb{Z}\), we deduce that
\[\sum_{h\in n^{-1}\mathbb{Z}}L_{h}(X^{x,y;n,\tilde{T}})^{p}\] \[\leq n^{-p}2^{p-1}\sum_{h\in n^{-1}\mathbb{Z}}(t_{h-n^{-1}}^{n, \tilde{T}}-t_{h}^{n,\tilde{T}})\left(Q^{n,\tilde{T}}(t_{h-n^{-1}}^{n,\tilde{ T}}n^{-2})+n\lvert x-y\rvert+1\right)^{p-1}\] \[+n^{-p}2^{p-1}\sum_{h\in n^{-1}\mathbb{Z}}(t_{h-n^{-1}}^{n,\tilde{ T}}-t_{h}^{n,\tilde{T}})\left(Q^{n,\tilde{T}}(t_{h}^{n,\tilde{T}}n^{-2})+n \lvert x-y\rvert+2\right)^{p-1}.\]
Denoting by \(M(n,\tilde{T})\) the maximal value taken by \(n^{-1}Q^{n,\tilde{T}}\), using
\[\sum_{h\in n^{-1}\mathbb{Z}}\left(t_{h-n^{-1}}^{n,\tilde{T}}-t_{h}^{n,\tilde{T }}\right)=\tilde{T}n^{2},\]
we get
\[n^{-1}\sum_{h\in n^{-1}\mathbb{Z}}L_{h}(X^{x,y;n,\tilde{T}})^{p}\leq 2^{2p-1} \tilde{T}\left(M(n,\tilde{T})^{p-1}+(|x-y|+2n^{-1})^{p-1}\right).\] (C.9)
By [2], Corollary 7.4, the distribution of \(M(n,\tilde{T})\) is identical to the distribution of the maximum of the normalized Vervaat transform \(n^{-1}V^{n,\tilde{T}}\). By definition, the maximum of \(V^{n,\tilde{T}}\) is equal to the width of \(X^{x,y;n,\tilde{T}}\). By the same proof as in Proposition C.1, we get an estimate, uniform in \(n\), of the \((p-1)\)-st moment of the width of simple symmetric random walks with \(\tilde{T}n^{2}\) steps, normalized by \(n^{-1}\). This completes the proof.
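To make the two path transforms used in this proof concrete, here is a small Python sketch (our illustration, following a direct reading of the definitions above; it is not code from [2] or [43]) applied to a lattice path stored as an array of positions.

```python
import numpy as np

def vervaat(S):
    """Vervaat transform: split the path at the first time its global minimum is
    attained, put the second part first, and shift so the new path starts at 0."""
    S = np.asarray(S, dtype=float)
    k = int(np.argmin(S))                      # first time the global minimum is hit
    steps = np.diff(S)
    new_steps = np.concatenate([steps[k:], steps[:k]])
    return np.concatenate([[0.0], np.cumsum(new_steps)])

def quantile_transform(S):
    """Quantile transform: sum the increments in the (stable) order obtained by
    sorting the heights they arrive at, as in the definition of Q above."""
    S = np.asarray(S, dtype=float)
    steps = np.diff(S)                         # steps[t-1] = S[t] - S[t-1]
    order = np.argsort(S[1:], kind="stable")   # the permutation kappa
    return np.concatenate([[0.0], np.cumsum(steps[order])])

# Example: a short +/-1 bridge from 0 back to 0.
S = np.array([0, 1, 0, -1, -2, -1, 0])
print(vervaat(S))
print(quantile_transform(S))
```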
We note in passing a useful remark that will be used in Section 3.
_Remark C.3_.: The estimate (C.3) also holds if we replace \(\sum_{h\in n^{-1}\mathbb{Z}}\) by \(\sum_{h\in c+n^{-1}\mathbb{Z}}\) given some \(c\in\mathbb{R}\). This follows from the combinatorial constraint
\[L_{h}(X^{x,y;n,\tilde{T}})\leq L_{n^{-1}\lfloor nh\rfloor}(X^{x,y;n,\tilde{T} })+L_{n^{-1}\lceil nh\rceil}(X^{x,y;n,\tilde{T}}).\]
|
2304.06775 | PointCLIMB: An Exemplar-Free Point Cloud Class Incremental Benchmark | Point clouds offer comprehensive and precise data regarding the contour and
configuration of objects. Employing such geometric and topological 3D
information of objects in class incremental learning can aid endless
application in 3D-computer vision. Well known 3D-point cloud class incremental
learning methods for addressing catastrophic forgetting generally entail the
usage of previously encountered data, which can present difficulties in
situations where there are restrictions on memory or when there are concerns
about the legality of the data. Towards this we pioneer to leverage exemplar
free class incremental learning on Point Clouds. In this paper we propose
PointCLIMB: An exemplar Free Class Incremental Learning Benchmark. We focus on
a pragmatic perspective to consider novel classes for class incremental
learning on 3D point clouds. We setup a benchmark for 3D Exemplar free class
incremental learning. We investigate performance of various backbones on
3D-Exemplar Free Class Incremental Learning framework. We demonstrate our
results on ModelNet40 dataset. | Shivanand Kundargi, Tejas Anvekar, Ramesh Ashok Tabib, Uma Mudenagudi | 2023-04-13T18:47:29Z | http://arxiv.org/abs/2304.06775v1 | # PointCLIMB: An Exemplar-Free Point Cloud Class Incremental Benchmark
###### Abstract
Point clouds offer comprehensive and precise data regarding the contour and configuration of objects. Employing such geometric and topological 3D information of objects in class incremental learning can aid endless applications in 3D computer vision. Well-known 3D point cloud class incremental learning methods for addressing catastrophic forgetting generally entail the usage of previously encountered data, which can present difficulties in situations where there are restrictions on memory or concerns about the legality of the data. Towards this, we pioneer exemplar-free class incremental learning on point clouds. In this paper we propose PointCLIMB: an Exemplar-Free Class Incremental Learning Benchmark. We focus on a pragmatic perspective to consider novel classes for class incremental learning on 3D point clouds. We set up a benchmark for 3D exemplar-free class incremental learning. We investigate the performance of various backbones on a 3D exemplar-free class incremental learning framework. We demonstrate our results on the ModelNet40 dataset.
## 1 Introduction
Point cloud analysis, with pioneering works exploring global [16] and local [12, 3, 17, 25] geometries, has become an increasingly popular approach for understanding 3D objects and environments, with many potential applications [24] in real-time settings. Considering the real-time applications of point cloud analysis, there is always a new incoming stream of data available to the learner. Comparing this realistic scenario to human cognition, humans leverage their existing knowledge and build upon it when learning new things, rather than starting from scratch every time. In the real world, for tasks such as autonomous driving and robotic applications, the data is not static but rather accumulates over time. Therefore, mimicking human cognition, there is a dire need for models built for such real-world applications to learn incrementally as new data is added, rather than retraining the entire model from scratch each time. This approach is known as class incremental learning (a subset of continual or lifelong learning) [13], which allows the model to adapt to changes in the data distribution over time, while retaining knowledge learned from previous data [15].
While class-incremental learning has been investigated to a certain extent in the 2D (image) realm [10, 19, 20], its exploration on 3D (point cloud) data has been relatively limited. There have been initial works towards mitigating catastrophic forgetting in point cloud class incremental learning, such as 3D-FSCIL [4] and I3DOL [6]. These methods have an unfortunate lacuna: they require an extensive memory bank for replaying point cloud data from previous tasks. The problem of memory constraints in class-incremental learning is a matter of great concern in computer vision for two primary reasons. Firstly, numerous point cloud analysis applications operate on devices with limited memory capacity, making memory consumption a crucial consideration [11, 21]. Secondly, many 3D computer vision applications acquire data that is subject to legal restrictions [1, 28, 5], making storage a difficult and often infeasible task. These caveats lead us to ask: how can 3D computer vision systems incrementally incorporate new geometric information without storing previous data?
Towards modeling 3D-EFCIL (3D Exemplar-Free Class Incremental Learning), we propose PointCLIMB: An Exemplar-Free Point Cloud Class Incremental Benchmark. We summarize our contributions as follows:
* We are the first to model 3D-EFCIL and benchmark the results on Modelnet40 dataset [26].
* We are the first to investigate the importance of backbone(feature extractor) in 3D-EFCIL.
* We propose to employ a weighted knowledge distillation loss towards mitigating catastrophic forgetting.
* We propose to model a pragmatic approach to benchmark 3D-EFCIL, in contrast to other works in point cloud class-incremental learning, which do not focus on a pragmatic approach considering novel data arrival.
## 2 Related Works
**Point Cloud Analysis** With the recent emergence of LiDAR sensors, numerous studies have been conducted on directly classifying 3D point cloud objects. Voxel- and multiview-based methods [22, 27] were among the initial works on point cloud analysis. PointNet [16] was the first to employ multi-layer perceptron (MLP) networks to interpret 3D point clouds. Addressing the limitation of considering only global topological information, PointNet++ [17] was proposed, which considers deep hierarchical features for point cloud processing. DGCNN [25] proposed EdgeConv, a graph-based dynamic convolution for exploiting the local geometry of a point cloud. PointMLP [12] is a purely residual MLP network that achieves high performance without the need for complex local geometric extractors; despite its simplicity, it performs remarkably well and is highly competitive with more sophisticated models. These local- and global-topology-aware methodologies do not explore learning similarities between regions of a point cloud. Towards addressing this challenge, the Point Relation-Aware Network (PRA-Net) [3] was proposed, which is composed of an Intra-region Structure Learning (ISL) module and an Inter-region Relation Learning (IRL) module. The ISL module can dynamically integrate local structural information into point features, while the IRL module can effectively capture inter-region interactions using a differentiable region division method and a representative point-based technique, which is both adaptable and efficient. Hence, methods that explore both inter-region relations and intra-region structure, and thereby extract superior geometric features, may strengthen the backbone for class incremental learning on point clouds.
**Incremental Learning** Task-incremental, domain-incremental and class-incremental are the three main categories in continual learning. The issue of catastrophic forgetting has been widely recognized for many years, with evidence dating back to the 1980s when [14] demonstrated that algorithms trained with backpropagation were prone to this phenomenon. Subsequent research by [18] corroborated these findings and extended them to a broader range of tasks trained using backpropagation. A comprehensive review of early attempts to address catastrophic forgetting is provided by [7]. There have been many subsequent works in 2D continual learning, whereas in the 3D realm the continual learning framework has been underexplored. 3D-FSCIL [4] explored the few-shot aspect of class incremental learning on point clouds and proposed microshapes to mitigate catastrophic forgetting. Existing 3D class-incremental learning methods store exemplars or microshapes, which raises data-security and memory concerns. I3DOL [6] proposes to mitigate the catastrophic forgetting that can result from the presence of redundant geometric information; towards this, an attention mechanism that is sensitive to geometric properties has been introduced. This mechanism quantifies the significance of local geometric structures and identifies distinctive 3D geometric attributes that make substantial contributions to incremental learning of classes. Though the aforementioned 3D class-incremental learning methods are well established, they rely on extensive replay of previous data for incremental training.
**Exemplar-Free Class-Incremental learning** According to recent reviews of class incremental learning (CIL), the majority of methods aimed at mitigating catastrophic forgetting incorporate techniques that involve replaying samples from past classes [2], which face data privacy restrictions or storage limitations [23]. Towards addressing this challenge, Learning Without Forgetting [10] was proposed as a knowledge distillation method for the class-incremental learning setup. In [8], weight- and data-regularisation-based methods were proposed to mitigate catastrophic forgetting without storing exemplars.
## 3 PointCLIMB
In this section, we present a practical scenario-based assessment of 3D-EFCIL, dubbed PointCLIMB. Additionally, we propose robust benchmark models, conduct comprehensive experiments, and elucidate the rationale behind the selection of backbone architectures.
Exemplar-free class incremental learning on point clouds (3D-EFCIL) is a quintessential yet under-explored paradigm in the realm of point cloud analysis. To facilitate benchmarking in realistic scenarios for 3D-EFCIL, this paper introduces **PointCLIMB**: an Exemplar-Free **Point** Cloud **CL**ass **I**ncre**M**ental **B**enchmark. PointCLIMB investigates the need for point cloud backbones that extract superior geometric features capable of mitigating the challenge of catastrophic forgetting in realistic 3D-EFCIL settings. We propose to employ Census, a weighted knowledge distillation loss between old and new class logits of point cloud backbones. Census adapts dynamically to the incremental classes, which enhances the performance of point cloud backbones in 3D-EFCIL. We employ a veristic task sampler which mimics the natural way of sampling the tasks to be learnt incrementally in a 3D-EFCIL setting.
Incremental learning problem \(\mathcal{T}\) consists of sequence of \(m\) tasks:
\[\mathcal{T}=[(C^{1},D^{1}),(C^{2},D^{2}),...\,(C^{m},D^{m})] \tag{1}\]
### PointCLIMB Settings
The veristic task sampler is used to model \(\mathcal{T}\) by selecting tasks based on a naturalistic setting that imitates the paradigm of novel data arrival in class incremental learning, as detailed in Algorithm 1. Each task \(t\) is represented by a set of classes \(C^{t}=\{c_{1}^{t},c_{2}^{t},\ldots,c_{m^{t}}^{t}\}\) and the training data \(D^{t}\). We use \(M^{t}\) to represent the total number of classes in all tasks up to and including task \(t\): \(M^{t}=\sum_{i=1}^{t}|C^{i}|\). We consider the 3D class-incremental problem in which \(D^{t}=\{(p_{1},y_{1}),(p_{2},y_{2}),\ldots,(p_{l_{t}},y_{l_{t}})\}\), where \(p\) is a point cloud with \(n\) points such that \(p\in\mathbb{R}^{n\times 3}\). During training for task \(t\), the learner only has access to \(C^{t},D^{t}\), whereas during inference the evaluation is done over the union of all previous tasks \(\bigcup_{i=1}^{t}C^{i},D^{i}\). For instance, if we encounter task \(t=2\), the learner has access to \((C^{2},D^{2})\), whereas evaluation is done for \(\{(C^{1},D^{1}),(C^{2},D^{2})\}\). Towards modeling the 3D-EFCIL problem setting: 1) we do not allow class overlaps between tasks (i.e., \(C^{i}\cap C^{j}=\emptyset\) if \(i\neq j\)); 2) we do not maintain any coreset (exemplars) of the previous tasks for training the current task \(t\).
We consider incremental learners, the teacher model \(O(p,\theta_{O})\) parameterized by weights \(\theta_{O}\) and the student
Figure 1: The process of network optimization in **PointCLIMB** can be illustrated through two tasks. The first, base task involves training a teacher model using a feature extractor \(f\) with parameters \(\phi_{O}\) and a linear classifier \(g\) with parameters \(V_{O}\). The operation \(\mathcal{A}\), which can be mean, max or sum, is applied to make the output symmetric. In the second, class incremental novel task N; a student model is introduced when a novel task N arrives. The weights of the student model are initialized by copying the weights of the teacher model. Specifically, \(g(;V_{S})=g(;V_{O})\cup g(;V_{\zeta})\), where \(g(;V_{\zeta})\) represents the weights associated with the novel class. The teacher model is kept frozen during this process. To mitigate the issue of catastrophic forgetting, we use census knowledge distillation loss, which compares the logits of the teacher and student models.
model \(S(p,\theta_{S})\) parameterized by weights \(\theta_{S}\) to indicate the output logits of the network on input \(p\). We further split the neural network in a feature extractor \(f\) with weights \(\phi\) and linear classifier \(g\) with weights \(V\) according to \(O(p,\theta_{O})=g\big{(}f(p;\phi_{O});V_{O}\big{)}\) and \(S(p,\theta_{S})=g\big{(}f(p;\phi_{O});V_{S}\big{)}\). The student model's weights are designed such that \(V_{S}=V_{O}\cup V_{\zeta}\) where \(V_{\zeta}\) is novel task specific parameters as shown in Figure 1. We use \((\hat{g}_{O};\tau)=\sigma(O(p;\theta_{O}),\tau)\) and \((\hat{g}_{S};\tau)=\sigma(S(p;\theta_{S}),\tau)\) to identify teacher and student network predictions, where \(\sigma(\tau)\) indicates the softmax functions with temperature \(\tau\).
```
Input : tc, low, high.   /* tc is the total number of classes; low and high are the minimum and maximum number of classes per task. */
Output : The task list T for continual learning.

classes <- arange(0, tc - 1); shuffle(classes)
TL <- []; T <- []
condition <- 0
while tc != 0 and condition >= 0 do
    base <- RANDINT(low, high)
    condition := tc - base
    if condition <= 0 then
        TL.append(tc)          // the last task absorbs the remaining classes
        break
    else
        TL.append(base)
        tc := condition
s <- 0
for i <- 0 to length(TL) - 1 do
    T.append(classes[s : s + TL[i]])   // consecutive chunks of the shuffled class list
    s := s + TL[i]
return T
```
**Algorithm 1** Veristic Task Sampler: a pragmatic sampler mimicking the paradigm of class incremental learning.
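For concreteness, a direct Python rendering of Algorithm 1 might look as follows; this is an illustrative sketch, and the function and variable names are ours rather than from any released code.

```python
import random

def veristic_task_sampler(tc, low, high, seed=None):
    """Split `tc` classes into incremental tasks whose sizes are drawn uniformly
    from [low, high]; the trailing remainder is absorbed into the last task."""
    rng = random.Random(seed)
    classes = list(range(tc))
    rng.shuffle(classes)
    sizes, remaining = [], tc
    while remaining > 0:
        base = rng.randint(low, high)
        if remaining - base <= 0:
            sizes.append(remaining)            # last task takes whatever is left
            break
        sizes.append(base)
        remaining -= base
    tasks, s = [], 0
    for size in sizes:                         # consecutive chunks of the shuffled classes
        tasks.append(classes[s:s + size])
        s += size
    return tasks

# e.g. sampling incremental tasks over the 40 ModelNet40 classes
print(veristic_task_sampler(tc=40, low=4, high=8, seed=0))
```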
### Influence of Backbones on PointCLIMB
To investigate the importance of superior and robust feature extraction in PointCLIMB, we train point cloud class-incremental classifiers as per PointCLIMB Settings. For our case study we select three types of open-source state-of-the-art backbones.
* Global per-point based PointNet [16].
* Local neighbourhood (Intra-region) based PointNet++ [17], DGCNN [25], PointMLP [12].
* Intra-region structure aware and Inter-region relation aware PRA-Net [3].
Backbones act as feature extractors whose features are used for knowledge distillation in exemplar-free class-incremental learning methods. None of the previous works on 3D point cloud class incremental learning focuses on the importance of feature extraction. We investigate the significance of the feature extractor by evaluating different state-of-the-art classification networks as feature extractors in the PointCLIMB settings. Our findings indicate that backbones that incorporate both intra-region structure and inter-region relation awareness perform significantly better in the PointCLIMB settings compared to other types, as depicted in Table 1. One reason may be that the extracted features of PRA-Net [3] best describe the topology, similarity, proximity, and symmetry of a point cloud when compared with the graph-based semantic features of DGCNN. Another reason is the incorporation of gated units in the IRL blocks of PRA-Net, which resemble highway-connection classifier networks (HCNs) [9]. We conclude that networks with gated units exhibit high stability, as explained by [9] in the 2D realm.
### Knowledge Distillation
The process of Knowledge Distillation in 3D-EFCIL involves the transfer of geometric and topological knowledge from a previously trained model (known as the teacher model: \(O(p,\theta_{O})\)) to a new model (referred to as the student model: \(S(p,\theta_{S})\)). This transfer of knowledge aims to enable the student model to accurately classify new point cloud categories, while still retaining its ability to classify the categories it was previously trained on.
**Learning Without Forgetting** (LwF) enables the student model to learn new tasks without forgetting the knowledge it has already acquired, by leveraging the distillation of knowledge from the teacher model and can be expressed as:
\[\mathcal{L}_{LwF}=\lambda\mathcal{L}_{distill}+\mathcal{L}_{class} \tag{2}\]
where \(\mathcal{L}_{distill}\) is the distillation loss term, which measures the difference between the output probabilities of the teacher model and the student model, and is given by:
\[\mathcal{L}_{distill}=-\frac{1}{N}\sum_{i=1}^{N}(\hat{g}^{i}_{O};\tau)\log\big{(}(\hat{g}^{i}_{S};\tau)\big{)} \tag{3}\]
Here, \(\hat{g}^{i}_{O}\) represents the output probability of the teacher model for the i-th input sample, and \(\hat{g}^{i}_{S}\) represents the output probability of the student model for the same input sample
with temperature \(\tau\). The second term, \(\mathcal{L}_{class}\), is the classification loss term, which measures the deviation of the student model's output probabilities from the true labels of the current task, and is given by:
\[\mathcal{L}_{class}=-\frac{1}{N}\sum_{i=1}^{N}y^{i}\log\hat{y}_{S}^{i} \tag{4}\]
Here, \(y^{i}\) represents the true label of the i-th input sample. Finally, \(\lambda\) is a hyperparameter that balances the relative importance of the distillation and classification loss terms.
**Census Knowledge Distillation** is an improved variant of LwF that dynamically adjusts the weight of the distillation loss term for every new increment of task \(t\), based on the number of classes \(\eta\) in the current task and the number of tasks elapsed \(T\). This dynamic weight helps to ensure robustness towards catastrophic forgetting and can be expressed as
\[\mathcal{L}_{census}=(\eta*T)\mathcal{L}_{distill} \tag{5}\]
where \(\mathcal{L}_{distill}\) is defined in Eq. 3. Allocating dynamic weights to the knowledge distillation loss based on the arrival of new tasks enhances the importance of geometric features associated with the newly arrived classes. This approach can help mitigate the issue of task recency bias, ultimately leading to improved performance. To investigate the significance of the employed Census knowledge distillation loss, we train PointNet [16], PointNet++ [17], DGCNN [25], PointMLP [12], and PRA-Net [3] in the PointCLIMB settings and compare with LwF [10], as depicted in Table 1.
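A minimal PyTorch sketch of Eqs. (2)-(5) is given below; this is our own illustration rather than the authors' released code, and the tensor layout (old-class logits occupying the first `n_old` columns of the student output) and default hyper-parameters are assumptions.

```python
import torch.nn.functional as F

def distillation_loss(student_logits_old, teacher_logits_old, tau=2.0):
    """Eq. (3): cross-entropy between temperature-softened teacher and student
    predictions over the old classes."""
    p_teacher = F.softmax(teacher_logits_old / tau, dim=1)
    log_p_student = F.log_softmax(student_logits_old / tau, dim=1)
    return -(p_teacher * log_p_student).sum(dim=1).mean()

def lwf_loss(student_logits, teacher_logits_old, labels, n_old, lam=1.0, tau=2.0):
    """Eq. (2): lambda * L_distill + L_class."""
    distill = distillation_loss(student_logits[:, :n_old], teacher_logits_old, tau)
    return lam * distill + F.cross_entropy(student_logits, labels)

def census_loss(student_logits, teacher_logits_old, labels, n_old, eta, T, tau=2.0):
    """Eq. (5): weight L_distill by (eta * T) -- eta new classes, T tasks elapsed --
    combined with the classification loss as in the training pipeline."""
    distill = distillation_loss(student_logits[:, :n_old], teacher_logits_old, tau)
    return (eta * T) * distill + F.cross_entropy(student_logits, labels)
```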
## 4 Experiments
**Training pipeline** of **PointCLIMB** is based on a fine-tuning strategy, which involves two steps. Firstly, a base task is trained on a backbone architecture using the cross-entropy loss given by Eq. 4. We refer to this model as the teacher model. Subsequently, when an incremental task containing novel classes is presented, we compute a weighted knowledge distillation loss between the logits of the weight-frozen teacher model and the weight-shared student model, as shown in Eq. 5. This loss is combined with the cross-entropy loss to train on the novel set of classes. In other words, the student model learns from the teacher model while retaining the knowledge of the previous tasks, by jointly minimizing the cross-entropy loss and the weighted knowledge distillation loss. This approach ensures that the student model adapts to the new task while preventing catastrophic forgetting of the previously learned tasks.
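The classifier-extension step of this pipeline can be sketched as follows; module names and structure are our own assumptions for illustration, not the exact production implementation.

```python
import copy
import torch
import torch.nn as nn

def make_student(teacher_backbone: nn.Module, teacher_head: nn.Linear, eta: int):
    """Freeze the teacher, initialise the student from its weights, and extend the
    linear classifier with rows for the eta novel classes (V_S = V_O U V_zeta)."""
    student_backbone = copy.deepcopy(teacher_backbone)      # initialised from the teacher
    old_classes, dim = teacher_head.out_features, teacher_head.in_features
    student_head = nn.Linear(dim, old_classes + eta)
    with torch.no_grad():
        student_head.weight[:old_classes] = teacher_head.weight
        student_head.bias[:old_classes] = teacher_head.bias
    for p in teacher_backbone.parameters():                 # teacher stays frozen
        p.requires_grad_(False)
    return student_backbone, student_head
```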
**Implementation Details** To train **PointCLIMB**, we used 1024 points and trained each task for 40 epochs using the Adam optimizer with a learning rate of 0.0001 and a batch size of 32. To dynamically vary the logits of the classification layer based on the number of arrived novel classes, we referred to the 3D-FSCIL [4] approach. We also utilized their class and label mapper for arrived novel data and modified the incremental data loader to make it exemplar-free. Our implementation of PointCLIMB was based on the PyTorch framework and ran on an NVIDIA Quadro GV100 32GB.
**Evaluation** To evaluate **PointCLIMB**, we used the ModelNet40 dataset and assessed the performance of different backbones. Specifically, we considered a scenario where the base task consists of 20 random classes, and each novel task contains 5 randomly chosen classes for class-incremental learning, as depicted in Table 1. To quantify the performance of different backbones, we compared naive fine-tuning, LwF [10], and Census. We further evaluated PointCLIMB on three different scenarios modeled by the veristic task sampler. The first scenario had 20 random classes in the base task and 5 random classes in each novel task, with a total of 5 tasks; its results are reported in Table 4. The second scenario involved 10 random classes in the base task and 5 random classes in each novel task, with a total of 7 tasks. The results of this scenario are reported in Table
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multicolumn{1}{c}{**Methods**} & **Loss** & **20** & **5** & **5** & **5** & **5** \\ \hline \multirow{4}{*}{**PointNet**[16]} & _Joint_ & 94.77 & 92.11 & 86.98 & 85.43 & 84.67 \\ & _FT_ & 94.10 & 02.65 & 01.61 & 00.31 & 00.24 \\ & LwF & **95.25** & **53.37** & **04.92** & **02.62** & **00.32** \\ & **Census** & **94.01** & **67.77** & **49.25** & **26.95** & **18.47** \\ & _Joint_ & 93.17 & 92.76 & 87.34 & 85.29 & 85.32 \\ & _FT_ & 92.33 & 01.23 & 01.28 & 00.58 & 00.24 \\ & **LwF** & **92.51** & **61.34** & **12.63** & **07.32** & **01.86** \\ & **Census** & **93.39** & **71.24** & **56.26** & **44.09** & **29.09** \\ \cline{2-7} & _Joint_ & 95.44 & 94.62 & 88.13 & 86.15 & 85.87 \\ & \(F\) & 94.67 & 01.88 & 01.23 & 00.63 & 00.68 \\ & _LwF_ & **95.50** & **61.21** & **66.74** & **01.99** & **00.93** \\ & **Census** & **95.25** & **64.39** & **52.30** & **27.49** & **28.68** \\ \cline{2-7} & _Joint_ & 96.52 & 93.93 & 88.49 & 87.96 & 86.69 \\ & _FT_ & 96.42 & 05.47 & 01.40 & 00.40 & 00.60 \\ & _LwF_ & 96.38 & **47.98** & **11.67** & **03.36** & **00.76** \\ & **Census** & **96.58** & **72.37** & **52.56** & **36.27** & **27.19** \\ \cline{2-7} & _Joint_ & 96.84 & 95.41 & 90.94 & 88.01 & 87.23 \\ \cline{2-7} & _FT_ & **96.67** & 01.16 & 00.35 & 00.40 & 00.32 \\ & **LwF** & **96.67** & **52.14** & **04.17** & **00.40** & **00.89** \\ & **Census** & **96.92** & **72.31** & **59.72** & **47.61** & **35.73** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance of different backbones with PointCLIMB on the ModelNet40 dataset. _Joint_ depicts the upper bound and _FT_ represents the fine-tuning approach. We demonstrate the supremacy of Census over the other knowledge distillation losses for each backbone on 3D-EFCIL. We represent our findings by 1) best by **bold underline** and 2) second best by **bold** accuracy values.
4. Lastly, we considered a scenario consisting of tasks with all uniform classes, where 4 classes were encountered in both the base and novel tasks, with a total of 10 tasks. The results of this scenario are reported in Table 4. The endless scenarios representing the actual paradigm of Class Incremental learning can be modeled using PointCLIMB. We benchmarked three possible scenarios of PointCLIMB for benchmarking, demonstrating the flexibility and effectiveness of our approach.
**Results and Discussion** As shown in Table 1, we provide upper (Joint) and lower bounds (Fine Tuning) on different backbone architectures. To evaluate the performance of LWF [10] with different backbones, we report task-wise accuracies. Moreover, we introduce a novel variant of LWF loss, known as Census knowledge distillation loss, and observe a remarkable improvement in performance compared to the traditional LWF loss. Among all the backbone architectures, PRA-Net with Census Knowledge distillation loss outperforms others due to its capability to extract superior geometric features and use gated units to maintain stability. Thus, for modeling pragmatic scenarios, we recommend using PRA-Net and Census Knowledge Distillation 5 as the superior baseline for 3D-EFCIL towards future research.
To evaluate the performance of the pragmatic scenarios modeled by the veristic task sampler described in Algorithm 1 on different backbones using LWF loss versus the suggested approach (PRA-Net + Census Knowledge Distillation), we conduct extensive ablation experiments with 5, 7, and 10 tasks in Tables 4, 2, and 3, respectively. We run 3 experiments with different random seeds to assess the true response of backbones towards realistic settings sampled by PointCLIMB, and report the mean and standard deviation of all the backbones for better comprehensibility. Our results demonstrate that the suggested baseline outperforms all other backbones with all possible pragmatic settings. Apart from our recommended baseline, PointNet++ [17] performs the second-best in most scenarios and various incremental tasks. Moreover, we observe that all other architectures do not produce stable outcomes, unlike our proposed baseline that remains stable throughout the scenarios and incremental tasks.
### Limitations
Despite its potential, 3D exemplar-free class incremental learning has some limitations that must be considered. One of the main challenges is the need for large amounts of high-quality data to train and evaluate the model. This can be particularly difficult in 3D environments, where obtaining and processing data can be time-consuming and expensive. The current backbones that we test are not robust to noise, which is a major limitation. Additionally, 3D exemplar-free class incremental learning may face limitations when dealing with complex and highly variable objects, where there may be significant intra-class and inter-class variation within and among incremental tasks. This can lead to difficulty in accurately classifying these objects and may require more specialized models or additional training data.
Although there are some limitations to 3D exemplar-free class incremental learning, we are confident that our study provides valuable insights within point cloud environments. We are optimistic that our research will inspire further investigation into these limitations, ultimately leading to the development of more resilient and efficient methods for 3D exemplar-free class incremental learning.
## 5 Conclusions
In this paper, we ventured into the uncharted territory of exemplar-free class incremental learning on point clouds (3D-EFCIL) and presented a pragmatic experimental
\begin{table}
\begin{tabular}{r c c c c c c c} \hline \hline
**Methods** & **10** & **5** & **5** & **5** & **5** & **5** & **5** \\ \hline
**Pointnet**[16] & 96.11 \(\pm\) 1.29 & 40.88 \(\pm\) 9.36 & 20.39 \(\pm\) 5.63 & 10.26 \(\pm\) 2.29 & 4.56 \(\pm\) 1.19 & **5.59 \(\pm\) 2.29** & 4.47 \(\pm\) 1.62 \\
**Pointnet++**[17] & 95.84 \(\pm\) 2.17 & 47.20 \(\pm\) 8.86 & **22.61 \(\pm\) 3.49** & **13.09 \(\pm\) 5.74** & **6.79 \(\pm\) 1.33** & 4.82 \(\pm\) 1.67 & **6.73 \(\pm\) 4.80** \\
**DGCNN**[25] & **96.78 \(\pm\) 0.73** & **54.80 \(\pm\) 13.58** & 11.60 \(\pm\) 4.42 & 5.59 \(\pm\) 0.97 & 6.22 \(\pm\) 0.64 & 4.42 \(\pm\) 1.17 & 3.92 \(\pm\) 2.25 \\
**PointMLP**[12] & 95.04 \(\pm\) 0.59 & 44.40 \(\pm\) 4.81 & 13.36 \(\pm\) 3.68 & 5.72 \(\pm\) 3.33 & 4.64 \(\pm\) 0.72 & 5.39 \(\pm\) 0.80 & 3.60 \(\pm\) 0.27 \\ \hline
**Ours (PRA-Net [3] + Census)** & **97.45 \(\pm\) 1.86** & **69.49 \(\pm\) 3.99** & **49.92 \(\pm\) 6.74** & **36.19 \(\pm\) 5.19** & **33.47 \(\pm\) 2.28** & **24.04 \(\pm\) 3.89** & **21.96 \(\pm\) 2.8** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of different backbones on ModelNet40 considering one of the scenarios with 7 tasks modeled by PointCLIMB. We represent our findings by 1) best by **bold underline** and 2) second best by **bold** accuracy values.
\begin{table}
\begin{tabular}{r c c c c c c c c c c} \hline \hline
**Methods** & **4** & **4** & **4** & **4** & **4** & **4** & **4** & **4** & **4** & **4** \\ \hline
**Pointnet**[16] & 96.87 \(\pm\) 1.96 & 32.81 \(\pm\) 10.55 & 17.18 \(\pm\) 8.80 & 9.25 \(\pm\) 2.65 & 3.45 \(\pm\) 1.28 & 2.18 \(\pm\) 0.09 & **3.34 \(\pm\) 1.18** & 4.36 \(\pm\) 2.29 & **4.01 \(\pm\) 1.95** & 3.52 \(\pm\) 1.07 \\
**Pointnet++**[17] & **97.18 \(\pm\) 1.54** & **32.81 \(\pm\) 13** & **21.45 \(\pm\) 4.47** & 8.33 \(\pm\) 4.40 & 5.80 \(\pm\) 2.86 & 5.33 \(\pm\) 0.99 & 3.09 \(\pm\) 0.54 & **4.61 \(\pm\) 0.46** & 2.36 \(\pm\) 1.14 & 3.76 \(\pm\) 1.71 \\
**DoCN**[25] & **97.81 \(\pm\) 2.55** & 50.12 \(\pm\) 12.29 & 20.88 \(\pm\) 5.88 & 7.66 \(\pm\) 4.28 & 5.06 \(\pm\) 2.24 & 2.46 \(\pm\) 0.14 & 2.84 \(\pm\) 1.09 & 4.46 \(\pm\) 0.37 & 3.24 \(\pm\) 1.13 & 2.13 \(\pm\) 0.42 \\
**PointMLP**[12] & 96.875 \(\pm\) 0.43 & 30.68 \(\pm\) 7.69 & 16.66 \(\pm\) 6.80 & **9.5 \(\pm\) 3.47** & **7.67 \(\pm\) 1.19** & **5.65 \(\pm\) 1.11** & 2.30 \(\pm\) 1.27 & 4.2 \(\pm\) 1.15 & 2.11 \(\pm\) 0.87 & **4.93 \(\pm\) 2.33** \\ \hline
**Ours (PRA-Net [3] + Census)** & 97.12 \(\pm\) 12.22 & **56.76 \(\pm\) 11.47** & **31.4 \(\pm\) 5.88** & **25.70 \(\pm\) 5.26** & **20.97 \(\pm\) 3.38** & **19.53 \(\pm\) 5.53** & **16.85 \(\pm\) 2.27** & **14.58 \(\pm\) 3.73** & **10.34 \(\pm\) 3.98** & **5.18 \(\pm\) 2.41** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of different backbones on ModelNet40 considering one of the scenarios with uniform tasks modeled by PointCLIMB. We represent our findings by 1) best by **bold underline** and 2) second best by **bold** accuracy values.
setup that mirrors real-world continual learning scenarios. Our investigation into various point cloud classification backbones yielded encouraging results on the ModelNet40 dataset for 3D exemplar-free class incremental learning. Our analysis revealed that backbones with both intra-region and inter-region relation awareness significantly outperformed global topology-based and purely local neighbourhood-based methods on 3D-EFCIL. Furthermore, we explored how the use of the weighted knowledge distillation loss (Census) could alleviate the issue of catastrophic forgetting in 3D-EFCIL. We anticipate that our work will inspire further research in this field, leading to the emergence of more effective and robust methods for exemplar-free class incremental learning on point clouds.
## 6 Broader Impact
The main goal of this research is to establish realistic benchmarks for 3D exemplar-free class incremental learning, and explore the optimal feature extractors / backbone, and network design choices for knowledge distillation loss in this context.
3D exemplar-free incremental learning has the potential to generate significant and far-reaching impacts in various fields. By enabling machines to learn new object categories without prior examples, exemplar-free incremental learning can facilitate real-time learning and adaptation in dynamic scenarios, making it ideal for robotics, computer vision, and artificial intelligence. This research can improve the efficiency of machine learning algorithms by reducing computational costs and enable autonomous vehicles and robots to operate more safely and efficiently. Additionally, exemplar-free incremental learning can enhance quality control in manufacturing processes by detecting defects and ensuring product specifications are met. Overall, this research has the potential to transform several industries, paving the way for innovative solutions and improving efficiency, accuracy, and safety.
|
2306.08121 | Better Generalization with Semantic IDs: A Case Study in Ranking for
Recommendations | Randomly-hashed item ids are used ubiquitously in recommendation models.
However, the learned representations from random hashing prevents
generalization across similar items, causing problems of learning unseen and
long-tail items, especially when item corpus is large, power-law distributed,
and evolving dynamically. In this paper, we propose using content-derived
features as a replacement for random ids. We show that simply replacing ID
features with content-based embeddings can cause a drop in quality due to
reduced memorization capability. To strike a good balance of memorization and
generalization, we propose to use Semantic IDs -- a compact discrete item
representation learned from frozen content embeddings using RQ-VAE that
captures the hierarchy of concepts in items -- as a replacement for random item
ids. Similar to content embeddings, the compactness of Semantic IDs poses a
problem of easy adaption in recommendation models. We propose novel methods for
adapting Semantic IDs in industry-scale ranking models, through hashing
sub-pieces of of the Semantic-ID sequences. In particular, we find that the
SentencePiece model that is commonly used in LLM tokenization outperforms
manually crafted pieces such as N-grams. To the end, we evaluate our approaches
in a real-world ranking model for YouTube recommendations. Our experiments
demonstrate that Semantic IDs can replace the direct use of video IDs by
improving the generalization ability on new and long-tail item slices without
sacrificing overall model quality. | Anima Singh, Trung Vu, Nikhil Mehta, Raghunandan Keshavan, Maheswaran Sathiamoorthy, Yilin Zheng, Lichan Hong, Lukasz Heldt, Li Wei, Devansh Tandon, Ed H. Chi, Xinyang Yi | 2023-06-13T20:34:15Z | http://arxiv.org/abs/2306.08121v2 | # Better Generalization with Semantic IDs: A Case study in Ranking for Recommendations
###### Abstract
Training good representations for items is critical in recommender models. Typically, an item is assigned a unique randomly generated ID and is commonly represented by learning an embedding corresponding to the value of the random ID. Although widely used, this approach has limitations when the number of items is large and items are power-law distributed -- typical characteristics of real-world recommendation systems. This leads to the item cold-start problem, where the model is unable to make reliable inferences for tail and previously unseen items. Removing these ID features and their learned embeddings altogether to combat the cold-start issue severely degrades the recommendation quality. Content-based item embeddings are more reliable, but they are expensive to store and use, particularly for users' past item interaction sequences. In this paper, we use Semantic IDs, a compact discrete item representation learned from content embeddings using RQ-VAE that captures the hierarchy of concepts in items. We showcase how we use them as a replacement for item IDs in a resource-constrained ranking model used in an industrial-scale video sharing platform. Moreover, we show how Semantic IDs improve the generalization ability of our system without sacrificing top-level metrics.
## 1 Introduction
Recommender systems are widely used across the industry to serve personalized content to users. They play a critical role in helping users discover novel content such as apps (Cheng et al., 2016), music (Kim et al., 2007; Koren et al., 2009) and videos (Covington et al., 2016; Zhao et al., 2019; Gomez-Uribe and Hunt, 2015). In this paper, we consider a neural ranking model in a large industrial-scale video recommendation system. Our recommendation corpus has 100s of millions of videos. Every video gets a unique identifier (called video ID), which is a random string, devoid of any meaning. The ranking model gets as input multiple video IDs, for example, what the user is watching and what is to be recommended. Furthermore, users' features are also typically represented as a list of video IDs they have previously watched - a critical signal for personalization. Given that the central goal of a recommendation system is to connect users to videos, learning good representations of video ID is critical.
The widely used technique to encode categorical features such as video ID is to learn high-dimensional vectors (embeddings). Embedding learning is widely used across many models and is well understood in domains such as natural language processing. However, given the size of the video corpus and the power-law distribution of the views, there are a number of challenges associated with learning good embeddings for video IDs.
Large corpus size and power-law distribution: Given our extremely large video corpus with 100s of millions of videos, learning one embedding vector per video can be quite resource-intensive. More importantly, we have a long-tail of videos with very few views. This leads to the cold-start problem, i.e., we are unable to learn good representations for those videos. Furthermore, in our
recommendation system, many new videos get uploaded every day, which makes it hard to maintain a reliable 1:1 mapping.
Random collisions: The alternative approach is to use the hashing trick (Weinberger et al., 2009) that maps many videos to the same row -- a technique we adopt in our ranking model. However, this causes random collisions because the IDs of videos are random strings.
Due to these inherent limitations, we are motivated to develop better ways to learn embeddings for the video features. A possible approach is to use content-based embeddings that we have access to in the system. These embeddings capture the topicality of the video in a fine-grained manner based on audio-visual features. But as we see in Section 5.2, replacing the embedding table of video IDs with these content embeddings causes a large drop in the capacity of the system, affecting model quality.
In this work, we replace video IDs with Semantic IDs (proposed in Rajput et al. (2023)) and develop a methodology to learn video representations based on these Semantic IDs. We use RQ-VAE to quantize content embeddings into tuples of integers (e.g. (33, 202, 11, 2)) called Semantic IDs. Similar videos have overlapping sets of integers, allowing for semantic collisions when learning embeddings.
Our contributions are as follows:
1. We demonstrate how we obtain Semantic IDs for videos in our recommendation system using content embeddings. We show that Semantic IDs capture meaningful hierarchical relationships between videos.
2. We propose a two-stage approach: 1) Efficient compression of content embeddings into Semantic IDs and 2) Training the downstream ranking model with Semantic IDs. The efficient compression in Stage 1 means that storing Semantic IDs uses a small fraction of the storage relative to content embeddings. This unlocks the possibility of using content signals for personalization based on users' watch history, in resource-constrained production models.
3. Through extensive experiments on data from our industrial video recommendation platform, we demonstrate that semantically meaningful collisions are superior to random hashing, given comparable model capacity. We also show that Semantic IDs can replace video IDs to improve generalization of our industrial-scale ranking model.
Since Semantic IDs offer generalization benefits, they also provide relief from the popularity bias issue, wherein popular videos get better representations due to their higher exposure, which causes them to be recommended more.
## 2 Related Work
Embedding learning: Recommender models rely on learning good representations of categorical features. A common technique to encode categorical features is to train embeddings via one-hot encoding. Word2vec (Mikolov et al., 2013) popularized this in the context of language models. The hashing trick (Weinberger et al., 2009) is typically used when the cardinality is high, but it causes random collisions. Multiple hashing (Zhang et al., 2020) offers some relief, but still leads to random collisions. Deep Hash Embedding (Kang et al., 2021) circumvents this problem by not maintaining embedding tables, but at the cost of increased computation in the hidden layers. In contrast, we use Semantic IDs -- a compute-efficient way to avoid random collisions during embedding learning for item IDs. By enabling collisions between semantically related items, Semantic IDs improve generalization in recommender models.
Cold-start and content information: Content-based recommender models have been proposed to combat cold-start issues (e.g. Schein et al. (2002), Volkovs et al. (2017)). Recently, embeddings derived from content information are also popular (e.g. DropoutNet (Volkovs et al., 2017), CC-CC (Shi et al., 2019) and Du et al. (2020)). Content embeddings have also been incorporated to improve recommendations beyond cold-start applications. For example, PinSage (Ying et al., 2018) aggregates visual, text and engagement information to represent items. And PinnerFormer (Pancha et al., 2022) uses sequences of PinSage embeddings corresponding to item history to build a sequential recommendation model. In contrast to these efforts, our goal is to develop content-derived
representations that not only generalize well, but can also improve performance relative to using video ID features -- a significantly challenging task. Furthermore, unlike PinnerFormer (Pancha et al., 2022), which is used for offline inference, our focus is to improve generalization of a ranking model used for real-time inference. Therefore, approaches that significantly increase resource costs (including storage, training and serving) are infeasible to deploy in production. Semantic IDs offer an efficient compression of content embeddings into discrete tokens, making it feasible to use content signals in production recommendation systems.
_Discrete representations:_ Several techniques exist to discretize embeddings. For instance, VQ-VAE (Van Den Oord et al., 2017), VQ-GAN (Esser et al., 2021) and their variants are used for generative modeling: Parti (Yu et al., 2022) uses Vit-VQGAN (Yu et al., 2021) for generative image modeling and SoundStream (Zeghidour et al., 2021) uses RQ-VAE for generative audio modeling. TIGER (Rajput et al., 2023) used RQ-VAE in the context of recommender applications, which we utilize as well. Traditional techniques like Product Quantization (Jegou et al., 2010) and its variants are used by many recommender models (e.g. MGQE (Kang et al., 2020) and Hou et al. (2022)), but these do not offer hierarchical semantics, which we leverage in our work.
## 3 Background
In this section, we present key aspects of our real-world industrial-scale video ranking system. These factors provide context and define the constraints and requirements that we need to consider.
Ranker Model: Our production ranking model is a multitask model for video recommendation, which recommends the next video to watch given a video a user is currently watching. Video ID features are the most important features to the model. We represent the current video and the video to be ranked as video IDs. Furthermore, we also represent a user as a sequence of video IDs they have interacted with - a critical signal for personalized recommendations.
Sequential Training: The model is trained continuously in a sequential manner, i.e., the training is done using logged data in chronological order, and the training continues as new data come in. Since the underlying video corpus and users are constantly evolving, the model needs to be able to generalize well under data-distribution shift. In this paper, we focus on data-distribution shift of the video corpus.
Resource Constraints: Due to resource constraints and latency budgets dictated by real-time inference, we cannot increase resource usage significantly. For example, a possible approach to avoid using video IDs is to directly use content embeddings as input to the model. However, this approach isn't viable due to high resource costs. For instance, storing 100s of 256-dimensional float vectors to represent users' watch history per training example, for industrial applications with billions of examples, does not scale well. Loading and processing such large inputs also significantly slows down training and increases latency at inference. Furthermore, if we use content embeddings to replace video IDs, we lose the model capacity associated with the video embedding parameters, which can lead to model quality degradation. In theory, we can increase the hidden layer capacity to compensate. However, in practice, this exacerbates computational costs, since model capacity in the form of dense layers is much more compute-heavy than embedding parameters. For these reasons, we cannot adopt this approach.
## 4 Our Proposed Approach
### 4.1 Overview
Given content embeddings for our corpus of videos, we propose an _efficient_ two-stage approach* to leverage content signal in a large scale ranking model, with the goal to improve generalization under data-distribution shift.
Footnote *: Using content embeddings directly in the ranking model would be an example of a single-stage alternative.
* _Stage 1: Efficient compression of content embeddings into discrete Semantic IDs_. We use a Residual Quantization technique called RQ-VAE (Lee et al., 2022; Zeghidour et al., 2021;
Rajput et al., 2023) to quantize content embeddings -- 256-dimensional float vectors -- into discrete tokens that capture semantic information about a video. We store all the tokens corresponding to a video using a single 64-bit integer (a minimal packing sketch follows this list). This translates to a \((256\times 4\text{ bytes})/(8\text{ bytes})=128\times\) compression in terms of storage for a single instance of a video. This compression was imperative to leverage content signals for personalization using users' watch history. Once trained, we freeze the RQ-VAE model and use it for training the downstream ranking model in Stage 2.
* _Stage 2: Training Ranking model with Semantic IDs._ We use the quantization model from Stage 1, to map each video to its Semantic ID and then train embeddings for Semantic ID n-grams, along with the rest of the ranking model (Section 4.3).
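For concreteness, the following is a minimal sketch of how the \(L=4\) codebook indices of a Semantic ID could be packed into, and recovered from, a single 64-bit integer as referenced in Stage 1 above. The exact bit layout is not specified in this paper; the sketch simply assumes 16 bits per token.

```python
L, BITS = 4, 16  # quantization levels and assumed bits per token (codebook size K <= 2**16)

def pack_semantic_id(tokens):
    """Pack the L codebook indices of a Semantic ID into one 64-bit integer."""
    packed = 0
    for t in tokens:
        packed = (packed << BITS) | t
    return packed

def unpack_semantic_id(packed, num_levels=L):
    """Recover the L codebook indices from the packed 64-bit integer."""
    tokens = []
    for _ in range(num_levels):
        tokens.append(packed & ((1 << BITS) - 1))
        packed >>= BITS
    return tokens[::-1]

assert unpack_semantic_id(pack_semantic_id([33, 202, 11, 2])) == [33, 202, 11, 2]
```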
Design Choices: A key design choice in our proposal is to train and then freeze the RQ-VAE model from Stage 1. The frozen RQ-VAE model is used to generate Semantic IDs for training and serving the ranking model. As discussed in Section 3, the ranking model is sequentially trained on recently logged data. Recent data may include videos that did not exist when the RQ-VAE model was trained. This raises a potential concern that freezing the model could hurt the performance of the ranking model over time. In Section 5.3.3, we show that ranking models using Semantic IDs derived from two RQ-VAE models trained on older vs. more recent data perform comparably, suggesting that the learned semantics are stable over time. In our work, we use RQ-VAE (Lee et al., 2022; Zeghidour et al., 2021; Rajput et al., 2023) instead of VQ-VAE (Van Den Oord et al., 2017) to quantize content embeddings (Section 4.2), since RQ-VAE organically captures hierarchical semantic structure (Figure 2), offering interpretability.
### 4.2 RQ-VAE for Semantic IDs
Given a content embedding vector for a video, we generate Semantic IDs using Residual-Quantized Variational AutoEncoder (RQ-VAE) Lee et al. (2022); Zeghidour et al. (2021); Rajput et al. (2023) that applies quantization on residuals at multiple levels (see Figure 1).
There are three jointly-trained components: (1) an encoder \(\mathcal{E}\) that maps the content embedding \(\mathbf{x}\in\mathbb{R}^{D}\) to a latent vector \(\mathbf{z}\in\mathbb{R}^{D^{\prime}}\), (2) a residual vector-quantizer with \(L\) levels, each with a codebook \(\mathcal{C}_{l}:=\{\mathbf{e}_{k}^{l}\}_{k=1}^{K}\), where \(\mathbf{e}_{k}^{l}\in\mathbb{R}^{D^{\prime}}\) and \(K\) is the codebook size; the vector-quantizer recursively quantizes the residual \(\mathbf{r}_{l}\) at each level \(l\) to the nearest codebook vector \(\mathbf{e}_{c_{l}}\) (Figure 1) and (3) a decoder \(\mathcal{D}\) that maps the quantized latent \(\hat{\mathbf{z}}\) back to the original embedding space \(\hat{x}\).
We use the following loss to train the RQ-VAE model: \(\mathcal{L}=\mathcal{L}_{recon}+\mathcal{L}_{rqvae}\), where \(\mathcal{L}_{recon}=\|\mathbf{x}-\hat{\mathbf{x}}\|^{2}\) and \(\mathcal{L}_{rqvae}=\sum_{l=1}^{L}\ \beta\|\mathbf{r}_{l}-\text{sg}[\mathbf{e}_{c_{l}}]\|^{2}+\| \text{sg}[\mathbf{r}_{l}]-\mathbf{e}_{c_{l}}\|^{2}\) and sg denotes the stop-gradient operator. \(\mathcal{L}_{recon}\) aims to reconstruct the content embedding \(\mathbf{x}\). The first and the second terms in
Figure 1: RQ-VAE: The input vector \(\mathbf{x}\) is encoded into a latent \(\mathbf{z}\), which is then recursively quantized by looking up the nearest codebook vector of the residual at each level. Each orange box represents a codebook. Within these boxes, each circle labeled with integer indices (\(1\) to \(5\)) represents a codebook vector, and each circle labeled with \(\mathbf{z},\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3}\) represents the vector being quantized. We compute the reconstruction vector \(\hat{\mathbf{x}}\) by feeding the quantized latent \(\hat{\mathbf{z}}\) into the decoder. The sequence of indices of the nearest codebook vector at each level represents the Semantic ID for the item. In this figure, the item represented by \(\mathbf{x}\) has \((1,4,5,2)\) as its Semantic ID.
\(\mathcal{L}_{rqvae}\) encourages the encoder and the codebook vectors to be trained such that \(\mathbf{r}_{l}\) and \(\mathbf{e}_{c_{l}}\) move towards each other.
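To make the quantization step concrete, below is a minimal PyTorch sketch of the \(L\)-level residual quantizer and the \(\mathcal{L}_{rqvae}\) term described above. The encoder \(\mathcal{E}\), decoder \(\mathcal{D}\), reconstruction loss, and training loop are omitted, and the initialization, hyper-parameter names, and use of a straight-through estimator are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualQuantizer(nn.Module):
    """Sketch of L-level residual quantization with codebooks C_1..C_L."""
    def __init__(self, num_levels=4, codebook_size=256, latent_dim=64, beta=0.25):
        super().__init__()
        self.codebooks = nn.Parameter(torch.randn(num_levels, codebook_size, latent_dim) * 0.02)
        self.beta = beta

    def forward(self, z):
        # z: (batch, latent_dim) output of the encoder E
        residual, z_hat, rq_loss, ids = z, torch.zeros_like(z), 0.0, []
        for codebook in self.codebooks:              # levels l = 1..L
            dist = torch.cdist(residual, codebook)   # distance of r_l to every codebook vector
            c = dist.argmin(dim=-1)                  # index c_l of the nearest codebook vector
            e = codebook[c]                          # e_{c_l}
            # the two terms of L_rqvae; detach() plays the role of the stop-gradient sg[.]
            rq_loss = rq_loss + self.beta * F.mse_loss(residual, e.detach()) \
                              + F.mse_loss(residual.detach(), e)
            z_hat = z_hat + e
            residual = residual - e.detach()         # quantize the remaining residual next
            ids.append(c)
        # straight-through estimator so the reconstruction loss reaches the encoder
        z_hat = z + (z_hat - z).detach()
        semantic_id = torch.stack(ids, dim=-1)       # (batch, L) tuple such as (33, 202, 11, 2)
        return z_hat, semantic_id, rq_loss
```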
#### 4.2.1 Semantic IDs as hierarchy of concepts
We illustrate the hierarchy of concepts captured by Semantic IDs from the videos in our corpus. Section 5.1 details the hyper-parameters used to train the RQ-VAE model. Intuitively, we can think of Semantic IDs as forming a trie over videos, with higher levels representing coarser concepts and lower levels representing more fine-grained concepts. Figure 2 shows an example subtrie from our trained RQ-VAE model that captures a hierarchy within sports.
### 4.3 Semantic ID based Video Representation in Ranking
In this section, we discuss how we generate a video representation derived from Semantic IDs to use in the ranking model. For a given video \(v\), an RQ-VAE model (Section 4.2) with \(L\) levels generates a Semantic ID \((c_{1}^{v},...c_{L}^{v})\).
We propose an n-gram based representation of Semantic ID to represent a video. An n-gram is a sequence of n-tokens. First, we extract each n-gram of the Semantic ID for a given video. Let \(|\text{n-grams}|\) represent the number of n-grams per video. For example, for unigram, we extract \(|\text{n-grams}|=L\) unigrams per video. Next, we associate a separate embedding row for each distinct value of the n-gram. More specifically, for an n-gram of Semantic IDs derived from an RQ-VAE model with codebook size \(K\), we train \(|\text{n-grams}|\) embedding tables each with \(K^{n}\) rows. As a result, during training, videos that share Semantic ID tokens will collide to the same embedding row. This allows us to generalize across related popular and long-tail videos. Finally, we sum over \(|\text{n-grams}|\)
\begin{table}
\begin{tabular}{l l l} \hline \hline Shared prefix length & Average pairwise cosine similarity & Typical subtrie size \\ \hline
1 & 0.41 & 150,000-450,000 \\
2 & 0.68 & 20-150 \\
3 & 0.91 & 1-5 \\
4 & 0.97 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Aggregate metrics for videos sharing Semantic ID prefix of length \(n\). The typical subtrie size refers to the 25th-75th percentile range (with rounding).
Figure 2: A subtrie that captures sports videos.
embeddings to generate a feature representation for a video. We train the embedding tables along with the ranking model.
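Below is a minimal PyTorch sketch of the bigram variant of this representation (SID-3Bigram-sum in Section 5): each bigram position gets its own embedding table of \(K^{2}\) rows, and the per-position embeddings are summed into one feature vector. Class and variable names are illustrative, not the production code.

```python
import torch
import torch.nn as nn

class SIDBigramSumEmbedding(nn.Module):
    """Sketch of the bigram-based video representation: one embedding table of
    K^2 rows per bigram position, summed into a single feature vector."""
    def __init__(self, num_levels=4, codebook_size=256, embed_dim=256):
        super().__init__()
        self.K = codebook_size
        self.tables = nn.ModuleList(
            [nn.Embedding(codebook_size ** 2, embed_dim) for _ in range(num_levels - 1)]
        )

    def forward(self, semantic_ids):
        # semantic_ids: (batch, L) integer codebook indices (c_1, ..., c_L)
        out = 0
        for pos, table in enumerate(self.tables):
            # bigram (c_pos, c_{pos+1}) flattened to a row index in [0, K^2)
            row = semantic_ids[:, pos] * self.K + semantic_ids[:, pos + 1]
            out = out + table(row)
        return out  # (batch, embed_dim), trained jointly with the ranking model
```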
## 5 Experiments
In this section, we evaluate the generalization performance of using content signals to learn video representations for ranking. We either directly use content-embeddings as the video representation or use embeddings derived from Semantic IDs in lieu of video IDs. We compare their performance relative to the baseline that relies on random hashing of video IDs.
### 5.1 Experimental Setup
#### 5.1.1 Ranking Model
We use a simplified version of the multitask production ranking model (Section 3) for our experiments by only keeping an important subset of the tasks and input features. The ranking model uses 10s of millions of buckets for random hashing to accommodate 100s of millions of videos in our corpus and is trained sequentially on engagement data. We use random hashing of video IDs for three key features: users' watch history, current video, and the candidate video to be ranked.
#### 5.1.2 RQ-VAE Model
We use a \(256\) dimensional content embedding that incorporates visual and audio information about the video as the input. For the RQ-VAE model, we use a \(3\)-layer encoder with dimensions 256, 128 and 64 with ReLU activation for the first 2 layers. Similarly, we use a 3-layer decoder with dimensions 64, 128, and 256, with ReLU activation for the first 2 layers. We apply \(L=4\) levels of quantization using codebook size \(K^{\dagger}\) for each. Given these settings, each video \(v\) has \(4\) unigrams, i.e., \(\{c_{1}^{v},c_{2}^{v},c_{3}^{v},c_{4}^{v}\}\) and \(3\) bigrams, i.e., \(\{(c_{1}^{v}\times c_{2}^{v}),(c_{2}^{v}\times c_{3}^{v}),(c_{3}^{v}\times c_{4 }^{v})\}\).
Vector quantization techniques are known to suffer from _codebook collapse_ (Dhariwal et al., 2020), where the model only uses a small proportion of codebook vectors. We reset unused codebook vectors to a random vector within a batch (Zeghidour et al., 2021) to improve codebook utilization. We used \(\beta=0.25\) to compute the loss and trained the model until the reconstruction loss stabilized (\(\approx\)10s of millions of steps for our corpus).
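As an illustration, the following is a minimal sketch of the within-batch reset of unused codebook vectors described above; the function name and the exact reset rule (sampling a random latent from the batch) are assumptions for illustration.

```python
import torch

def reset_unused_codes(codebook, assigned, batch_latents):
    """Within-batch reset of unused codebook vectors to combat codebook collapse.
    codebook: (K, dim) vectors of one level, assigned: (batch,) indices selected
    in this batch, batch_latents: (batch, dim) vectors being quantized."""
    K = codebook.shape[0]
    used = torch.zeros(K, dtype=torch.bool, device=codebook.device)
    used[assigned] = True
    unused = (~used).nonzero(as_tuple=True)[0]
    if len(unused) > 0:
        # replace every unused code with a randomly chosen latent from the batch
        pick = torch.randint(0, batch_latents.shape[0], (len(unused),),
                             device=batch_latents.device)
        codebook.data[unused] = batch_latents[pick].detach()
    return codebook
```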
#### 5.1.3 Evaluation metrics
We sequentially train the ranking model on the first \(N\) days of data, where each day contains logged data generated from user interactions on that day. We evaluate the model's performance using AUC for CTR (indicated as CTR/AUC) -- one of the important training tasks -- on the data from the (\(N+1\))-th day. We further slice the metric on items that are introduced on the (\(N+1\))-th day. We refer to this as CTR-1D/AUC. The CTR/AUC and CTR-1D/AUC metrics evaluate the model's ability to generalize over time in the face of data distribution shift and on cold-start items, respectively. A \(0.1\%\) change in CTR/AUC is considered significant for our ranking model.
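For clarity, a minimal sketch of this sliced evaluation is shown below; the function and argument names are illustrative and assume that the eval-day examples, their CTR labels, model scores, and each item's first-seen day are available as arrays.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def sliced_ctr_auc(labels, scores, item_first_seen_day, eval_day):
    """CTR/AUC on the (N+1)-th day plus the CTR-1D/AUC slice on items introduced
    that day. labels/scores: arrays over the eval-day examples; item_first_seen_day:
    the day each example's item first appeared (assumes both classes occur in each slice)."""
    overall = roc_auc_score(labels, scores)                  # CTR/AUC
    new_items = item_first_seen_day == eval_day              # items introduced within 24 hours
    cold_start = roc_auc_score(labels[new_items], scores[new_items])  # CTR-1D/AUC
    return overall, cold_start
```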
### 5.2 Performance of Content Embeddings
As discussed in Section 3, storing dense embeddings for each video in users' watch history is extremely resource intensive. Hence, we were unable to train a large-scale ranking model that directly uses content embeddings to model users' watch history. To study the performance of content embeddings, we train a production-scale model that does not rely on users' watch history signal, and uses content embedding to represent the current video a user is watching, and the candidate video to be ranked. For fair comparison, the baseline also excludes users' watch history signal, and uses random hashing of video IDs for the other two features.
In Table 2, we show that directly using content embeddings, to replace random hashing of video IDs, improves the performance on new videos. However, the overall ranking performance drops
significantly. This illustrates that using content embeddings as input can improve generalization on cold-start items, but at the cost of overall performance. While it's possible to improve model performance by increasing dense layer parameters, this option would lead to prohibitively high serving cost in our use case. Instead, we focus on better leveraging content information with a fixed dense-layer size through alternative methods, such as Semantic IDs, that can leverage content signals more efficiently.
### 5.3 Performance of Semantic IDs
We report results from extensive experiments in order to answer the following research questions:
* **RQ1**: Are semantically meaningful collisions better than random collisions given the same model capacity?
* **RQ2**: Can Semantic IDs based content representation replace video ID in production settings?
* **RQ3**: Does freezing the RQ-VAE model affect production ranking performance over time?
#### 5.3.1 Semantic Collisions vs. Random Collisions (RQ1)
For this study, we use baselines that have the same model capacity, in terms of dense and embedding layers, as the models that use different Semantic ID-based representations. While these baselines have significantly fewer random hashing buckets than the production model described in Section 5.1.1, this setting allows us to understand the benefits of collisions between semantically related videos vs. random collisions. We defer the evaluation of Semantic IDs relative to production settings to Section 5.3.2.
Table 3 shows that Semantic IDs consistently outperform random hashing of video IDs, for both unigram- and bigram-based representations. The improvement in CTR/AUC evaluated on the next day's data, i.e., data not seen at training time, offers evidence for better generalization under data-distribution shift. We see a larger increase in performance for videos that are introduced within 24 hours (i.e., CTR-1D/AUC), suggesting that generalization benefits from semantic collisions are higher for cold-start videos.
#### 5.3.2 Semantic IDs to Replace video ID in Production Settings (RQ2)
In this section, we evaluate the applicability of Semantic IDs to replace video IDs in a large-scale production setting. Hence, we use the production ranking model described in Section 5.1.1 as our baseline.
\begin{table}
\begin{tabular}{c c c} \hline \hline & \%\(\Delta\) & \%\(\Delta\) \\ & CTR/AUC & CTR-1D/AUC \\ \hline Content embedding & -0.10 & 0.16 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance comparison of directly using content embeddings relative to the baseline with Video-ID based random hashing.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \# of embedding rows & embedding dimension & \%\(\Delta\) CTR/AUC & \%\(\Delta\) CTR-1D/AUC \\ \hline VID-Random-hashing & \(4\times K\) & 256 & - & - \\ \hline SID-4Unigram-sum & \(4\times K\) & 256 & +0.14 & +0.23 \\ \hline VID-Random-hashing & \(3\times K^{2}\) & 256 & - & - \\ \hline SID-3Bigram-sum & \(3\times K^{2}\) & 256 & +0.12 & +0.27 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison of Semantic ID based representation relative to a corresponding baseline with the same model capacity. All models have the same dense network architecture.
Table 4 shows gains in CTR-1D/AUC for both Semantic IDs based representations, showcasing their generalization ability on cold-start videos. With more semantically meaningful collisions, SID-4unigram-sum is able to yield an even higher gain for cold-start videos than SID-3bigram-sum. However, due to a significant reduction in embedding parameters, SID-4unigram-sum underperforms relative to the baseline in terms of overall model quality. On the other hand, SID-3bigram-sum, with fewer embedding parameters than the baseline, is able to improve overall performance. This highlights that SID-3bigram-sum is parameter-efficient and can be used in production settings to replace video IDs to improve generalization.
#### 5.3.3 Stability of Semantic IDs over time (RQ3)
To study the stability of Semantic IDs, we train two RQ-VAE models: RQ-VAE\({}_{v0}\) and RQ-VAE\({}_{v1}\), using data that are 6 months apart. Figure 3 shows that the performance of the production ranking model trained on recent engagement data (using SID-3Bigram-sum) is comparable for Semantic IDs derived from both RQ-VAE\({}_{v0}\) and RQ-VAE\({}_{v1}\). This confirms that the Semantic ID token space for videos learned via RQ-VAE is stable over time for use in the downstream production ranking model.
## 6 Conclusion and Future Work
In this paper, we discussed the disadvantages of using video ID features in our ranking model used in an industrial video sharing platform. We proposed the use of Semantic IDs derived from content embeddings and demonstrated how they can improve generalization by introducing meaningful collisions. In contrast to other work that directly uses content embeddings as features, Semantic IDs offer a compact content representation for videos that makes it feasible to use content signals for user watch history features -- critical for personalization. We demonstrated approaches to make effective use of Semantic IDs in our ranking model by employing a bigram scheme.
In the future, we plan to more thoroughly investigate the generalization benefits of Semantic IDs with varying number of levels and codebook sizes. Beyond applications in ranking models, we plan to explore Semantic IDs for sequential recommendations.
|
2306.07212 | Polyhedral Complex Extraction from ReLU Networks using Edge Subdivision | A neural network consisting of piecewise affine building blocks, such as
fully-connected layers and ReLU activations, is itself a piecewise affine
function supported on a polyhedral complex. This complex has been previously
studied to characterize theoretical properties of neural networks, but, in
practice, extracting it remains a challenge due to its high combinatorial
complexity. A natural idea described in previous works is to subdivide the
regions via intersections with hyperplanes induced by each neuron. However, we
argue that this view leads to computational redundancy. Instead of regions, we
propose to subdivide edges, leading to a novel method for polyhedral complex
extraction. Key to this are sign-vectors, which encode the combinatorial
structure of the complex. Our approach allows the use of standard tensor operations
on a GPU, taking seconds for millions of cells on a consumer-grade machine.
Motivated by the growing interest in neural shape representation, we use the
speed and differentiability of our method to optimize geometric properties of
the complex. The code is available at
https://github.com/arturs-berzins/relu_edge_subdivision . | Arturs Berzins | 2023-06-12T16:17:04Z | http://arxiv.org/abs/2306.07212v1 | # Polyhedral Complex Extraction from ReLU Networks using Edge Subdivision
###### Abstract
A neural network consisting of piecewise affine building blocks, such as fully-connected layers and ReLU activations, is itself a piecewise affine function supported on a polyhedral complex. This complex has been previously studied to characterize theoretical properties of neural networks, but, in practice, extracting it remains a challenge due to its high combinatorial complexity. A natural idea described in previous works is to subdivide the regions via intersections with hyperplanes induced by each neuron. However, we argue that this view leads to computational redundancy. Instead of regions, we propose to subdivide edges, leading to a novel method for polyhedral complex extraction. Key to this are sign-vectors, which encode the combinatorial structure of the complex. Our approach allows the use of standard tensor operations on a GPU, taking seconds for millions of cells on a consumer-grade machine. Motivated by the growing interest in neural shape representation, we use the speed and differentiability of our method to optimize geometric properties of the complex. The code is available on GitHub1.
## 1 Introduction

Our method is motivated by the observation that, due to the continuity of the activation function, all folded hyperplanes of a ReLU network are continuous across each other: a folded hyperplane does not terminate at a fold induced by another neuron, but continues
across another fold (see Figure 4). However, considering each region independently leads to computing the same new vertex or, alternatively, identifying the same redundant hyperplane on all \(2^{D-1}\) regions sharing a common edge, where \(D\) is the dimension of the input space. Our method alleviates this redundancy by leveraging continuity and disregarding the regions, instead using solely the unique vertices and edges, i.e. the 1-skeleton. The key idea to _edge subdivision_ is to sequentially consider each neuron, i.e. folded hyperplane, evaluate all vertices with the NN and compare the signs of a vertex pair sharing an edge. If the signs differ, linear interpolation determines the location of a new vertex. The edges containing the connectivity information are updated accordingly. For this, we propose to leverage sign-vectors, which indicate the pre-activation sign of every neuron at every point or for every cell of the complex and altogether encode the combinatorial structure of the whole complex.
Our edge subdivision approach is naturally parallel and the use of sign-vectors affords additional structure, which allows us to use basic tensor operations in standard ML frameworks and benefit from the GPU. This allows handling millions of elements in seconds. The method and the implementation are also agnostic to the input dimension \(D\). However, the use in \(D>8\) is impractical even for small networks due to the exponential growth of the complex (see Figure 8).
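To illustrate the core tensor operations, below is a minimal PyTorch sketch of one subdivision step of the 1-skeleton against a single neuron (steps 1-4 of Figure 3). It is a simplified illustration under assumed tensor layouts, not the released implementation, and the construction of new edges between new vertices sharing a face is omitted.

```python
import torch

def subdivide_edges(vertices, edges, values):
    """One subdivision step of the 1-skeleton against a single (folded) hyperplane.
    vertices: (V, D) coordinates, edges: (E, 2) vertex indices,
    values: (V,) pre-activation of the current neuron at every vertex."""
    sign = values >= 0
    va, vb = edges[:, 0], edges[:, 1]
    crossed = sign[va] != sign[vb]                     # step 3: edges cut by the hyperplane
    a, b = va[crossed], vb[crossed]
    # step 4: the zero of the affine map along the edge, (1 - t) f(a) + t f(b) = 0
    t = (values[a] / (values[a] - values[b])).unsqueeze(-1)
    new_vertices = (1 - t) * vertices[a] + t * vertices[b]
    new_ids = torch.arange(len(new_vertices), device=vertices.device) + len(vertices)
    vertices = torch.cat([vertices, new_vertices])
    # split each crossed edge (a, b) into (a, new) and (new, b); keep the rest
    halves = torch.cat([torch.stack([a, new_ids], dim=-1),
                        torch.stack([new_ids, b], dim=-1)])
    edges = torch.cat([edges[~crossed], halves])
    return vertices, edges, new_ids   # step 5 (connecting new vertices sharing a face) is omitted
```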
Our contributions are summarized as follows:
* A novel method to extract the polyhedral complex of ReLU NNs in general dimensions with a focus on performance.
* A novel set of experiments directly optimizing the geometric properties of the complex enabled by the fast and differentiable access to the polyhedral complex.
* An open source implementation2 using standard tensor operations in PyTorch and leveraging the GPU.
Footnote 2: github.com/arturs-berzins/relu_edge_subdivision
## 2 Related work
CPWA NNs: A NN is itself a CPWA function if it is a composition of affine operators, such as fully-connected layers, convolutional layers, skip connections and CPWA activation functions, such as (leaky) ReLU, absolute value, hard hyperbolic tangent, hard sigmoid, and max-pooling. Many previous authors have investigated CPWA NNs and fully-connected NNs with ReLU activation in particular. Examples include the study of their expressivity in terms of the number of affine regions (Pascanu et al., 2013; Montufar et al., 2014; Telgarsky, 2015, 2016; Raghu et al., 2017; Serra et al., 2018; Hanin and Rolnick, 2019; Sattelberg et al., 2020; Wang, 2022), their connection to adversarial robustness (Jordan et al., 2019; Hein et al., 2019; Daroczy, 2022), max-affine spline operators, vector quantization, and K-means clustering (Balestriero and Baraniuk, 2021), batch-norm (Balestriero and Baraniuk, 2022), affine constraint enforcement (Balestriero and LeCun, 2022), reverse-engineering (Rolnick and Kording, 2020) and the geometry of the regions (Balestriero et al., 2019; Balestriero and Baraniuk, 2021; Grigsby and Lindsey, 2022).
Number of regions: The maximum number of regions is known to be polynomial in the width and exponential in the depth and input dimension of the NN (Raghu et al., 2017; Montufar et al., 2014). In practice, however, both randomly initialized and trained NNs have a number of regions which is much smaller, with the growth being only polynomial in the number of neurons, but still exponential in the number of input dimensions (Hein et al., 2019; Hanin and Rolnick, 2019a;b), making the problem of counting regions NP-hard (Wang, 2022).
Serra et al. (2018) devise a mixed-integer linear program to count the number of regions in a ReLU-network with an arbitrary input dimension. It is demonstrated that a NN trained on MNIST with 784-dimensional input and a total of just 22 hidden neurons generates \(O(10^{7})\) regions which takes tens of hours to count on a server-grade machine.
Figure 2: In CPWA NNs, each neuron of each layer sequentially subdivides the polyhedral complex. Each neuron of the first hidden layer contributes an affine hyperplane. Each neuron of the deeper layers contributes a folded hyperplane. Illustrated is the subdivision of a cubic domain in the \(D=3\) input space by the shown NN. While previous methods subdivide the regions (highest dimensional cells), our method subdivides edges.
### Complex extraction
Many works provide illustrations of the regions of a 2D input space, which can be acquired by determining the neuron states at points sampled on the image grid. While this serves as an approximation, there are two known methods operating on the exact complex: region subdivision and marching.
Region subdivision: Several works describe (Raghu et al., 2017) and implement (Hanin and Rolnick, 2019; Wang, 2022; Humayun et al., 2022; Huang et al., 2022) region subdivision as a method to extract the exact polyhedral complex from a ReLU-network. Starting with an initial polytope, the idea is to sequentially consider each neuron of each layer. For each neuron calculate the affine map on every existing polytope and determine whether the hyperplane cuts the region in two. Our method builds upon this interpretation, but, instead of the regions, subdivides the edges to solve the redundancy in neighbouring regions.
Marching: Lei et al. (2021) propose _Analytical Marching_ to extract the 0-isosurface of a CPWA neural implicit shape with a bounded CPWA 2D boundary in 3D space. The algorithm is initialized by identifying a point on the 0-isosurface and the corresponding activation pattern or state of the face. Each edge of a face is the intersection of the face plane and a boundary plane induced by the affine map of some other neuron in the NN. Consequently, a vertex of a face is the intersection of the face plane and two boundary planes. However, not all potential edges and vertices are valid, so it is checked whether they have the same state, i.e., whether they lie on the same side of all boundary planes as the face itself. Each valid edge is then used to pivot to a neighboring face by flipping the activation corresponding to the edge neuron. Analytical Marching serves as an exact alternative to classic mesh extraction methods, such as marching cubes, offering a trade-off between precision and performance. However, it is unclear how the method generalizes to the full volumetric complex and higher dimensions.
## 3 Background
It is well known that each of the regions supporting the CPWA NN is an intersection of affine halfspaces forming a convex polyhedral set. Together they partition the input space into a polyhedral complex (Balestriero et al., 2019; Hanin and Rolnick, 2019; Grigsby and Lindsey, 2022).
We start by introducing the relevant terminology from the classic theory on polyhedral complexes and hyperplane arrangements. We then generalize to folded hyperplane arrangements due to CPWA NNs. Lastly, we discuss the intersection-poset and sign-vectors as a means to exploit the combinatorial structure of the folded hyperplane arrangement.
### Polyhedral complexes
We start by reviewing select facts about polyhedral complexes and refer to a more thorough treatment of the topic in the context of geometry (Grunbaum and Ziegler, 2003; Grunert, 2016) and ReLU networks (Grigsby and Lindsey, 2022).
A _hyperplane_\(H:=\left\{\mathbf{x}\in\mathbb{R}^{D}|\mathbf{w}^{\top}\mathbf{x}-b=0\right\}\) is the zero-level set of an affine map with the slopes \(\mathbf{w}\in\mathbb{R}^{D}\) and a threshold \(b\in\mathbb{R}\). We will assume that all hyperplanes are _non-degenerate_, meaning \(\mathbf{w}\neq\mathbf{0}\). The sublevel set \(H^{-}\) and the super-level set \(H^{+}\) are the negative and positive _half-spaces_, respectively.
A _polyhedral set_\(\mathcal{P}\) in \(\mathbb{R}^{D}\) is the closure of an intersection of finitely many half-spaces \(H_{1}^{+},...,H_{m}^{+}\subseteq\mathbb{R}^{D}\). This is called the _H-representation_ of \(\mathcal{P}\). The _dimension_ of the polyhedral set \(\mathcal{P}\) is the dimension of its affine hull.
A hyperplane \(H\) in \(\mathbb{R}^{D}\) is a _cutting hyperplane_ of \(\mathcal{P}\) if there exist \(\mathbf{x}_{1},\mathbf{x}_{2}\in\mathcal{P}\) with \(\mathbf{x}_{1}\in\mathcal{P}\cap H^{+}\) and \(\mathbf{x}_{2}\in\mathcal{P}\cap H^{-}\). A hyperplane \(H\) in \(\mathbb{R}^{D}\) is a _supporting hyperplane_ of \(\mathcal{P}\) if \(H\cap\mathcal{P}\neq\emptyset\) and \(H\) does not cut \(\mathcal{P}\).
The intersection \(F=H\cap\mathcal{P}\) is a _face_ of \(\mathcal{P}\) for some supporting hyperplane \(H\) of \(\mathcal{P}\). \(F=\emptyset\) and \(F=\mathcal{P}\) are _improper_ faces of \(\mathcal{P}\), otherwise \(F\) is _proper_. A _\(k\)-face_ of \(\mathcal{P}\) is a face of \(\mathcal{P}\) of dimension \(k\). A _0-face_ is a _vertex_, a _1-face_ is an _edge_, and a _\((D-1)\)-face_ is a _facet_.
Figure 3: Steps of a single iteration of _edge subdivision_. Starting with the current 1-skeleton (0), evaluate the NN at the vertices (1) and determine the sign of the relevant neuron (2). If the signs of a vertex pair sharing an edge differ, the hyperplane must intersect this edge (3). This intersection is a new vertex whose location interpolates the coordinates and values of the vertex pair and splits the edge in two (4). To build new edges, connect the new vertices sharing a face (5).
If \(\mathcal{P}\) is bounded, its _V-representation_ is its set of vertices, whose convex hull is \(\mathcal{P}\). A _polyhedral complex_\(\mathcal{C}\) of dimension \(D\) is a finite set of polyhedral sets of dimension \(k=0..D\), called the _cells_ of \(\mathcal{C}\), such that (i) if \(C\in\mathcal{C}\) then every face of \(C\) is in \(\mathcal{C}\); (ii) if \(B,C\in\mathcal{C}\) then \(B\cap C\) is a single mutual face of both \(B\) and \(C\).
The _domain_ of \(\mathcal{C}\), denoted \(|\mathcal{C}|\), is the union of its cells. Conversely, we call \(\mathcal{C}\) the _polyhedral decomposition_ of the domain \(|\mathcal{C}|\).
A _polyhedral subcomplex_ of \(\mathcal{C}\) is a subset \(\mathcal{C}^{\prime}\subset\mathcal{C}\) such that for every cell \(C\) in \(\mathcal{C}^{\prime}\), every face of \(C\) is also in \(\mathcal{C}^{\prime}\). The \(k\)_-skeleton_ of \(\mathcal{C}\), denoted \(\mathcal{C}_{k}\), is the subcomplex of all cells of \(\mathcal{C}\) of dimension \(i=0..k\).
### Hyperplane arrangements
A _hyperplane arrangement_ is a finite set of hyperplanes \(\mathcal{H}=\{H_{1},\ldots,H_{m}\}\) in \(\mathbb{R}^{D}\). It induces a polyhedral decomposition \(\mathcal{C}(\mathcal{H})\) of \(\mathbb{R}^{D}\). A \(D\)-dimensional cell in \(\mathcal{C}(\mathcal{H})\) or a _region_ is the closure of a maximal connected region of \(\mathbb{R}^{D}\) not intersected by any hyperplane in \(\mathcal{H}\). For \(k=0..D-1\) the \(k\)-dimensional cells in \(\mathcal{C}(\mathcal{H})\) are defined inductively as the facets of the \((k+1)\)-dimensional cells. A hyperplane arrangement is _generic_ if no more than \(D\) hyperplanes intersect at any single point.
### Sign-vectors
Given a hyperplane arrangement \(\mathcal{H}\), any point \(\mathbf{x}\in\mathbb{R}^{D}\) is assigned a _sign-vector_\(\boldsymbol{\sigma}(\mathbf{x})=(\sigma_{i}(\mathbf{x}))_{i=1..m}\), with
\[\sigma_{i}(\mathbf{x})=\begin{cases}+&\text{if }\mathbf{x}\in H_{i}^{+},\\ 0&\text{if }\mathbf{x}\in H_{i},\\ -&\text{if }\mathbf{x}\in H_{i}^{-}.\end{cases} \tag{1}\]
Similarly, every cell \(C\) of \(\mathcal{C}(\mathcal{H})\) can be associated with a sign-vector \(\boldsymbol{\sigma}(C)\) such that
\[C=\bigcap_{i=1}^{m}H_{i}^{\sigma_{i}(C)}=:\mathcal{H}^{\boldsymbol{\sigma}(C)} \tag{2}\]
with \(H^{0}:=H\)(Matousek, 2002).
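As a minimal illustration (not part of the released implementation), the sign-vectors of a batch of points with respect to an affine hyperplane arrangement can be computed with a single tensor operation; the tensor shapes are assumptions.

```python
import torch

def sign_vector(x, W, b):
    """sigma(x) for the arrangement H_i = {x : w_i^T x - b_i = 0}, i = 1..m.
    x: (N, D) points, W: (m, D) stacked normals w_i, b: (m,) thresholds.
    Returns an (N, m) tensor with entries in {-1, 0, +1}."""
    return torch.sign(x @ W.T - b)
```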
### ReLU networks and folded hyperplane arrangements
For our purposes a fully-connected feed-forward NN \(f_{\Theta}\) maps any point \(\mathbf{x}\) in the _domain_\(\mathcal{D}\subset\mathbb{R}^{D}\) to a \(D^{(L)}\)-dimensional output \(f_{\Theta}(\mathbf{x})\in\mathbb{R}^{D^{(L)}}\). The NN is a composition of \(L\) layers with parameters \(\Theta=\left\{\Theta^{(l)}\right\}_{l=1..L}\):
\[f_{\Theta}(\mathbf{x})=(f_{\Theta^{(L)}}^{(L)}\circ\cdots\circ f_{\Theta^{(1) }}^{(1)})(\mathbf{x})\;. \tag{3}\]
Starting at \(\mathbf{x}^{(0)}=\mathbf{x}\), the layers are applied successively for \(l=1..L\) as
\[\mathbf{x}^{(l)}=f_{\Theta^{(l)}}^{(l)}(\mathbf{x}^{(l-1)})=\mathrm{ReLU}( \mathbf{W}^{(l)}\mathbf{x}^{(l-1)}+\mathbf{b}^{(l)}). \tag{4}\]
The layer parameters \(\Theta^{(l)}=\left\{\mathbf{W}^{(l)},\mathbf{b}^{(l)}\right\}\) contain the _weights_\(\mathbf{W}^{(l)}\in\mathbb{R}^{D^{(l)}\times D^{(l-1)}}\) and _biases_\(\mathbf{b}^{(l)}\in\mathbb{R}^{D^{(l)}}\). For simplicity, we adhere to the main line of work focusing on the use of \(\mathrm{ReLU}(x)=\max(0,x)\) as the activation, but the key ideas generalize to any CPWA NN.
In analogy to a hyperplane which is the zero-level set of an affine map, a _folded hyperplane_ is the zero-level set of the pre-activation of a neuron. The \(i\)-th neuron in the \(l\)-th layer induces the folded hyperplane \(H_{i}^{(l)}:=\left\{\mathbf{x}\in\mathbb{R}^{D}|\mathbf{W}_{i}^{(l)\top} \mathbf{x}^{(l-1)}+b_{i}^{(l)}=0\right\}\). Similarly, a finite set of folded hyperplanes \(\mathcal{H}\) is a _folded hyperplane arrangement_ and induces a polyhedral decomposition of the domain \(\mathbb{R}^{D}\)(Hein et al., 2019; Grigsby & Lindsey, 2022). On each region, the folded hyperplane acts like an affine hyperplane and does not fold. The sign-vector is defined analogously and can be evaluated from the neuron pre-activation: \(\sigma_{i}^{(l)}(\mathbf{x})=\mathrm{sgn}(\mathbf{W}_{i}^{(l)\top}\mathbf{x}^ {(l-1)}+b_{i}^{(l)})\).
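The sign evaluation can therefore be piggybacked on a standard forward pass. Below is a minimal PyTorch sketch that records the pre-activation signs of every neuron; the list-of-(W, b) parameterization is an assumption made for illustration.

```python
import torch

def preactivation_signs(x, weights, biases):
    """Signs of all neuron pre-activations at the points `x` (n, D), i.e. the
    folded-hyperplane part of the sign-vector sigma(x).

    weights[l]: (D_l, D_{l-1}) tensor, biases[l]: (D_l,) tensor.
    """
    signs = []
    h = x
    for W, b in zip(weights, biases):
        pre = h @ W.T + b                  # pre-activations of layer l, shape (n, D_l)
        signs.append(torch.sign(pre))
        h = torch.relu(pre)                # post-activations feed the next layer
    return torch.cat(signs, dim=1)         # (n, total number of neurons)
```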
Theorem 3 in Grigsby & Lindsey (2022) states that almost every ReLU network is generic, which we assume throughout this work.
### Polyhedral combinatorics
A partially ordered set or _poset_ is the pair \((\mathcal{S},\leq)\) of the set \(\mathcal{S}\) together with a binary relation \(\leq\) on \(\mathcal{S}\) (called an _ordering_) satisfying three axioms: reflexivity (\(x\leq x\) for all \(x\)), transitivity (\(x\leq y\) and \(y\leq z\) implies \(x\leq z\)), and weak anti-symmetry (if \(x\leq y\) and \(y\leq x\), then \(x=y\)). For any two elements \(x,y\in\mathcal{S}\) the _meet_\(x\wedge y\) is the greatest lower bound of \(x\) and \(y\). Similarly, the _join_\(x\lor y\) is the least upper bound of \(x\) and \(y\). Neither need exist, but if they do then they are unique (Kishimoto & Levi, 2019).
We will introduce some terminology from graph theory to denote relationships in the poset. \(y\in\mathcal{S}\) is an _ascendant_ of \(x\in\mathcal{S}\) if \(x<y\). Conversely, \(x\) is a _descendant_ of \(y\). The closest common ascendant of both \(x\), \(y\) is the join \(x\lor y\). The closest common descendant of both \(x\), \(y\) is the meet \(x\wedge y\). \(y\in\mathcal{S}\) is a _parent_ of \(x\in\mathcal{S}\) if \(x<y\) and no \(z\in\mathcal{S}\) satisfies \(x<z<y\). Conversely, \(x\) is a _child_ of \(y\).
The _intersection-poset_ of the polyhedral complex \(\mathcal{C}\) is the poset \((\mathcal{C},\subseteq)\) of its cells ordered by inclusion.
## 4 Method
Our method is motivated by the observation illustrated in Figure 4. Due to the continuity of the activation function, all folded hyperplanes are continuous across each other. However, the existing subdivision methods consider each
region independently. As a consequence, upon the intersection with a new folded hyperplane, each new vertex is computed independently \(2^{D-1}\) times in V-representation, since an edge has \(2^{D-1}\) ascendant regions in an unbounded arrangement. Similarly, in H-representation, the hyperplane redundancy check performed via linear programming arrives at the same conclusion on all \(2^{D-1}\) ascendant regions of the shared edge.
Our method alleviates this redundancy by taking into account the continuity and disregarding the regions, instead using only the unique vertices and edges, i.e. the 1-skeleton. Edge subdivision preserves the iterative structure of considering each neuron in each layer sequentially, for each neuron subdividing the edges in five steps:
1. Evaluate the NN at the vertices;
2. Get the sign-vectors of the vertices;
3. Find splitting edges by comparing the signs of vertex pairs;
4. For each splitting edge, compute the new vertex using interpolation and split the edge;
5. Build the intersecting edges (connecting new vertices across splitting faces).
This process is illustrated in Figure 3 and the steps are detailed in the following. We start by discussing how to recover the combinatorial structure of the complex from the sign-vectors.
### Perturbation using sign-vectors
All the combinatorial relationships described in Section 3.5 can be easily evaluated using sign-vectors. However, instead of just determining the relationship of two given cells, edge subdivision and optional post-processing steps rely on building parent cells.
For now, assume the unbounded domain \(\mathbb{R}^{D}\). The number of zeros in the sign-vector of a \(k\)-cell is \((D-k)\). To construct all the parents of this cell, take one zero at a time and set it to \(+\) or \(-\). Consequently, a \(k\)-cell has \(2(D-k)\) parent cells for \(k=0..D-1\). We call this process _perturbation_ and illustrate it in Figure 5.
Repeating this for all \(k\)-cells in the complex constructs all \((k+1)\)-cells, including the ordering relations. Hence, the \((k+1)\)-skeleton can be built from the \(k\)-skeleton. Starting from the 1-skeleton, perturbation can be applied sequentially to reconstruct the whole intersection-poset, including the regions.
Instead of the unbounded \(\mathbb{R}^{D}\), we will operate on a bounded polyhedral domain \(\mathcal{D}\), which is given by the intersection of \(m\) affine halfspaces. The motivation for this is a simpler implementation: bounded edges have exactly two vertices, allowing the use of simpler data structures. In this setting, special care must be taken with _boundary cells_. These are the cells for which any of the first \(m\) entries of the sign-vector is \(0\) (conversely, the first \(m\) entries of an interior cell are \(+\)). Since we are not interested in cells outside the domain, we perturb the zeros on the boundary only toward the interior \(+\). Hence, on the bounded domain, a \(k\)-cell has \(z+2(D-k-z)\) parent cells, where \(z\) is the number of zeros in the first \(m\) signs.
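A small sketch of this perturbation step, including the boundary handling, is given below; sign-vectors are stored as integer arrays, which is an implementation choice rather than part of the method.

```python
import numpy as np

def parent_sign_vectors(sigma, m_boundary):
    """All parent cells of a cell with sign-vector `sigma` (entries in {-1, 0, +1}).

    The first `m_boundary` entries belong to the hyperplanes bounding the domain:
    their zeros are perturbed only toward the interior (+). All remaining zeros
    are perturbed to both + and -.
    """
    parents = []
    for i in np.flatnonzero(sigma == 0):
        targets = (1,) if i < m_boundary else (1, -1)
        for s in targets:
            parent = sigma.copy()
            parent[i] = s
            parents.append(parent)
    return parents

# An interior edge (k = 1) in D = 3: two zeros beyond the boundary block,
# hence 2 * (D - k) = 4 parenting 2-faces.
sigma = np.array([1, 1, 1, 1, 1, 1, 0, 0, 1, -1])
print(len(parent_sign_vectors(sigma, m_boundary=6)))   # -> 4
```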
### Edge subdivision
To understand how the 1-skeleton is subdivided, i.e. how new \(0\)- and \(1\)-cells are created, consider a new hyperplane \(H\) cutting a \(k\)-cell \(C\). Their intersection \(C^{0}=C\cap H\) is a new \((k-1)\)-cell. Additionally, \(H\) splits \(C\) into two new \(k\)-cells \(C^{+}=C\cap H^{+}\) and \(C^{-}=C\cap H^{-}\). So there are exactly two mechanisms for creating \((k-1)\)-cells: (i) splitting a \((k-1)\)-cell with \(H\) which preserves the dimension and (ii) intersecting a \(k\)-cell with \(H\) which lowers the dimension. Focusing on \(k=0,1,2\) as sources for new \(0,1\)-cells leads us to edge subdivision.
Figure 4: Motivation in \(D=2\): due to the continuity of the activation, the two new edges share the same vertex on the common edge of the two regions. Processing each region individually is redundant.
Figure 5: The parenting \((k+1)\)-faces of a \(k\)-face can be obtained by perturbing one zero in its sign-vector at a time. Here, \(k=1\) and the first \(m=6\) entries of the sign-vector are hidden for visual clarity since they are all \(+\). This edge and all its ascendants are interior cells.
#### 4.2.1 Steps 0-4
Let \(\mathcal{V}\) and \(\mathcal{E}\) be the set of all vertices and edges at the current iteration (step 0). Let \(H\) be the next folded hyperplane to intersect with and recall that it behaves like an affine hyperplane on each region, folding at its facets. By the genericity assumption, no vertex in \(\mathcal{V}\) lies on \(H\). Each vertex in \(\mathcal{V}\) therefore has a \(+\) or a \(-\) sign w.r.t. \(H\). These signs can be determined from the pre-activation values of the neuron corresponding to \(H\), obtained by simply evaluating the NN at the vertices (steps 1, 2).
We call \(E\in\mathcal{E}\) a _splitting_ edge if \(H\) cuts \(E\). Splitting edges can be identified by their two vertices having opposite signs, which we label \(V^{+},V^{-}\) (step 3). The new vertex \(V^{0}=E\cap H\) on the splitting edge \(E\) can be computed by linearly interpolating the positions of \(V^{+},V^{-}\) weighted by their pre-activation values. This new vertex takes the sign \(0\) w.r.t. \(H\). The old splitting edge \(E\) is removed from \(\mathcal{E}\) and the two new _split_ edges \(E^{+}=E\cap H^{+}\) with vertices \(V^{+},V^{0}\) and \(E^{-}=E\cap H^{-}\) with vertices \(V^{-},V^{0}\) are added to \(\mathcal{E}\). The new signs of \(E^{+},V^{+}\) and \(E^{-},V^{-}\) w.r.t. \(H\) are trivially \(+\) and \(-\), respectively (step 4).
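A condensed tensor sketch of steps 1-4 for one neuron is shown below. Here `vertices` is an (n, D) position tensor, `edges` an (e, 2) tensor of vertex indices, and `pre` the pre-activation of the current neuron at every vertex; the data layout is an assumption and does not reproduce the exact implementation.

```python
import torch

def subdivide_edges(vertices, edges, pre):
    """Steps 1-4 for a single folded hyperplane H (its neuron's pre-activation `pre`)."""
    signs = torch.sign(pre)                          # step 2: +1/-1 per vertex (generic case)
    s0, s1 = signs[edges[:, 0]], signs[edges[:, 1]]
    splitting = s0 != s1                             # step 3: edges cut by H

    i0, i1 = edges[splitting, 0], edges[splitting, 1]
    p0, p1 = pre[i0].unsqueeze(1), pre[i1].unsqueeze(1)
    t = p0 / (p0 - p1)                               # step 4: zero crossing of the pre-activation
    new_vertices = vertices[i0] + t * (vertices[i1] - vertices[i0])

    new_idx = torch.arange(len(new_vertices), device=vertices.device) + len(vertices)
    vertices = torch.cat([vertices, new_vertices], dim=0)

    # Replace every splitting edge by its two halves E+ and E-.
    kept = edges[~splitting]
    edges = torch.cat([kept,
                       torch.stack([i0, new_idx], dim=1),
                       torch.stack([i1, new_idx], dim=1)], dim=0)
    return vertices, edges, new_idx
```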
#### 4.2.2 Step 5
This completes intersecting and splitting edges with the folded hyperplane. However, new edges are also formed where \(H\) intersects 2-faces. We call \(F\) a _splitting_ 2-face if \(H\) cuts \(F\). We call their intersection \(E^{0}=F\cap H\) an _intersecting_ edge.
In a naive approach, it would seem that we need to track the 2-faces in order to intersect them with \(H\). However, by induction, this would require tracking the whole complex. This is impractical due to the amount and layout of the memory: for \(k>1\), a \(k\)-cell can have an arbitrary number of facets, as opposed to exactly 2 vertices for each edge in a bounded domain. Instead, we propose to intersect the 2-faces _implicitly_.
This is enabled by another observation - every bounded splitting 2-face has exactly two splitting edges. Furthermore, the intersecting edge connects the two new vertices on those two splitting edges. Since we have already determined the splitting edges, we use them to implicitly identify splitting 2-faces and append the intersecting edges to \(\mathcal{E}\).
Given two splitting edges, we can perform a simple adjacency check using their sign-vectors. However, checking all possible pairs has a quadratic memory complexity \(\mathcal{O}(|\hat{\mathcal{E}}|^{2})\) in the number of splitting edges \(|\hat{\mathcal{E}}|\). This is infeasible even for moderately sized NNs (see Section 5.1.2).
Instead, we propose a much more efficient method. For each splitting edge build its parenting 2-faces using perturbation as described in Section 4.1. We have a list of splitting 2-faces each pointing to a single splitting edge. In this list, each 2-face comes up exactly twice. We pair the two edges associated with the same 2-face. With at most \(2(D-1)\) 2-faces per edge, the memory requirement is only \(\mathcal{O}(2(D-1)|\hat{\mathcal{E}}|)\). Lastly, it remains to add the intersecting edge to \(\mathcal{E}\). Its sign-vector is inherited from the splitting face with a \(0\) appended w.r.t. \(H\).
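The following sketch implements this pairing with the sorting-based grouping also used in our implementation (Section 5.1); `parent_sign_vectors` refers to the perturbation routine sketched in Section 4.1, and the NumPy bookkeeping is an illustrative simplification.

```python
import numpy as np
# parent_sign_vectors: see the perturbation sketch in Section 4.1.

def intersecting_edges(edge_sigmas, new_vertex_idx, m_boundary):
    """Step 5: connect the new vertices of splitting edges that share a 2-face.

    edge_sigmas:    sign-vectors of the splitting edges (one 1-D array each).
    new_vertex_idx: index of the new vertex created on each splitting edge.
    Returns an (e', 2) array of vertex-index pairs, one per intersecting edge.
    """
    faces, owners = [], []
    for e, sigma in enumerate(edge_sigmas):
        for parent in parent_sign_vectors(sigma, m_boundary):   # parenting 2-faces
            faces.append(parent)
            owners.append(new_vertex_idx[e])
    faces, owners = np.stack(faces), np.asarray(owners)

    # Sort rows lexicographically; each splitting 2-face appears exactly twice,
    # so equal neighbouring rows identify the two new vertices to be connected.
    order = np.lexsort(faces.T[::-1])
    faces, owners = faces[order], owners[order]
    same = np.all(faces[:-1] == faces[1:], axis=1)
    return np.stack([owners[:-1][same], owners[1:][same]], axis=1)
```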
This concludes a single iteration of edge subdivision, which is repeated for every neuron in every layer.
#### 4.2.3 Complexity analysis
In Appendix A, we elaborate that all the steps (1-5) of edge subdivision can be implemented in linear time and memory complexity in the number of vertices \(|\mathcal{V}|\), edges \(|\mathcal{E}|\), or splitting edges \(|\hat{\mathcal{E}}|\). We argue further that \(O(|\mathcal{E}|)\) and \(O(|\hat{\mathcal{E}}|)\) can be replaced with \(O(|\mathcal{V}|)\), concluding that the total algorithm is \(O(|\mathcal{V}|)\), and hence optimal.
The number of regions in a randomly initialized or trained NN is known to be \(o(N^{D}/D!)\) where \(N\) is the total number of neurons (Hanin and Rolnick, 2019). Conservatively assuming a proportional number of vertices \(|\mathcal{V}|\) (see Figure 8), we can obtain the complexity of the algorithm with respect to the NN architecture.
## 5 Experiments
We start by describing, validating, and timing our implementation of edge subdivision. As described in Sections 1 and 2, access to the exact polyhedral complex is of interest in many theoretical and practical applications. Instead of revisiting these applications, we consider how the speed and differentiability of our method enable a novel experiment in which an optimization objective is formulated on the geometric properties of the extracted complex. Lastly, we discuss an approach to pruning NN parameters and test a method to modify edge subdivision if the goal is to extract just an iso-level-set, i.e. the decision boundary.
### Implementation
We implement the algorithm in PyTorch. Since only vertex positions and bounded edges with exactly two vertices are stored, edge subdivision can run efficiently and exclusively on the GPU. The steps 0-4 can be implemented using standard tensor operations. However, using standard operations step 5 can only be implemented in sub-optimal log-linear time using sorting to pair up identical rows of a tensor. This step can be implemented in linear time using hash-tables, but since efficient hashing on the GPU with custom length keys is non-trivial (Junger et al., 2020; Awad et al., 2023), we hope to address this in future work.
#### 5.1.1 Validation
We first test a basic necessary (although insufficient) condition for the validity of our implementation. For each found vertex, the neurons corresponding to 0 entries in its sign-vector should be 0 at the vertex location. This is indeed satisfied up to a numerical error. Figure 6 (left) shows that the maximum numerical error is roughly seven orders of magnitude smaller than the extent of the domain, and even smaller in lower dimensions. A fixed NN with a depth of 4 layers and a width of 10 neurons is used throughout. This error should not be confused with the geometric error in the position of the vertex, which is closely related but not quantified here.
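A sketch of this check is given below, reusing the layer-by-layer evaluation from Section 3.4; `sigma` is assumed to hold only the neuron part of each vertex's sign-vector (boundary entries excluded).

```python
import torch

def max_zero_neuron_error(vertices, sigma, weights, biases):
    """Maximum |pre-activation| over all (vertex, neuron) pairs whose sign-vector
    entry is 0; for an exact vertex these values would vanish."""
    pre_all, h = [], vertices
    for W, b in zip(weights, biases):
        pre = h @ W.T + b
        pre_all.append(pre)
        h = torch.relu(pre)
    pre_all = torch.cat(pre_all, dim=1)      # (n, num_neurons), aligned with sigma
    return pre_all[sigma == 0].abs().max()
```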
#### 5.1.2 Performance
We investigate the time and memory behaviour of our implementation by counting vertices and edges on the bounded unit hypercube domain. We consider NNs of four layers and widths of \(10,20,40\) for input dimensions \(D=1..10\). The sizes of the experiments are mainly limited by the memory. The tests are performed on an NVIDIA RTX 3090. Since no data is on the CPU, the maximum allocated memory is measured for just the GPU.
The results are illustrated in Figure 8. Even moderately sized NNs induce complexes with millions of cells, especially as the input dimension increases. It is known that the number of regions scales exponentially with the number of input dimensions. For the number of vertices and edges, we observe a subexponential growth.
Counting the regions is possible as described in Section 4.1, but requires a significant amount of additional memory and time, since perturbation requires storing and grouping \(2(D-1)|\mathcal{E}|\) cells, which is up to \(\mathcal{O}(10^{8})\) for some of the considered cases.
In Figure 7 we validate our complexity analysis. We reuse the previous results to plot the runtime and memory over the number of vertices, which as expected is log-linear due to the sub-optimal use of sorting.
Lastly, in Figure 9 we compare to SplineCam (Humayun et al., 2023) which is a region subdivision method specifically for \(D=2\). Over the considered tests, our method is on average 20 times faster, since SplineCam uses graph structures on the CPU.
### Vertex distribution
We study the effect of the domain being bounded. We perform edge subdivision on a \(\mathcal{D}=[-100,100]^{D}\) hypercube domain. For each considered dimension, this is repeated on five randomly initialized NNs with a depth of 4 layers and a width of 10 neurons.
For every vertex, we compute its distance from the origin \(r=||\mathbf{x}||_{2}\). Figure 6 (right) shows a bi-modal distribution of \(r\). For \(r<100\), we observe an exponentially decaying density of vertices. These are the interior vertices due to the folded hyperplane arrangement. Additionally, there are the domain boundary vertices, which intersect at least one of the hyperplanes defining the domain. These are located at \(r\geq 100\), which corresponds to the second mode of the distribution. For a trained NN, we would generally expect a different distribution in the first mode, for example, the vertices to concentrate more tightly around the training data.
This illustrates a limitation of performing edge subdivision on a bounded domain. If we do not care about the cells on the artificially inserted boundary, then having a large proportion of boundary cells is undesirable for performance. One simple solution is to replace the hypercube domain with a simplex domain, whose number of vertices (edges) grows linearly (quadratically) with \(D\) as opposed to exponentially. This also motivates future work on extending edge subdivision to unbounded domains.
Figure 6: Left: Maximum error over all vertices is at least seven orders of magnitude smaller than the size of the hypercube domain serving as a validation. Right: Distribution of the vertex distances from the origin, normalized by the maximum value. The two distinct modes correspond to the interior and boundary vertices.
Figure 7: Our implementation shows log-linear scaling w.r.t. the number of vertices. This is due to a sub-optimal implementation of step 5 using sorting. Leveraging hash tables would improve the whole implementation to linear complexity.
### Geometric loss
We utilize the differentiability and speed of edge subdivision to consider a novel experiment in which an optimization objective is formulated on geometric properties of the extracted complex. In Figure 10, we start with a ReLU NN with two hidden layers of 50 neurons each and \(D=2\). The NN is first trained as a neural implicit representation of a bunny. Then, in each iteration we extract the polyhedral complex and compute the _shape compactness_ \(c=4\pi A/P^{2}\) from the enclosed area \(A\) and the perimeter \(P\). The normalization is such that \(c=1\) for the most compact shape, the circle. Using \(c\) as the loss, the bunny shape converges to a circle in 100 iterations with a standard Adam optimizer.
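A minimal sketch of such an objective is given below; it assumes the extracted zero-level-set edges have been chained into a single closed polygon `poly`, which is a simplification of the general setting. Because the new vertex positions are obtained by differentiable interpolation, gradients of this loss flow back to the network parameters.

```python
import math
import torch

def compactness_loss(poly):
    """1 - c with c = 4*pi*A / P**2 for a closed 2-D polygon given as an ordered
    (n, 2) tensor of boundary vertices; minimizing drives the shape toward a circle."""
    nxt = torch.roll(poly, shifts=-1, dims=0)
    perimeter = (nxt - poly).norm(dim=1).sum()
    # Shoelace formula for the enclosed area.
    area = 0.5 * torch.abs((poly[:, 0] * nxt[:, 1] - nxt[:, 0] * poly[:, 1]).sum())
    c = 4.0 * math.pi * area / perimeter**2
    return 1.0 - c
```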
In general, any geometric loss that depends on the vertex positions can be formulated and optimized, e.g. edge lengths, angles, areas, volumes, curvatures, and other quantities from discrete differential geometry. This holds for both the boundary and the volumetric shape, as well as the polyhedral complex (i.e. the mesh) itself.
### Pruning
Finally, we consider two approaches to pruning, focusing on a geometric context due to the intuitive interpretation. Consider an implicit neural representation of a bounded geometry. Important in this view is the boundary of the shape, similar to the decision boundary in a classification task. We can view the ReLU NN as a compact storage format and many geometric properties of a shape can be computed from just its boundary.
#### 5.4.1 Parameter pruning
In the first view, we refer to _pruning_ as an NN compression technique in which some parameters are removed after training with a negligible drop in the NN performance (Lee et al., 2018). In the context of preserving the shape or decision boundary, all the folded hyperplanes which do not intersect the boundary can be removed. This corresponds to pruning the respective neurons. For ReLU, the boundary can be completely contained in either the negative or the positive half-space of a non-intersecting neuron. The negative neurons can be removed completely as they do not contribute any value anywhere on the shape. The folded hyperplanes of such neurons are highlighted in red in Figure 11. Since the training data is localized on the unit square, the folded hyperplanes not intersecting this domain correspond to _dying-ReLUs_ - neurons which for all training samples are in the rectifying 0 region of ReLU. However, there are also folded hyperplanes intersecting the domain but not the shape itself. Removing all of these allows compressing the \(2,50,50,1\) NN down to \(2,25,19,1\), reducing the number of parameters from \(2751\) to \(589\).
This can be pruned further by also considering the converse case - neurons for which the whole boundary is in the linear activation of ReLU. Since each such neuron contributes the same affine function everywhere on the shape, any linearly dependent ones (in general, any more than \(D\) neurons of the same layer) can be compressed down to \(D\) while adjusting the outgoing weights accordingly.
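The two cases can be detected directly from the pre-activations at the extracted boundary vertices; the following is a rough sketch under the assumption that checking the boundary vertices is sufficient, with illustrative labels.

```python
import torch

def classify_neurons_on_boundary(boundary_vertices, weights, biases):
    """Label every neuron as 'negative' (prunable), 'positive' (affine on the
    boundary, candidate for merging), or 'mixed' (its folded hyperplane crosses
    the boundary and must be kept)."""
    labels, h = [], boundary_vertices
    for W, b in zip(weights, biases):
        pre = h @ W.T + b
        for k in range(pre.shape[1]):
            if (pre[:, k] < 0).all():
                labels.append("negative")     # contributes 0 everywhere on the shape
            elif (pre[:, k] > 0).all():
                labels.append("positive")     # contributes an affine function on the shape
            else:
                labels.append("mixed")
        h = torch.relu(pre)
    return labels
```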
#### 5.4.2 Pruning during edge subdivision
Extracting the whole complex and selecting just the boundary is wasteful, even if the NN is compressed as described
Figure 8: Edge and vertex counts, runtime, and memory usage of randomly initialized NNs of four layers and different widths and input dimensions. Mean \(\pm\) standard deviation over 5 runs.
Figure 9: We achieve a 20 times speed-up on average over SplineCam (Humayun et al., 2023) which is also limited to \(D=2\).
above. We propose a complementary pruning strategy applied during edge subdivision. It prunes all edges and vertices for which we can say with confidence that they will not contribute to the boundary.
Recall that a new vertex is created only where an edge splits, and such splitting edges are detected by the signs of their vertex pairs disagreeing. The sign-vectors of all current vertices can be computed at any intermediate iteration of the subdivision. An edge can be pruned if both its vertices have the same signs w.r.t. all future neurons. For the considered 2D bunny geometry this reduces the number of edges from \(757/2424/2576\) to \(301/76/228\) after finishing each layer (\(399/1240/1316\) to \(305/118/194\) for vertices). While the described condition is sufficient for pruning, it may be improved further, providing an alternative to Analytical Marching (Lei & Jia, 2020).
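A sketch of this pruning test is shown below; `future_pre` holds the pre-activations of the not-yet-processed neurons at all current vertices, which can be computed with the same forward pass used for the subdivision itself.

```python
import torch

def prunable_edge_mask(future_pre, edges):
    """True for edges whose two endpoints have identical signs w.r.t. every
    remaining neuron, i.e. the edges that (by the condition above) cannot
    contribute to the extracted boundary."""
    signs = torch.sign(future_pre)                       # (n, r) for r remaining neurons
    return (signs[edges[:, 0]] == signs[edges[:, 1]]).all(dim=1)
```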
## 6 Conclusions
In this work, we observed a redundancy in region subdivision and proposed a novel edge subdivision method for extracting the exact polyhedral complex from ReLU NNs. Our approach uses simple data structures and tensor operations to leverage the GPU, improving performance over previous methods by more than 20 times. The speed and differentiability allowed us to propose novel applications in which a loss can be formulated on the extracted complex. While we hope this opens interesting avenues in geometry, in higher dimensions the method is limited by the exponential growth of the complex. Further limitations and future research directions include extending the method to unbounded domains, non-generic arrangements, and other CPWA architectures, as well as improving the pruning strategies for more efficient extraction of level-sets. However, the main outlook for improved performance is replacing sorting with hash-tables, improving the whole implementation to linear time and memory in the number of vertices.
## Acknowledgements
This work was supported by the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement number 860843.
|
2307.08208 | Towards Stealthy Backdoor Attacks against Speech Recognition via
Elements of Sound | Deep neural networks (DNNs) have been widely and successfully adopted and
deployed in various applications of speech recognition. Recently, a few works
revealed that these models are vulnerable to backdoor attacks, where the
adversaries can implant malicious prediction behaviors into victim models by
poisoning their training process. In this paper, we revisit poison-only
backdoor attacks against speech recognition. We reveal that existing methods
are not stealthy since their trigger patterns are perceptible to humans or
machine detection. This limitation is mostly because their trigger patterns are
simple noises or separable and distinctive clips. Motivated by these findings,
we propose to exploit elements of sound ($e.g.$, pitch and timbre) to design
more stealthy yet effective poison-only backdoor attacks. Specifically, we
insert a short-duration high-pitched signal as the trigger and increase the
pitch of remaining audio clips to `mask' it for designing stealthy pitch-based
triggers. We manipulate timbre features of victim audios to design the stealthy
timbre-based attack and design a voiceprint selection module to facilitate the
multi-backdoor attack. Our attacks can generate more `natural' poisoned samples
and therefore are more stealthy. Extensive experiments are conducted on
benchmark datasets, which verify the effectiveness of our attacks under
different settings ($e.g.$, all-to-one, all-to-all, clean-label, physical, and
multi-backdoor settings) and their stealthiness. The code for reproducing main
experiments are available at \url{https://github.com/HanboCai/BadSpeech_SoE}. | Hanbo Cai, Pengcheng Zhang, Hai Dong, Yan Xiao, Stefanos Koffas, Yiming Li | 2023-07-17T02:58:25Z | http://arxiv.org/abs/2307.08208v1 | # Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound
###### Abstract
Deep neural networks (DNNs) have been widely and successfully adopted and deployed in various applications of speech recognition. Recently, a few works revealed that these models are vulnerable to backdoor attacks, where the adversaries can implant malicious prediction behaviors into victim models by poisoning their training process. In this paper, we revisit poison-only backdoor attacks against speech recognition. We reveal that existing methods are not stealthy since their trigger patterns are perceptible to humans or machine detection. This limitation is mostly because their trigger patterns are simple noises or separable and distinctive clips. Motivated by these findings, we propose to exploit elements of sound (\(e.g.\), pitch and timbre) to design more stealthy yet effective poison-only backdoor attacks. Specifically, we insert a short-duration high-pitched signal as the trigger and increase the pitch of remaining audio clips to 'mask' it for designing stealthy pitch-based triggers. We manipulate timbre features of victim audios to design the stealthy timbre-based attack and design a voiceprint selection module to facilitate the multi-backdoor attack. Our attacks can generate more 'natural' poisoned samples and therefore are more stealthy. Extensive experiments are conducted on benchmark datasets, which verify the effectiveness of our attacks under different settings (\(e.g.\), all-to-one, all-to-all, clean-label, physical, and multi-backdoor settings) and their stealthiness. The code for reproducing the main experiments is available at [https://github.com/HanboCai/BadSpeech_SoE](https://github.com/HanboCai/BadSpeech_SoE).
Backdoor Attack, Backdoor Learning, Speech Recognition, AI Security, Trustworthy ML.
## I Introduction
Speech recognition has been widely and successfully deployed in many mission-critical applications [1, 2, 3]. In general, obtaining well-performed speech recognition models requires training on large-scale annotated datasets and substantial hardware resources. Accordingly, developers and users usually exploit third-party resources, such as open-source datasets and checkpoints, to alleviate training burdens.
However, recent studies revealed that outsourcing (parts of) training procedures (\(e.g.\), data collection) may also introduce new security risks to DNNs [4]. Arguably, the backdoor attack is one of the most prominent emerging threats [5]. The backdoor adversaries can implant hidden backdoors into victim DNNs by introducing a few poisoned training samples containing adversary-specified trigger patterns. The adversaries can activate the embedded backdoor via triggers during the inference process of backdoored models to maliciously manipulate their predictions. However, the backdoored models behave normally on benign testing samples. Accordingly, victim users can hardly notice backdoor threats.
Currently, most of the existing backdoor attacks are designed against image or text classification [6, 7, 8, 9, 10, 11]. However, the backdoor analysis in speech recognition is left far behind. In particular, the few feasible attacks in this area are preliminary, whose trigger patterns are simple noises [12, 13, 14, 15, 16] or separable and distinctive audio clips [17, 18, 19]. Accordingly, these attacks are perceptible to humans or can be easily detected and alleviated by algorithms [15, 20]. It raises an intriguing question: _Is it possible to design an effective attack against speech recognition that is stealthy to both human and machine detection_?
The answer to the aforementioned question is positive. Arguably, the core of an effective and stealthy attack is to design more 'natural' trigger patterns. In this paper, we generate more natural poisoned samples by modifying the elements of sound. We tackle trigger design from two perspectives, including pitch and timbre. Specifically, we first increase the pitch of selected audio samples and then insert a short yet high-pitched signal to generate their poisoned version for the pitch-based attack. The pitch-increased background audio can hide the inserted signal due to audio masking. This method is dubbed pitch boosting and sound masking (PBSM); For the timbre-based attack, we edit the timbre features of selected samples to generate their poisoned counterparts. In particular, we design a voiceprint selection module that enables the selection of diverse timbre features for timbre transformation, to further improve its effectiveness under the multi-backdoor setting. We call this method voiceprint selection and voice conversion (VSVC). The poisoned samples generated by our PBSM and VSVC are natural and sample-specific. As such, they can bypass both human inspection and machine detection.
In conclusion, our main contributions are three-fold:
* We reveal the stealthiness deficiency of existing attacks against speech recognition and its potential reasons.
* We propose two simple yet effective backdoor attacks against speech recognition (\(i.e.\), PBSM and VSVC) via elements of sound. The poisoned samples of both PBSM and VSVC are more natural and therefore stealthy to both human inspection and machine detection.
* Extensive experiments are conducted to verify the effectiveness of our attacks under different settings (\(e.g.\), all-to-one, all-to-all, clean-label, physical, and multi-backdoor settings) and their resistance to defenses.
The rest of this paper is structured as follows. In Section II, we briefly review related works about speech recognition and backdoor attacks. Section III illustrates our two stealthy backdoor attacks based on elements of sound, \(i.e.\), pitch boosting and sound masking (PBSM) and voiceprint selection and voice conversion (VSVC), in detail. The experimental results of our attacks are presented in Section IV. We conclude this paper in Section V.
## II Related Works
### _Speech Recognition_
Speech recognition (SR) plays a vital role in many critical applications [21], allowing devices to comprehend and interpret human speech. Early speech recognition methods were mostly based on Gaussian mixture models (GMMs) and hidden Markov models (HMMs) [22]. However, these methods suffered from relatively high error rates in practice.
Recently, advanced SR methods were all based on deep neural networks (DNNs) due to their high learning capacities. For example, Hinton _et al._[23] applied DNNs to acoustic modeling and achieved promising performance in the TIMIT [24] phoneme recognition task, marking a breakthrough in the field of speech recognition with DNNs. De _et al._[25] applied long short-term memory (LSTM) networks in speech recognition tasks, motivated by the strong temporal nature of speech data. Besides, inspired by the tremendous success of ResNet in image classification [26], Vygon _et al._[27] proposed a novel and effective keyword discovery model with the ResNet backbone. Recently, Axel _et al._[28] exploited the Transformer structure in speech recognition and achieved remarkable performance. Avi _et al._[29] proposed an end-to-end strategy without requiring pre-processing speech data to simplify the speech recognition tasks. Specifically, they adopted one-dimensional convolutional stacks and Transformer-type encoder blocks to process and classify speech data.
### _Backdoor Attacks_
Backdoor attack is an emerging yet critical training-phase threat [5]. In general, the adversaries intend to implant hidden backdoors into the victim model by maliciously manipulating the training procedures (\(e.g.\), samples or loss). The backdoored model will behave normally on predicting benign testing samples whereas its predictions will be misled to adversary-specified target classes whenever its backdoor is activated by the trigger pattern contained in attacked testing samples.
Currently, most of the existing attacks are designed against image classification. These attacks can be divided into different sub-categories based on different criteria, as follows:
**Poisoned-Label and Clean-Label Attacks.** Backdoor attacks can be divided into poisoned-label [6, 30, 11] and clean-label attacks [31, 32, 33] based on whether the target label of poisoned samples is consistent with their ground-truth one. In general, poisoned-label backdoor attacks are more effective compared to the clean-label ones since the 'robust features' related to the target class contained in poisoned samples of clean-label attacks will hinder the learning of trigger patterns [10]. However, clean-label attacks are more stealthy since victim users can identify and filter out poisoned training samples by examining the image-label relationship.
**All-to-One and All-to-All Attacks.** We can separate existing attacks into all-to-one and all-to-all attacks based on the property of the target label [6]. Specifically, all poisoned samples will be assigned the same target label in all-to-one attacks, while the target label of all-to-all attacks is determined based on the ground-truth one of the poisoned samples. For example, the all-to-all adversaries usually adopt \(y^{\prime}=(y+1)\mod K\), where \(K\) is the number of all classes, \(y^{\prime}\) and \(y\) indicate the target label and ground-truth label of the poisoned sample, respectively. Arguably, all existing (poisoned-label) backdoor attacks can be generalized to all-to-all attacks, although it will probably decrease attack effectiveness [5].
**Single-Backdoor and Multi-Backdoor Attacks.** Different from the single-backdoor attacks where the adversaries only implant a single backdoor to the victim models, multi-backdoor methods [6, 34, 35, 36] intend to embed multiple backdoors simultaneously. In general, it is non-trivial to implant multiple backdoors, although we can easily inject a single backdoor. It is mostly because the learning of one backdoor may affect that of the others [36]. As such, multi-backdoor attacks may fail if triggers are not'strong' enough.
**Digital and Physical Attacks.** Different from previous digital attacks where all poisoned samples are obtained completely in the digital space, the physical space is also involved in their generation in the physical attacks. Chen _et al._[37] proposed the first physical backdoor attack where they exploited the glasses as physical trigger against facial recognition. A similar idea was also discussed in [38]. Recently, Li _et al._[39] revealed that existing digital attacks will fail in the physical space and proposed a physical attack enhancement inspired by the expectation over transformation [40]. Most recently, Xu _et al._[41] designed a more stealthy poison-only physical backdoor attack using spatial transformations (\(e.g.\), rotation) with a specific parameter as trigger patterns.
Recently, there are also a few backdoor attacks against speech recognition. Specifically, Liu _et al._[42] reversed potential training samples of a given speech recognition model, based on which to implant hidden backdoors; Ye _et al._[16] designed trigger patterns based on audio steganography; Zhai _et al._[13] designed the first backdoor attack against speaker verification via clustering techniques; Koffas _et al._ exploited ultrasonic pulses as audio triggers; In [17, 19, 18], sounds from the natural environment (\(e.g.\), music and noises) were adopted as trigger patterns; Shi _et al._[14] developed an optimization scheme to generate more defective audio triggers; Most recently, a concurrent work [43] designed stealthy style-based triggers for audio backdoor attacks via style transformations. However, all existing attacks are perceptible to humans or can be easily detected and alleviated by algorithms. How to design an effective backdoor attack against speech recognition
that is stealthy to both human and machine detection is still an important open question and worth further exploration.
## III The Proposed Methods
The sound elements primarily include pitch, timbre, and loudness [44]. In this paper, we discuss how to design more natural yet effective acoustic trigger patterns based on pitch and timbre, respectively. We omit the loudness-type trigger design since loudness allows only minor variation and therefore may not carry sufficient information for effective backdoor attacks.
### _Preliminaries_
**Threat Model.** In this paper, we focus on _poison-only_ backdoor attacks against speech recognition, where the adversaries can only modify their released poisoned training dataset. The victim users will exploit the poisoned dataset to train their models with user-specified settings. Accordingly, we assume that the adversaries cannot change and have no information on the training process (\(e.g.\), model structure, loss, and training schedule). This is one of the most difficult settings for backdoor attacks, with the most expansive threat scenarios (\(e.g.\), using third-party samples, training facilities, or models) [5].
**Adversary's Goals.** In summary, the backdoor adversaries have three main goals, including **(1)** effectiveness, **(2)** stealthiness, and **(3)** persistence. Specifically, effectiveness requires that backdoored models can predict poisoned testing samples as the adversary-specified target label, no matter what their ground-truth label is; Stealthiness ensures that the attack cannot be detected by human inspection or simple machine detection. For example, trigger patterns should be stealthy and the poisoning rate should be small; Persistence requires that the attack remains effective under more difficult settings (\(e.g.\), under potential adaptive defenses and physical-world settings).
**The Main Pipeline of Poison-Only Backdoor Attacks.** In general, how to generate the poisoned dataset \(\hat{\mathcal{D}}\) given its benign version \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\) is the main problem of poison-only backdoor attacks. Considering a classification problem with \(K\) categories, \(\hat{\mathcal{D}}\) contains two separate subsets, including the benign subset \(\mathcal{D}_{b}\) and the poisoned subset \(\mathcal{D}_{p}\) (\(i.e.\), \(\hat{\mathcal{D}}=\mathcal{D}_{b}\cup\mathcal{D}_{p}\)). Specifically, \(\mathcal{D}_{b}\) is randomly sampled from \(\mathcal{D}\) containing \((1-\gamma)\cdot N\) samples, where \(\gamma\) is dubbed the 'poisoning rate'. \(\mathcal{D}_{p}\triangleq\{(G_{x}(\mathbf{x}),G_{y}(y))\,|(\mathbf{x},y)\in\mathcal{D}\backslash\mathcal{D}_{b}\}\), where \(G_{x}:\mathcal{X}\rightarrow\mathcal{X}\) and \(G_{y}:\mathcal{Y}\rightarrow\mathcal{Y}\) are the adversary-assigned poisoned instance generator and poisoned label generator, respectively. For example, \(G_{x}(\mathbf{x})=\mathbf{x}+\mathbf{t}\) where \(\mathbf{t}\) is a trigger based on additive noises [45]; \(G_{y}(y)=y_{T}\) where \(y_{T}\) is the target label in all-to-one attacks [5], and \(G_{y}(y)=(y+1)\mod K\) in most of the existing all-to-all attacks [6]. After \(\hat{\mathcal{D}}\) is generated and released, the victim users will use it to train their model \(\mathbf{f_{\theta}}:\mathcal{X}\rightarrow[0,1]^{K}\) via \(\min_{\mathbf{\theta}}\sum_{(\mathbf{x},y)\in\hat{\mathcal{D}}}\mathcal{L}(\mathbf{f_{\theta}}(\mathbf{x}),y)\).
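The following is a minimal, library-agnostic sketch of this dataset construction; the function and variable names are illustrative, and the generators \(G_{x}\), \(G_{y}\) are passed in as plain Python callables.

```python
import random

def build_poisoned_dataset(dataset, g_x, g_y, poisoning_rate, seed=0):
    """Split a benign dataset (list of (x, y) pairs) into D_b and D_p and return
    their union, where a fraction `poisoning_rate` of samples is transformed by
    the poisoned instance generator g_x and relabeled by g_y."""
    rng = random.Random(seed)
    n_poison = int(poisoning_rate * len(dataset))
    poison_idx = set(rng.sample(range(len(dataset)), n_poison))
    benign = [(x, y) for i, (x, y) in enumerate(dataset) if i not in poison_idx]
    poisoned = [(g_x(x), g_y(y)) for i, (x, y) in enumerate(dataset) if i in poison_idx]
    return benign + poisoned

# Example label generators: all-to-one with target label y_T, and all-to-all.
g_y_all_to_one = lambda y, y_T=3: y_T
g_y_all_to_all = lambda y, K=10: (y + 1) % K
```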
### _Attack via Pitch Boosting and Sound Masking_
Arguably, the most straightforward approach to designing pitch-type triggers is to insert sound clips with a very high (or low) frequency in a random position of the victim audio. However, these triggers can be easily filtered out by removing clips with the highest and lowest frequencies. Besides, these triggers are also perceptible to humans since the inserted trigger is most likely different from its surrounding audio clips in the poisoned samples. To tackle these problems, in this paper, we propose to first increase the pitch of selected audio samples and then insert a short yet high-pitched signal to the position with the highest sound energy. This method is dubbed attack via pitch boosting and sound masking (PBSM).
The pitch boosting makes our attack resistant to trigger filtering (as shown in our experiments). The filtering cannot decrease the pitch of poisoned audio since these triggers are natural, although it may remove the high-pitched short signal. Besides, our insertion strategy improves the stealthiness of triggers for both human inspection and machine detection. Specifically, the inserted high-pitched signal is less perceptible to humans due to sound masking while it can bypass classical detection methods based on finding common audio clips since the insert position is usually sample-specific. In other words, different poisoned samples have different insert positions.
Fig. 1: The main pipeline of attacking via our pitch boosting and sound masking (PBSM). The PBSM consists of three main stages, including attack, training, and inference. The attack stage is the core of PBSM, containing two steps (\(i.e.\), pitch boosting and signal injection). In the first step, we exploit short-time Fourier transform to convert the original audio from the time domain to the frequency domain and increase the pitch of the overall audio; In the second step, we identify the position of the highest-amplitude segment in the audio where we insert an adversary-specified high-pitched signal.
In general, our PBSM has two main steps, including **(1)** pitch boosting and **(2)** signal injection, to generate poisoned samples. The details of this process are described in Algorithm 1, and the main pipeline of PBSM is shown in Figure 1.
**Step 1: Pitch Boosting.** A feasible method for pitch boosting is to increase the frequency of selected audio samples. Accordingly, we first perform a short-time Fourier transform (STFT) [46] on the original audio to convert it from the time domain to the frequency domain. After that, in the frequency domain, we multiply the original frequency values by an adversary-specified pitch-shifting coefficient \(p\) (\(p>1\)), leading to a new audio waveform with a boosted pitch. Specifically, we can express the short-time Fourier transform as \(\mathbf{x_{f}}=\mathcal{F}(\mathbf{x})\) (Line 1 in Algorithm 1), where \(\mathbf{x_{f}}\) is the frequency-domain representation of \(\mathbf{x}\). The process of increasing pitch can be expressed as \(\mathbf{x_{P}}=p\cdot\sum_{i=0}^{L_{p}}\mathbf{x_{f}^{(i)}}\) (Line 3 in Algorithm 1). In this equation, \(L_{p}\) represents the number of points in the frequency domain, the transformation factor is \(p=2^{n_{p}/12}\), and \(n_{p}\) denotes the number of semitones (\(i.e.\), the step of pitch shifting).
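As a rough illustration, pitch boosting can be prototyped with an off-the-shelf pitch-shifting routine; the sketch below uses librosa's `pitch_shift` (an assumption made for convenience) rather than the explicit STFT manipulation of Algorithm 1, and the 16 kHz sampling rate matches the dataset used in Section IV.

```python
import librosa

def boost_pitch(audio_path, n_semitones=5):
    """Step 1 of PBSM: raise the pitch of the whole utterance by `n_semitones`,
    i.e. scale frequencies by p = 2 ** (n_semitones / 12)."""
    waveform, sr = librosa.load(audio_path, sr=16000)
    boosted = librosa.effects.pitch_shift(waveform, sr=sr, n_steps=n_semitones)
    return boosted, sr
```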
**Step 2: Signal Injection.** This process consists of two main stages, including **(1)** location identification and **(2)** signal insertion. In the first stage, we identify the location of the high-amplitude segments in the audio signal. We select the high-amplitude clips since they have stronger energy and can provide better masking effects. Specifically, to find these positions, we iterate through each audio segment to identify the position of the segment with the highest energy in the entire audio sample. The position \(T\) of the high-amplitude segment can be obtained by \(T=\operatorname*{argmax}_{i}\big(\sum_{j=i}^{i+L}|\mathbf{x_{P}^{(j)}}|\big)+L\) (Line 5 in Algorithm 1), where \(L\) is the high-amplitude length. In the second stage, we insert an adversary-specified high-pitched signal \(\mathbf{h}\) at the selected position \(T\). Specifically, this process can be denoted by \(\mathbf{x_{r}}=\mathbf{x_{P}^{(T)}}\oplus\mathbf{h}\) (Line 6 in Algorithm 1), where \(\mathbf{x_{r}}\) is the audio signal after signal injection, \(\mathbf{x_{P}^{(T)}}\) is the audio segment at position \(T\), and \(\oplus\) denotes the injection operation with the high-pitched signal \(\mathbf{h}\). We conduct the inverse Fourier transformation \(\mathcal{F}^{-1}\) [46] to obtain poisoned audio with pitch-type triggers by turning frequency-domain signals back to the time domain (Line 7 in Algorithm 1).
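A sketch of the injection step on the time-domain waveform is given below; the trigger frequency and amplitude are illustrative placeholders, not the values used in our experiments.

```python
import numpy as np

def inject_high_pitched_signal(waveform, sr=16000, win_ms=100,
                               tone_freq=7000.0, tone_scale=0.1):
    """Step 2 of PBSM: locate the highest-energy window of length `win_ms` and
    add a short high-pitched tone there, so the louder background masks it."""
    win = int(sr * win_ms / 1000)
    energy = np.convolve(np.abs(waveform), np.ones(win), mode="valid")
    start = int(np.argmax(energy))                       # position of the loudest clip
    t = np.arange(win) / sr
    tone = tone_scale * np.sin(2 * np.pi * tone_freq * t)
    poisoned = waveform.copy()
    poisoned[start:start + win] += tone
    return np.clip(poisoned, -1.0, 1.0)
```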
### _Attack via Voiceprint Selection and Voice Conversion_
To design timbre-type triggers, we can exploit a 'timbre transformer' trained on the audio of an adversary-specified target person (\(e.g.\), the adversary themselves) for voice conversion [47]. Specifically, we can assign the poisoned instance generator \(G\) as the (pre-trained) timbre transformer.
Assume that there are multiple timbre candidates for selection. Arguably, the design of timbre-type single-backdoor attacks is straightforward, where the adversaries can arbitrarily choose any single timbre they desire. However, the design of multi-backdoor attacks is challenging since simply selecting multiple timbres at random to design triggers has limited attack effectiveness (as we will show in the experiments). It is mostly because there can be many similarities between timbres. On the one hand, this similarity makes it harder for DNNs to learn backdoors, since similar poisoned samples have different (target) labels. On the other hand, this similarity may lead to false backdoor activation by attacked models at the inference process. Motivated by these understandings, we propose a _voiceprint selection module_ to alleviate these challenges.
In general, our voiceprint selection module consists of three main stages, including **(1)** feature extraction, **(2)** similarity calculation, and **(3)** timbre selection. The main pipeline of our voiceprint selection and voice conversion (VSVC) is shown in Figure 2. Its technical details are as follows.
**Step 1: Feature Extraction.** Following the most classical method, we exploit X-vectors [48] to extract voiceprint features of each timbre candidate, \(i.e.\), \(\mathbf{S_{e}^{(k)}}\gets V(C_{k})\), where \(C_{k}\) is the speech data for the \(k\)-th speaker, \(V\) denotes the process of extracting X-vectors, converting each speech sample into a \(d\)-dimensional feature vector, and \(\mathbf{S_{e}^{(k)}}\) represents the voiceprint embedding for the \(k\)-th speaker. For \(K\) candidates, we ultimately obtain a matrix \(\mathbf{S_{e}}=[\mathbf{S_{e}^{(1)}},...,\mathbf{S_{e}^{(K)}}]\in\mathbb{R}^{d\times K}\) with \(d\) rows and \(K\) columns (Lines 1-3 in Algorithm 2).
**Step 2: Similarity Calculation.** In this step, we calculate the distance between the features of each timbre pair \((i,j)\) as their similarity. Specifically, to represent the voiceprint distances between \(K\) candidates, we construct a similarity matrix \(Sim\) of size \(K^{2}\), where each element \(Sim[i][j]\) is computed as \(d\left(\mathbf{S_{e}^{(i)}},\mathbf{S_{e}^{(j)}}\right)\) (Lines 4-7 in Algorithm 2) with the distance metric \(d\). In this paper, we assign \(d\) as \(\ell_{2}\)-norm for simplicity.
**Step 3: Timbre Selection.** In this step, we select \(M\) candidates with maximum distances, based on the similarity matrix calculated in the previous step. We design a greedy search method to select suitable candidates (Lines 9 in Algorithm 2). Specifically, we select the two timbres with the greatest distance in the similarity matrix to add to the selected set \(\mathcal{C}_{M}\). After that, we select the timbre that has the greatest distance from all the timbres in the selected set from the remaining candidates and add it to the selected set. We repeat the above process until the selected set \(\mathcal{C}_{M}\) contains \(M\) timbres.
**Step 4: Generating the Poisoned Dataset via Voice Conversion.** In this step, we first train a voice conversion model \(G\) (Line 10 in Algorithm 2), based on the the selected set \(\mathcal{C}_{M}\) obtained in the previous step. For each audio \(\mathbf{x}\), \(G(\mathbf{x},i)\) can convert its timbre to that of \(i\)-th element in \(\mathcal{C}_{M}\). After
that, we select \(M\) adversary-specified target labels \(\{y_{T}^{(i)}\}_{i=1}^{M}\). Each target label is associated with a timbre backdoor. The generated poisoned dataset \(\hat{\mathcal{D}}\) contains \((M+1)\) disjoint subsets, including one benign subset \(\mathcal{D}_{b}\) and \(M\) poisoned subsets (\(i.e.\), \(\{\mathcal{D}_{p}^{(i)}\}_{i=1}^{M}\)). Specifically, \(\mathcal{D}_{p}^{(i)}\triangleq\{(G(\mathbf{x},i),y_{T}^{(i)})|(\mathbf{x},y)\in\mathcal{D}_{s}^{(i)}\}\), where \(\mathcal{D}_{s}^{(i)}\subset\mathcal{D}\) and \(\mathcal{D}_{s}^{(i)}\cap\mathcal{D}_{s}^{(j)}=\emptyset\) (\(\forall i\neq j\)) (Lines 11-14 in Algorithm 2), and \(\mathcal{D}_{b}=\mathcal{D}-\bigcup_{i=1}^{M}\mathcal{D}_{s}^{(i)}\) (Line 15 in Algorithm 2). In particular, \(\gamma_{i}\triangleq\frac{|\mathcal{D}_{s}^{(i)}|}{|\mathcal{D}|}\) is dubbed the poisoning rate of the \(i\)-th timbre-type backdoor.
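To make the selection and poisoning procedure concrete, the sketch below implements the greedy voiceprint selection of Step 3 (interpreted here as maximizing the minimum distance to the already selected set) and the per-backdoor sample selection of Step 4; the voice conversion model `G` is assumed to be available as a callable, and all names are illustrative.

```python
import numpy as np

def select_timbres(embeddings, M):
    """Greedy selection of M timbres whose pairwise l2 distances are large.
    embeddings: (K, d) array of X-vector speaker embeddings."""
    dist = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)     # most distant pair
    selected = [int(i), int(j)]
    while len(selected) < M:
        remaining = [k for k in range(len(embeddings)) if k not in selected]
        best = max(remaining, key=lambda k: dist[k, selected].min())
        selected.append(best)
    return selected

def poison_multi_backdoor(dataset, G, target_labels, rates, seed=0):
    """Assign disjoint subsets D_s^(i) of the benign dataset to the M timbre
    backdoors, convert them with G(x, i), and relabel them with y_T^(i)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(dataset))
    poisoned, cursor = [], 0
    for i, (y_t, rate) in enumerate(zip(target_labels, rates)):
        take = int(rate * len(dataset))
        for idx in order[cursor:cursor + take]:
            x, _ = dataset[idx]
            poisoned.append((G(x, i), y_t))
        cursor += take
    benign = [dataset[idx] for idx in order[cursor:]]
    return benign + poisoned
```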
## IV Experiments
### _Main Settings_
**Dataset Description.** We adopt the most classical benchmark, \(i.e.\), Google Speech Command dataset [49], for our experiments. It consists of 30 common English speech commands. Each command is spoken by multiple individuals in various ways, resulting in a total of 64,728 samples. The dataset has a 16kHz sampling rate where each sample lasts approximately one second. Specifically, we selected 23,726 audios with 10 labels (dubbed 'SPC-10') and 64,721 audios with 30 labels (dubbed 'SPC-30') for a comprehensive comparison.
**Baseline Selection.** We compared our PBSM and VSVC with four representative speech backdoor attacks, including **(1)** position-independent backdoor attack (PIBA) [14], **(2)** dual adaptive backdoor attack (DABA) [17], **(3)** backdoor attack with ultrasonic (dubbed 'Ultrasonic') [15], and **(4)** backdoor attack via style transformation (dubbed 'JingleBack') [43].
**Model Structures.** As the poison-only backdoor attacks, we assume that the adversaries have no information about the victim model. To evaluate the effectiveness across different DNNs, we evaluate all attacks under four classical and advanced DNN structures, including LSTM [50], ResNet-18 [26], KWT [28], and EAT [29]. Specifically, LSTM and ResNet-18 are classical models designed for sequential and non-sequential data, respectively; KWT and EAT are advanced speech recognition models, where KWT exploited transformer structure and EAT was designed in an end-to-end manner.
**Attack Setup.** For all attacks, we set the poisoning rate to 1% and randomly select the 'left' as the target label. For our PBSM method, we increase the pitch by 5 semitones. The length of high-amplitude segments is set to 100 milliseconds. For our VSVC method, we select the VCTK dataset [51] as the timbre candidates dataset and we employ StarGANv2-VC [52] as the voice conversion framework. In particular, we evaluate the single-backdoor VSVC in our main experiments for a fair comparison. The results of multi-backdoor VSVC are included in Section IV-C; For DABA [17] and PIBA [14], we follow the same settings described in their original papers; For the
Fig. 2: The main pipeline of attacking via our voiceprint selection and voice conversion (VSVC). The VSVC consists of three main stages, including attack, training, and inference. The attack stage is the core of VSVC, containing four steps (\(i.e.\), feature extraction, similarity calculation, timbre selection, and voice conversion). In the first step, we adopt X-vectors to extract voiceprint features of each timbre candidate; In the second step, we measure the similarity of each timbre pair based on their distance; In the third step, we select the desired number of timbres based on the principle of smallest similarity; In the fourth step, we generate the poisoned training dataset of the (multi-backdoor) timbre-type attack via voice conversion.
ultrasonic attack [15], we set the duration of the trigger to 100 milliseconds; For JingleBack [43], we exploit the third style used in its paper since it led to the best attack performance. Note that this method may reach better stealthiness if we use other styles introduced in their paper, at the cost of reduced attack effectiveness.
**Training Setup.** We extract the log-Mel spectrogram of each audio sample as an input feature, which can graphically characterize a person's speech feature in a combination of temporal and frequency dimensions. All models are trained for 100 epochs. We set the learning rate of EAT and LSTM as 0.0001 and 0.005, respectively. We set the learning rate of the remaining models as 0.01. As for the optimizer selection, the EAT and KWT models are trained using the Adam optimizer, while the default optimizer for the other models is SGD. We run each experiment three times and calculate their average to reduce the side-effects of randomness.
**Training Facilities.** We conduct all our experiments on a server running Ubuntu 18.04, equipped with a single NVIDIA GeForce RTX 3090 GPU with 24GB of VRAM.
**Evaluation Metrics.** Following the most classical settings in existing works [5], we adopt benign accuracy (BA) and attack success rate (ASR) to evaluate the effectiveness of all attacks. Specifically, the BA measures the proportion of benign testing samples that can be correctly classified, while the ASR denotes the proportion of poisoned testing samples that can be maliciously predicted as the target label. The higher the BA and the ASR, the more effective the attack; To evaluate the stealthiness, we invite 10 people to identify whether the poisoned audios (5 for each attack) of an attack sounded natural. The proportion of poisoned samples that are regarded as natural audios by humans is dubbed natural rate (NC). The higher the NC, the more stealthy the attack.
### _Main Results_
**Attack Effectiveness.** As shown in Tables I-II, the attack success rates of our PBSM and VSVC are sufficiently high (\(>90\%\)) in all cases on both SPC-10 and SPC-30 datasets. The attack performance of our VSVC is on par with or even better than all baseline attacks except for DABA. For example, the ASR of VSVC is 8% higher than that of JingleBack in attacking LSTM and KWT on the SPC-10 dataset. Besides, our attacks have minor adverse effects on benign accuracy. The decreases in benign accuracy compared to the models trained on the benign dataset are less than 1% in all cases for our attacks. In contrast, both DABA and JingleBack have a relatively high impact on benign accuracy. These results verify the effectiveness of our attacks.
**Attack Stealthiness.** We notice that the ASRs of baseline attacks (especially DABA and Ultrasonic) are higher than those of ours in some cases. However, it comes at the expense of stealthiness. As shown in Table III, the natural rates of all baseline attacks other than Ultrasonic are significantly lower than our PBSM and VSVC. For example, the natural rates of PIBA, DABA, and JingleBack are all 0% while those of our PBSM and VSVC are near 100%. Ultrasonic has a similar natural rate to that of benign samples simply because humans
TABLE II: The benign accuracy (BA) and attack success rate (ASR) of methods on the SPC-30 dataset.

| Model | Metric | No Attacks | PIBA | DABA | Ultrasonic | JingleBack | PBSM (Ours) | VSVC (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSTM | BA (%) | 92.62 | 92.51 | 91.18 | 92.13 | 92.57 | 92.56 | 91.91 |
| LSTM | ASR (%) | – | 95.04 | 99.16 | 98.12 | 98.45 | 96.21 | 98.01 |
| ResNet-18 | BA (%) | 95.20 | 93.21 | 92.13 | 94.32 | 94.76 | 94.71 | 94.85 |
| ResNet-18 | ASR (%) | – | 98.34 | 99.98 | 97.53 | 93.39 | 96.63 | 93.01 |
| KWT | BA (%) | 91.13 | 90.62 | 89.19 | 90.33 | 90.20 | 90.45 | 90.21 |
| KWT | ASR (%) | – | 94.21 | 99.45 | 97.13 | 93.54 | 94.02 | 97.03 |
| EAT | BA (%) | 94.51 | 94.33 | 93.13 | 94.23 | 94.35 | 94.01 | 94.38 |
| EAT | ASR (%) | – | 92.12 | 99.43 | 95.32 | 81.06 | 92.51 | 93.12 |
TABLE I: The benign accuracy (BA) and attack success rate (ASR) of methods on the SPC-10 dataset.

| Model | Metric | No Attacks | PIBA | DABA | Ultrasonic | JingleBack | PBSM (Ours) | VSVC (Ours) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LSTM | BA (%) | 93.68 | 93.54 | 92.13 | 93.21 | 92.63 | 93.32 | 93.43 |
| LSTM | ASR (%) | – | 95.23 | 99.76 | 98.61 | 91.31 | 92.11 | 99.61 |
| ResNet-18 | BA (%) | 95.11 | 94.32 | 94.10 | 94.97 | 94.55 | 94.85 | 94.93 |
| ResNet-18 | ASR (%) | – | 96.43 | 99.87 | 99.33 | 95.52 | 95.78 | 97.57 |
| KWT | BA (%) | 91.35 | 90.21 | 90.10 | 91.11 | 91.19 | 91.27 | 90.96 |
| KWT | ASR (%) | – | 96.24 | 99.54 | 97.13 | 91.52 | 94.39 | 99.22 |
| EAT | BA (%) | 93.33 | 93.21 | 92.61 | 93.12 | 93.10 | 93.23 | 93.31 |
| EAT | ASR (%) | – | 97.32 | 99.21 | 99.12 | 87.39 | 90.13 | 92.32 |
TABLE III: The natural rates (%) calculated by human validation of samples generated by different methods.

| Benign | PIBA | DABA | Ultrasonic | JingleBack | PBSM (Ours) | VSVC (Ours) |
|---|---|---|---|---|---|---|
| 100 | 0 | 0 | 100 | 0 | 98 | 100 |
cannot hear ultrasound. However, this does not mean that the attack is stealthy: victim users can still easily identify it by checking the spectrogram of samples (as shown in the area of the black dashed box in Figure 3). Users can also filter out ultrasonic trigger signals to suppress this attack. These results verify the stealthiness of our attacks.
In conclusion, our attacks preserve high effectiveness while ensuring stealthiness. In contrast, existing baseline methods can be easily detected and defended against.
### _Ablation Study_
In this section, we discuss the effects of key parameters of our PBSM and VSVC, including the target label, poisoning rate, high-pitch signal, and timbre. We adopt SPC-10 as an example for our discussion. Unless otherwise specified, all settings are consistent with those stated in Section IV-A.
**Effects of the Poisoning Rate.** To explore the influence of the poisoning rate on our attacks, we conduct experiments with poisoning rates ranging from 0.5% to 2.0% against all four model structures. As shown in Figure 4, the attack success rates (ASRs) of both PBSM and VSVC increase with the poisoning rate, and our attacks already reach promising attack performance when poisoning only 1% of the training samples. However, the benign accuracy (BA) decreases to some extent as the poisoning rate increases, \(i.e.\), there is a trade-off between ASR and BA. Adversaries should therefore choose a suitable poisoning rate based on their needs.
**Effects of the Target Label.** To verify that our PBSM and VSVC are still effective under different target labels, we
Fig. 3: The spectrograms of different samples. In this example, we present the visualization of two benign audios (with the labels 'left' and 'right') and their poisoned versions generated by different attacks. In particular, the black dashed box indicates the area where one can easily identify the abnormality.
conduct experiments with ResNet. As shown in Figure 6, the attack success rates of both PBSM and VSVC are similar across all evaluated target labels. Specifically, the ASRs are larger than 93% in all cases, while the decrease of benign accuracy compared to 'no attack' is less than 1%. These results show that target labels have minor effects on our attacks. The adversaries can select any target class based on their needs.
**Effects of the Pitch Boosting.** In this part, we show that the pitch boosting used in our PBSM can itself serve as a pitch-type trigger and explore its effects. Specifically, we increase the boosted pitch from one semitone to seven semitones and evaluate the attack success rate (ASR). Example spectrograms of samples boosted by different numbers of semitones are shown in Figure 5. As shown in Table IV, the ASR increases with the number of boosted semitones, as expected. Specifically, the ASRs are larger than 80% in three out of the four cases when we boost five semitones. However, we have to notice that excessive pitch boosting leads to significant sound distortion and therefore decreases attack stealthiness.
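As an illustration of the pitch-boosting step alone, the sketch below raises the pitch of a waveform by a chosen number of semitones using librosa; the function name and the 16 kHz sampling rate are our assumptions, and the full PBSM trigger additionally inserts the short high-pitch signal discussed below.

```python
import librosa

def boost_pitch(wav_path, n_semitones=5, sr=16000):
    """Load a speech clip and raise its pitch by `n_semitones` semitones."""
    y, sr = librosa.load(wav_path, sr=sr)
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=n_semitones)
```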
**Effects of the Short-duration High-pitch Signal.** To verify that inserting a high-pitch signal is critical for our PBSM, we compare its attack success rate to that of its pitch-only variant, where we only increase the pitch without adding the high-pitch signal. As shown in Table V, although the pitch-only method has some attack effect, introducing the high-pitch signal significantly improves attack effectiveness. Specifically, the attack success rates of PBSM are more than 10% higher
TABLE IV: The attack success rate (%) \(w.r.t.\) different boosted semitones on the SPC-10 dataset.

| Semitones | LSTM | ResNet-18 | KWT | EAT |
|---|---|---|---|---|
| 1 | 5.13 | 33.61 | 38.17 | 37.91 |
| 3 | 70.61 | 69.70 | 79.08 | 46.17 |
| 5 | 80.74 | 85.65 | 82.13 | 73.09 |
| 7 | 86.09 | 89.35 | 83.19 | 81.61 |
Fig. 4: The performance of our PBSM and VSVC on the SPC-10 dataset under different poisoning rates.
Fig. 5: The spectrograms of samples whose pitch is boosted by different numbers of semitones. In this example, we present the visualization of two benign audios (with the labels 'left' and 'right') and their boosted versions.
Fig. 6: The effects of the target label on our PBSM and VSVC attacks on the SPC-10 dataset.
than those of its pitch-only variant in all cases. These results verify the effectiveness of our PBSM.
**Effects of the Timbre.** To verify that our VSVC is still effective with different timbres, we conduct experiments on the SPC-10 dataset. Example spectrograms of samples with different timbres are shown in Figure 7. As shown in Table VI, the ASRs of VSVC are similar across all evaluated timbres. Specifically, the ASRs are larger than 91% in all cases, while the decrease in benign accuracy compared to 'no attack' is only about 1%. These results indicate that timbre selection has only a mild effect on our attack. Adversaries can select any timbre based on their needs.
**Effects of the Voiceprint Selection.** To verify that voiceprint selection is critical for our VSVC under the multi-backdoor setting, we compare its attack success rate to that of a variant where timbre candidates for voice conversion are selected randomly. In these experiments, we select three timbre candidates. As shown in Table VII, although the random-selection variant also has some attack effect, introducing voiceprint selection significantly improves attack effectiveness. Specifically, the attack success rates of VSVC are 5% higher than those of its random-selection variant in almost all cases. These results verify the effectiveness of the voiceprint selection introduced in our VSVC.
### _The Resistance to Potential Defenses_
Currently, there are many backdoor defenses designed to reduce backdoor threats in image classification tasks [53, 54, 55]. However, most of them cannot be directly used in audio tasks since they are designed for the image domain. Accordingly, in this paper, we evaluate our attacks under three classical and representative cross-domain defenses, including model pruning [56], fine-tuning [57], and trigger filtering. We conduct experiments with the ResNet-18 model on the SPC-10 dataset for simplicity. Unless otherwise specified, all other settings are the same as those illustrated in Section IV-A.
**The Resistance to Fine-tuning.** As a representative backdoor-removal method, fine-tuning [57] intends to remove model backdoors by fine-tuning the model with a few local benign samples. This method is motivated by the catastrophic forgetting property [58] of DNNs. In our experiments, we exploit 10% of the benign training samples as our benign data and set the learning rate to 0.005. As shown in Figure 8, the attack success rate decreases as the number of tuning epochs increases. However, even
TABLE V: The attack success rate (%) of the pitch-only attack and the PBSM attack on the SPC-10 dataset.

| Method | LSTM | ResNet-18 | KWT | EAT |
|---|---|---|---|---|
| Pitch-Only | 80.74 | 85.65 | 82.13 | 73.09 |
| PBSM | **92.11** | **95.78** | **94.39** | **90.13** |
TABLE VI: The attack success rate (%) of our VSVC attack with different timbres on the SPC-10 dataset.

| Timbre | Metric | LSTM | ResNet-18 | KWT | EAT |
|---|---|---|---|---|---|
| (a) | BA (%) | 93.56 | 94.88 | 91.04 | 93.13 |
| (a) | ASR (%) | 98.52 | 97.51 | 98.71 | 91.33 |
| (b) | BA (%) | 93.32 | 94.76 | 91.36 | 93.21 |
| (b) | ASR (%) | 99.08 | 98.53 | 98.81 | 93.11 |
| (c) | BA (%) | 92.88 | 94.23 | 90.98 | 92.89 |
| (c) | ASR (%) | 97.60 | 96.65 | 97.87 | 92.30 |
| (d) | BA (%) | 93.15 | 94.22 | 90.77 | 92.78 |
| (d) | ASR (%) | 98.15 | 96.73 | 98.69 | 92.14 |
| (e) | BA (%) | 92.61 | 94.35 | 91.33 | 92.39 |
| (e) | ASR (%) | 99.17 | 98.92 | 99.08 | 94.47 |
TABLE VII: The performance of VSVC without and with voiceprint selection under the multiple-backdoor setting.

| Method | Metric | LSTM | ResNet-18 | KWT | EAT |
|---|---|---|---|---|---|
| VSVC (w/o) | BA (%) | 91.23 | 94.58 | 88.54 | 91.43 |
| VSVC (w/o) | ASR (%) | 98.10 | 91.24 | 92.34 | 87.65 |
| VSVC (w/) | BA (%) | 92.05 | 95.05 | 90.13 | 93.14 |
| VSVC (w/) | ASR (%) | 92.77 | 97.78 | 97.03 | 93.78 |
Fig. 7: The spectrograms of samples with different timbres. In this example, we present the visualization of two benign audios (with the labels 'left' and 'right') and their variants with different timbres.
at the end of this process, the ASRs are still larger than 45% for both our PBSM and VSVC. These results verify that our attacks are resistant to fine-tuning to a large extent.
**The Resistance to Model Pruning.** As another representative backdoor-removal defense, model pruning [56] aims to remove model backdoors by pruning neurons that are dormant during the inference of benign samples. This method is motivated by the assumption that backdoor and benign neurons are mostly separated in attacked DNNs. As shown in Figure 9, the attack success rates are significantly decreased only when pruning a large number of neurons. However, this comes at the cost of a sharp decrease in benign accuracy. Specifically, the ASR decreases by almost the same amount as the BA for both PBSM and VSVC. This is mostly because the assumption of model pruning does not hold in our attacks due to their global and complex trigger designs. These results verify the resistance of our attacks to model pruning.
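A minimal sketch of this pruning defense is given below, assuming a PyTorch model and a chosen convolutional layer with 4-D activations; the channel-wise activation statistic and the pruning ratio are our simplifications of the procedure in [56], not its exact implementation.

```python
import torch

@torch.no_grad()
def prune_dormant_channels(model, conv_layer, benign_loader, prune_ratio=0.2, device="cpu"):
    """Zero the filters of `conv_layer` whose mean activation on benign data is lowest."""
    stats = []
    hook = conv_layer.register_forward_hook(
        lambda module, inputs, output: stats.append(output.abs().mean(dim=(0, 2, 3)).cpu()))
    model.eval()
    for x, _ in benign_loader:        # loader assumed to yield (waveform/feature, label) pairs
        model(x.to(device))
    hook.remove()
    mean_act = torch.stack(stats).mean(dim=0)                       # one value per output channel
    dormant = torch.argsort(mean_act)[: int(prune_ratio * mean_act.numel())]
    conv_layer.weight.data[dormant] = 0.0                           # prune the most dormant channels
    if conv_layer.bias is not None:
        conv_layer.bias.data[dormant] = 0.0
    return dormant
```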
**The Resistance to Trigger-removal Defense.** To deactivate a potential backdoor in attacked DNNs, defenders may remove high-pitched signals, low-pitched signals, and noise from a suspicious testing audio in order to remove potential trigger patterns. Obviously, this method has minor effects on our VSVC, since its poisoned samples are generated by changing global features. However, it may defeat our PBSM, since we inject a high-pitched signal after boosting the pitch. Accordingly, we examine whether our PBSM attack is still effective when PBSM-infected DNNs are queried with pitch-boosted samples that do not contain the high-pitch signal. As shown in Table VIII, our attack still reaches satisfactory attack success rates (\(>65\%\)) even without the high-pitch signal. This is mostly because the boosted pitch can also serve as a trigger pattern (as mentioned in Section III-B), which cannot be removed by trigger filtering. This again verifies the resistance of our attacks.
### _Discussions_
In this section, we discuss the attack effectiveness of our methods under more difficult settings.
**Attacks under the Clean-Label Setting.** Although our attacks are imperceptible, the label of a poisoned sample usually differs from that of its clean version. Accordingly, users may identify the attack by inspecting the audio-label relation if they catch some poisoned samples. To further demonstrate the effectiveness of our methods, we explore whether they remain effective under the clean-label setting. In these experiments, we only select samples from the target class for poisoning, instead of sampling data from all classes and changing their labels to the target one. As shown in Figure 10, although the performance is weaker than under the poisoned-label setting, our attacks are still effective when poisoning 9% of the samples. Specifically, the average ASRs across all model structures are 81% for PBSM and 73% for VSVC. These results verify the effectiveness of our PBSM and VSVC under the clean-label setting.
**Attacks under the Over-the-Air (Physical) Setting.** To evaluate the effectiveness of our attack methods in real-world scenarios, we design a physical experiment to assess their performance under the over-the-air setting. Specifically, we conduct these experiments in a room, where computer speakers play the backdoored audio and a smartphone serves as the recording device. The recorded audio is fed into the attacked DNNs for prediction. The playback volume is similar to that of a normal conversation, and the smartphone is placed at a distance of 0.5 meters from the speaker. As shown in Figure 11, although the performance is weaker than under the digital setting, our attacks are still effective in the real world. Specifically, the
Fig. 8: The resistance of our PBSM and VSVC to fine-tuning.
Fig. 10: Clean-Label Attacks. Fig. 11: Over-the-Air Attacks.
Fig. 9: The resistance of our attacks to model pruning.
TABLE VIII: The attack success rate (%) of PBSM-infected DNNs on pitch-boosted samples with (w/) and without (w/o) injecting the high-pitch signal on the SPC-10 and SPC-30 datasets.

| Dataset | PBSM (w/o) | PBSM (w/) |
|---|---|---|
| SPC-10 | 65.04% | 95.78% |
| SPC-30 | 70.62% | 96.63% |
average ASRs across all model structures of PBSM and VSVC are 53% and 80%, respectively. The lower ASR of the PBSM is mostly due to the limitations of our evaluated device, which may not effectively capture high-pitched signals.
**Attacks under the All-to-All Setting.** To further illustrate the effectiveness of our PBSM and VSVC, we extend the all-to-one attack setting to a more challenging all-to-all one, where the target label \(y_{t}\) of a poisoned sample with ground-truth class \(y\) is set to \(y_{t}=(y+1)\bmod K\). In particular, we increase the poisoning rate to 15% due to the difficulty of this task. We conduct experiments on the SPC-10 dataset with ResNet-18. As shown in Table IX, both PBSM and VSVC reach promising performance against samples from all classes, although the performance fluctuates mildly across them. These results confirm the feasibility of our attacks under the all-to-all setting.
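As a small sketch of how poisoned labels are assigned under this setting (the trigger itself must still be applied to the selected waveforms), the label mapping could look as follows; the sampling procedure and seed are illustrative assumptions.

```python
import random

def relabel_all_to_all(labels, num_classes, poisoning_rate=0.15, seed=0):
    """Select a random subset of training samples and set each target label to (y + 1) mod K."""
    rng = random.Random(seed)
    idx = rng.sample(range(len(labels)), int(poisoning_rate * len(labels)))
    poisoned = list(labels)
    for i in idx:
        poisoned[i] = (poisoned[i] + 1) % num_classes
    return poisoned, idx
```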
### _Analyzing Attacks in the Hidden Feature Space_
In this section, we analyze why our PBSM and VSVC attacks are effective by examining the behaviors of samples in the hidden feature space of attacked DNNs.
**Settings.** We visualize the features of poisoned samples generated by the backbone (\(i.e.\), the input of the fully-connected layers) of attacked DNNs via t-SNE [59]. For simplicity, we adopt 2,500 samples and exploit ResNet-18 trained on the SPC-10 dataset for our analysis.
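A sketch of this visualization, assuming the backbone features have already been extracted into NumPy arrays, is given below; scikit-learn's t-SNE and the colouring scheme are our choices rather than the exact plotting code used for Figure 12.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_feature_space(features, labels, is_poisoned):
    """Project (N, D) backbone features to 2-D; benign samples coloured by class, poisoned in black.
    `labels` is an integer array and `is_poisoned` a boolean array of length N."""
    emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(features)
    benign = ~is_poisoned
    plt.scatter(emb[benign, 0], emb[benign, 1], c=labels[benign], s=5, cmap="tab10")
    plt.scatter(emb[is_poisoned, 0], emb[is_poisoned, 1], c="black", s=5, label="poisoned")
    plt.legend()
    plt.show()
```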
**Results.** As shown in Figure 12, poisoned samples (marked in black) cluster together regardless of their ground-truth labels, whereas benign samples form separate clusters according to their ground-truth classes. These phenomena are consistent with the predicted behavior of the attacked model, which 'assigns' the same label to all samples in the same cluster. These results also verify the effectiveness of our attacks, showing that they force attacked DNNs to learn the trigger features and ignore the benign features. This enables attacked DNNs to minimize the distance between poisoned samples in the feature space and to associate the learned trigger-related features with the target label.
## V Conclusion
In this paper, we revealed that almost all existing poison-only backdoor attacks against speech recognition are not stealthy due to their simple trigger designs. To overcome this deficiency, we proposed two simple yet effective attacks, pitch boosting and sound masking (PBSM) and voiceprint selection and voice conversion (VSVC), inspired by the elements of sound. Our attacks generate more 'natural' poisoned samples and are therefore more stealthy. We also generalized and evaluated our attacks under more difficult settings, such as the all-to-all, clean-label, and physical ones, although we notice that the attack performance may degrade in some cases under these settings. We will explore how to alleviate this problem and design defense countermeasures in our future work. We hope that our research can provide a deeper understanding of stealthy backdoor attacks in speech recognition, to facilitate the design of more secure and robust speech recognition models.
|
2303.08927 | Azimuthal C/O Variations in a Planet-Forming Disk | The elemental carbon-to-oxygen ratio (C/O) in the atmosphere of a giant
planet is a promising diagnostic of that planet's formation history in a
protoplanetary disk. Alongside efforts in the exoplanet community to measure
C/O in planetary atmospheres, observational and theoretical studies of disks
are increasingly focused on understanding how the gas-phase C/O varies both
with radial location and between disks. This is mostly tied to the icelines of
major volatile carriers such as CO and H2O. Using ALMA observations of CS and
SO, we have unearthed evidence for an entirely novel type of C/O variation in
the protoplanetary disk around HD 100546: an azimuthal variation from a
typical, oxygen-dominated ratio (C/O=0.5) to a carbon-dominated ratio
(C/O>1.0). We show that the spatial distribution and peculiar line kinematics
of both CS and SO molecules can be well-explained by azimuthal variations in
the C/O ratio. We propose a shadowing mechanism that could lead to such a
chemical dichotomy. Our results imply that tracing the formation history of
giant exoplanets using their atmospheric C/O ratios will need to take into
account time-dependent azimuthal C/O variations in a planet's accretion zone. | Luke Keyte, Mihkel Kama, Alice S. Booth, Edwin A. Bergin, L. Ilsedore Cleeves, Ewine F. van Dishoeck, Maria N. Drozdovskaya, Kenji Furuya, Jonathan Rawlings, Oliver Shorttle, Catherine Walsh | 2023-03-15T20:49:29Z | http://arxiv.org/abs/2303.08927v1 | # Azimuthal C/O Variations in a Planet-Forming Disk
###### Abstract
The elemental carbon-to-oxygen ratio (C/O) in the atmosphere of a giant planet is a promising diagnostic of that planet's formation history in a protoplanetary disk. Alongside efforts in the exoplanet community to measure C/O in planetary atmospheres, observational and theoretical studies of disks are increasingly focused on understanding how the gas-phase C/O varies both with radial location and between disks. This is mostly tied to the icelines of major volatile carriers such as CO and H\({}_{2}\)O. Using ALMA observations of CS and SO, we have unearthed evidence for an entirely novel type of C/O variation in the protoplanetary disk around HD 100546: an _azimuthal_ variation from a typical, oxygen-dominated ratio (C/O\(\sim 0.5\)) to a carbon-dominated ratio (C/O\(\gtrsim\)1.0). We show that the spatial distribution and peculiar line kinematics of both CS and SO molecules can be well-explained by azimuthal variations in the C/O ratio. We propose a shadowing mechanism that could lead to such a chemical dichotomy. Our results imply that tracing the formation history of giant exoplanets using their atmospheric C/O ratios will need to take into account time-dependent azimuthal C/O variations in a planet's accretion zone.
We present a remarkable _azimuthal_ C/O variation in the planet-forming disk around HD 100546, the first time such a variation has been observed.
## 1 Results
### Data & observational findings
We present a new detection of CS in HD 100546, alongside ALMA SO observations first presented in (Booth et al., 2022) (Figures 1 and 2). HD 100546 is a well-studied \(2.49\pm 0.02\)\(M_{\odot}\) Herbig Be star, with an estimated age of \(\sim 5\) Myr (Arun et al., 2019), and distance of \(110\pm 1\) pc. The star hosts a bright disk with a central dust cavity out to 13 au and another dust gap between \(r\sim 40\) - 150 au, bounded on both sides by dust rings (Walsh et al., 2014; Fedele et al., 2021). Multiple observations provide evidence for at least two planetary candidates within the disk at \(r\sim 13\) and 55 au (Walsh et al., 2014; Quanz et al., 2015; Currie et al., 2015; Pinilla et al., 2015).
CS is detected at 9\(\sigma\) confidence (\(0.90\pm 0.10\) Jy beam\({}^{-1}\) km s\({}^{-1}\) at the emission peak, as measured from the integrated intensity map), using the Atacama Compact Array (ACA) in Cycle 4 (Figure 2, top right). The emission is essentially unresolved, since the beam size (4.78") is almost equal to the radial extent of the gas disk, as traced by CO (Walsh et al., 2017). Fitting a Gaussian, we find the peak of the CS emission to be significantly offset from the host star by \(\sim 1\)". We confirm that the offset is related to a physical characteristic of the source, rather than a pointing error (Supplementary Information 1). We determine the radial separation between the peak of the emission and the host star by exploiting the known inclination and position angle of the disk to deproject the image, finding the emission peak to be radially offset \(\sim 100\) au from the source.
We complement the new CS detection with high resolution Cycle 7 observations of SO, first presented in (Booth et al., 2022) (Figure 2, top left). The emission primarily emanates from the inner dust cavity and the inside edge of the dust ring (\(r\sim 13\) au), displaying a clear azimuthal brightness asymmetry, where emission from the eastern side of the disk is a factor of \(\sim 2\) brighter than from the western side.
The distinct morphologies of the SO and CS emission are mirrored in their respective spectral lines (Figure 3), which have unusual and disparate velocity profiles. The SO line is broad (FWZI \(\sim 15\) km s\({}^{-1}\)) and asymmetric, with a prominent blueshifted Keplerian peak at \(\sim-7.5\) km s\({}^{-1}\). In contrast, the CS emission is narrower (FWZI \(\sim 10\) km s\({}^{-1}\)) and sharply peaked in the red (\(\sim+1.5\) km s\({}^{-1}\)).
In summary, the CS and SO emission display clear azimuthal asymmetries in their spatial morphologies, and peculiar spectral line profiles. It is striking that emission from each of these species appears to emanate from distinct and opposite azimuthal regions of the disk. We argue that these features can be fully explained by chemistry resulting from azimuthal C/O variations in the disk.
### Modelling
To investigate the origin of the spatial and spectral asymmetries in the CS and SO emission, we ran source-specific models using the 2D physical-chemical code DALI (Bruderer et al., 2012; Bruderer, 2013). The disk chemical composition is obtained from a chemical network simulation, in which the gas-grain chemistry is solved time-dependently. Our model uses a geometry outlined in Figure 4, in which the disk is composed of two chemically distinct regions. The majority of the disk has a composition consistent with previous studies of HD 100546, in which C/O=0.5 (Kama et al., 2016). We vary the composition in a small angular region of the disk, such that C/O is elevated within an azimuthally localized 'wedge' (C/O>1), dictated by variations in the gas-phase H\({}_{2}\)O, CO, and atomic O abundances (see Supplementary Information 5).
We explored a wide parameter space, taking into account a range of wedge sizes and azimuthal locations, carbon and oxygen abundances, and chemical timescales. The model presented here incorporates a high-C/O wedge extending azimuthally 60\({}^{\circ}\), centered ten degrees north of west. Modelled integrated intensity maps for both the CS 7-6 and stacked SO \(7_{7}-6_{6}\) + \(7_{8}-6_{7}\) emission are presented in Figure 2 (lower panel). Our model reproduces the CS emission morphology well. The emission peak is significantly offset towards the west (\(\sim 75\) au deprojected), with a peak flux that matches the observation within a factor of \(\sim 1.1\). The brightness asymmetry in the SO emission is also reproduced by our model, peaking towards the southwest at a radial separation of \(\sim 20\) au. The peak flux matches the observations within a factor of \(\sim 1.2\). A more refined model may be needed to fully reproduce the SO brightness distribution, although we note that the precise distribution can vary depending on the parameters used for the data reduction (Booth et al., 2022). The east/west asymmetry is always
Figure 1: Azimuthal disparity of SO and CS emission. ALMA 870 μm continuum emission map (Fedele et al., 2021), overlaid with SO \(7_{7}-6_{6}+7_{8}-6_{7}\) emission contours (green) and CS 7-6 emission contours (white). The continuum emission has been scaled by \(r^{1}\) to highlight emission in the outer disk. Contours are logarithmically spaced up to the peak flux. The position of the star is denoted by the yellow cross. Beam sizes are \(\sim\)0.18"/20 au (SO) and \(\sim\)4.78"/525 au (CS).
maintained, and is the key feature reproduced by our model.
The modelled spectral lines are shown in Figure 3. The modelled CS line profile includes a prominent narrow single peak, red-shifted \(\sim 2\) km s\({}^{-1}\) from the source velocity, matching the observation to within \(\sim 0.5\) km s\({}^{-1}\). While it is possible to match the peak location more precisely by changing the orientation of the high-C/O wedge, this results in a slightly lower peak flux (Supplementary Information 3). The modelled SO line profile reproduces both the double-peaked structure and peak flux value. The linewidth is a close match to the observation, where the flux density in the blue shifted component is \(\sim 1.5\times\) greater than that of the red-shifted component.
Modelled CS and SO abundance maps are presented in Supplementary Information 2. We note that our model predicts that the bulk of the SO emission emanates from the dust ring just outside the cavity, whereas (Booth et al., 2022) used the line kinematics to infer that it originates primarily from within the cavity itself. One possible explanation is that our model lacks a smooth transition between the cavity and dust ring, instead having a sharp boundary. Additionally, the inclination and stellar mass used in our model differs from the values reported in (Booth et al., 2022). Literature values vary between \(i=32-44^{\circ}\)(Walsh et al., 2014; Pineda et al., 2019) and \(M_{*}=2.2-2.5~{}M_{\odot}\)(Pineda et al., 2019; Arun et al., 2019; Wichitranakom et al., 2020) which can result in variations of the predicted inner edge of the SO emission (\(\sim 9-18\) au).
Figure 2: **Detected and modelled SO and CS emission in HD 100546**. _Top left_: Integrated intensity map of the stacked SO \(7_{7}-6_{6}\) and \(7_{8}-6_{7}\) transitions observed with ALMA (Booth et al., 2022). _Top right_: Integrated intensity map of the CS 7-6 transition observed with ACA, with a 1\(\sigma\) clip. _Bottom left_: Modelled stacked SO \(7_{7}-6_{6}\) and \(7_{8}-6_{7}\) integrated intensity map. _Bottom right_: Modelled CS 7-6 integrated intensity map. The position of the star is indicated with the green 'x'.
## 2 Discussion
We have shown that the emission from CS and SO in the HD \(100546\) protoplanetary disk can be well-reproduced by a chemical model that incorporates an azimuthal C/O variation. This adds a new dimension of complexity to the relationship between C/O in disks and their planetary progeny. We now aim to understand the origin of this novel type of chemical dichotomy.
The depletion of volatile elemental carbon and oxygen is ubiquitous in protoplanetary disks around both T Tauri and Herbig Ae/Be stars. For instance, disks around AS 209, MWC 480, and HD 163296 all exhibit sub-stellar C/H and O/H ratios (Bosman et al., 2021). Oxygen is typically more depleted than carbon due to its removal from the disk atmosphere through the freezing out of water onto large dust grains, resulting in elevated C/O ratios (\(\sim 2\)). However, our modelling of HD \(100546\) finds no evidence of significant oxygen depletion relative to carbon in this disk, with a best-fit _disk-averaged_ C/O ratio < 1. The majority of the disk is warm enough to preclude freeze-out of CO and CO\({}_{2}\) (Supplementary Information 4), and water loss through freeze-out onto large grains is tempered by the presence of a dust cavity where large grains are heavily depleted (small grains are not as significant for _permanent_ freeze-out, as they cycle vertically within the disk due to turbulence, releasing their ices upon each return to the disk atmosphere). Photodesorption also limits the extent of the water snow surface (Figure 5). Analysis of water emission from HD 100546 reveals that a high H\({}_{2}\)O abundance in the photodesorption region is necessary to match observations, with line kinematics indicating that the emission extends out to r\(\sim 300\) au (Pirovano et al., 2022; van Dishoeck et al., 2021). Therefore, we expect the majority of the HD \(100546\) disk to have a gas-phase C/O ratio closer to 0.5, and attribute the observed asymmetries to a region of elevated C/O localized in azimuth.
The main feature we have identified is an azimuthally confined zone of elevated CS abundance, coincident with a region of depleted SO, which we ascribe to a local enhancement in the C/O ratio (> 1). How could this come about? Asymmetries in the structure of both dust and gas are common in protoplanetary disks, particularly transition disks like HD \(100546\) which have large central cavities (Francis and van der Marel, 2020). Dust asymmetries are often attributed to the trapping of millimeter-sized grains in vortices formed by the Rossby Wave Instability (Lovelace et al., 1999), induced by a planetary companion. Vortices can also form through various hydrodynamical instabilities, or at the edges of low-viscosity 'dead-zones'. In recent years, high-resolution ALMA observations have enabled comparisons between dust asymmetries and molecular gas at small scales. While several studies have drawn tentative links (Law et al., 2021; Zhang et al., 2021; Guzman et al., 2021; Alarcon et al., 2021; Ilee et al., 2021; van der Marel et al., 2021), there is often no clear connection to be made. Furthermore, asymmetries observed in a particular species are often not observed in other species or transitions of the same species within the same disk. The physical mechanisms responsible for gas asymmetries cannot therefore be easily attributed to vortices.
In HD 100546, gas-phase asymmetries have previously been observed in a range of molecular species, including various CO transitions (e.g. Panic et al., 2010; Kama et al., 2016; Miley et al., 2019), OH (Fedele et al., 2015), and SO (Booth et al., 2017). These have often been attributed to temperature variations which are thought to result from obscuration by a warped inner dust disk (Panic et al., 2010; Walsh et al., 2017), such that one side of the disk is 10-20 K cooler than the other. In such a scenario, an azimuthal variation in the temperature structure could have significant impact on the disk chemistry, resulting in azimuthal variations in molecular abundances (Young et al., 2021). However, near-IR observations at small spatial scales (\(\sim\)1 au) find no evidence supporting an inclined inner dust disk (\(r<1\) au) (Garufi et al., 2016; Follette et al., 2017; Lazareff et al., 2017; Bohn et al., 2022), which may indicate that the structure of the gas and dust is significantly different within the inner few au. Currently, no physical mechanism
Figure 3: _Left:_ Observed stacked SO \(7_{7}-6_{6}\) and \(7_{8}-6_{7}\) spectrum extracted using a 0.6" elliptical mask (grey) and model (blue). _Right:_ Observed CS 7-6 spectrum extracted using a 5" elliptical mask smoothed by the beam (grey) and model (blue). Line profile velocities have been corrected for the source velocity (\(V_{\rm LSRK}=5.7\) km s\({}^{-1}\)).
is known to cause such decoupling between the gas and small dust grains, suggesting that another process might be at play.
An alternative scenario is that asymmetric emission is connected to on-going planet formation within HD 100546's inner cavity. (Booth et al., 2022) propose that the observed SO asymmetry may be tracing shocked gas in the vicinity of a circumplanetary disk. Indeed, the peak of the observed SO emission is cospatial with the location of a protoplanet candidate inferred from scattered light images (Currie et al., 2015) and excess CO emission (Brittain et al., 2019). Comparing Cycle 7 and Cycle 0 spectra of SO emission provides further evidence of a newly-forming planet, as the shift in emission peak is consistent with a hot-spot of molecular gas in orbital rotation within the inner cavity (Booth et al., 2017, 2022). Our findings do not rule out this possibility. While our model can be modified to account for an additional component of SO emission related to a CPD, a CPD alone is not able to account for the asymmetries observed in both the SO and CS. Both the kinematics and the spatially resolved SO emission are consistent with a Keplerian protoplanetary disk. The spatial distribution of the emission indicates that any contribution from a CPD to the total SO flux would be relatively small.
This leads us to propose an alternative explanation, consistent with an azimuthal C/O variation. We suggest that an overdensity of dust associated with the inner protoplanet casts a shadow on an azimuthally localised region of the outer disk. This causes dust temperatures in the disk atmosphere to decrease, which leads to additional H\({}_{2}\)O freeze-out on grain surfaces. In turn, this locks a significant fraction of gas-phase oxygen into ices, causing the local gas-phase C/O ratio to become super-solar. As the disk chemistry rebalances, SO is rapidly destroyed while CS is rapidly formed, on timescales shorter than the shadow transit time.
Falsifying this hypothesis using past observations is not straightforward. Our understanding of dust substructures within HD 100546 is shaped by both infrared and (sub)millimeter observations, which contain many features. These include spiral arms, dark and bright azimuthal wedges, emission hotspots (Garufi et al., 2016; Sissa et al., 2018), and an inner ring sculpted by a maze of ridges and trenches (Perez et al., 2020; Fedele et al., 2021). We note that the location of the protoplanet within HD 100546's inner cavity is not coincident with our high-C/O wedge towards the west, but closer to the region of bright SO emission. Nevertheless, material associated with a forming planet can be highly azimuthally and vertically extended (Zhu et al., 2014). Indeed, several features suggestive of shadowing have been identified on the western side of the disk. The most prominent is a dark region towards the northwest, already linked to a possible large-scale shadow (Garufi et al., 2016). This region covers a similar azimuthal width to the high-C/O wedge used in our model, and overlaps with it significantly (the local minimum in the azimuthal brightness profile is orientated slightly further northwest by \(\sim 10^{\circ}\)). At least one other similar dark wedge was identified on the opposite side of the disk (Norfolk et al., 2022). Other features that could be related to a dust overdensity include a horseshoe-like structure identified in 7mm observations at the inner southwest edge(Wright et al., 2015), and a bar-like structure seen in H\(\alpha\) polarized light, which may be a "streamer" of dust dragged in by gas flowing from the outer to inner disk (Mendigutia et al., 2017). However, it is unclear whether such features could lead to large-scale shadowing. So, while HD 100546 clearly displays a number of morphological features that could be related to shadowing, linking any such feature to the region of high-C/O identified here remains open to interpretation.
To determine if a shadow could cause the required chemical changes to produce C/O > 1, we examined three timescales: cooling, freeze-out, and chemical conversion (Supplementary Information 5). As the shadow falls on a part of the disk, the dust must cool enough for H\({}_{2}\)O to freeze out. Kinetics will then funnel O from other reservoirs (atomic O, CO) into the water-ice sink, eventually elevating the CS abundance and depleting SO within the shadowed region. The combined time needed for these processes to operate must be shorter than the time spent in shadow, which converges to \(\sim 5\) years in the outer disk (based on model parameters). Considering a range of physical conditions from our model in the CS-emitting region (Supplementary Information 2), we find that the C/O ratio can
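The shadow-transit timescale quoted above can be recovered from Kepler's third law. The sketch below assumes the shadowing material orbits at the radius of the inner planet candidate (\(\sim 13\) au) and that the shadowed wedge spans 60\({}^{\circ}\), as in our model geometry; the exact radius of the shadowing material is an assumption.

```python
import numpy as np

M_STAR = 2.49       # stellar mass [M_sun]
R_SHADOW = 13.0     # assumed orbital radius of the shadowing material [au]
WEDGE_DEG = 60.0    # azimuthal extent of the high-C/O wedge [deg]

def orbital_period(r_au, m_star=M_STAR):
    """Kepler's third law in (au, yr, M_sun) units: P = sqrt(r^3 / M)."""
    return np.sqrt(r_au ** 3 / m_star)

p_inner = orbital_period(R_SHADOW)
t_shadow = (WEDGE_DEG / 360.0) * p_inner   # time a distant, slowly orbiting parcel spends in shadow
print(f"P_inner = {p_inner:.0f} yr, time in shadow = {t_shadow:.1f} yr")   # ~30 yr and ~5 yr
```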
Figure 4: **Geometry of the HD 100546 disk model.** The disk is composed of two chemically distinct regions; the majority of the disk has C/O=0.5, apart from a 60\({}^{\circ}\) arc where C/O=1 We propose that an overdensity of dust, associated with a newly forming planet within the cavity, casts a shadow over an azimuthally localized region of the disk. This results in lower temperatures, which causes additional H\({}_{2}\)O freeze-out, leading to an elevated gas-phase C/O ratio. Note that this schematic is not to scale; the gas disk extends out to \(\sim 500\) au.
exceed unity in \(\leq 5\) years inside of \(r\lesssim 200\) au. These results are illustrated in Figure 6. The shadowing hypothesis can be tested in the near future with high-resolution observations of the CS emission, to establish whether the feature moves. Moving shadows of this nature have already been observed in other disks such as TW Hya (Debes et al., 2017).
Regardless of the precise mechanisms which lead to azimuthal C/O variations in the HD 100546 disk, it is clear that such a chemical dichotomy will have a profound impact on the final composition of planets forming within it. Growing planets are supplied by gas and dust from their surroundings. Planets which move in and out of two chemically distinct regions during the course of their evolution can be expected to have chemically complex envelopes, formed of material accreted from both regions. The degree to which envelope composition mirrors that of either region in the disk will be governed by complex chemical and physical processes. If shadowing is indeed responsible for the azimuthal C/O variation, we may expect its effect on planetary composition to be even more profound in warm disks such as HD 100546, where there is a higher fraction of gas-phase water available for freeze-out.
The results presented here therefore add a new consideration to the way in which we interpret observations and model gas-phase asymmetries in protoplanetary disks. The classical view of a radially varying C/O ratio must be readdressed if we are to draw meaningful links between the composition of exoplanet atmospheres and the disks in which they form. Determining the C/O ratio at small spatial scales must be a major goal of future observations, if we are to build models that can meaningfully predict planetary formation pathways.
## 3 Methods
### Data reduction
HD 100546 was observed with the Atacama Compact Array (ACA) in Band 7 during Cycle 4, in two separate execution blocks on November 14th and November 24th 2016 (program 2016.1.01339.S, PI: M. Kama). The observations cover 8 molecular rotational transitions of 7 sulfur-bearing species/isotopologues, outlined in Table 2. Eight scans were performed for a total on-source time of 50.64 minutes, with baselines ranging from 8-45 m. System temperatures varied from 103-170 K and the average precipitable water vapour was 1.0 mm. J1058+0133 was used as both the bandpass and flux calibrator, while J1147-6753 was used as the phase calibrator.
The data reduction was completed using the ALMA Pipeline in the Common Astronomy Software Package (CASA) version 5.6.1-8. Self-calibration was performed but found to have marginal impact due to the low S/N of the data. Continuum and line imaging were performed with the tCLEAN algorithm using natural weighting in order to maximize the S/N ratio of the data. The resulting synthesized beam size was \(\sim\)4.78" x 4.06", with slight variations depending on the spectral window. We used a cell size of 0.5" to ensure that the beam is well sampled. Continuum subtraction was performed with the CASA task _uvcontsub_, using a single-order polynomial fit to the line-free channels. The spectral resolution, bandwidth, and synthesized beam size for each of the transitions are listed in Table 2.
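For reference, the reduction steps described above map onto CASA tasks roughly as sketched below. The measurement-set names, channel selection, image size, iteration count, and CLEAN threshold are placeholders rather than the values actually used; only the natural weighting, 0.5" cell size, first-order continuum fit, and CS 7-6 rest frequency follow the text.

```python
# Run inside a CASA 5.6 session. File names and selections are illustrative only.
uvcontsub(vis='hd100546_aca.ms', fitspw='0:5~200;800~950', fitorder=1)

tclean(vis='hd100546_aca.ms.contsub',
       imagename='hd100546_CS_7-6',
       specmode='cube',
       restfreq='342.883GHz',
       weighting='natural',
       cell='0.5arcsec',
       imsize=128,
       niter=10000,
       threshold='72mJy')   # e.g. roughly 3x the per-channel rms
```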
CS 7-6 at 342.883 GHz is successfully detected, while all other transitions are undetected. The CS 7-6 integrated intensity map (Figure 2, top right) was generated from a 20"x20" region centred on the source, where the integrated intensity is determined between \(-14.5\) and \(25.5\) km s\({}^{-1}\), corresponding to channels expected to contain significant emission (\(\sim\pm 20\) km s\({}^{-1}\) from the source velocity \(V_{\rm LSRK}=5.7\) km s\({}^{-1}\)). The detection is made at a 9\(\sigma\) confidence level, with a peak flux of 0.90 Jy beam\({}^{-1}\) km s\({}^{-1}\), and rms of 0.10 Jy beam\({}^{-1}\) km s\({}^{-1}\) as measured from the emission-free regions of the integrated intensity map. Channel maps are presented in Supplementary Information 6, from which we measure a peak flux density of 0.21 Jy beam\({}^{-1}\) and rms of 0.024 Jy beam\({}^{-1}\).
We extracted the spectrum from the CLEAN cube using an elliptical aperture with a 5.0" radius centred on the source (approximately the same size as the disk as traced by \({}^{12}\)CO emission (Walsh et al., 2014)), where the edges of the mask were smoothed by the beam. We also extracted a spectrum using a Keplerian mask, which excludes noisy pixels that are not directly associated with emission from a disk in Keplerian rotation (Teague, 2020). The mask identifies which pixels in the image cubes have Doppler-shifted line velocities that match the Keplerian velocity, based on the velocity profile of a disk rotating around a star of mass \(M_{*}\)= 2.4 \(M_{\odot}\). Pixels with velocities that do not match the Keplerian velocity are masked. The mask is convolved with a beam of equal size to the observation, in order to provide a buffer between the mask edge and emission edge. The total disk-integrated flux was extracted using the CASA task _specflux_, and determined to be 0.62 Jy km s\({}^{-1}\) for the Keplerian-masked cube, where the mask was cut at \(\pm 4\) km s\({}^{-1}\) either side of the source velocity in order to remove noisy channels. The disk-integrated flux extracted from the elliptically masked cube was 1.02 Jy km s\({}^{-1}\). Due to the peculiar nature of the CS line profile, our analysis utilises only the elliptically-masked spectrum, in order to not mistakenly remove any real emission.
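A minimal sketch of how such a Keplerian mask can be constructed for a geometrically thin disk is shown below; the inclination, position angle, and velocity tolerance are illustrative values rather than the exact parameters used for the mask described above, while the stellar mass and source velocity follow the text.

```python
import numpy as np

def keplerian_mask(x_au, y_au, v_chan, m_star=2.4, incl_deg=42.0, pa_deg=146.0,
                   v_sys=5.7, dv=1.0):
    """Return True where the channel velocity matches the projected Keplerian velocity.

    x_au, y_au : 2-D arrays of sky-plane offsets from the star [au]
    v_chan     : channel velocity [km/s]
    """
    inc, pa = np.radians(incl_deg), np.radians(pa_deg)
    # rotate so the disk major axis lies along x, then deproject the minor axis
    x_d = x_au * np.cos(pa) + y_au * np.sin(pa)
    y_d = (-x_au * np.sin(pa) + y_au * np.cos(pa)) / np.cos(inc)
    r = np.maximum(np.hypot(x_d, y_d), 1.0)            # avoid r = 0 at the star
    phi = np.arctan2(y_d, x_d)
    v_kep = 29.78 * np.sqrt(m_star / r)                 # km/s, for M in M_sun and r in au
    v_proj = v_sys + v_kep * np.cos(phi) * np.sin(inc)
    return np.abs(v_chan - v_proj) < dv
```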
We employed a number of techniques in an attempt to extract weak spectral signatures from the remaining spectral windows centred on other molecular species (Table 2). We began by extracting spectra from the CLEAN image cubes using an elliptical aperture with a 5.0" radius centred on the source. Integrated intensity maps were created from the CLEAN cubes, which yielded no detections.
Next, we applied a Keplerian mask to each of the CLEAN image cubes in order to maximise the S/N in the image plane. We extracted spectra and generated integrated intensity maps from the masked cubes, which again resulted in no detections. We also tried stacking
Figure 5: HD 100546 model dust temperature map. Most of the disk is warm enough to preclude freeze-out of CO and CO\({}_{2}\). The H\({}_{2}\)O snow surface (blue line) largely coincides with a dust gap in the outer disk (white dotted lines). Millimeter-sized grains are highly depleted in this region, removing grain surface area available for permanent H\({}_{2}\)O freeze-out. The outer edge of the snow surface is curtailed by photodesorption.
the SO \(2_{1}\)-\(1_{0}\) and \(8_{8}\)-\(7_{7}\) lines in the image plane by adding together the integrated intensity maps and spectra. However, SO remained undetected.
Finally, we applied a matched filter to the visibility data, to maximize the S/N in the \(uv\)-plane (Loomis et al., 2018). The matched filter technique utilises a template image cube that samples the \(uv\)-space in order to obtain a set of template visibilities. These can then be used as a filter, which is cross-correlated with the data in an attempt to detect any weak emission. Matched filtering has previously been used to successfully detect weak spectral line features in HD 163296, TW Hya, and HD 100546, providing an improvement in the S/N of up to \(\sim\)500% when compared to alternative techniques (Carney et al., 2017; Loomis et al., 2018; Booth et al., 2018). We created template emission profiles for each of the spectral windows in the ACA data by modelling the spectral line emission with the DALI thermo-chemical disk modelling code (see section 3.3). The matched filter was then run for each of the spectral lines individually, which again resulted in non-detections for all lines. We derived \(3\sigma\) upper limits for the disk-integrated flux for each of the non-detected spectral lines, calculated from the elliptically masked integrated intensity maps (see Table 2).
We also report the serendipitous detection of C\({}^{18}\)O 3-2 in the spectral window centred on SO \(2_{1}\)-\(1_{0}\) at 329.385 GHz. The detection is made at a \(\sim 24\sigma\) confidence level, with a peak flux of 7.2 \(\pm\) 0.3 Jy beam\({}^{-1}\) km s\({}^{-1}\), as measured from the integrated intensity map. The disk integrated flux is measured using the procedure outlined above, determined to be 5.41 Jy km s\({}^{-1}\) using a Keplerian mask, and 8.22 Jy km s\({}^{-1}\) using an elliptical mask.
### Complementary data
To complement our CS 7-6 data for HD 100546, we make use of a wide range of archival data. Of particular significance to this study are detections of SO \(7_{7}\)-\(6_{6}\) and SO \(7_{8}\)-\(6_{7}\) first presented in (Booth et al., 2022) (ALMA program 2019.1.00193.S, PI A. S. Booth). We note that the maximum recoverable scale of the observations is \(10.422^{\ast}\) (\(\sim 1150\) au), which is much larger than the gas disk (Walsh et al., 2014), and we therefore do not expect any flux missing on short spacings. Similarly, the maximum recoverable scale of our ACA observations is 21.266" (\(\sim 2350\) au), also much larger than the gas-disk. The full list of line fluxes and upper limits used to constrain our model is presented in Supplementary Information 9.
### Chemical modelling
To investigate the origin of the azimuthal asymmetries in various molecular species in the HD 100546 disk, we ran source specific models using the 2D physical-chemical code DALI (Bruderer et al., 2012; Bruderer, 2013). The code begins with a parameterised gas and dust density distribution and an input stellar spectrum, then uses Monte Carlo radiative transfer to determine the UV radiation field and dust temperature. This provides an initial guess for the gas temperature, which begins an iterative process in which the gas-grain chemistry is solved time-dependently. Finally, the raytracing module is used to obtain spectral image cubes, line profiles, and disk-integrated line fluxes.
#### 3.3.1 Disk parameters
The disk structure is fully parameterised, with a surface density that follows the standard form of a power law with an exponential taper:
\[\Sigma_{\rm gas}=\Sigma_{\rm c}\cdot\left(\frac{r}{R_{\rm c}}\right)^{-\gamma }\cdot\exp\left[-\left(\frac{r}{R_{\rm c}}\right)^{2-\gamma}\right] \tag{1}\]
where \(r\) is the radius, \(\gamma\) is the surface density exponent, \(\Sigma_{\rm c}\) is some critical surface density, and \(R_{\rm c}\) is some critical radius, such that the surface density at \(R_{\rm c}\) is \(\Sigma_{\rm c}/e\). The scale height is then given by:
\[h(r)=h_{\rm c}\left(\frac{r}{R_{\rm c}}\right)^{\psi} \tag{2}\]
where \(h_{\rm c}\) is the scale height at \(R_{\rm c}\), and the power law index of the scale height, \(\psi\), describes the flaring structure of the disk.
\(\Sigma_{\rm gas}\) and \(\Sigma_{\rm dust}\) extend from the dust sublimation radius (\(R_{\rm sub}\)) to the edge of the disk (\(R_{\rm out}\)), and can be varied independently inside the cavity radius \(R_{\rm cav}\) with the multiplication factors \(\delta_{\rm gas}\) and \(\delta_{\rm dust}\).
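As a sketch of how Equations (1)-(2) and the cavity depletion enter the model setup, the following snippet evaluates the gas surface density and scale height on a radial grid; the numerical parameter values are placeholders rather than the best-fit values, except for the 13 au cavity radius quoted earlier.

```python
import numpy as np

def sigma_gas(r, sigma_c=10.0, r_c=100.0, gamma=1.0, r_cav=13.0, delta_gas=1e-2):
    """Tapered power-law surface density (Eq. 1), depleted by delta_gas inside the cavity."""
    sigma = sigma_c * (r / r_c) ** (-gamma) * np.exp(-((r / r_c) ** (2.0 - gamma)))
    return np.where(r < r_cav, delta_gas * sigma, sigma)

def scale_height(r, h_c=0.1, r_c=100.0, psi=0.2):
    """Flared scale-height (opening-angle) profile (Eq. 2)."""
    return h_c * (r / r_c) ** psi

r = np.logspace(0, np.log10(500.0), 200)   # 1-500 au radial grid
sig, h = sigma_gas(r), scale_height(r)
```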
The gas-to-dust ratio is denoted \(\Delta_{\rm g/d}\). Dust settling is implemented by considering two different populations of grains; small grains (\(0.005\)-\(1\)um) and large grains (\(0.005\)-\(1\)mm). The vertical density structure of the dust is such that large grains are settled towards the midplane, prescribed by the settling parameter \(\chi\):
\[\rho_{\rm dust,\,large}=\frac{f\Sigma_{\rm dust}}{\sqrt{2\pi}\,r\chi h}\cdot\exp\left[-\frac{1}{2}\left(\frac{\pi/2-\theta}{\chi h}\right)^{2}\right] \tag{3}\]

\[\rho_{\rm dust,\,small}=\frac{(1-f)\Sigma_{\rm dust}}{\sqrt{2\pi}\,rh}\cdot\exp\left[-\frac{1}{2}\left(\frac{\pi/2-\theta}{h}\right)^{2}\right] \tag{4}\]
where \(f\) is the mass fraction of large grains and \(\theta\) is the opening angle from the midplane as viewed from the central star. The physical disk parameters used in our model are given in Table 1.
#### 3.3.2 Stellar parameters
HD 100546 is a well-studied \(2.49\pm 0.02\,M_{\odot}\) Herbig Be star of spectral type B9V, with an estimated age of \(\sim 5\) Myr (Arun et al., 2019). The star is noted for its proximity, located at a distance of \(110\pm 1\) pc (Gaia Collaboration et al., 2018). We note that this differs non-trivially from previous estimates of \(97\pm 4\) pc from _Hipparcos_. The stellar spectrum
Figure 6: **Cooling, freeze-out, and chemical timescales.** The maximum time available for the combination of cooling, freeze-out, and chemical conversion to occur is dictated by the period a specific region of the disk remains in shadow, which is related to the orbital period of the shadowing material (black line). The cooling timescale is denoted in light blue, the freeze-out timescale in dark blue (for a range of \(\Delta_{\rm g/d}\)), and the chemical timescale in red. The green shaded area represents the sum total of these timescales within the range of considered \(\Delta_{\rm g/d}\), which follows the freeze-out timescale since the cooling and chemical timescales are negligible. Purple shaded regions denote the locations of millimeter dust rings.
was modelled by Bruderer et al. (2012) using dereddened FUSE and IUE observations at UV wavelengths, and then extended to longer wavelengths using the B9V template of (Pickles, 1998). The stellar luminosity is 36 \(L_{\odot}\)(Kama et al., 2016).
#### 3.3.3 Chemical network
The chemical network used in our model is based on a subset of the UMIST 06 (Woodall et al., 2007) network. It consists of 122 species (including neutral and charged PAHs) and 1701 individual reactions. The code includes H\({}_{2}\) formation on dust, freeze-out, thermal and non-thermal desorption, hydrogenation, gas-phase reactions, photodissociation and -ionization, X-ray induced processes, cosmic-ray induced reactions, PAH/small grain charge exchange/hydrogenation, and reactions with vibrationally excited H\({}_{2}\). For grain-surface chemistry, only hydrogenation of simple species is considered (C, CH, CH\({}_{2}\), CH\({}_{3}\), N, NH, NH\({}_{2}\), O, and OH). The details of these processes are described more fully in (Bruderer et al., 2012). Of particular relevance to this study, the network includes reactions for 30 sulfur-bearing species, including all those listed in Table 2. Model parameters of relevance to the disk chemistry are listed in Table 1.
#### 3.3.4 Basic fitting procedure
Our fitting process follows the procedure outlined in (Kama et al., 2016), making use of additional observational constraints and a larger grid of models. We begin by fitting the SED using a grid of 1728 models, in which the parameters R\({}_{\rm gap}\), \(\psi\), \(h_{\rm c}\), \(\delta_{\rm dust}\) and \(\Delta_{\rm g/d}\) are varied. At this stage, \(\Sigma_{\rm gas}\) is kept fixed at an arbitrary value such that changes to \(\Delta_{\rm g/d}\) are equivalent to changes only in the dust mass, thus providing us with a baseline estimate for the total dust mass. We find a best fit total dust mass of \(1.12\times 10^{-3}\)\(M_{\odot}\), consistent with previous studies (Kama et al., 2016).
Next, we use the upper limits of the HD 56 \(\upmu\)m and 112 \(\upmu\)m lines to constrain the maximum gas mass. A second grid of models is run in which \(\Sigma_{\rm gas}\) and \(\Delta_{\rm g/d}\) are varied in lockstep, allowing the gas mass to vary whilst maintaining the best-fit dust mass. We constrain the total gas mass to \(<5.6\times 10^{-1}\)\(M_{\odot}\), equivalent to a gas-to-dust ratio of \(\Delta_{\rm g/d}\)\(\approx 390\) (taking into account dust depletion in the inner cavity). This is not a tight enough constraint to uniquely determine the gas-to-dust ratio, so from this point on we adopt the interstellar value of \(\Delta_{\rm g/d}\)\(=100\), equivalent to a total gas mass of \(1.45\times 10^{-1}\)\(M_{\odot}\). Our model allows for variations in \(\Delta_{\rm g/d}\) within the inner dust cavity, but does not take into account other radial variations such as the observed dust gap in the outer disk between \(r\sim 40-150\) au.
[C]/[H]\({}_{\rm gas}\) and [O]/[H]\({}_{\rm gas}\) are constrained by modelling the CO ladder, the line profiles of the CO 3-2, CO 6-5 and [CI] transitions, and the radial profile of the CO 3-2 emission from (Walsh et al., 2014). We find a best fit oxygen abundance of [O/H]\({}_{\rm gas}\approx(1-7)\times 10^{-5}\). When [O/H]\({}_{\rm gas}\)\(>7\times 10^{-5}\), the [CI] line is underpredicted or the CO ladder is overpredicted, depending on [C/H]\({}_{\rm gas}\). When [O/H]\({}_{\rm gas}\)\(<1\times 10^{-5}\), the CO ladder is underpredicted for all values of [C/H]\({}_{\rm gas}\). Adopting a fiducial oxygen abundance of [O/H]\({}_{\rm gas}\)\(=2\times 10^{-5}\), we find [C/H]\({}_{\rm gas}\)\(\approx(1-2)\times 10^{-5}\). C\({}_{2}\)H upper limits presented in (Kama et al., 2016) constrain the global C/O \(\lesssim 1\), and we adopt [C/H]\({}_{\rm gas}\)\(=1\times 10^{-5}\) for our fiducial model, giving a C/O ratio of 0.5. These values are consistent with those found in previous studies of HD 100546 (e.g. Kama et al., 2016). As with that study, the main uncertainties are due to the limited constraints on the gas-to-dust ratio.
Finally, we constrain the gas-phase elemental sulfur abundance using the disk-integrated CS 7-6, SO 7\({}_{7}\)-6\({}_{6}\) and SO 7\({}_{8}\)-6\({}_{7}\) fluxes and radial intensity profiles, and upper limits for other sulfur-bearing species listed in Table 2. For this study, we adopt a radially varying sulfur abundance profile, in which [S/H]\(=10^{-9}\) between 13-30 au and [S/H]\(=10^{-8}\) between 150-230 au, coincident with prominent millimeter dust rings. Outside of these regions, the sulfur is further heavily depleted by a factor of 1000. This is based on high resolution SO observations which suggest some level of correlation between SO emission and dust ring location (Booth et al., 2022). If a single sulfur abundance is used for the entire disk, our model overpredicts SO emission from the outer dust cavity. A detailed study into the volatile sulfur abundance in HD 100546 is the focus of an upcoming companion paper (in prep).
#### 3.3.5 Modelling azimuthal asymmetries
We hypothesize that the asymmetries in the observed CS and SO emission can be explained by significant azimuthal variations in the elemental carbon and/or oxygen abundances in HD 100546. In such a scenario, the resulting chemistry leads to an azimuthal disparity in the production of carbon- and oxygen-bearing species.
Our aim is to produce a model in which oxygen-based chemistry dominates in one region of the disk and carbon-based chemistry dominates in another. We investigate whether such a model, using time-dependent chemistry, can self-consistently reproduce the asymmetry in the CS and SO emission.
We simplify our assumptions about the chemistry by constructing a model in which the disk consists of two distinct spatial regions. DALI is a 2-dimensional code which relies upon azimuthal symmetry to produce 3-dimensional outputs, so in order to simulate azimuthal asymmetries it is necessary to run two separate models and splice together the outputs using the following procedure.
Our starting point is the full-disk model outlined in the previous section, where C/O=0.5 (model A). Using the raytracing module in DALI, we obtain spectral image cubes for each transition, with the velocity resolution set to match the observations. Next, we run a second full-disk model in which the carbon and/or oxygen abundances are varied, such that the resulting C/O ratio is different from the first model (model B). Using a custom Python script, for each transition the spectral cube from model B is spliced together with its counterpart from model A, following the geometry outlined in Figure 4. The resulting cube consists of a large 'crescent' region taken from model A, in which the C/O ratio is 0.5, and a smaller 'wedge' region extracted from model B, with a different C/O ratio. The size of the wedge is prescribed by the angle \(\theta\), and the orientation by the angle \(\phi\) (measured from the centre of the wedge arc). Angles are measured in the plane of the cube, i.e. they are not projected onto the disk, and as such do not directly correlate to angles measured in the disk plane. The result is that the angular region from which the wedge is extracted cuts through very slightly different azimuthal regions for different vertical positions within the disk. A more sophisticated model could take the vertical disk structure into account, but we expect the overall effect on the intensity maps and line profiles to be negligible. We note that there is precedent in the literature for this kind of chemical modelling (Cleeves et al., 2015). We process the model cubes using the CASA tasks _simobserve_ and _simanalyze_ in order to create simulated ALMA observations, using parameters that match the observations. Spectra are extracted and the cubes are then collapsed to generate moment 0 integrated intensity maps.
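As a rough illustration of this splicing step, the sketch below combines two cubes on a shared pixel grid using an angular wedge mask; it assumes simple (velocity, y, x) arrays and an image-plane angle convention, and the function and variable names are ours rather than those of the actual script.

```python
import numpy as np

def splice_cubes(cube_a, cube_b, theta_deg, phi_deg, centre=None):
    """Keep cube_a everywhere except inside a wedge of opening angle
    theta_deg centred on position angle phi_deg, where cube_b is used.
    Cubes are (velocity, y, x) arrays on the same grid; angles are
    measured in the plane of the cube, not the disk plane."""
    nv, ny, nx = cube_a.shape
    if centre is None:
        centre = ((nx - 1) / 2.0, (ny - 1) / 2.0)
    y, x = np.indices((ny, nx))
    ang = np.degrees(np.arctan2(y - centre[1], x - centre[0]))
    # angular offset from the wedge centre, wrapped to [-180, 180)
    dang = (ang - phi_deg + 180.0) % 360.0 - 180.0
    in_wedge = np.abs(dang) <= theta_deg / 2.0
    spliced = cube_a.copy()
    spliced[:, in_wedge] = cube_b[:, in_wedge]
    return spliced

# e.g. a 60-degree wedge from model B inserted into the model A cube
# spliced = splice_cubes(cube_model_a, cube_model_b, theta_deg=60.0, phi_deg=10.0)
```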
We explored a wide parameter space, considering large scale variations in the carbon/oxygen abundance, wedge size and position, and chemical timescale. Our model grid covers elemental carbon and oxygen abundances that range from \(1.0\times 10^{-6}\) to \(2\times 10^{-4}\), and
C/O between 0.3 - 3.0 (\(\sim\) 570 models in total). The angular size and position of the extracted wedge were varied over \(\theta=10-90^{\circ}\) and \(\phi=0-180^{\circ}\). Both the carbon/oxygen abundances and overall size of the wedge are largely constrained by the best-fit full disk model outlined in the previous section; the vast majority of the total disk structure must conform to this model if the general fit is to be maintained. Thus, the wedge region cannot be too large, nor can the carbon/oxygen abundances vary too dramatically without significantly affecting the overall model fit. While large wedge angles better reproduce the spatial morphology of the SO emission, they lead to disk-integrated CS fluxes that are too high. Smaller wedge angles better reproduce the CS flux, but fail to reproduce the offset in the CS emission from the host star, and lead to SO emission that extends too far around the western side of the disk.
We simulate the effects of disk shadowing by using the output abundances from the C/O = 0.5 model as input abundances to the chemical network for the high-C/O wedge model, such that the chemical conditions at the beginning of shadowing are similar to those of the unshadowed region of the disk. Before running the chemical network, we decrease the input H\({}_{2}\)O, CO, and atomic O abundances by varying amounts in order to investigate a range of C/O ratios. The physical justification for depleting each of these species is discussed in Supplementary Information 5. Models are run for a range of chemical timescales, in order to assess how quickly the chemistry rebalances. This method allows us to constrain the CO depletion factor to \(\lesssim 0.8\), via comparison with the CO 6-5, CO 3-2, and [CI] 1-0 lines obtained by APEX and previously presented in Kama et al. (2016) (see Supplementary Information 7; the depletion factor is defined as the ratio of the new abundance to the initial abundance). The CO 6-5 line profile has a significant asymmetry in which the blue-shifted peak is \(\sim\) 1.25x brighter than the red-shifted peak. CO depletion factors higher than \(\sim\) 0.8 fail to reproduce any significant asymmetry. We test the effects of modelling a range of CO depletion factors between 0 and 0.8, while at the same time varying the level of atomic oxygen and H\({}_{2}\)O depletion, so as to cover a range of C/O ratios. In order to investigate C/O ratios significantly greater than unity, it is necessary to redistribute some of the carbon from the removed CO into other gas-phase species. In this case, we place the carbon into neutral atomic carbon, following Bergin et al. (2016). The model presented here uses a CO depletion factor of 0.3, an atomic oxygen depletion factor of 0.3, and an H\({}_{2}\)O depletion factor of 0 (i.e. total removal), resulting in a gas-phase C/O ratio of 1.5. The model is run to a chemical age of 5 years, based on the approximate shadow transit time (Supplementary Information 5). The full set of parameters used for the model presented in this study is listed in Table 1 (Supplementary Information 8). Using these values, we find that a wedge size of \(\theta=60^{\circ}\) centred ten degrees north of west (\(\phi=10^{\circ}\)) accurately reproduces both the CS and SO emission morphology and line profiles.
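To make the depletion bookkeeping explicit, here is a minimal sketch of how such input abundances might be rescaled before rerunning the chemistry; the species list, dictionary layout, and the restriction of the C/O estimate to a few gas-phase reservoirs are simplifications of ours, not the actual DALI inputs.

```python
def apply_shadow_depletion(abund, f_co=0.3, f_o=0.3, f_h2o=0.0):
    """Scale CO, atomic O and H2O by depletion factors (new/initial) and
    move the carbon freed from CO into neutral atomic carbon."""
    new = dict(abund)
    freed_carbon = abund["CO"] * (1.0 - f_co)
    new["CO"] = abund["CO"] * f_co
    new["O"] = abund["O"] * f_o
    new["H2O"] = abund["H2O"] * f_h2o
    new["C"] = abund.get("C", 0.0) + freed_carbon
    return new

def gas_phase_c_to_o(a):
    """Crude C/O estimate counting only these dominant gas-phase reservoirs."""
    carbon = a.get("C", 0.0) + a.get("CO", 0.0)
    oxygen = a.get("O", 0.0) + a.get("CO", 0.0) + a.get("H2O", 0.0)
    return carbon / oxygen
```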
### Data availability
The data presented here is from the ALMA Cycle 4 program 2016.1.01339 (PI M. Kama). The raw data is publicly available from the ALMA archive. The reduced data and final imaging products are available upon reasonable request from the corresponding author.
### Corresponding author
Correspondence and requests for materials should be addressed to Luke Keyte ([email protected]).
## Acknowledgement
We acknowledge D. Fedele for sharing the ALMA 870\(\upmu\)m continuum data. L.K. acknowledges funding via a Science and Technology Facilities Council (STFC) studentship. E.F.v.D. is supported by A-ERC grant agreement No. 101019751 MOLDISK. M.N.D. acknowledges the Swiss National Science Foundation (SNSF) Ambizione grant no. 180079, the Center for Space and Habitability (CSH) Fellowship, and the IAU Gruber Foundation Fellowship. C.W. acknowledges financial support from the University of Leeds, the Science and Technology Facilities Council, and UK Research and Innovation (grant numbers ST/T000287/1 and MR/T040726/1).
## Author contributions
LK reduced the ACA CS data, ran the chemical models, performed analysis of both the data and models, and wrote the manuscript. MK contributed to the analysis of both the data and models, original research concepts, and writing of the manuscript, and led the proposal for the ACA data. A.S.B provided the ALMA SO data and contributed to the writing of the manuscript. E.A.B., L.I.C., E.F.v.D., M.N.D., K.F., J.R., O.S., and C.W. contributed to the writing of the manuscript.
|
2304.12700 | The Participation Game | Inspired by Turing's famous "imitation game" and recent advances in
generative pre-trained transformers, we pose the participation game to point to
a new frontier in AI evolution where machines will join with humans as
participants in social construction processes. The participation game is a
creative, playful competition that calls for applying, bending, and stretching
the categories humans use to make sense of and order their worlds. After
defining the game and giving reasons for moving beyond imitation as a test of
AI, we highlight parallels between the participation game and processes of
social construction, a hallmark of human intelligence. We then discuss
implications for fundamental constructs of societies and options for
governance. | Mark Thomas Kennedy, Nelson Phillips | 2023-04-25T10:07:13Z | http://arxiv.org/abs/2304.12700v1 | # The Participation Game:
###### Abstract
Inspired by Turing's famous "imitation game" and recent advances in generative pre-trained transformers [1], we pose "the participation game" to point to a new frontier in AI evolution where machines will join with humans as participants in social construction processes. The participation game is a creative, playful competition that calls for applying, bending, and stretching the categories humans use to make sense of and order the world. After defining the game and giving reasons for moving beyond imitation as a test of AI, we highlight parallels between the participation game and processes of social construction--a hallmark of human intelligence. We then discuss how having artificial participants in reality-making processes holds implications for theory and society that demand new approaches to AI governance.
## Introduction
When Turing [2] asked, "Can machines think?", he proposed "The Imitation Game", a test that used general conversational abilities as a proxy for thinking. As generative AI systems deliver increasingly capable conversational abilities, however, scholars maintain that Turing's test was too easy a standard for saying that a machine is thinking.
In response to new technologies [3, 4, 5] and Searle's [6] critiques of Turing, we ask a different question, "Can machines join with humans in co-creating the world?" Inspired by Turing's imitation game, we propose the participation game, a game in which computers join with people in the sorts of conversations that shape the categories people use to enact and make sense of the world. This challenge pushes computer scientists to engage two fundamentals of human intelligence: all knowledge about reality is socially constructed [7], and the social realities of human cultures are human-made [8]. Besides being a hallmark of human intelligence, the capacity to participate in social construction processes is the very thing that enables humans to enact many types of order--including organizations--that enact and structure the social realities of societies [9]. Even if computers can exert influence in these processes without developing human-like thinking and understanding, having efficacious artificial participants (APs) in reality-making processes will demand both new approaches to AI governance and new theorizing about influence and influencers in public discourse. The participation game offers a way to assess AP capacity and explore the theoretical and practical implications of having APs in social construction.
## The Participation Game
The 'participation game' builds on a parlour game called _Categories_[10] and a variant called _Scattergories_. In _Categories_, four to six participants compete against a clock and each other to generate a unique word for each of a dozen or so categories, where each word must start with a letter drawn at random - often by rolling a many-sided die. For example, if the drawn letter is 'f' and categories include foods, places, first names, films, fowl, and colors, one could say fruit, France, Frank, Fargo, flamingos, and fuchsia. When time is up, participants share their lists to seek approval that their words match the categories; when words are debated, approval is decided by majority vote. Participants score 2 points for unique approved words, 1 point for approved words others also wrote, and 0 for words rejected in voting. Play proceeds for a fixed period (e.g., half an hour) or until any player reaches a victory threshold (e.g., 21 points). At the end of the game, the highest score wins.
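A minimal sketch of the scoring rule just described, assuming answers and vote outcomes have already been collected; the data layout and function name are illustrative only, not part of the game's definition.

```python
from collections import Counter

def score_round(answers, approved):
    """answers: {player: {category: word}}; approved: set of (category, word)
    pairs (lower-cased) that survived the approval vote. Returns {player: points}:
    2 for a unique approved word, 1 for a shared approved word, 0 otherwise."""
    scores = {player: 0 for player in answers}
    categories = {c for per_player in answers.values() for c in per_player}
    for category in categories:
        words = {p: a.get(category, "").strip().lower() for p, a in answers.items()}
        counts = Counter(w for w in words.values() if w)
        for player, word in words.items():
            if not word or (category, word) not in approved:
                continue  # blank or rejected in the vote: no points
            scores[player] += 2 if counts[word] == 1 else 1
    return scores
```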
In our view, the game's success has much to do with how it rewards creativity, not only in stretching or reinterpreting categories but also in the lively discussions and arguments that follow. When games are played at gatherings where not everyone plays, onlookers often heckle participants and disagree or side with whomever they find convincing. In
any case, the approval process features explanation, argumentation, and negotiation about concepts and ontologies.
Building on _Categories_, we can explain the participation game succinctly, as follows: play _Categories_ with four to six participants, at least one of whom is an artificial participant (AP), where all participants must be truthfully identified as human or artificial from the start. As with _Categories_, play proceeds for the prearranged period or until any player reaches the agreed point threshold for victory, at which point the highest score wins.
For an AP to win, it must be like any successful player: creative in the words it comes up with for each category and persuasive in its arguments for why they should count. Also, the AP will have to be convincing in its critiques of other players' words and of the arguments for them. Like other players, the AP can win by simply getting the highest score. We like this feature of the game because it reflects our view that APs can contribute to social construction processes without being mistaken for humans, either by subterfuge or confusion. In contrast to Turing's imitation game, the goal is not for computers to pass as human, but to be influential with humans despite being known as computers.
In Turing's game, communication takes place via typed text akin to the chat interfaces now ubiquitous. That familiar interface is a good baseline, but vocal inflections, facial expressions, and physical gestures are all vital dimensions of human connection and persuasion. We suggest the participation game should evolve through levels ranging from typed chat (level 1) to audio chat (level 2) to video chat (level 3) to virtual and augmented reality gatherings (level 4) to gatherings with humanoid robots (level 5).
## Why Raise the Bar on AI?
We believe there are three reasons for raising the bar on AI: (1) scholars argue that the Turing Test is not hard enough to confirm that machines can think, (2) chatbot encounters lead many to see the Turing Test as already having been passed, and (3) the spread of human-AI interactions that mimic human teamwork raises questions about how to manage human-AI collaborations.
### Not Hard Enough
Influential academic arguments suggest the Turing Test is too low a standard for declaring machines intelligent. French [11] and Searle [6] argue, for example, that it can be passed without thinking or understanding. In our view, these arguments are persuasive.
### Already Passed
In the years from the first Loebner Prize competition of the early 1990s [12] to the last in 2020, winning chatbots became increasingly impressive. Dreyfus [13] argues the Turing Test was needed to propel this kind of progress. With this progress, the public increasingly accept a shift to chatbot-delivered service transactions.
In the last decade, advances in search and machine learning are enabling new levels of human-computer interaction via natural language. Increasingly, researchers and system builders are embracing "the bitter lesson" [14] that the relentless scale-up of search and learning methods renders explicit knowledge modelling all but obsolete. Conceptually, these advances reflect Zellig Harris's argument [15] that language has a "distributional structure" because words "do not occur arbitrarily relative to each other" but instead in "certain positions relative to certain other elements". Harris' distributional hypothesis means the neighborhoods of words reveal their meanings, and that makes it possible to use maps of neighborhoods to fill in blanks--empty spots on the street, so to speak. Thus, the distributional structure of language [16] explains how machine learning could create so-called large language models (LLMs) in which the meanings of words are represented by relatively small, relatively dense vectors [4, 17]. As LLMs become more capable, they are evolving from guessing missing words to sensibly filling in larger gaps to generating sentences and paragraphs, writing essays, and interactive conversation, especially in a question-and-answer format. These advances are enabling new practical research in many fields and schemes for assessing their capabilities that echo principles of Turing's imitation game, albeit without deception.
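As a toy illustration of the distributional idea (not of how modern LLMs are actually trained), one can build co-occurrence vectors from a small corpus and compare words whose neighborhoods overlap; the corpus and function names below are ours.

```python
import numpy as np
from collections import defaultdict

def cooccurrence_vectors(sentences, window=2):
    """Map each word to a vector counting the words that appear near it."""
    vocab = sorted({w for s in sentences for w in s})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = defaultdict(lambda: np.zeros(len(vocab)))
    for s in sentences:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if j != i:
                    vectors[w][index[s[j]]] += 1
    return vectors

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
vectors = cooccurrence_vectors(corpus)
# "cat" and "dog" occur in similar neighborhoods, so their vectors align
print(cosine(vectors["cat"], vectors["dog"]))
```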
As LLMs continue to evolve, OpenAI's Generative Pre-Trained Transformers illustrate the practical implications of the distributional hypothesis (see xGPT [18, 19, 5]). As LLMs are augmented by adding the ability to recognize still images and interpret moving images [20], some are hailing the systems' apparent reasoning abilities as precursors of artificial general intelligence (AGI)--the vaunted but elusive goal of AI that approximates human abilities to converse and answer questions on a wide range of topics. Since the mid-twentieth century, AGI predictions have flip-flopped between just-around-the-corner and decades-away. Although we believe AGI is still some ways away, we regard recent LLM advances as something new, and they are attracting journalistic coverage [21]. Returning to the neighborhood analogy for the distributional hypothesis, LLMs work because turn-taking in conversation is like going from a few model homes to the neighborhood built around them.
### Need to Manage Human-AI Teamwork
Finally, raising the bar for evaluating AI reflects the fact that humans are increasingly following and relying on AIs. Take the world of massive multiplayer online (MMO) games, for example: it has become a site for innovation in and learning
about whether and how humans and AIs can work--and play--together [22]. Besides being great environments for trying out and proving advances in reinforcement learning [23, 24], Suarez et al. [25] argue that MMOs come closer than other types of games or simulated environments to modelling real-world learning contexts. In analyzing trust between human players and AI characters in a large MMO, for example, Ahmad et al. [26] show that human-to-AI trust networks develop and echo the effects of homophily prominent in human-to-human trust networks, but with differences arising from relationships and identities players maintain outside the game. In an MTurk study of human players' perceptions of human-AI teaming in MMOs, Zhang et al. [27] found that respondents expressed a preference for playing with an AI teammate over an unknown human player if the AI teammate could help them win. Interestingly, players express similar expectations of human and AI teammates.
More generally, studies of human-AI interactions suggest people feel better about themselves when non-human assistance (from AI or robots) is framed as autonomous and as having emotions, especially if people can incorporate the benefits of that assistance into their self-concepts [28]. To us, this suggests people can feel good about themselves when they are part of an effective team, even if the team includes AIs in human-like roles. By this logic, humans may well be able to accept AIs as APs in social construction processes like those modeled in our participation game.
## If Raising the Bar, Why the Participation Game?
The participation game raises the bar for AI to include social interactions that require not only creative reinterpretation of ontologies, but also their potential extension. Rather than arguing that machines capable of winning a participation game are thinking, understanding, and feeling as humans do, we argue that having machines participate in social construction processes will present profound challenges to social theory and society. To make that case, we define social construction processes and motivate their importance to human activity; we note the centrality of language, ontologies, categories, and categorization in social construction processes; and we highlight advances that suggest APs may be possible sooner rather than later.
### Importance of Social Construction Processes
Social construction processes underlie all shared representations about reality. In their landmark book _The Social Construction of Reality_, Berger and Luckmann [7] explain the "processes by which any body of 'knowledge' comes to be socially established as 'reality'" [7]. They argue that language records 'habitualization' in the recognized patterns, or 'typifications', institutionalized in particular cultural contexts. In this tradition [29], institutionalization follows the meaning of the Latin verb for "to establish". When a shared body of knowledge is constructed and transmitted across generations, children receive it "as a given reality confronting the individual in a manner analogous to the reality of the natural world", and "language appears to the child as inherent in the nature of things" [7]. Thus, social construction processes enable people to reason together about realities, and they also shape what can be learned.
As Kuhn [30] argues, the history of science shows the power and persistence of shared understandings of the world. In the natural sciences, objects of study do not depend on human knowledge or assent, and non-social realities push back against errant theories. Still, Kuhn argues that even scientists do not "reject paradigms because confronted with anomalies or counterinstances" [30]. In other words, the power of socially constructed frameworks is such that prediction failures do not guarantee knowledge updating. This dynamic leads Berger and Luckmann to speak of the social construction of reality, not just knowledge about it.
Nonetheless, Searle [8] makes a powerful argument about the construction of what he calls social realities--facts of life that _do_ depend on human knowledge and assent. Whereas Berger and Luckmann defined reality as that which exists without depending on human knowledge or intent, Searle distinguishes social realities as a distinct class of realities instituted in language and through the transmission of shared knowledge about how the world works. Unlike knowledge about phenomena like gravity, where dead-wrong theories are contradicted with force that can be brutish [8], social realities may be reproduced for long periods of time even when they are culturally and historically contingent states of affairs that many regard as sub-optimal.
In our proposed participation game, winning takes more than quickness in coming up with words for categories that start with the same letter: it takes creativity and persuasion in the free-wheeling arguments about more creative interpretations of categories. This creative, persuasive, interpretive use of patterns and symbols is at the heart of social construction processes, and it is essential to what it means to think as humans do about the world not only as it is, but also as it could be.
### Centrality of Language, Ontologies, and Categories
Language is central to human social construction processes: it is both the record of our shared representations of reality and the toolkit for working them out. In Rorty's [31] philosophy of language, language is used not only in a translation process that spreads shared beliefs about reality, but also in an evaluation process that settles, among other things, collective agreements about the ontological status of new things. For Berger and Luckmann [7], language is central to 'objectivation', the process whereby humans assign signs, or names, to patterns people of particular communities find it useful to recognize and reason with as elements of their realities, both social and non-social. In Searle's analysis of social ontologies, language records
the institutionalization of "status functions" that, when accepted and used to reason about the world, become "vehicles of power" to the extent that they establish "obligations, rights, responsibilities, duties, entitlements, authorizations, permissions, requirements, etc.". Searle speaks of the "deontic powers" of language to explain its essential role in institutionalizing such social realities. To construct a deontological theory of the right and wrong of the way things work in any world, one needs language to set it up. As Searle[32] puts it, "no language, no deontology."
Thus, languages reflect shared views of the world and institutionalize social ontologies, the collections of categories humans create to make sense of patterns and "anomalies"[9] deemed significant enough to deserve attention, explanation, and names[33]. In a study of the kinds of organizations people can recognize, Ruef[34] defines ontologies as "systems of categories, meanings, and identities within which actors and actions are situated." Following Berger and Luckmann and Searle, we say every category, or entry, in an ontology is socially constructed even if it is for something, like gravity, that does not owe its existence to human intention or recognition. Whereas Searle uses "social ontologies" to speak of things that depend on human recognition and intention, social scientists use the phrase to flag that ontologies are inherently social even for phenomena that do not depend on human knowledge or assent. To us, both are right.
As humans use language to create and share ontologies, categories are "continuously remade, refreshed, and/or maintained, with a lot of skilled work by multiple actors with various interests"--some of whom, in the future, may be artificial intelligences. As we will argue, teams, organizations, and societies face profound changes as AI evolves to gain the culturally oriented intelligence required to join humans in social construction processes.
### Prospects for Artificial Participants
Only a short while ago, the suggestion that an AI might join humans in shaping their realities would have seemed both unlikely and undesirable, but this is changing rapidly. As described earlier, techniques for building large-language models have enabled radical advances in the conversational abilities of AI systems. In the last few years, referring to these systems as "an AI" or "AIs" has become increasingly common, and some researchers have begun to feel that such systems seem sentient. For a longer time, humans have been using machines and tools like Twitter bots in culture wars where rivals compete to establish or dismiss new social realities. In recent years, climate science and public health have been major fronts in ongoing culture wars, and AI is increasingly being weaponized for use in these skirmishes.
## Implications for Theory and Society
Whilst the participation game pushes AI to join activities arguably at the upper end of the range of human thought, we do not argue that winning will qualify AIs as fully intelligent. What we do argue is that having machines join humans in social construction processes holds implications for fundamental theoretical constructs we use both to build and explain societies. In particular, APs will raise new questions about influence, legitimacy, and agency.
### Influence
The prospect of having APs in social construction raises new questions about influence, a core topic of social science[35]. The idea of machines shaping social realities might seem far-fetched, but this is changing with the rise of social media influencers[36] and their digital doppelgangers, virtual influencers[37, 38]. As virtual influencers spread and become more sophisticated, their growing efficacy will invite questions about the trustworthiness and ethics of their influence[39].
But what capabilities will APs need to be more than mere marionettes? Participating in social construction processes need not require the stronger forms of power human actors use to dictate decisions and control agendas that determine what decisions can be made. Rather, engagement in social construction processes requires the subtler sort of influence that Lukes[40] calls the third form of power--the ability to shape what people believe is real, true, or right. As the ontological status of a potentially new social reality is being weighed and debated by competing voices, exactly who will hold sway over what people come to see as real is not known ex ante. Actors with traditional power bases are often outmaneuvered by innovators from the fringes of society. Even so, social theorists since Marx have linked this subtle form of power to social structures; Marx argued that the organization of industries reflects not just goods such as "cloth, linen, silks", but also the "ideas" and "categories" that establish this social order[41]. Similarly, sociologist Charles Tilly observed that "concepts are tools" for enacting and naturalizing social fault lines capable of supporting persistently inequitable resource distributions[42].
Having APs shape the concepts and categories people use to make sense of and order the world will raise questions about how to assign responsibility, accountability, and liability for both good and bad outcomes of the socially constructed concepts that result. As it is with human influencers, people, scholars, and leaders of societies will grapple with how to separate scientifically sound solutions to fundamental problems from sensational social media mavens who misinform and mislead. This will be especially important since, like all technologies, APs could be developed to serve both these aims.
### Legitimacy
The prospect of APs also holds theoretical and practical implications for legitimacy, another core topic of social science. Legitimacy is an important construct because it exerts pressures to conform to beliefs, practices, and structures that become litmus for inclusion or exclusion in various segments of society. Also, legitimacy is a factor in the emergence and diffusion of concepts and categories that become fixtures in social ontologies [43]. The effects of legitimacy arise from two levels of judgment about social appropriateness: individuals' own beliefs and their estimates of what relevant collectives believe [44]. Crucially, legitimacy perceptions vary by audience, and in a world where digital technologies for learning offer a growing array of views, the legitimacy of even widespread social phenomena can vary widely from one audience, or subpopulation, to another.
Legitimacy underlies all kinds of social evaluation, but it is especially important to reputations and shifts in criteria of reputations and their relative importance. As environmental, social, and governance (ESG) criteria have become important, for example, the former importance of maximizing shareholder value has declined. Organizational reputations have changed, and numerous new measures and practices are adopted. As AI systems evolve from mere compilation and analysis of organizational performance to also joining with human analysts in the identification and explanation of new patterns, the extent to which their growing participation in both social evaluation and standard setting becomes influential will raise questions about legitimacy that will require new ways of thinking about both legitimacy and reputation.
At the risk of oversimplifying, legitimacy is both a carrot and a stick: there are benefits to fitting what is legitimate and costs to flaunting it [45]. As virtual influencers spread, their potential influence and legitimacy will both reflect and constrain the impact they can have in social construction processes. They will also play a role in defining what is legitimate in society and this latter role has very significant societal implications.
### Agency
APs will also affect how both scholars and societies understand and assign agency--that is, the ability to think and act in ways that depart from commonly accepted beliefs, structures, norms, rules, or routines. Among academic definitions of agency, Sewell's [46] conceptualization of agency is particularly relevant to the participation game. For Sewell, agency is a "capacity to transpose and extend schemas to new contexts" that is "inherent in the knowledge of cultural schemas that characterizes all minimally competent members of society." [46] In folk wisdom, the individual is the locus of agency, but creating new social realities requires a collective response to new ways of seeing and doing things. Like Sewell, we see agency as inextricably related to the social structures that mediate collective agreements about what is real.
In the participation game, the prospect of agentic APs raises fundamental questions about accountability, liability, and freedoms. Even if the agency of APs is constrained by social structures in which they are embedded, as is the case for humans, conceptualizing AIs as having agency affects whether and how they will be held accountable for any influence they have. Also, the degree of agency APs have will affect whether and how they can receive credit for the influence they have. In concept, this is akin to the handling of accountability for children. With young children, accountability for the deeds of children--both blame and credit--generally goes to parents or legal guardians. As children mature into independent young adults, however, accountability shifts away from parents and guardians and to the former children.
Questions about the potential agency of APs also hold implications for the social structures these systems will be embedded in with their users and collaborators. As it is with humans, the agency of APs will reflect the constraints and degrees of freedom they have in selecting what ends to pursue. In more technical language, agency is about being able to change objective functions. Among humans, we see this as some people choose to play the game of life as given to them while others choose to play with the game in the hopes of creating worlds more to their liking. When it comes to these kinds of choices, some people have considerable freedom while others labour under burdensome constraints. The art and science of building AIs to collaborate with humans in social construction processes will raise questions about what kinds of inequalities and protections to engineer into human-AI collaborations that will shape social realities.
## Discussion: Implications for Society and AI Governance
The participation game points the way to a new frontier for AI in which AI systems could evolve to participate in social construction processes. We argue that such an evolution will prompt re-thinking of basic constructs of social theory; also, it will push societies to develop AI governance institutions that ensure safe deployment and use of new kinds of tools--or perhaps we should say "entities"--capable of shaping how people think.
It is too early in the evolution of AI to say what system of governance will work best, but we can anticipate the problem and prepare by examining available models. Table 1 is a high-level summary and comparison of the rights, accountabilities, and freedoms associated with six different approaches to governing entities, both human and non-human, of varying abilities. Each model is an oversight relationship that links an overseer and the overseen and structures their respective rights and accountabilities. The rights column summarizes the rights of the overseen in each row; the accountabilities column characterizes
the balance of accountability between the overseen and overseers in each row. For example, pets have limited rights and very few accountabilities, and people who keep pets have relatively light accountabilities for their care. In contrast, children have special rights that include some protections their overseers do not have. Parents face considerable accountabilities for meeting a standard of care for their children, but children's accountabilities under the law are limited until they are of age. The governance models are listed in order of increasing rights and accountabilities for the overseen. Our main observation from Table 1 is that even with oversight of pets, pets' overseers face non-trivial accountabilities for pet care. In contrast, the accountabilities of AI system developers are currently uncertain, and that will need to change as systems develop.
Finally, the modern digital commons, heralded in its early days as a meeting ground, has evolved into a battleground as well. To the extent AI systems have contributed to this evolution, we suggest the participation game offers a hopeful post-Turing frontier for further AI evolution. There has been great value in imitation, but when imitation becomes deception, trust in science and technology is put at risk, and both science and its service to society are imperiled. In our view, it is time to shift focus from machine imitation of humans--too often deceptive--to constructive machine participation in quintessentially human projects.
|
2307.04432 | Density-dependent relativistic mean field approach and its application
to single-$Λ$ hypernuclei in Oxygen isotopes | The in-medium feature of nuclear force which includes both nucleon-nucleon
($NN$) and hyperon-nucleon ($\Lambda N$) interactions impacts the description
of single-$\Lambda$ hypernuclei. With the alternated mass number or isospin of
hypernuclei, such effects could be unveiled by analyzing systematical evolution
of the bulk and single-particle properties. From a density-dependent
meson-nucleon/hyperon coupling perspective, a new $\Lambda N$ effective
interaction in the covariant density functional (CDF) theory, namely
DD-LZ1-$\Lambda1$, is obtained by fitting the experimental data of $\Lambda$
separation energies for several single-$\Lambda$ hypernuclei. It is then
adopted to study the structure and transition properties of single-$\Lambda$
hypernuclei in Oxygen isotopes, comparing with several selected CDF
Lagrangians. Discrepancy is observed explicitly in the isospin evolution of
$\Lambda1p$ spin-orbit splitting with various effective interactions, ascribed
to their divergence of the meson-hyperon coupling strengths with increasing
density. In particular, the density-dependent CDFs introduce an extra
contribution to enhance the isospin dependence of the splitting, which is
originated from the rearrangement terms of $\Lambda$ self-energies. In
addition, the characteristics of hypernuclear radii are studied along the
isotopic chain. Owing to the impurity effect of $\Lambda$ hyperon, a size
shrinkage is observed in the matter radii of hypernuclei as compared to their
cores of normal nuclei, while its magnitude is elucidated further to correlate
with the incompressibility of nuclear matter. Besides, there exists a sizable
model-dependent trend that $\Lambda$ hyperon radii evolve with the neutron
number, which is decided partly by the in-medium $NN$ interactions as well as
the core polarization effects. | Shi Yuan Ding, Wei Yang, Bao Yuan Sun | 2023-07-10T09:12:20Z | http://arxiv.org/abs/2307.04432v1 | Density-dependent relativistic mean field approach and its application to single-\(\Lambda\) hypernuclei in Oxygen isotopes +
###### Abstract
The in-medium feature of nuclear force which includes both nucleon-nucleon (\(NN\)) and hyperon-nucleon (\(\Lambda N\)) interactions impacts the description of single-\(\Lambda\) hypernuclei. With the alternated mass number or isospin of hypernuclei, such effects could be unveiled by analyzing systematical evolution of the bulk and single-particle properties. From a density-dependent meson-nucleon/hyperon coupling perspective, a new \(\Lambda N\) effective interaction in the covariant density functional (CDF) theory, namely DD-LZ1-\(\Lambda\)1, is obtained by fitting the experimental data of \(\Lambda\) separation energies for several single-\(\Lambda\) hypernuclei. It is then adopted to study the structure and transition properties of single-\(\Lambda\) hypernuclei in Oxygen isotopes, comparing with several selected CDF Lagrangians. Discrepancy is observed explicitly in the isospin evolution of \(\Lambda 1p\) spin-orbit splitting with various effective interactions, ascribed to their divergence of the meson-hyperon coupling strengths with increasing density. In particular, the density-dependent CDFs introduce an extra contribution to enhance the isospin dependence of the splitting, which is originated from the rearrangement terms of \(\Lambda\) self-energies. In addition, the characteristics of hypernuclear radii are studied along the isotopic chain. Owing to the impurity effect of \(\Lambda\) hyperon, a size shrinkage is observed in the matter radii of hypernuclei as compared to their cores of normal nuclei, while its magnitude is elucidated further to correlate with the incompressibility of nuclear matter. Besides, there exists a sizable model-dependent trend that \(\Lambda\) hyperon radii evolve with the neutron number, which is decided partly by the in-medium \(NN\) interactions as well as the core polarization effects.
\({}^{1}\)MOE Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, China
\({}^{2}\)School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000, China
21.80.+a, 13.75.Ev, 21.30.Fe, 21.60.Jz
## 1 Introduction
The discovery of hyperons, particles containing strange quarks, in 1953 sparked strong interest among experimental and theoretical physicists [1]. The ability of hyperons to enter the nucleus and form a system of hypernuclei makes them sensitive probes for studying the structure and specific nuclear features. The studies on hyperon behavior in the nucleus help us to understand the baryon-baryon interaction in the nuclear medium and its effects on nuclear properties [2, 3]. In addition, hyperons are thought to be produced inside neutron stars [4, 5, 6]. The link between hypernuclear and neutron star properties benefits our comprehension of the state of matter in extreme environments, as well as of the strangeness-bearing nuclear force at high densities. In recent decades, a wealth of hypernuclear data has been generated through induced reactions of meson and electron beams at various radioactive beam facilities, including the Japan Proton Accelerator Research Complex (J-PARC) [7], the Thomas Jefferson National Accelerator Facility (JLab) [8], and the Facility for Antiproton and Ion Research (FAIR) [9]. These advanced facilities have played a pivotal role in advancing our understanding of strangeness in nuclear physics. Notably, single-\(\Lambda\) hypernuclei have been the most extensively studied, with experimental data covering hypernuclei from \({}^{3}_{\Lambda}\)H to \({}^{208}_{\Lambda}\)Pb in various laboratories [2, 3, 10, 11].
When a \(\Lambda\) hyperon enters a nucleus, various phenomena can be observed. For instance, in \({}^{7}_{\Lambda}\)Li, it has been found that the size of the \({}^{6}\)Li core is smaller compared to the free space \({}^{6}\)Li nucleus, as suggested by the measurement of the \(\gamma\)-ray transition probability from \(E2(5/2^{+}\to 1/2^{+})\) in \({}^{7}_{\Lambda}\)Li [12]. In addition, in \({}^{13}_{\Lambda}\)C, it is hinted that the \(\Lambda\) spin-orbit splitting is much smaller than the nucleon's [13]. Recently, the potential for producing neutron-rich hyperfragments at high-intensity heavy-ion accelerator facilities has been discussed [14, 15]. The directed flow of hypernuclei (\({}^{3}_{\Lambda}\)H and \({}^{4}_{\Lambda}\)H) has just been observed at RHIC for the first time in heavy-ion collisions, providing insights into hyperon-nucleon interactions
under finite pressure [16]. These advances highlight the promising prospects for investigating hypernuclear structures using the forthcoming high-intensity heavy-ion accelerator facility HIAF [17, 18]. To provide accurate predictions for these experiments, researchers have performed detailed theoretical work on observables such as hypernuclear binding energy [19, 20], spin-orbit splitting [21, 22, 23], hyperon and hypernuclear matter radius [24, 25, 26, 27, 28]. Overall, these efforts aim to provide valuable insights into the behavior of hypernuclei, and to deepen our understanding of the in-medium baryon interactions.
Due to their ability to provide a self-consistent and unified description of almost all nuclei on the nuclear chart, both non-relativistic and relativistic mean-field theories are widely used in the calculation of finite nuclei and nuclear matter, and have been extended to describe hypernuclear systems with strange degrees of freedom during the development of theoretical models [29, 21, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46]. As a key model utilized in this work, the relativistic mean-field theory has been extensively developed to study hypernuclear properties such as hyperon separation energy [19, 47], spin-orbit splitting [23, 39, 48], hyperon halo [49], hypernuclear deformation [50, 51, 52, 25], cluster structure [53] and drip lines [54]. While most theoretical models have primarily emphasized nonlinear self-coupling interactions for studying hypernuclei, there has been a recent study that explores the effective interactions for single-\(\Lambda\) hypernuclei within the density-dependent relativistic mean-field (DDRMF) model [55]. With three distinct fitting approaches, they propose six new sets of effective \(\Lambda N\) interactions and uncover a significant linear correlation between the ratios \(R_{\sigma}\) and \(R_{\omega}\), representing scalar and vector coupling strengths, respectively, between these effective \(\Lambda N\) and \(NN\) interactions.
Recently, a new type of density-dependent relativistic mean-field Lagrangian, DD-LZ1, has been proposed, inspired by the restoration of pseudo-spin symmetry (PSS) and nuclear medium effects [56]. This new effective Lagrangian has produced satisfactory results in describing the properties of nuclear matter and finite nuclei. With its unique density-dependent form, DD-LZ1 eliminates the spurious shell closures that appeared in previous RMF calculations, and reasonably restores the PSS of high orbital angular momentum near the Fermi energy [56]. Applications with this new RMF Lagrangian have been performed for several nuclear many-body characteristics, in both finite nuclei with mass ranging from light to superheavy, and neutron star properties with density ranging from low to high. For instance, a comprehensive macroscopic-microscopic model was developed to evaluate the total energies for even-even nuclei with proton numbers ranging from 8 to 110 [57]. Even with the appearance of hyperons [58, 59], larger maximum masses of neutron stars could be obtained with DD-LZ1 than with several other RMF parameter sets, providing the possibility that the secondary object observed in GW190814 is a neutron star [60, 61, 62]. Utilizing the Thomas-Fermi approximation, different microscopic structures of nonuniform nuclear matter were calculated for the crust of neutron stars and a unified equation of state was established in a vast density range [63, 64]. The different density-dependent behaviors of meson-nucleon couplings impact the microscopic structures of neutron star matter with DD-LZ1, and correspondingly affect the description of various physical processes and evolutions of neutron stars.
Apart from dealing with the different nuclear medium effects caused by the interactions themselves, the evolution of isospin also leads to significant changes in the in-medium effects of hypernuclei, thereby affecting the description of their structural properties. In recent years, a series of refined theoretical studies have been conducted on hypernuclei in different isotopic chains using various interaction models. For instance, the no-core shell model has been employed to investigate the systematic evolution of the ground and excited state energies in the Helium and Lithium hyperisotopes [20]. The antisymmetrized molecular dynamics method has been applied to explore the low-lying level structure of hypernuclei in the Beryllium hyperisotopes [65]. The multidimensionally constrained RMF model has been used to study the shape evolution of hypernuclei in the Argon hyperisotopes [51]. The beyond mean-field approach has been utilized to discuss the evolution of \(p\)-state energies and composition in the Carbon hyperisotopes [23], as well as the hyperon halo structures in the Boron and Carbon hyperisotopes [26, 28]. These studies exhibit the significant role of isospin in the description of hypernuclear structure. In fact, with the development of hypernuclear spectroscopy, new experiments related to hypernuclei have been initiated, such as the planned measurements in the J-PARC project, aiming to study the \(\Lambda\) hyperon binding energies in neutron-rich hyperisotopes of \({}^{124-136}_{\Lambda}\)Sn [66, 67]. These experiments will provide crucial information about the properties of hypernuclei associated with various isospin circumstances.
In view of the essential role of nuclear in-medium effects on hypernuclear structure and their relevance to the isotopic evolution, we aim to further expand the density-dependent RMF model to investigate the structure of single-\(\Lambda\) hypernuclei in Oxygen hyperisotopes. First, we will introduce the theoretical framework of the hypernuclear RMF approach in Sec. **2**. Then, the induced \(\Lambda\)-nucleon (\(\Lambda N\)) effective interactions will be determined by fitting \(\Lambda\) separation energies to the experimental data for the DD-LZ1 Lagrangian. In Sec. **3**, the influence of nuclear in-medium effects on the isospin dependence of hypernuclear bulk properties, hyperon spin-orbit splitting, and matter/hyperon radii will be studied. Finally, a summary will be given in Sec. **4**.
## 2 DDRMF approach for spherical single-\(\Lambda\) hypernuclei
To describe single-\(\Lambda\) hypernuclei within the meson-exchanged type of the relativistic mean-field theory, the covariant Lagrangian density serves as the foundation, which is
\[\mathscr{L}=\mathscr{L}_{B}+\mathscr{L}_{\varphi}+\mathscr{L}_{I}, \tag{1}\]
where the terms of free fields read as
\[\mathscr{L}_{B} = \sum_{B}\bar{\psi}_{B}\left(i\gamma^{\mu}\partial_{\mu}-M_{B}\right)\psi_{B}, \tag{2}\]
\[\mathscr{L}_{\varphi} = \frac{1}{2}\partial^{\mu}\sigma\partial_{\mu}\sigma-\frac{1}{2}m_{\sigma}^{2}\sigma^{2}-\frac{1}{4}\Omega^{\mu\nu}\Omega_{\mu\nu}+\frac{1}{2}m_{\omega}^{2}\omega^{\mu}\omega_{\mu}-\frac{1}{4}\vec{R}^{\mu\nu}\cdot\vec{R}_{\mu\nu}+\frac{1}{2}m_{\rho}^{2}\vec{\rho}^{\,\mu}\cdot\vec{\rho}_{\mu}-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}, \tag{3}\]
where the index \(B\) (\(B^{\prime}\)) represents nucleon \(N\) or hyperon \(\Lambda\), with its sum \(\sum_{B}\) over nucleon \(N\) and hyperon \(\Lambda\). The masses of the baryons and mesons are given by \(M_{B}\) and \(m_{\phi}\) (\(\phi=\sigma,\omega^{\mu},\vec{\rho}^{\,\mu}\)), while \(\Omega^{\mu\nu}\), \(\vec{R}^{\mu\nu}\) and \(F^{\mu\nu}\) are the field tensors of the vector mesons \(\omega^{\mu},\vec{\rho}^{\,\mu}\) and photon \(A^{\mu}\), respectively. The interaction between nucleon (hyperon) and mesons (photon) is contained in the Lagrangian \(\mathscr{L}_{I}\),
\[\mathscr{L}_{I} = \sum_{B}\bar{\psi}_{B}\left(-g_{\sigma B}\sigma-g_{\omega B}\gamma ^{\mu}\omega_{\mu}\right)\psi_{B} \tag{4}\] \[\quad+\bar{\psi}_{N}\left(-g_{\rho N}\gamma^{\mu}\vec{\tau}\cdot \vec{\rho}_{\mu}-e\gamma^{\mu}\frac{1-\tau_{3}}{2}A_{\mu}\right)\psi_{N}.\]
Here the \(\Lambda\) hyperon (namely \(\psi_{B}\) taken as \(\psi_{\Lambda}\)), which is charge neutral with isospin zero, only takes part in interactions mediated by isoscalar mesons. The nuclear in-medium effects are introduced phenomenologically via the coupling strengths \(g_{\phi B}\) (\(g_{\phi N}\)), which are taken as baryon-density-dependent functions in the density-dependent RMF (DDRMF) approach to define the strengths of different meson-baryon (meson-nucleon) couplings [56, 68].
The effective Hamiltonian operator for \(\Lambda\) hypernuclei can be obtained by performing the general Legendre transformation on the Lagrangian density \(\mathscr{L}\) in Eq. (1), and it can be written as the sum of the kinetic energy operator \(\hat{T}\) and the potential energy operators \(\hat{V}_{\varphi}\),
\[\hat{H} \equiv \hat{T}+\sum_{\varphi}\hat{V}_{\varphi} \tag{5}\]
\[= \int dx\sum_{B}\bar{\psi}_{B}(x)(-i\mathbf{\gamma}\cdot\mathbf{\nabla}+M_{B})\psi_{B}(x)+\frac{1}{2}\int dxdx^{\prime}\sum_{BB^{\prime}}\sum_{\varphi}\left[\bar{\psi}_{B}\mathscr{G}_{\varphi B}\psi_{B}\right]_{x}D_{\varphi}(x,x^{\prime})\left[\bar{\psi}_{B^{\prime}}\mathscr{G}_{\varphi B^{\prime}}\psi_{B^{\prime}}\right]_{x^{\prime}},\]
here \(x\) is the four-vector \((t,\mathbf{x})\). Correspondingly, we define interaction vertices \(\mathscr{G}_{\varphi B}(x)\) for a variety of meson (photon)-nucleon (hyperon) coupling channels, which for isoscalar \(\sigma\) and \(\omega\) mesons are represented as
\[\mathscr{G}_{\sigma B}(x) = +g_{\sigma B}(x), \tag{6a}\] \[\mathscr{G}_{\omega B}^{\mu}(x) = +g_{\omega B}(x)\gamma^{\mu}. \tag{6b}\]
Notably, both nucleons and the \(\Lambda\) hyperon can contribute to the isoscalar meson fields. However, for the remaining isovector meson and photon fields, their interaction vertices connect solely to nucleons, owing to the isoscalar and charge-neutral nature of the \(\Lambda\) hyperon,
\[\mathscr{G}_{\rho N}^{\mu}(x) = +g_{\rho N}(x)\gamma^{\mu}\vec{\tau}, \tag{7a}\] \[\mathscr{G}_{AN}^{\mu}(x) = +e\gamma^{\mu}\frac{1-\tau_{3}}{2}. \tag{7b}\]
As the retardation effects could be neglected in the majority of RMF models, the meson (photon) propagators \(D_{\phi}\) (\(D_{A}\)) read as
\[D_{\phi}(\mathbf{x},\mathbf{x}^{\prime}) = \frac{1}{4\pi}\frac{e^{-m_{\phi}|\mathbf{x}-\mathbf{x}^{\prime}|}}{|\mathbf{x}-\mathbf{x}^{\prime}|}, \quad D_{A}(\mathbf{x},\mathbf{x}^{\prime}) = \frac{1}{4\pi}\frac{1}{|\mathbf{x}-\mathbf{x}^{\prime}|}. \tag{8}\]
The baryon field operator \(\psi_{B}\) in the Hamiltonian (5) can be second quantized in the positive-energy space under the no-sea approximation as
\[\psi_{B}(x) = \sum_{i}f_{i}(\mathbf{x})e^{-i\epsilon_{i}t}c_{i}. \tag{9}\]
Here, \(f_{i}\) represents the Dirac spinor, while \(c_{i}\) denotes the annihilation operator for the state \(i\). Accordingly, the energy functional \(E\) is determined by evaluating the expectation value of the Hamiltonian with respect to a trial Hartree-Fock ground state \(|\Phi_{0}\rangle\),
\[E = \left\langle\Phi_{0}|\hat{H}|\Phi_{0}\right\rangle=\left\langle \Phi_{0}|\hat{T}|\Phi_{0}\right\rangle+\sum_{\varphi}\left\langle\Phi_{0} \left|\hat{V}_{\varphi}\right|\Phi_{0}\right\rangle. \tag{10}\]
The binding energy of a \(\Lambda\) hypernucleus can then be written as
\[E = \sum_{B}(E_{\rm kin,B}+E_{\rm\sigma,B}+E_{\rm\omega,B})+E_{\rm\rho,N}+E_{\rm e.m.}+E_{\rm c.m.}+E_{\rm pair}, \tag{11}\]
where the kinetic energy functional of the baryons is denoted by \(E_{\rm kin,B}\). The contributions of the potential energy functional from \(\sigma\) and \(\omega\) are denoted by \(E_{\rm\sigma,B}\) and \(E_{\rm\omega,B}\). Additionally, \(E_{\rm\rho,N}\) and \(E_{\rm e.m.}\) represent the contributions from \(\rho\) and \(A\), respectively. The center-of-mass correction to the mean field is represented by the term \(E_{\rm c.m.}\), while \(E_{\rm pair}\) accounts for the contribution from nucleon pairing correlations [69].
The role of deformation in single-\(\Lambda\) hypernuclei has been discussed in various density functional models [23, 25, 70, 71]; it may generate non-negligible effects on the single-particle energies, as in the Carbon hyperisotopes [23, 25, 71]. To describe single-\(\Lambda\) hypernuclei, in particular the Oxygen hyperisotopes discussed hereafter, we restrict the RMF approach to spherical symmetry. Correspondingly, the Dirac spinor \(f_{i}(\mathbf{x})\) of the nucleon or hyperon in Eq. (9) has the following form:
\[f_{n\kappa m}(\mathbf{x}) = \frac{1}{r}\left(\begin{array}{c}iG_{a}(r)\Omega_{\kappa m}( \vartheta,\varphi)\\ F_{a}(r)\Omega_{-\kappa m}(\vartheta,\varphi)\end{array}\right), \tag{12}\]
where the index \(a\) consists of the set of quantum numbers \((n\kappa)=(njl)\), and \(\Omega_{\kappa m}\) is the spherical spinor. Meanwhile, the propagators can be expanded in terms of spherical Bessel and spherical harmonic functions as
\[D_{\phi}(\mathbf{x},\mathbf{x}^{\prime}) = \sum_{L=0}^{\infty}\sum_{M=-L}^{L}(-1)^{M}R_{LL}^{\phi}\left(r,r^ {\prime}\right)Y_{LM}\left(\mathbf{\Omega}\right)Y_{L-M}\left(\mathbf{\Omega}^{\prime}\right), \tag{13}\]
where \(\mathbf{\Omega}=(\vartheta,\varphi)\), and \(R_{LL}\) contains the modified Bessel functions \(I\) and \(K\) as
\[R_{LL}^{\phi}\left(r,r^{\prime}\right) = \sqrt{\frac{1}{rr^{\prime}}}I_{L+\frac{1}{2}}\left(m_{\phi}r_{<} \right)K_{L+\frac{1}{2}}\left(m_{\phi}r_{>}\right), \tag{14}\] \[R_{LL}^{A}\left(r,r^{\prime}\right) = \frac{1}{2L+1}\frac{r_{<}^{L}}{r_{>}^{L+1}}. \tag{15}\]
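The consistency of Eqs. (8), (13) and (14) can be checked numerically. The following short Python sketch (assuming only numpy and scipy are available; the meson mass and coordinates are arbitrary illustrative choices, not values used in the actual calculations) compares the closed-form Yukawa propagator with its truncated multipole expansion, where the sum over \(M\) has been folded into a Legendre polynomial through the spherical-harmonic addition theorem.

```python
import numpy as np
from scipy.special import iv, kv, eval_legendre

def yukawa_direct(m, x1, x2):
    """Closed-form meson propagator D_phi of Eq. (8)."""
    d = np.linalg.norm(np.asarray(x1) - np.asarray(x2))
    return np.exp(-m * d) / (4.0 * np.pi * d)

def yukawa_multipole(m, r1, r2, cos_gamma, lmax=40):
    """Truncated expansion of Eqs. (13)-(14); the M sum is collapsed into
    a Legendre polynomial via the addition theorem of spherical harmonics."""
    r_less, r_more = min(r1, r2), max(r1, r2)
    total = 0.0
    for L in range(lmax + 1):
        R_L = iv(L + 0.5, m * r_less) * kv(L + 0.5, m * r_more) / np.sqrt(r1 * r2)  # Eq. (14)
        total += (2 * L + 1) / (4.0 * np.pi) * R_L * eval_legendre(L, cos_gamma)
    return total

# Illustrative check with arbitrary points (in fm) and a sigma-like mass (fm^-1)
m_sigma = 2.5
x1, x2 = np.array([1.0, 0.5, 0.2]), np.array([-1.5, 2.0, 0.6])
r1, r2 = np.linalg.norm(x1), np.linalg.norm(x2)
cos_gamma = np.dot(x1, x2) / (r1 * r2)
print(yukawa_direct(m_sigma, x1, x2), yukawa_multipole(m_sigma, r1, r2, cos_gamma))
```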
In the DDRMF approach, the meson-baryon coupling strengths are taken as functions of the baryon density \(\rho_{b}\), written as
\[g_{\phi B}\left(\rho_{b}\right) = g_{\phi B}(0)f_{\phi B}(\xi)\quad\mbox{or}\quad g_{\phi B} \left(\rho_{b}\right)=g_{\phi B}(0)e^{-a_{\phi B}\xi}, \tag{16}\]
where \(\xi=\rho_{b}/\rho_{0}\) with \(\rho_{0}\) the saturation density of nuclear matter, and
\[f_{\phi B}(\xi) = a_{\phi B}\frac{1+b_{\phi B}(\xi+d_{\phi B})^{2}}{1+c_{\phi B}( \xi+d_{\phi B})^{2}}. \tag{17}\]
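As an illustration of how Eqs. (16) and (17) are evaluated in practice, the following Python sketch implements both density-dependent forms; the parameter values and the saturation density used below are placeholders for illustration only, not the fitted DD-LZ1-\(\Lambda\)1 values.

```python
import numpy as np

def coupling_strength(rho_b, g0, a, b=None, c=None, d=None,
                      rho0=0.152, exponential=False):
    """Density-dependent meson-baryon coupling, Eqs. (16)-(17).

    rho_b : baryon density in fm^-3, g0 : free-space coupling g(0),
    rho0  : saturation density of nuclear matter in fm^-3 (illustrative value).
    """
    xi = np.asarray(rho_b, dtype=float) / rho0
    if exponential:                      # g(rho_b) = g(0) * exp(-a * xi)
        return g0 * np.exp(-a * xi)
    f = a * (1.0 + b * (xi + d) ** 2) / (1.0 + c * (xi + d) ** 2)   # Eq. (17)
    return g0 * f

# Placeholder parameters for a sigma-like channel, evaluated at saturation density
print(coupling_strength(0.152, g0=10.0, a=1.06, b=1.4, c=2.0, d=0.4))
```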
The free coupling strength at \(\rho_{b}=0\) is represented by \(g_{\phi B}(0)\) in Eq. (16). To preserve the variational self-consistency between the energy density functional and the single-particle properties, extra terms in the baryon self-energies, namely the rearrangement terms, arise from the density dependence of the coupling strengths. The single-particle (nucleon or hyperon) properties can be determined by solving the Dirac equation,
\[\varepsilon_{a,B}\begin{pmatrix}G_{a,B}(r)\\ F_{a,B}(r)\end{pmatrix}=\begin{pmatrix}\Sigma_{+}^{B}(r)&-\frac{d}{dr}+\frac{ \kappa_{a,B}}{r}\\ \frac{d}{dr}+\frac{\kappa_{a,B}}{r}&-\left[2M_{B}-\Sigma_{-}^{B}(r)\right] \end{pmatrix}\begin{pmatrix}G_{a,B}(r)\\ F_{a,B}(r)\end{pmatrix}. \tag{18}\]
Here the self-energies \(\Sigma_{\pm}^{B}=\Sigma_{0,B}\pm\Sigma_{S,B}\) are composed of the vector and scalar terms. The scalar self-energy is \(\Sigma_{S,B}=\Sigma_{S,B}^{\sigma}\), and the time component of the vector one reads
\[\Sigma_{0,B}(r)=\sum_{\phi}\Sigma_{0,B}^{\phi}(r)+\Sigma_{R}(r), \tag{19}\]
where \(\phi=\omega,\rho\) for nucleons, and \(\phi=\omega\) for the \(\Lambda\) hyperon. The self-energies of the nucleon or hyperon include the scalar one \(\Sigma_{S,B}\) and the vector one \(\Sigma_{0,B}\), to which the coupling of the isoscalar mesons contributes as follows,
\[\Sigma_{S,B}^{\sigma}(r)= -g_{\sigma B}(r)\sum_{B^{\prime}}\int r^{\prime 2}dr^{\prime}g_{ \sigma B^{\prime}}(r^{\prime})\rho_{s,B^{\prime}}(r^{\prime})R_{00}^{\sigma}(r,r^{\prime}), \tag{20a}\] \[\Sigma_{0,B}^{\omega}(r)= +g_{\omega B}(r)\sum_{B^{\prime}}\int r^{\prime 2}dr^{\prime}g_{ \omega B^{\prime}}(r^{\prime})\rho_{b,B^{\prime}}(r^{\prime})R_{00}^{\omega}(r,r^{\prime}). \tag{20b}\]
Here, \(\rho_{s,B}\) and \(\rho_{b,B}\) represent the scalar and baryon densities, respectively [69]. Additionally, the rearrangement term \(\Sigma_{R}\) appears in the DDRMF approach; it contains a summation over all baryons for the isoscalar channels \(\phi=\sigma,\omega\), but only over nucleons for the isovector one. For example, the contribution from the \(\sigma\)-\(S\) coupling reads
\[\Sigma_{R,\sigma}(r)=\sum_{B}\frac{1}{g_{\sigma B}}\frac{\partial g_{\sigma B }}{\partial\rho_{b}}\rho_{s,B}\Sigma_{S,B}^{\sigma}(r). \tag{21}\]
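To make the structure of the Hartree folds in Eq. (20) concrete, the sketch below evaluates the scalar self-energy of Eq. (20a) for a single baryon species on a radial grid, using the \(L=0\) propagator of Eq. (14); the density profile, coupling and meson mass are schematic placeholders rather than a fitted functional.

```python
import numpy as np

def R00(m_phi, r, rp):
    """L = 0 radial propagator of Eq. (14), written with elementary functions:
    I_{1/2}(m r_<) K_{1/2}(m r_>) / sqrt(r r') = sinh(m r_<) exp(-m r_>) / (m r r')."""
    r_less, r_more = np.minimum(r, rp), np.maximum(r, rp)
    return np.sinh(m_phi * r_less) * np.exp(-m_phi * r_more) / (m_phi * r * rp)

def sigma_self_energy(r_grid, rho_s, g_sigma, m_sigma):
    """Hartree fold of Eq. (20a) for one species:
    Sigma_S(r) = -g(r) * integral of r'^2 dr' g(r') rho_s(r') R00(r, r')."""
    dr = r_grid[1] - r_grid[0]
    sigma_s = np.empty_like(r_grid)
    for i, r in enumerate(r_grid):
        integrand = r_grid**2 * g_sigma * rho_s * R00(m_sigma, r, r_grid)
        sigma_s[i] = -g_sigma[i] * np.sum(integrand) * dr
    return sigma_s

# Schematic inputs: Woods-Saxon-like scalar density, constant coupling
r = np.linspace(0.05, 20.0, 400)                  # fm, as in the R = 20 fm box
rho_s = 0.15 / (1.0 + np.exp((r - 3.0) / 0.55))   # fm^-3 (placeholder profile)
g_sig = np.full_like(r, 10.0)
m_sig = 2.5                                       # fm^-1 (roughly a sigma mass)
print(sigma_self_energy(r, rho_s, g_sig, m_sig)[:3])
```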
## 3 Results and Discussion
In recent years, there has been extensive theoretical research on hypernuclei, particularly on the simplest single-\(\Lambda\) hypernuclei, using RMF and RHF theories. In this section, we extend the effective interaction DD-LZ1 [56], which has proven successful in determining nuclear-structure properties in both bulk and single-particle aspects, to incorporate the \(\Lambda\) hyperon within the framework of the RMF model. To provide a comparative study and illustrate the role of nuclear in-medium effects, the calculations with DD-LZ1 are accompanied by several existing effective \(\Lambda N\) interactions within CDF models. These interactions have been extended to incorporate the degrees of freedom of the \(\Lambda\) hyperon and have yielded many successful findings on hypernuclear structure and the properties of dense stars. In detail, the density-dependent RMF effective interactions DD-LZ1 [56], PKDD [69], DD-ME2, TW99, DDV [72], the density-dependent RHF (DDRHF) effective interactions PKO1, PKO2, PKO3 [69], and the nonlinear RMF (NLRMF) effective interactions NL-SH [19] and PK1 [73] were selected. In these CDF functionals, the \(\omega\)-tensor coupling, which has been shown to be essential in reducing the \(\Lambda\) spin-orbit splitting in hypernuclei [74], is neglected. The Dirac equation is solved in a radial box of size \(R=20\) fm with a step of \(0.1\) fm. For open-shell hypernuclei, we employ the BCS method to account for pairing correlations. As the strength of hyperon pairing correlations remains uncertain and may become essential in multi-\(\Lambda\) hypernuclei, our current work considers pairing correlations solely between \(nn\) and \(pp\) pairs using the finite-range Gogny force D1S [75]; see Refs. [76, 77, 78, 79] for details. In addition, the blocking effect is taken into account for the last valence nucleon or hyperon, as described in detail in Ref. [69].
### Density dependence of \(\Lambda N\) effective interaction
For the theoretical study of hypernuclear structure, the \(\Lambda N\) interaction must be determined first. Since the \(\Lambda\) hyperon is an electrically neutral particle with isospin zero, our focus lies on the coupling strengths between the isoscalar-scalar \(\sigma\) meson and the isoscalar-vector \(\omega\) meson with the \(\Lambda\) hyperon. For convenience, we introduce the ratio of the coupling strengths between the meson-hyperon and meson-nucleon, \(g_{\phi\Lambda}/g_{\phi N}\). According to the naive
quark model [80], we fix the ratio of the isoscalar-vector meson coupling strength \(g_{\omega\Lambda}/g_{\omega N}\) to 0.666, while the ratio of the isoscalar-scalar one \(g_{\sigma\Lambda}/g_{\sigma N}\) can be obtained by reproducing the \(\Lambda\) hyperon separation energy \(B_{\Lambda}\) experimental data for \({}^{16}_{\Lambda}\)O, \({}^{40}_{\Lambda}\)Ca, and \({}^{208}_{\Lambda}\)Pb [3, 10]. In the fitting process, the hyperon is placed in the \(1s_{1/2}\) ground state, and the \(B_{\Lambda}\) is defined as follows:
\[B_{\Lambda}(^{A}_{\Lambda}Z)=E(^{A-1}Z)-E(^{A}_{\Lambda}Z). \tag{22}\]
Based on the effective interaction DD-LZ1, we finally obtained a new \(\Lambda N\) interaction, namely DD-LZ1-\(\Lambda\)1, after a fitting process based on Levenberg-Marquardt minimization. We then calculated the \(\Lambda\) separation energy \(B_{\Lambda}\) as well as the single-\(\Lambda\) energies, with the hyperon occupying the ground state \(1s_{1/2}\) or possible excited states with higher angular momentum \(l_{\Lambda}\). For the \(B_{\Lambda}\) of DD-LZ1-\(\Lambda\)1, remarkable agreement with the experimental data is found for most hypernuclei, except for \({}^{28}_{\Lambda}\)Si, which is significantly deformed, and the light-mass Carbon hyperisotopes, as shown in Fig. 1. A more accurate description of the light-mass Carbon hyperisotopes could be obtained by limiting the mass region of the fit and taking deformation effects into account [55]. To investigate the deviations in describing the structural properties of single-\(\Lambda\) hypernuclei with different CDF effective interactions, the coupling strengths of DD-LZ1-\(\Lambda\)1 are listed in Table 1 in comparison with the other selected CDF functionals. One can quantify the agreement by the root-mean-square deviation \(\Delta\) of \(B_{\Lambda}\) between the theoretical calculations and the experimental values, defined as
\[\Delta\equiv\sqrt{\frac{1}{N}\sum_{i=1}^{N}(B^{\rm exp.}_{\Lambda,i}-B^{\rm cal.}_{\Lambda,i})^{2}}. \tag{23}\]
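For concreteness, the two bookkeeping quantities of Eqs. (22) and (23) can be evaluated with a few lines of Python; the energies below are invented placeholder numbers used only to show the arithmetic.

```python
import numpy as np

def lambda_separation_energy(E_core, E_hyper):
    """B_Lambda of Eq. (22): E(A-1, Z) - E(A_Lambda, Z), both total (negative) energies."""
    return E_core - E_hyper

def rms_deviation(B_exp, B_cal):
    """Root-mean-square deviation Delta of Eq. (23)."""
    B_exp, B_cal = np.asarray(B_exp), np.asarray(B_cal)
    return np.sqrt(np.mean((B_exp - B_cal) ** 2))

# Placeholder example: a core nucleus at -128.2 MeV and its hypernucleus at -141.2 MeV
print(lambda_separation_energy(-128.2, -141.2))            # 13.0 MeV
print(rms_deviation([13.0, 18.7, 26.3], [12.8, 19.0, 26.5]))
```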
To reveal the systematics, we define \(\Delta_{1}\) as the deviation computed only for \({}^{16}_{\Lambda}\)O, \({}^{40}_{\Lambda}\)Ca, and \({}^{208}_{\Lambda}\)Pb, and \(\Delta_{2}\) as the deviation computed for all hypernuclei.
From Table 1, it can be seen that the different CDF theoretical models describe \({}^{16}_{\Lambda}\)O, \({}^{40}_{\Lambda}\)Ca and \({}^{208}_{\Lambda}\)Pb well, and most parameter sets show good consistency between the hypernuclear calculations and the experimental data over a large mass range from \({}^{12}_{\Lambda}\)C to \({}^{208}_{\Lambda}\)Pb. In addition, by comparing the three different types of CDF effective interactions, we find that, when the ratio of the isoscalar-vector meson coupling strengths is fixed to the same value, the ratio of the isoscalar-scalar meson coupling strengths \(g_{\sigma\Lambda}/g_{\sigma N}\) may satisfy a certain linear correlation with the ratio of the isoscalar-vector meson coupling strengths, which has been systematically explored in some works [55, 59, 81]. It should be pointed out that the linear correlation of the meson-hyperon coupling-strength ratios obtained in the RMF framework is clearly not suitable for density-dependent RHF models [69].
Figure 1: (Color Online) The calculated \(\Lambda\) separation energies \(B_{\Lambda}\) for the single-\(\Lambda\) hypernuclei with the RMF effective interaction DD-LZ1-\(\Lambda\)1 in comparison with the experimental data taken from Ref. [3, 10].
To provide a comprehensive understanding of the in-medium equilibrium in hypernuclei, we present the density dependence of the coupling strengths for the selected CDF effective interactions in Fig. 2(a) and Fig. 2(b), corresponding to the isoscalar-scalar channel \(g_{\sigma\Lambda}\) and the isoscalar-vector one \(g_{\omega\Lambda}\). There are systematic divergences of the meson-hyperon coupling strengths with increasing density among the density-dependent RMF, density-dependent RHF, and nonlinear RMF effective interactions. Notably, the density dependence of \(g_{\sigma\Lambda}\) and \(g_{\omega\Lambda}\) is significantly reduced in the DDRHF effective interactions compared to the DDRMF ones. This pronounced reduction in density dependence also influences the description of single-particle properties in hypernuclei, such as the \(\Lambda\) hyperon spin-orbit splitting [69]. Furthermore, in contrast to the density-dependent interactions, the NLRMF effective interactions exhibit density-independent \(g_{\sigma\Lambda}\) and \(g_{\omega\Lambda}\). Consequently, when these three types of CDF effective interactions are applied to single-\(\Lambda\) hypernuclei, systematic deviations can arise in the description of the isospin dependence of the hypernuclear structure.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline & DD-LZ1-\(\Lambda\)1 & PKDD-\(\Lambda\)1 & DD-ME2 & TW99 & DDV & PKO1-\(\Lambda\)1 & PKO2-\(\Lambda\)1 & PKO3-\(\Lambda\)1 & NL-SH & PK1 \\ \hline \(g_{\sigma\Lambda}/g_{\sigma N}\) & 0.615 & 0.620 & 0.620 & 0.617 & 0.622 & 0.596 & 0.591 & 0.594 & 0.621 & 0.618 \\ \(\Delta_{1}\) & 0.319 & 0.363 & 0.245 & 0.375 & 0.473 & 0.265 & 0.260 & 0.407 & 0.916 & 0.519 \\ \(\Delta_{2}\) & 1.810 & 0.734 & 0.710 & 0.684 & 3.460 & 0.683 & 0.527 & 0.881 & 1.614 & 1.184 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The \(\sigma\)-\(\Lambda\) coupling strengths \(g_{\sigma\Lambda}/g_{\sigma N}\) fitted for the DDRMF effective interactions DD-LZ1-\(\Lambda\)1, PKDD-\(\Lambda\)1 [69], DD-ME2, TW99 and DDV [72], the DDRHF ones PKO1-\(\Lambda\)1, PKO2-\(\Lambda\)1 and PKO3-\(\Lambda\)1 [69], as well as the NLRMF ones NL-SH [19] and PK1 [73], obtained by minimizing the root-mean-square deviation \(\Delta_{1}\) (in MeV) from the experimental values of the \(\Lambda\) separation energies of \({}^{16}_{\Lambda}\)O, \({}^{40}_{\Lambda}\)Ca and \({}^{208}_{\Lambda}\)Pb, where the \(\omega\)-\(\Lambda\) coupling is fixed to \(g_{\omega\Lambda}/g_{\omega N}=0.666\). \(\Delta_{2}\) represents the root-mean-square deviation between the theoretical calculations and the experimental values of the \(\Lambda\) separation energies for all hypernuclei shown in Fig. 1.
In the DDRMF approach, the in-medium effects of the nuclear force are effectively embedded in the density-dependent shape of the meson-baryon coupling strengths, which act on the nuclear structure via the equilibrium of the nuclear dynamics from the various coupling channels. In recent years, analysis based on the equilibrium of nuclear in-medium dynamics has been applied to clarify the mechanism of the pseudospin symmetry, the shell evolution, the liquid-gas phase transition, and the hyperon's spin-orbit splitting in the CDF models [56, 69, 78, 82, 83]. The delicate in-medium balance between nuclear attractive and repulsive interactions may be significantly altered by treating the density dependence of the coupling strengths differently, impacting the description of the properties of nuclear matter and finite nuclei with different CDF effective interactions.
Figure 2: (Color Online) Meson-hyperon coupling strengths, namely, the isoscalar \(g_{\sigma\Lambda}\) [panel (a)] and \(g_{\omega\Lambda}\) [panel (b)], as functions of baryonic density \(\rho_{b}\)(fm\({}^{-3}\)) for the DDRMF effective interactions DD-LZ1-\(\Lambda\)1, PKDD-\(\Lambda\)1, DD-ME2, TW99 and DDV, the DDRHF ones PKO1-\(\Lambda\)1, PKO2-\(\Lambda\)1 and PKO3-\(\Lambda\)1, as well as NLRMF ones NL-SH and PK1.
### Bulk properties of single-\(\Lambda\) hypernuclei in Oxygen hyperisotopes
To focus on the isospin dependence of single-particle properties, we choose the \(\Lambda\) hypernuclei and their nucleonic counterparts in the Oxygen (hyper)isotopes as examples, since they are usually spherical. To check the accuracy of the chosen interactions in describing the properties of finite nuclei, we first calculated the binding energies \(E_{B}\), charge radii \(R_{\rm c}\), and matter radii \(R_{\rm m}\) of the Oxygen isotopes using the DD-LZ1 effective interaction, and compared them with the experimental measurements taken from Refs. [84, 85, 86]. From the results in Table 2, we can see that the theoretical calculations and the experimental measurements are in good agreement for both the binding energies \(E_{B}\) and the charge radii \(R_{\rm c}\) for the interaction DD-LZ1. It is worth noting that the total matter radius \(R_{\rm m}\) of finite nuclei, unlike the charge radius, still carries significant uncertainties, as it is extracted from heavy-ion reaction experiments. The theoretical calculations of \(R_{\rm m}\) are consistent with the experimental measurements within the error bars.
Furthermore, we summarize in Table 3 the systematics of the occupied \(\Lambda\) energy level, the \(\Lambda\) single-particle energies, the total binding energies, the charge radii, and the matter radii of the hypernuclei in the Oxygen hyperisotopes. To provide a possible reference for hypernuclear experiments, we also calculated the electric dipole transition strength \(B(E1)\) between the \(\Lambda 1p\) and \(\Lambda 1s\) occupation states. The transition strength is expressed as
\[B(E1;J_{i}\!\longrightarrow\!J_{f}) = \frac{3e_{\Lambda}^{2}}{4\pi}\langle f|r|i\rangle^{2}(2j_{f}+1) \begin{pmatrix}j_{f}&1&j_{i}\\ -\frac{1}{2}&0&\frac{1}{2}\end{pmatrix}^{2}, \tag{24}\]
where \(e_{\Lambda}\) represents the effective charge of the \(\Lambda\) hyperon. The radial integral \(\langle f|r|i\rangle\) can be computed using the radial wave functions of the initial and final single-\(\Lambda\) states; see Ref. [26] for details.
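A minimal numerical sketch of the angular part of Eq. (24) is given below, using sympy's Wigner \(3j\) symbol; the effective charge and the radial integral are placeholder numbers, since the latter must come from the actual Dirac wave functions.

```python
from sympy import Rational, pi
from sympy.physics.wigner import wigner_3j

def BE1(j_i, j_f, radial_me, e_eff):
    """B(E1) of Eq. (24) for a single-particle Lambda transition."""
    threej = wigner_3j(j_f, 1, j_i, Rational(-1, 2), 0, Rational(1, 2))
    return Rational(3, 4) / pi * e_eff**2 * radial_me**2 * (2 * j_f + 1) * threej**2

# Placeholder inputs: 1p_{3/2} -> 1s_{1/2}, radial integral 3.0 fm, e_Lambda = 0.1 e
print(float(BE1(Rational(3, 2), Rational(1, 2), 3.0, 0.1)))
```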
In the framework of relativistic models, Dirac spinors with both upper and lower components could contribute to the value of \(B(E1)\). However, we have checked that the contribution from the lower component is negligible, especially for the non-charge-exchange channel. Therefore, only the contribution from the upper component is kept in the current calculations as a simplification. The inclusion of the \(\Lambda\) hyperon causes the so-called impurity effect inside hypernuclei [2]. When the \(\Lambda\) hyperon fills the \(1s_{1/2}\) state, the comparison of the total matter radii in Table 3 and Table 2 shows that the introduction of the hyperon shrinks the hypernuclei by approximately \(0.06\!-\!0.13\) fm. Compared with the ground-state results, we observe a significant enhancement of the \(\Lambda\) root-mean-square radius when the hyperon fills the higher-lying \(1p\) state. This change in the hyperon density distribution with the occupied level leads to an overall expansion of the hypernuclear matter radii, in contrast to the \(\Lambda 1s\) case. Additionally, as more neutrons are filled in, the hyperon radii, matter radii and \(B(E1)\) all show a significant isospin dependence, which can be qualitatively explained by the density dependence of the coupling strengths. As indicated in Table 3, when the \(\Lambda\) hyperon occupies the \(1p\) state, its density distribution spreads further outward than the nucleonic core. As the isospin evolves, more neutrons are filled in and their attraction to the hyperon increases, correspondingly leading to a significant reduction of the hyperon radius. The value of \(B(E1)\) is determined not only by the overlap between the initial and final states, which is sensitive to the neutron number, but also by the effective charge. As a result, the \(B(E1)\) values increase slightly from \({}^{15}_{\Lambda}\)O to \({}^{17}_{\Lambda}\)O and then decrease gradually as the isospin evolves beyond \(N\!=\!8\).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Nucleus & \(E_{B}\)(MeV) & \(E_{B}^{\rm exp.}\)(MeV) & \(R_{\rm c}\)(fm) & \(R_{\rm c}^{\rm exp.}\)(fm) & \(R_{\rm m}\)(fm) & \(R_{\rm m}^{\rm exp.}\)(fm) \\ \hline \({}^{14}\)O & -99.699 & -98.732 & 2.766 & & 2.543 & \\ \({}^{16}\)O & -128.215 & -127.619 & 2.752 & 2.699 & 2.619 & 2.57(2) \\ \({}^{18}\)O & -140.017 & -139.808 & 2.749 & 2.773 & 2.761 & 2.64(8) \\ \({}^{20}\)O & -150.687 & -151.371 & 2.746 & & 2.868 & 2.71(3) \\ \({}^{22}\)O & -160.364 & -162.028 & 2.746 & & 2.955 & 2.90(5) \\ \({}^{24}\)O & -168.802 & -168.960 & 2.761 & & 3.054 & 3.18(12) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Binding energy \(E_{B}\), charge radii \(R_{\rm c}\) and matter radii \(R_{\rm m}\) of normal nuclei \({}^{Z+N}\)O, calculated by DDRMF effective interaction DD-LZ1, compared to the experimental data [84, 86, 87, 88].
### Isospin dependence of \(\Lambda\) spin-orbit splitting
Motivated by the connection between the density-dependent effective interactions of theoretical models and the isospin-dependent properties of nuclear structure, the spin-orbit splitting of the \(\Lambda\) hyperon in hypernuclei, a promising observable in current hypernuclear spectroscopy, is discussed in this subsection with the newly developed DD-LZ1-\(\Lambda\)1 and the other selected CDF functionals. The \(\Lambda\) spin-orbit splitting is defined as the difference of the \(\Lambda\) single-particle energies between a pair of spin partner states,
\[\Delta E^{\Lambda}_{\rm SO}\equiv\varepsilon_{j_{\Lambda}=l_{\Lambda}-1/2}-\varepsilon_{j_{\Lambda}=l_{\Lambda}+1/2}. \tag{25}\]
As shown in Fig. 3, the analysis is carried out for \(\Lambda\) spin partner states \(1p\) in Oxygen hyperisotopes, with the \(\Lambda\) hyperon occupying its ground state.
In Fig. 3(a), it is seen that the isospin dependence of \(\Delta E^{\Lambda}_{\rm SO}\) clearly differs among the chosen CDF functionals. The curves from the NLRMF models tend to be stable with increasing neutron number, while for the density-dependent RMF or RHF functionals the splitting generally grows with isospin. Among them, DD-LZ1-\(\Lambda\)1 exhibits the most significant isospin dependence. Moreover, a smaller \(\Lambda\) spin-orbit splitting is predicted by DDRHF than by RMF, which has been explained by the fact that the dynamical equilibrium between nuclear attraction and repulsion in the single-particle properties is dramatically changed by the appearance of the Fock terms [69].
To better understand the evolution of the \(\Lambda\) spin-orbit splitting with isospin, we decompose \(\Delta E^{\Lambda}_{\rm SO}\) into several parts according to their origin in the kinetic or potential energy. The values are obtained by left-multiplying the Dirac equation Eq. (18) by the transposed Dirac spinor and separating the integrated contributions from the different self-energy terms. For instance, \(\Delta E_{\rm rea}\) comes from the contribution of the rearrangement term \(\Sigma_{R}\) to the \(\Lambda\) self-energy \(\Sigma_{0,\Lambda}\), as seen in Eq. (19), due to the density dependence of the meson-hyperon couplings. The remainder, from the kinetic energy and the density-independent potential energies, is collected into \(\Delta E_{\rm kin+\sigma+\omega}\!\equiv\!\Delta E^{\Lambda}_{\rm SO}-\Delta E_{\rm rea}\), as discussed in Fig. 3(b).
It is observed that the values of \(\Lambda\) spin-orbit splitting are primarily determined by \(\Delta E_{\rm kin+\sigma+\omega}\). However, the isospin dependence of the splitting is weakly controlled by \(\Delta E_{\rm kin+\sigma+\omega}\) except for \({}^{15}_{\Lambda}\)O. Attributed to the occupation
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Nucleus & \(\Lambda(nlj)\) & \(\varepsilon_{\rm s.p.}\)(MeV) & \(E_{B}\)(MeV) & \(R_{\rm c}\)(fm) & \(R_{\Lambda}\)(fm) & \(R_{\rm m}\)(fm) & \(B(E1)(e^{2}{\rm fm}^{2})\) \\ \hline & \(1s_{1/2}\) & -14.330 & -113.245 & 2.716 & 2.115 & 2.458 & \\ \({}^{15}_{\Lambda}\)O & \(1p_{1/2}\) & -0.413 & -99.590 & 2.772 & 5.265 & 2.813 & 0.095 \\ & \(1p_{3/2}\) & -1.582 & -101.002 & 2.760 & 4.134 & 2.674 & 0.119 \\ & \(1s_{1/2}\) & -13.086 & -140.507 & 2.704 & 2.323 & 2.555 & \\ \({}^{17}_{\Lambda}\)O & \(1p_{1/2}\) & -1.059 & -128.927 & 2.756 & 4.609 & 2.780 & 0.109 \\ & \(1p_{3/2}\) & -2.278 & -130.307 & 2.746 & 3.963 & 2.711 & 0.121 \\ & \(1s_{1/2}\) & -14.170 & -153.506 & 2.699 & 2.310 & 2.682 & \\ \({}^{19}_{\Lambda}\)O & \(1p_{1/2}\) & -1.720 & -141.540 & 2.751 & 4.291 & 2.861 & 0.090 \\ & \(1p_{3/2}\) & -3.036 & -143.003 & 2.740 & 3.824 & 2.815 & 0.097 \\ & \(1s_{1/2}\) & -15.394 & -165.477 & 2.695 & 2.295 & 2.773 & \\ \({}^{21}_{\Lambda}\)O & \(1p_{1/2}\) & -2.463 & -153.079 & 2.744 & 4.062 & 2.927 & 0.075 \\ & \(1p_{3/2}\) & -3.890 & -154.635 & 2.733 & 3.699 & 2.890 & 0.079 \\ & \(1s_{1/2}\) & -16.804 & -176.670 & 2.688 & 2.277 & 2.829 & \\ \({}^{23}_{\Lambda}\)O & \(1p_{1/2}\) & -3.285 & -163.703 & 2.737 & 3.882 & 2.977 & 0.063 \\ & \(1p_{3/2}\) & -4.841 & -165.374 & 2.725 & 3.582 & 2.943 & 0.066 \\ & \(1s_{1/2}\) & -17.634 & -185.728 & 2.723 & 2.256 & 2.969 & \\ \({}^{25}_{\Lambda}\)O & \(1p_{1/2}\) & -3.925 & -172.669 & 2.757 & 3.836 & 3.079 & 0.052 \\ & \(1p_{3/2}\) & -5.522 & -174.326 & 2.748 & 3.562 & 3.055 & 0.055 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Properties of single-\(\Lambda\) states in hypernucleus \({}^{Z+N+\Lambda}_{\Lambda}\)O calculated with the DDRMF effective interaction DD-LZ1-\(\Lambda\)1, including single-particle energies \(\varepsilon_{\rm s.p.}\), binding energies \(E_{B}\), charge radii \(R_{\rm c}\), hyperon radii \(R_{\Lambda}\), hypernuclear matter radii \(R_{\rm m}\) and \(B(E1)\) value of the transition from the excited (\(\Lambda 1p\)) state to the ground (\(\Lambda 1s\)) state.
of the \(\nu 1p_{1/2}\) orbit, the \(\Lambda\) spin-orbit splitting predicted by the various CDF functionals systematically decreases from \({}^{15}_{\Lambda}\)O to \({}^{17}_{\Lambda}\)O. As illustrated in Ref. [69], the spin-orbit coupling potential of the hyperon is determined mainly by the radial derivative of the self-energy \(\Sigma^{\Lambda}_{-}\). In general, the more neutrons are filled into the hypernucleus, the higher the density environment in which the \(\Lambda\) hyperon resides. Thus, if the model is density dependent, like the DDRMFs and DDRHFs shown in Fig. 2, the meson-hyperon coupling strengths weaken and \(\Delta E^{\Lambda}_{\rm SO}\) should correspondingly become smaller as the neutron number increases. As seen in Fig. 3(b), such a reduction in \(\Delta E_{\rm kin+\sigma+\omega}\) is remarkable from \({}^{15}_{\Lambda}\)O to \({}^{17}_{\Lambda}\)O, and relatively less significant at larger neutron numbers.
Different from the NLRMF case, the density-dependent CDFs introduce an extra contribution that reinforces the isospin dependence of the splitting, as demonstrated in Fig. 3(c); it overwhelmingly cancels the decreasing trend in \(\Delta E_{\rm kin+\sigma+\omega}\) and finally leads to the enhancement of \(\Delta E^{\Lambda}_{\rm SO}\) with increasing neutron number in Fig. 3(a). In fact, the contribution \(\Delta E_{\rm rea}\) to the \(\Lambda\) spin-orbit splitting originates from the rearrangement terms of the \(\Lambda\) self-energy \(\Sigma_{0,\Lambda}\), which according to Eq. (21) depend on the density slope of the meson-hyperon coupling strength. As the neutron number increases, the density environment in which the \(\Lambda\) resides becomes denser; consequently, the density dependence of the meson-hyperon coupling strength weakens, its density slope becomes smaller, and the value of \(\Delta E_{\rm rea}\) is suppressed. Therefore, the discussion of the Oxygen hyperisotopes elucidates the link between the isospin evolution of the \(\Lambda\) spin-orbit splitting and the in-medium behavior of the \(\Lambda N\) interaction with baryon density. In consequence, possible experimental constraints on \(\Delta E^{\Lambda}_{\rm SO}\) along the hyperisotopes could further assist in understanding the in-medium effects of the nuclear force.
### Isospin dependence of matter and hyperon radii
Among the properties of hypernuclear structure, not only the \(\Lambda\) spin-orbit splitting but also the \(\Lambda\) impurity effect can reveal information about the in-medium nuclear interactions. In Fig. 4(a), we select the DDRMF functionals DD-LZ1-\(\Lambda\)1 and DD-ME2, the DDRHF one PKO1-\(\Lambda\)1 and the NLRMF one PK1 to illustrate this influence on the matter radii of the Oxygen (hyper)isotopes, where the solid and dash-dotted lines correspond to the calculated results for the single-\(\Lambda\) hypernuclei and their nucleonic counterparts, respectively. The matter radius \(R_{m}\) of the hypernuclei increases monotonically with the neutron number, regardless of the specific model used, and the steep jump from \({}^{23}_{\Lambda}\)O to \({}^{25}_{\Lambda}\)O corresponds to the onset of the occupation of \(\nu 2s_{1/2}\).
Although divergent values are obtained for the Oxygen isotopes without a hyperon, the matter radii of the hypernuclei predicted by the selected models lie closer to each other, suggesting \(R_{m}\) of hypernuclei as a possible model-independent observable. It is
Figure 3: (Color Online) The spin-orbit splitting of \(\Lambda 1p\) spin-partner states as a function of neutron number \(N\) for the ground state in \({}^{\rm Z+N+\Lambda}_{\Lambda}\)O hypernuclei [panel (a)], and its contribution \(\Delta E_{\rm kin+\sigma+\omega}\) from the sum of the kinetic energy, the density-independent potential energies of \(\sigma\) and \(\omega\) channels [panel (b)], as well as the rearrangement terms \(\Delta E_{\rm rea}\) due to density-dependent meson-hyperon couplings [panel (c)]. The results are extracted from the calculations with the DDRMF effective interactions DD-LZ1-\(\Lambda\)1, PKDD-\(\Lambda\)1, DD-ME2, TW99 and DDV, the DDRHF ones PKO1-\(\Lambda\)1, PKO2-\(\Lambda\)1 and PKO3-\(\Lambda\)1, as well as the NLRMF ones NL-SH and PK1.
evident that the matter radii of the Oxygen hyperisotopes contract compared to their nucleonic counterparts, namely the size shrinkage due to the impurity effect of the \(\Lambda\) hyperon. However, the shrinkage magnitude appears to be strongly model dependent. Among them, the DDRMF effective Lagrangian DD-LZ1-\(\Lambda\)1 yields the largest difference between the solid and dash-dotted lines, whereas the NLRMF one PK1 shows the smallest disparity. By checking the bulk properties of nuclear matter within these CDFs, it is verified that the shrinkage magnitude correlates well with the incompressibility, which is 230.7 MeV for DD-LZ1, 250.8 MeV for DD-ME2, 250.2 MeV for PKO1, and 282.7 MeV for PK1, respectively [56, 89, 90]. In fact, the larger the incompressibility \(K\), the harder it is for the nucleus to be contracted by the attraction exerted by the embedded hyperon, and consequently the weaker the size-shrinkage effect in the calculated matter radii. A similar relation can be found in Table II of a work on the isoscalar giant monopole resonance of hypernuclei, where an effective nuclear incompressibility modulus was extracted [91].
To further distinguish the effects of the different interactions on the description of hypernuclear structure, we investigate the isospin evolution of the \(\Lambda\) hyperon radius \(R_{\Lambda}\) in the Oxygen hyperisotopes using all selected CDF effective interactions, as shown in Fig. 5. It is clearly seen that \(R_{\Lambda}\) evolves differently along the Oxygen hyperisotopes for the different CDF effective interactions. Some effective interactions, like PKO3-\(\Lambda\)1, DD-ME2, DDV, and DD-LZ1-\(\Lambda\)1, exhibit a reduced \(R_{\Lambda}\) with increasing neutron number. In particular, DD-LZ1-\(\Lambda\)1 gives the smallest hyperon radii among all chosen CDFs and a strong declining trend. In fact, the core polarization effect due to the \(\Lambda\) hyperon plays a significant role in this evolution. When the \(\Lambda\) occupies the \(1s_{1/2}\) state, its density distribution is concentrated inside the hypernucleus. As a result, the \(\Lambda\)'s coupling or attraction to the nucleons in the core (here corresponding to \({}^{16}\)O) is relatively stronger than that to the valence nucleons. Hence, the evolution of the hyperon radius can be largely understood from the size change of the core with respect to the neutron number.
The variation of the matter radius of the \({}^{16}\)O core in the Oxygen (hyper)isotopes with respect to the neutron number is plotted in Fig. 4(b). From \(N\!=\!8\) to 14, in contrast to the situation for the total matter radii \(R_{m}\), there is no consistent isospin dependence of the core radius \(R_{m}^{\rm core}\) among the selected CDFs with increasing neutron number. The nonlinear RMF functional PK1 exhibits a significant increasing trend with isospin, while the density-dependent RMF one DD-LZ1-\(\Lambda\)1 shows a noticeable decrease. Consequently, the hyperon radius \(R_{\Lambda}\) exhibits a similar isospin dependence resulting from the core polarization effect, determined mainly by the different isospin properties of the CDF functionals in the nucleon-nucleon channels. This analysis unveils the importance of nuclear in-medium effects in shaping the hyperon radii. Hence, the divergent isospin evolution of \(R_{\Lambda}\) given by CDFs with different density-dependent meson-baryon couplings makes it a valuable tool to elucidate the in-medium behavior of the nuclear force.
## 4 Summary
In summary, considering the significance of nuclear in-medium effects in nuclear many-body problems, such as eliminating the spurious shell closures, we expanded the newly developed DDRMF Lagrangian DD-LZ1 to incorporate the \(\Lambda\) hyperon degree of freedom and determined the \(\Lambda N\) effective interaction by fitting the experimental data of \(\Lambda\) separation energies for several single-\(\Lambda\) hypernuclei. Subsequently, with several other CDF functionals, the features
Figure 4: (Color Online) The variation of the matter radii of hypernuclei [Panel (a)] and their \({}^{16}\)O core [Panel (b)] in Oxygen (hyper)isotopes with respect to the neutron number, with the \(\Lambda\) hyperon occupying the \(1s_{1/2}\) ground state. The solid and dash-dotted lines represent the calculated results for hypernuclei and normal isotopes without hyperon, respectively. The results were obtained with the CDF functionals DD-LZ1-\(\Lambda\)1, DD-ME2, PKO1-\(\Lambda\)1 and PK1.
including the \(\Lambda\) separation energy and the \(B(E1)\) transition, as well as the evolution of the spin-orbit splitting and of the characteristic radii, were analyzed in detail along the Oxygen (hyper)isotopes.
By comparing the results obtained from different CDF models, we further investigated the crucial impact of nuclear in-medium effects on accurately describing the properties of the hyperon, in terms of both bulk and single-particle aspects. For the \(1p\) spin-orbit splitting of the \(\Lambda\) hyperon, significant differences in the isospin dependence are observed among the selected CDF effective interactions in the Oxygen hyperisotopes. As the neutron number increases, the density of the environment in which the hyperon resides gradually increases, which causes the meson-hyperon coupling strengths that determine the hypernuclear properties to change as well. In particular, the density-dependent CDF effective interactions introduce additional rearrangement terms that significantly enhance the isospin dependence of the \(\Lambda\) spin-orbit splitting, leading to a more distinct variation of \(\Delta E_{\rm SO}^{\Lambda}\) with neutron number in the DDRMF and DDRHF models.
The evolution of the hypernuclear matter radius with isospin was further investigated. A significant model dependence of the magnitude of the size shrinkage due to the inclusion of the \(\Lambda\) hyperon is observed, with the DDRMF functional DD-LZ1-\(\Lambda\)1 displaying the largest shrinkage effect. This result was then explained by an anticorrelation between the incompressibility coefficient \(K\) of nuclear matter and the hyperon radius \(R_{\Lambda}\), providing a possible way to constrain the hyperon distribution inside a hypernucleus from the better-determined bulk properties of nuclear matter. Additionally, it is found that the isospin evolution of the hyperon radius is primarily influenced by the density-dependent behavior of the chosen CDF functional in the \(NN\) interaction channel via the core polarization. Thus, the sensitivity of these hyperon-relevant properties to CDF models with a variety of different meson-baryon couplings holds great potential for elucidating the nuclear in-medium nature of both the \(\Lambda N\) and \(NN\) channels.
|
2306.13267 | Coalescence of surfactant-laden droplets | Droplet coalescence is an important process in nature and various
technologies (e.g. inkjet printing). Here, we unveil the surfactant
mass-transport mechanism and report on several major differences in the
coalescence of surfactant-laden droplets as compared to pure water droplets by
means of molecular dynamics simulation of a coarse-grained model. Large scale
changes to bridge growth dynamics are identified, such as the lack of multiple
thermally excited precursors, attenuated collective excitations after contact,
slowing down in the inertial regime due to aggregate-induced rigidity and
reduced water flow, and a slowing down in the coalescence rate (deceleration)
when surfactant concentration increases, while at the same time we also confirm
the existence of an initial thermal, and a power-law, inertial, regime of the
bridge growth dynamics in both the pure and the surfactant-laden droplets.
Thus, we unveil the key mechanisms in one of the fundamental topological
processes of liquid droplets containing surfactant, which is crucial in
relevant technologies. | Soheil Arbabi, Piotr Deuar, Mateusz Denys, Rachid Bennacer, Zhizhao Che, Panagiotis E. Theodorakis | 2023-06-23T02:37:07Z | http://arxiv.org/abs/2306.13267v1 | # Coalescence of Surfactant-Laden Droplets
###### Abstract
Droplet coalescence is an important process in nature and various technologies (_e.g._ inkjet printing). Here, we unveil the surfactant mass-transport mechanism and report on several major differences in the coalescence of surfactant-laden droplets as compared to pure water droplets by means of molecular dynamics simulation of a coarse-grained model. Large scale changes to bridge growth dynamics are identified, such as the lack of multiple thermally excited precursors, attenuated collective excitations after contact, slowing down in the inertial regime due to aggregate-induced rigidity and reduced water flow, and a slowing down in the coalescence rate (deceleration) when surfactant concentration increases, while at the same time we also confirm the existence of an initial thermal, and a power-law, inertial, regime of the bridge growth dynamics in both the pure and the surfactant-laden droplets. Thus, we unveil the key mechanisms in one of the fundamental topological processes of liquid droplets containing surfactant, which is crucial in relevant technologies.
## I Introduction
Droplet coalescence plays an important role in many natural phenomena, for example, determining the size distribution of droplet rains [1; 2], the dynamics of multiphase flows [3; 4], and, also, in technological applications, such as inkjet printing [5] or coating applications [6]. The coalescence process depends on the interplay between viscous and inertial forces and surface tension, with the minimization of the latter driving this process. Experiments, theories, and simulations of the coalescence of droplets without additives have provided great insight into its mechanisms [7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25], but much less is known in the case of surfactant-laden droplets [4; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43] or droplets with other additives [44; 45; 46; 47; 48; 49; 50; 51], despite their relevance in many areas, such as cloud formation [52], microfluidics [53], coating technologies [54], and water treatment during crude oil and natural gas separation [55]. Based on high-speed imaging and particle image velocimetry technology, experimental studies have investigated the coalescence of surfactant-laden droplets, mainly providing macroscopic descriptions of the coalescence process [35; 36; 38; 56; 57; 58]. However, the initial fast stages of the coalescence process are impossible to observe in experiments due to device limitations [19]. Moreover, conventional hydrodynamic models are only applicable in the later stages of coalescence [59; 60; 61], while the singularity at the initial contact point of the coalescing droplets continues to pose challenges for numerical modelling despite progress in this area [11; 13; 23; 56]. To address the latter issue, for example, continuum modeling may consider either the formation of a single body of fluid by an instant appearance of a liquid bridge that smoothly connects the two droplets and then evolves as a single body due to capillary forces, or a section of the free surface trapped between the bulk phases that gradually disappears [13]. In the case of systems with surfactant, continuum simulation has suggested that an uneven contraction of the interface due to a nonuniform distribution of accumulating surfactant at the meniscus bridge that connects the droplets is an important factor that modulates the surface tension, which, in turn, drives the coalescence process [4]. Still, numerical simulation is unable to analyze the mechanism of coalescence after the drops come into contact. Recent molecular-level simulations have clarified important aspects, such as the role of thermal capillary waves at the surface of water droplets [10], but the effect of surfactant on the physics involved in the coalescence has remained overwhelmingly unexplored. We know surfactant effects must be large since they greatly change the surface tension, so the research reported here set out to clarify its role in the coalescence dynamics and other characteristics.
In this study, we report on large-scale MD simulations based on a high-fidelity coarse-grained (CG) force-field [62; 63; 64; 65; 66], which allows for the faithful simulation of surfactant in water. With these we uncover the mass transport mechanism of surfactant during coalescence, elucidate the dynamics of the bridge growth process, resolve the flow, and analyse how the above depend on surfactant distribution. We find an unexpected lack of multiple thermally excited precursor bridges, attenuated collective flow after contact, formation of new aggregates inside the bridge from surfactant previously at the droplets' surface, and a slowing down in the inertial regime as surfactant concentration increases. In the following, we provide some background information in Sec. II. Then, we present our simulation model and methods in Sec. III and our results and relevant discussion in Sec. IV. Finally, we draw our conclusions and suggest possible directions for future work in Sec. V.
Figure 1: Stages of coalescence of spherical surfactant-laden droplets with equal size and surfactant concentration (3.2 CAC). a) Initial configuration; b) Beginning of the bridge formation; c) Bridge growth with a magnified view of the bridge region. \(b\) is the radius of the bridge; d) Final equilibrium configuration after reshaping; e) Coarse-grained representation of a C10E4 surfactant molecule. The surfactant’s hydrophobic beads are in red, hydrophilic ones in yellow. Each cyan bead represents two water molecules. External or cross-section views are shown to highlight the bulk, surface, and bridge structure of the droplets. Surrounding water vapor is omitted for the sake of clarity. The snapshots of the system were obtained using Ovito software [67].
## II Background
Droplet coalescence takes place in three different stages, namely, the droplet approach, when the two droplets are positioned close enough to 'feel' intermolecular forces (Fig. 1a), the bridge growth-stage (Figs 1b and 1c) [68], and the final reshaping stage towards the equilibrium spherical droplet (Fig. 1d). In the case of droplets without surfactant, the growth dynamics of the bridge has been investigated and in general, two different regimes have been assumed from the perspective of fluid dynamics [11; 56]: an initial viscous regime dominated by macroscopic flows that pull the droplets together, and a subsequent inertial regime, which involves the propagation of local deformations with higher Reynolds number excited near the bridge as it grows.
Even in the case without surfactant, the bridge growth dynamics has been under intense debate. In the viscous regime (VR), a linear scaling in time \(b\propto t\) has been suggested for the bridge radius, \(b\), as well as logarithmic corrections \(t\ln t\)[11; 56], while a scaling \(b\propto\sqrt{t}\) has been proposed for the inertial regime (IR) [11; 56]. However, others have suggested scaling regimes that depend on the ratio of characteristic scales to the viscous length scale, including an additional inertially limited viscous (ILV) regime [69; 70], which, according to numerical simulations, is only realized when the coalescing drops are initially separated by a finite distance [21]. Another idea put forward has been the characterization of the viscous-inertia-regime transition via a modified Ohnesorge number in the case of immiscible droplets [71]. Despite the advent of modern experimental techniques, such as electrical measurements with resolution of a few micrometers [72], the bridge growth dynamics at the early stages still remains challenging for experimental studies. Instead, molecular dynamics (MD) simulation of an all-atom model for water droplets has provided insight into this initial stage of coalescence, suggesting the formation of multiple precursor bridges at the pinch point, due to thermal capillary waves at the droplet surfaces [10]. These multiple bridges expand linearly in time, due to collective molecular jumps at the droplets' interface, and the transition to the classical hydrodynamics regime only takes place when the bridge radius becomes larger than a thermal length, \(l_{T}\approx\left(k_{B}T/\gamma\right)^{1/4}R^{1/2}\), assuming that fluctuations on one droplet are not affected by the other and in the absence of instabilities [10]. \(l_{T}\) describes the typical width of the contact points at the droplet's interface at the initial stage of coalescence, \(k_{B}\) is Boltzmann's constant, \(T\) the temperature, \(\gamma\) the liquid-vapor (LV) surface tension, and \(R\) the radius of the droplet. Since \(l_{T}\) depends on surface tension, it is expected to grow with surfactant concentration as \(\gamma\) decreases,
saturating to a value, \(l_{s}\), above the critical aggregation concentration (CAC) as \(\gamma\) reaches its plateau value.
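As a rough sense of scale, the thermal length can be estimated directly from the expression above; the sketch below uses the bulk-water surface tension at room temperature and a droplet radius comparable to the simulated droplets, so the numbers are only indicative.

```python
import numpy as np

def thermal_length(T, gamma, R, kB=1.380649e-23):
    """Thermal length l_T ~ (k_B T / gamma)^(1/4) * R^(1/2), SI units."""
    return (kB * T / gamma) ** 0.25 * np.sqrt(R)

# Indicative numbers: room temperature, pure-water surface tension, R ~ 11.5 nm
print(thermal_length(T=298.0, gamma=0.072, R=11.5e-9))   # ~1.7e-9 m
```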
In the presence of surfactant there are many unknowns. Several studies have suggested that its presence would actually delay the coalescence process, due to the reduction of the surface tension [37; 57], while smaller droplets tend to show much faster equilibration of surfactant interfacial coverage [38; 73]. Moreover, it has been suggested that physical regimes could also depend on the diffusion and adsorption time scales of the surfactant, and their dependence on the surfactant concentration and the droplet size [73]. In addition, it has been pointed out that surfactant alters the properties of the droplets particularly in the bridge area [39]. For example, hydrodynamic instabilities, such as dimples, have been observed for concentrations larger than CAC [41], but surfactant might actually have a more global effect by affecting the overall size of the droplets [40]. Certain experiments have also highlighted the role of Marangoni flow that leads to local capillary pressure changes, which in turn affect the coalescence kinetics and result in a delay of the process [36; 4]. Despite these efforts, the mass transport mechanism of surfactants, the resulting dynamics and structure of the bridge and other early time effects are not well understood. Molecular simulations allow for tracking the individual molecules, which goes beyond the reach of any continuum simulation or real experiment and is therefore crucial for unravelling the mass transport mechanism of surfactant. At present, the early time phenomena that are pivotal for the onset of coalescence can only be investigated in adequate detail by molecular-scale simulation.
## III Model and Methodology
Our investigation covers all stages of coalescence for droplets of equal size and surfactant concentration. We have considered different surfactants, such as C10E8 and C10E4 [63], and a range of surfactant concentrations below/above the CAC. The interactions between the components of the system are obtained from the SAFT-\(\gamma\) Mie (Statistical Associating Fluid Theory) force field [74; 75; 76; 77; 78]. The MD simulations were carried out in the canonical ensemble using the LAMMPS software [79; 80]. After equilibration of each individual droplet, the droplets were placed next to each other to initiate their coalescence, as illustrated in Fig. 1a.
The force field has been validated for water-surfactant systems with particular focus on accurately reproducing the most relevant properties of the system, such as surface tension and phase
behavior [62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83]. Interactions between the various types of CG beads are described via the Mie potential, which is mathematically expressed as
\[U(r_{ij})=C\epsilon_{\rm ij}\left[\left(\frac{\sigma_{\rm ij}}{r_{\rm ij}}\right)^{\lambda_{\rm ij}^{r}}-\left(\frac{\sigma_{\rm ij}}{r_{\rm ij}}\right)^{\lambda_{\rm ij}^{a}}\right],\;{\rm for}\;r_{\rm ij}\leq r_{\rm c}, \tag{1}\]
where
\[C=\left(\frac{\lambda_{\rm ij}^{r}}{\lambda_{\rm ij}^{r}-\lambda_{\rm ij}^{a}}\right)\left(\frac{\lambda_{\rm ij}^{r}}{\lambda_{\rm ij}^{a}}\right)^{\frac{\lambda_{\rm ij}^{a}}{\lambda_{\rm ij}^{r}-\lambda_{\rm ij}^{a}}}.\]
\({\rm i}\) and \({\rm j}\) are the bead types, \(\sigma_{\rm ij}\) indicates the effective bead size and \(\epsilon_{\rm ij}\) is the interaction strength between beads \({\rm i}\) and \({\rm j}\). \(\lambda_{\rm ij}^{a}=6\) and \(\lambda_{\rm ij}^{r}\) are Mie potential parameters, while \(r_{\rm ij}\) is the distance between two CG beads. A universal cutoff for all nonbonded interactions is set to \(r_{c}=4.583\ \sigma\). Units are chosen for the length, \(\sigma\), energy, \(\epsilon\), mass, \(m\), and time, \(\tau\), which in real units would roughly correspond to: \(\sigma=0.43635\ {\rm nm}\), \(\epsilon/k_{B}=492\ {\rm K}\), \(m=44.0521\ {\rm amu}\) and \(\tau=\sigma(m/\epsilon)^{0.5}=1.4062\ {\rm ps}\). All simulations are carried out in the NVT ensemble by using the Nosé-Hoover thermostat as implemented in the LAMMPS package [79; 80] with an integration time-step \(\delta t=0.005\ \tau\). Moreover, simulations took place at room temperature, therefore, \(k_{B}T/\epsilon=0.6057\), which corresponds to \(T=25\ ^{\circ}{\rm C}\).
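A compact Python sketch of the pair potential of Eq. (1), in the reduced units defined above, is given below; the exponent \(\lambda^{r}=8\) used in the example is an illustrative choice and not necessarily the value for any specific bead pair in the force field.

```python
import numpy as np

def mie_potential(r, eps, sigma, lam_r, lam_a=6.0, r_cut=4.583):
    """Mie pair potential of Eq. (1), truncated at r_cut (in units of sigma)."""
    C = (lam_r / (lam_r - lam_a)) * (lam_r / lam_a) ** (lam_a / (lam_r - lam_a))
    r = np.asarray(r, dtype=float)
    u = C * eps * ((sigma / r) ** lam_r - (sigma / r) ** lam_a)
    return np.where(r <= r_cut * sigma, u, 0.0)

# Reduced units (eps = sigma = 1); the well depth of the full potential is -eps
r = np.linspace(0.85, 4.5, 500)
u = mie_potential(r, eps=1.0, sigma=1.0, lam_r=8.0)
print(u.min())
```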
Surfactants of type C\(n\)E\(m\) are considered, such as C10E8 and C10E4. A hydrophobic alkane CG 'C' bead represents a \(-{\rm CH}_{2}-{\rm CH}_{2}-{\rm CH}_{2}-\) group of atoms, while a hydrophilic CG 'EO' bead represents an oxyethylene group \(-{\rm CH}_{2}-{\rm O}-{\rm CH}_{2}\). Finally, a water CG 'W' bead corresponds to two water molecules. In Table 1, the nonbonded interactions between the different CG beads are listed, while the mass of each bead is reported in Table 2.
Bonded interactions are taken into account via a harmonic potential, _i.e._,
\[V(r_{\rm ij})=0.5k(r_{\rm ij}-\sigma_{\rm ij})^{2} \tag{2}\]
where \(k=295.33\ \epsilon/\sigma^{2}\). Moreover, EO beads experience a harmonic angle potential:
\[V_{\theta}(\theta_{\rm ijk})=0.5k_{\theta}(\theta_{\rm ijk}-\theta_{0})^{2} \tag{3}\]
where \(\theta_{\rm ijk}\) is the angle between consecutive beads \({\rm i}\), \({\rm j}\) and \({\rm k}\). \(k_{\theta}=4.32\ \epsilon/{\rm rad}^{2}\), while \(\theta_{0}=2.75\ {\rm rad}\) is the equilibrium angle. Further discussion on the model can be found in previous studies [62; 63].
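For completeness, the bonded terms of Eqs. (2) and (3) take only a couple of lines; the equilibrium bond length is \(\sigma_{\rm ij}\) as in Eq. (2), here set to 1 in reduced units for equal bead types, and the constants are those quoted above.

```python
def bond_energy(r, k=295.33, r0=1.0):
    """Harmonic bond of Eq. (2); k in eps/sigma^2, r and r0 in sigma."""
    return 0.5 * k * (r - r0) ** 2

def angle_energy(theta, k_theta=4.32, theta0=2.75):
    """Harmonic angle of Eq. (3) for EO beads; k_theta in eps/rad^2, angles in rad."""
    return 0.5 * k_theta * (theta - theta0) ** 2

print(bond_energy(1.05), angle_energy(3.14159))
```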
To prepare the initial configuration of each system, individual droplets were first equilibrated in the NVT ensemble. The total number of beads in the simulations was \(10^{5}\) per initial droplet, with approximately 5% evaporation into the gas. Droplet diameters were \(\sim 53\ \sigma\), which is about
23 nm. Careful consideration was given during the preparation not only to monitoring the energy of the system, but also to making sure that the distribution of clusters had reached a dynamic equilibrium and that each cluster was able to diffuse a distance many times its size. After equilibration, the system size (volume of the simulation box) was doubled and the droplets were placed next to each other in such a way as to avoid interactions between mirror images of the droplets, which could otherwise occur due to the periodic boundary conditions in all directions. In this way, approximately the same thermodynamic conditions were guaranteed in the single-droplet system and in the two-droplet systems used for coalescence. Figure 1a illustrates a typical initial configuration of the system. Only the liquid state (droplets) is shown, identified by a cluster analysis [84; 85], while the surrounding vapour has been removed for the sake of clarity. Finally, for our droplets, we have considered a range of different surfactant concentrations below and above the CAC and up to about 6.1 CAC. This covers the whole range of concentrations relevant for the mass transport and other phenomena discussed here.
To quantify the mass transport of surfactant, first a grid with mesh size of 2 \(\sigma\) is defined and surfactant and water particles are assigned to each grid cell. The grid size is chosen to guarantee
\begin{table}
\begin{tabular}{c c} \hline \hline Bead type & Mass (\(m\)) \\ \hline W & 0.8179 \\ C & 0.9552 \\ EO & 1.0000 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mass of CG beads.
adequate accuracy in the position of the grid cell while avoiding the excessive noise that would result from a mesh finer than the size of single beads. Then, based on the density, one can identify the grid cells that belong to the droplets' surface or to the bulk. By following the grid locations of the surfactant beads, we are able to track the transport of surfactant between the different parts of the droplets. The central bead of a molecule determines whether it is counted as bulk or bridge, whereas if any bead of a molecule enters a surface grid cell the molecule is counted as being on the surface.
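A minimal sketch of this bookkeeping, assuming the bead coordinates are already loaded from the trajectory as numpy arrays (the density thresholds used to label candidate surface cells below are illustrative, not the values used in the analysis), could look as follows.

```python
import numpy as np

def cell_indices(positions, box, mesh=2.0):
    """Map bead coordinates (N x 3) to integer grid-cell indices with a 2-sigma mesh."""
    return np.floor((positions % box) / mesh).astype(int)

def density_per_cell(positions, box, mesh=2.0):
    """Number density of beads in each grid cell."""
    idx = cell_indices(positions, box, mesh)
    shape = np.ceil(box / mesh).astype(int)
    counts = np.zeros(shape, dtype=float)
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return counts / mesh**3

# Illustrative usage with random coordinates standing in for a simulation frame
rng = np.random.default_rng(0)
box = np.array([120.0, 120.0, 120.0])
water = rng.uniform(0.0, 120.0, size=(5000, 3))
rho = density_per_cell(water, box)
surface = (rho > 0.05) & (rho < 0.4)     # illustrative density thresholds
print(surface.sum(), "candidate surface cells")
```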
To track the bridge growth, we need to define the bridge region. In our case, this is a slab whose width in the \(X\) direction is recalculated at each snapshot. The left and right limits of the slab are determined by analyzing the grid points on the \(X-Z\) plane after droplets have been aligned with the coordinate system as shown in Fig. 2. We fit a circle around each droplet and note the surface grid positions at the central \(X=0\) position, shown by the red points in Fig. 2. Horizontal lines are drawn in the \(X\) direction passing through these red points to touch the fitted circles, thus defining the rectangle in green. The vertical sides of the rectangle give the limits of the bridge slab in the \(X\) direction and its width. All molecules with centers having \(X\) coordinates inside these limits are labelled as belonging to the bridge in a given snapshot.
On the other hand, the bridge radius, \(b\), (Fig. 1) is calculated using the distances between extrema of the positions of the beads belonging to the grids located at \(X=0\), _i.e._ this distance is first calculated separately for the \(Z\) coordinate to give a distance \(2b_{Z}\), and then for the \(Y\) coordinate to give \(2b_{Y}\). The final bridge radius estimate is then given by \(b=(b_{Z}+b_{Y})/2\).
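The bridge-radius estimate itself reduces to a few lines once the bridge beads are known; the sketch below assumes an array of bead coordinates already aligned so that the bridge centre sits at \(X=0\), and the half-width of the central slab is an illustrative choice.

```python
import numpy as np

def bridge_radius(beads, slab_half_width=1.0):
    """b = (b_Y + b_Z)/2 from beads whose X coordinate lies in the central slab."""
    central = beads[np.abs(beads[:, 0]) < slab_half_width]
    b_y = 0.5 * (central[:, 1].max() - central[:, 1].min())
    b_z = 0.5 * (central[:, 2].max() - central[:, 2].min())
    return 0.5 * (b_y + b_z)

# Illustrative usage: a cylinder-like cloud of beads around the X axis
rng = np.random.default_rng(1)
x = rng.uniform(-5.0, 5.0, 2000)
phi = rng.uniform(0.0, 2 * np.pi, 2000)
r = 8.0 * np.sqrt(rng.uniform(0.0, 1.0, 2000))
beads = np.column_stack([x, r * np.cos(phi), r * np.sin(phi)])
print(bridge_radius(beads))   # close to 8 for this synthetic example
```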
Figure 2: Specifying the bridge (green rectangle). Orange and blue points are surface grids on left and right droplets, respectively. Red points are the location of highest and lowest beads on bridge. The solid black line is a best fit to the surface grid positions following Ref. [86].
## IV Results and Discussion
The mass transport mechanism of surfactant molecules during coalescence is fundamental to understanding the role of surfactant in the dynamics of this process at all stages. Surfactant mass transfer mechanisms have been investigated in various processes, for example, superspreading,[63] emulsion films,[42] and foam stabilization in lubricating oils.[43] In the case of emulsion films, for instance, a fascinating cyclic phenomenon has been observed where new dimples sequentially form, with the surfactant redistribution driving this process through its coupling to the interfacial and hydrodynamic motion inside the films.[42] In our system, coalescence starts with the formation of the contact point (Fig. 3a), where hydrophobic beads from the two droplets actively move to aggregate due to the favorable attractive interaction. In the case of surfactant-laden droplets, we have
Figure 3: Mass transport mechanism of surfactant (C10E4 4.7 CAC) during the coalescence process. (a) Droplet pinching (precursor bridge) taking place through the aggregation of surfactant at the first contact point of the droplets (\(t-t_{c}=6.25\)\(\tau\)). Here \(t_{c}\) is the time of first contact; (b) Main surfactant transfer processes during the initial stage of coalescence as indicated by arrows on a droplet cross-section in the \(x-y\) plane (\(t-t_{c}=32.5\)\(\tau\)). A larger arrow end indicates the dominant direction of surfactant transport between the different regions in the droplet. Magnified views of the bridge and its cross section on the \(y-z\) plane (only hydrophobic beads) are also shown above. At this stage, the bridge is dominated by the presence of surfactant molecules. (c), (d), (e), and (f) at times \(t-t_{c}=76.25\)\(\tau,t-t_{c}=233.75\)\(\tau,t-t_{c}=517.50\)\(\tau,t-t_{c}=733.75\)\(\tau\), respectively, show evolution in the inertial regime. The snapshots of the system were obtained using Ovito software[67].
not observed the formation of multiple contact points (bridge precursors) for any of the systems, unlike what has been seen in pure water droplets [10]. In fact, water molecules do not participate at this earliest stage in the bridge formation. The bridge growth process continues with the formation of a thin layer of surfactant between the droplets (Fig. 3b), whose origin is mostly from the initial surface coverage. To unveil these processes, we have monitored the transport of surfactant between different parts of the droplets, _i.e._ the interior, the bridge, and their surfaces, which sums up to 36 possible surfactant transport processes. The Supplemental Material (Table S3) provides the numbers for the probabilities of surfactant remaining at a certain place or moving to different parts of the droplets for all cases considered in our study. At this stage, the still small radius, \(b\), of the bridge permits a high supply of surfactant at the contact surface (Fig. 3b), which is central to the coalescence of the droplets. However, as the bridge further grows, the surfactant from the initial contact and inflow to the bridge perimeter is not enough to fully supply the interior of the bridge with surfactant. The perimeter of the bridge grows proportionally to \(b\), while its area (cross-section) increases with \(b^{2}\). Therefore, the concentration of surfactant in the bridge, initially very high, reduces proportionally to \(1/b\) as the bridge grows. Moreover, tracking the molecules shows that, as the bridge forms, fewer molecules end up in the bridge bulk than were on the approaching surfaces prior to contact. Surfactant transport towards the surface is favourable energetically and only surfactant that cannot escape to the exterior (surface) remains trapped in the interior of the bridge region. As a result, the engulfed surfactant forms separated aggregates within the bridge, especially for the cases above CAC (Fig. 3d). These aggregates are characteristic of the bridge growth at later stages (Figs 3c-e), and, as we will see later by the analysis of the bridge growth, surfactant from the bulk can join the aggregates that formed at the bridge as it grows.
The relevant surfactant transport processes during the bridge growth that we have identified are the engulfment of surfactant from the contact surface of the droplets into the interior of the bridge (Tables S1 and S2 in the Supplemental Material give details), which increases with surfactant concentration, and to a smaller extent the transfer of surfactant in the bulk towards the bridge (Figs 3c, d). Coalescence is mainly affected by the transfer of surfactant in the region close to the bridge from the interior to the surfaces, while, in the other parts of the droplet, surfactant is rather in dynamic equilibrium and does not affect the coalescence process. After the bridge fully develops (Fig. 3e), a dynamic equilibrium of surfactant extends throughout and no dominant directions of adsorption/desorption processes remain, but only a slight surfactant transport from the surface towards the bulk as the surface area of the droplet becomes smaller. At this final
stage, the droplet will reach its final spherical shape (Fig. 3f), driven by the surface tension. We have also verified that the new aggregates emerging during the coalescence process consist of surfactant that was previously on the contact area (surfaces) between the two merging droplets. The latter observations are valid throughout a range of different concentrations and surfactants below and above the CAC. Data for other concentrations and surfactants than in Fig. 3 are reported in the Supplemental Material, and show the same mechanism, while snapshots of the aggregate formation in the inertial regime are presented in Fig. 4.
To identify the various regimes and better understand the bridge growth dynamics, we have measured the bridge radius, \(b\), over time for droplets with different surfactant concentrations (Fig. 5 here for C10E4 and Fig. S1 for C10E8 in the Supplemental Material). The regimes that follow after the initial bridge formation can be in principle identified by the bridge radius scaling. The inertial scaling with power law \(b\sim\sqrt{t}\) is generally most conspicuous, although we see an apparent changeover from an initial thermal regime (TR) with little bridge growth to the IR power law (see fits in Fig. 5 and Fig. S1 of the Supplemental Material). Moreover, in Fig. 5, the values of the thermal lengths are marked with the horizontal lines for the cases of pure water and surfactant-laden droplets (above CAC) according to previous MD predictions [10]. These values are of the same order as the TR regime bridge size that we observe in our data, and express the range of the thermal length scale above which a persistent increase of the bridge radius, \(b\), takes place.
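A fit of this kind can be sketched as follows (our own illustration, not the exact fitting procedure used for Fig. 5); the crossover time separating the thermal and inertial regimes is left as a user choice.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_bridge_growth(t, b, t_min):
    """Fit the inertial-regime power law b = b0 * t^beta on the part of the curve with
    t > t_min (i.e. after the initial thermal regime), with t measured from the contact
    time t_c. In the inertial regime beta is expected to be close to 1/2."""
    def power_law(x, b0, beta):
        return b0 * np.power(x, beta)
    mask = t > t_min
    (b0, beta), _ = curve_fit(power_law, t[mask], b[mask], p0=(1.0, 0.5))
    return b0, beta
```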
Figure 4: Droplet interiors in the inertial regime showing the presence of new aggregates emerging during the coalescence process. Cross-sections are shown at times corresponding to Fig. 3d for surfactant in different concentrations of C10E4 above the CAC: (a) 1.6 CAC, (b) 3.2 CAC, (c) 4.7 CAC and (d) 6.1 CAC. The snapshot of the systems were obtained using Ovito software [67].
Our findings also indicate that the growth speed of the bridge decreases as a function of surfactant concentration in both regimes. Tracking the simulation trajectories we observe that the surfactant aggregates that are present in the bulk can slow the liquid flow and obstruct the strong water-water interactions. Upon a significant increase of surfactant concentration far above CAC, aggregates merge in the bulk leading to an increased rigidity of the droplet. This then hinders the coalescence process by slowing down the rearrangement of the droplet towards its equilibrium spherical shape. This is explained by the interactions of water and hydrophobic beads (see Supplemental Material), which indicate a larger W-W (W: water beads) than C-C (C: hydrophobic surfactant beads) attraction and a strongly unfavorable (less attractive) C-W interaction in comparison to the interactions of all other components. The average bridge growth velocity, which
Figure 5: Bridge growth dynamics (\(b\), radius of the bridge) _vs._ time, \(t\), for droplets with different surfactant concentrations (C10E4), as indicated in the legend. CAC \(\approx 7.5\) wt%. Power law fits are also shown, tentatively identifying the inertial (IR, \(b=b_{0}t^{\beta}\)) regimes. The inset highlights the power law scaling in the inertial regime and the initial TR regime. \(l_{w}\) is the thermal length for pure water droplets and \(l_{s}\) for surfactant-laden droplets above CAC according to Ref. [10]. Data for C10E8 and average growth rates are provided in the Supplemental Material.
includes both the TR and IR regimes and thus gives an overview of the overall speed of growth, is reported for each surfactant for a range of concentrations in Table 3. It is calculated over the time interval between the moment at which the link between the droplets is established, \(t_{c}\), at the beginning of the coalescence, and the point at which the bridge radius is equal to the radius of the droplets in the \(y\) direction (for example, see Fig. 3e). As surfactant concentration increases, the bridge growth process slows down in comparison with the simulated case of the pure water droplets. These data also show a slightly faster bridge growth in the case of the C10E8 surfactant (see Fig. S1 in the Supplemental Material).
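For reference, the average velocity defined in this way can be computed as in the following short sketch, assuming the bridge radius actually reaches the droplet radius within the trajectory.

```python
import numpy as np

def average_bridge_velocity(t, b, droplet_radius_y):
    """Average bridge growth velocity as defined in the text: the bridge radius grows
    from ~0 at the contact time t[0] = t_c until it first reaches the droplet radius in
    the y direction; the average velocity is that radius divided by the elapsed time.
    Units are sigma/tau when t and b are given in reduced units, as in Table 3."""
    b = np.asarray(b)
    idx = int(np.argmax(b >= droplet_radius_y))   # first snapshot where b reaches R_y
    return b[idx] / (t[idx] - t[0])
```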
Furthermore, the flow field of water molecules during coalescence exhibits differences between droplets with and without surfactant. In Fig. 6, the colour code indicates flow towards the bridge (red) and away from the bridge (blue). In the case of the water droplets without surfactant, the formation of the bridge at the very initial stages is accompanied by fluctuations of internal collective flow in the direction of the coalescence axis (\(x\) direction), which encompass the entire droplets (Figs 6a, b). This is due to the capillary waves produced by the energy release from the initial rupture of the surface when the droplets first touch.[44] The waves propagate and result in perturbations in the overall shape of the droplets and the flow patterns illustrated in Figs 6a and b. These flow patterns disappear as the bridge grows further and a robust contact between the two droplets establishes beyond the thermal regime. Moreover, an overall flow towards the bridge as the droplets further coalesce is observed (notice the dominance of red in Fig. 6c), while at the final equilibrium only random thermal fluid flow patterns are seen (Fig. 6d). We have not noticed any statistically significant flow patterns or Marangoni flow[33] (_e.g._ in the case of droplets with surfactant) towards any of the other directions (_e.g._ radial).
As surfactant is added to the droplets, the early time collective flow patterns of Figs 6a and 6b gradually disappear, especially for all concentrations above the CAC (Fig. 7). Compare, for example,
\begin{table}
\begin{tabular}{l c c c c c} Concentration (CAC) & 0.8 & 1.6 & 3.2 & 4.7 & 6.1 \\ \hline C10E4 & 0.2849 & 0.2204 & 0.1878 & 0.1605 & 0.1047 \\ C10E8 & 0.2794 & 0.2319 & 0.1871 & 0.1530 & 0.1115 \\ \end{tabular} CAC = 7.5 wt\%. \({}^{\mathrm{a}}\) For pure water droplets in the viscous regime (result from simulation): 0.3675 \(\sigma/\tau\).
\end{table}
Table 3: Average velocity of bridge growth in units \(\sigma/\tau\)\({}^{\mathrm{a}}\)
the flow patterns in Figs 6b and 6f, for the same bridge size. The suppression of the collective flow occurs through two routes: First, the surfactant at the surface reduces the surface tension (reducing the energy input from the initial rupture of the surface) and also reduces the amplitude of thermal fluctuations, thus suppressing the formation of multiple thermal bridges [10]. Second, the presence of aggregates in the bulk hinders the flow of the water molecules and disperses the momentum transfer before it enters deeper into the droplets.
Figure 6: Flow field of water (\(x\) velocity component, \(v_{x}\)), in cross-sections of droplets without (a–d) and with (e–h) surfactant (C10E4) at 4.7 CAC concentration, at different stages of coalescence. Side by side times correspond to similar stages of the coalescence process. Red reflects the intensity of flow (only water) motion towards the bridge, blue away from the bridge. Time labels based on contact time (\(t_{c}\)) are added. Note that white space between the water areas (e.g. in the bridge) includes surfactant aggregates and surfactant on the surface. This can cause an illusion of multiple contact points such as in panel f, which are in fact surrounded by surfactant forming an overall broad bridge.
## V Conclusions
In this study, the fundamental processes involved in the coalescence of droplets containing surfactant have been described, including the initial rupture and bridge growth, which occur on time and length scales inaccessible to experiment. We have reported on the main adsorption processes (surfactant transport mechanism), characterised the bridge growth dynamics of coalescence, and identified several important differences between the case of pure water droplets and that of droplets with surfactant. Notably, Fig. 5 suggests that if a slow-down of coalescence processes is desired industrially, more surfactant should be added, which confirms earlier suggestions [37; 57]. Moreover, we have identified early time collective flow patterns that are present in the case of aqueous droplets without surfactant, but are absent when appreciable surfactant is present. Surfactant also suppresses the multiple
Figure 7: Water flow pattern for different concentrations (a–f, 0–6.1 CAC) for C10E4 at the initial pinching stage. Above CAC (c–f), there is no observable pattern. Red color reflects the intensity of flow towards the bridge, while blue indicates the intensity of flow away from the bridge. Like in Fig. 6, empty space within/between the water in the droplets (e.g. in the bridge) includes surfactant, and the contact point of the droplet surface is always single.
precursor bridges that are important at early times for pure water [10]. The latter appears to indicate that thermal fluctuations will be less important for topological changes of surfactant-laden droplets generally (splitting, merging, etc.). We anticipate that our results open new exploration directions, which will be relevant for practical applications, and that they suggest the kind of effects that will be seen in other as yet unexplored processes such as droplet break-up and coalescence on substrates. An aspect that requires further consideration is the various effects that might be attributed to a larger surface-area-to-volume ratio as the size of the droplets decreases. For example, we saw that minor redistribution of surfactants from surface to bulk or vice versa can cause large fluctuations in the bulk, while such effects may become negligible in macroscale systems [26]. It would therefore be interesting to explore larger systems in the future as more computational resources become available, as well as employ a range of different simulation models to explore droplet coalescence in the presence of surfactant.
## VI Supplementary Material
The Supplementary Material provides the details of the probabilities for the mass transport mechanism of surfactant molecules between the different regions in the droplets that reflect the arrows in Fig. 3. It also contains data on the bridge growth dynamics in the case of C10E8 surfactant.
## Acknowledgments
This research has been supported by the National Science Centre, Poland, under grant No. 2019/34/E/ST3/00232. We gratefully acknowledge Polish high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2022/015261.
|
2302.01394 | Understanding and contextualising diffusion models | The latest developments in Artificial Intelligence include diffusion
generative models, quite popular tools which can produce original images both
unconditionally and, in some cases, conditioned by some inputs provided by the
user. Apart from implementation details, which are outside the scope of this
work, all of the main models used to generate images are substantially based on
a common theory which restores a new image from a completely degraded one. In
this work we explain how this is possible by focusing on the mathematical
theory behind them, i.e. without analyzing in detail the specific
implementations and related methods. The aim of this work is to clarify to the
interested reader what all this means mathematically and intuitively. | Stefano Scotta, Alberto Messina | 2023-01-26T11:24:27Z | http://arxiv.org/abs/2302.01394v2 | # Understanding and contextualising diffusion models
###### Abstract
The latest developments in Artificial Intelligence include diffusion generative models, quite popular tools which can produce original images both unconditionally and, in some cases, conditioned by some inputs provided by the user. Apart from implementation details, which are outside the scope of this work, all of the main models used to generate images are substantially based on a common theory which restores a new image from a completely degraded one. In this work we explain how this is possible by focusing on the mathematical theory behind them, i.e. without analyzing in detail the specific implementations and related methods. The aim of this work is to clarify to the interested reader what all this means mathematically and intuitively.
## 1 Introduction
The main goal of this work is to give an explanation, as rigorous as possible, of the theory behind the main models used nowadays for image generation. We talk about "generative" models because they are able, once suitably set up and trained, to generate new and realistic visual contents (meaning that they could look like "real" images to an average observer). The common idea behind these models is: since we are able to gradually corrupt any real image up to a completely noisy one, we should also be able to carry out the reverse process of "denoising" a totally random noisy image. Hence, this process of generating new images can be understood as the reverse of the process used to corrupt the real images. In some way, the models that we are going to present, given a lot of sequences of images that are gradually corrupted, become able to recover, from a randomly sampled input corresponding to a completely corrupted image, a new realistic one (which is none of the original images used for training). The reference these models make to the concept of "diffusion" comes from the process used to corrupt the images as well as, as we will see, from the one used to recover them. Indeed, in most of the literature about generative models the real images
used for training are corrupted by gradually adding, over a very large number of steps, infinitesimal Gaussian noise to their initial representation. "Noising" processes built in this way are examples of diffusion processes.
More in detail, any image is represented by a precise sequence of numbers corresponding to the pixel values from which it can be rendered, and this sequence is corrupted, adding Gaussian noise, until it becomes a totally random sequence belonging to some probability distribution. This process is repeated many times and used to train the model in such a way that it will be also usable to compute the reverse process. The idea is then to take a random sample from the resulting probability distribution and use this model to obtain a new sequence of numbers corresponding to a new "real" image. These processes are intuitively represented in Figure 1.
In other words these models are able to sample new elements from the probability distribution of the sequences of pixel values representing real images, without explicit knowledge of this distribution. Elements of this distribution are the limit points of the reverse process described above, starting from any completely noisy image belonging to the limit distribution of the "noising" process.
This approach is completely different from the one trying to describe that distribution as a mixture of known probability distributions (for example Gaussian Mixture Models) in order to be able to sample from it later. Indeed, even if this latter approach works perfectly in theory ([8]), it turns out that the target distribution is too complex to be obtained as a combination of a reasonable number of known probability distributions. A good reference for these attempts is the paper [7], where some solutions are also proposed.
As anticipated, we decided to focus on the mathematical theory behind all the models that we are going to present rather than on the way in which they are practically developed. Our goal is indeed to explain the "why" these kinds of models work more than the "how". Nevertheless, we hope that after reading this paper the interested reader can have a fresh look at the original papers and recognize in each of the proposed structures the theoretical foundation described here.
This work is structured as follows. In Section 2 we consider the Diffusion Probabilistic Model (DPM), for which we focus particularly on the references [17, 10], the seminal works where the possibility of getting a totally new image from a noisy one by modeling the reverse Gaussian process ("denoising") was first introduced. This model and the theory on which it is based are also key to understanding the models that later improved its performance, such as the ones presented in [18, 15, 12, 14] and many others. Then, in Section 3 we briefly state the differences between DPM and the Stable Diffusion (DS) models introduced in [16]. This new (state of the art) kind of model improves the results and decreases the computational cost of DPM by using the same idea except for changing the space in which it works, namely a space of latent representations of images. Moreover, it also allows one to condition the generation process through the insertion of a prompt influencing the kind of image we want to generate. Lastly, in Section 4, we talk about the recent proposals for Cold Diffusion models
Figure 1: Intuitive representation of the process adding noise starting from a given image (a) and of the one denoising a completely noisy image to get to a new one. In blue we represent the set composed of all the real images, which are the starting points of the noising process and the limit points of the denoising process. Notice that this is just an intuitive representation of these processes, since they actually take place in much higher dimensions and we do not know anything about the properties of the spaces involved.
(developed in [2]), which use the same idea behind DPM but prove that it is not actually necessary to use random elements to corrupt and then restore the images. Indeed, the authors showed that, using arbitrary corrupting processes (including deterministic ones), it is possible to build a model that can generate new images starting from a totally corrupted one.
### Similar works
Because of the novelty introduced by these kinds of processes in recent months, many works similar to this one have been published with the aim of explaining the theory behind generative models and in particular behind DPM. [13] surely deserves a particular mention, since in that work the whole mathematical basis behind DPM is treated in detail, similarly to what we do here. Other surveys (like [5, 20]) and many web blogs also try to make these kinds of models more understandable to the occasional reader. We decided anyway to complete this work because we believe that it contains some small novelties and some additional comments that can be useful to anyone interested in better understanding the underlying theory.
## 2 Diffusion Probabilistic Models
In this section we present the DPM introduced in [10, 17]. The main idea behind this model, as we mentioned in the Introduction, is that, given the stochastic process adding noise to real images up to a completely noisy image, it is possible to reverse this process, i.e., starting from a completely noisy image, to denoise it, step by step, up to an image belonging to the distribution of realistic images and which, as such, looks totally realistic. This idea implies that the built model, in some way, "compiles" the stochastic relation between the parts (pixels) of the noisy image and uses this compiled knowledge to recover (in a stochastic sense) some combination of pixels representing a realistic image. Furthermore, we can say that the resulting image is (with some abuse of notation, see Remark 2.1) a totally new image with probability 1. Intuitively, this conjecture is based on the observation that, because the space of all possible realistic images is (almost) infinite, and since the reverse process starting from a random pattern goes back to an image belonging to the distribution of real images in a random way, the limiting point is "almost surely" different from each one that actually already "exists" (obviously including the image set used in the training process).
**Remark 2.1**.: We said that the images generated by these models are "new" with probability 1 but, even if this is true in practice, it is not true theoretically. In fact, since the images that we consider are composed of a finite number of pixel value combinations, there are finitely many of them. Therefore, theoretically, any of the real images (even what you are seeing right now) could be reproduced by these models, the limit being just in the amount and diversity of the training data. However, in practice the set of known realistic images that each of us
observes (let alone remembers) during their whole life is so limited compared to the set of all possible realistic images that this never happens.
### Mathematical background
The idea of the forward process involved here is to add Gaussian noise to a real image \(x_{0}\) at each step up to a completely noised image. In particular this process is a Markov process \(\{x_{t}\}_{t\in\{0,\dots,T\}}\) (defined on some probability space \((\Omega,\mathcal{F},\mathbb{P})\)) taking values in the set of all possible images, meant as matrices containing the values composing the image1. Given the starting image \(x_{0}\), the forward "noising" process is built iterating the following step in which we add Gaussian noise for any \(t\in\{1,\dots,T\}\):
Footnote 1: Here and for the remainder of the paper we consider images, also in order to give the reader a concrete reference with which to follow the treatment, but notice that the whole theory does not depend on the space in which \(\{x_{t}\}_{t\in\{0,\dots,T\}}\) lives, as will become clear in Section 3.
\[x_{t}=\sqrt{1-\beta_{t}}x_{t-1}+\sqrt{\beta_{t}}z_{t}, \tag{1}\]
where \(z_{t}\sim\mathcal{N}(0,I)\) and \(\{\beta_{t}\}_{t=1}^{T}\) are arbitrary positive parameters. So, the distribution of \(x_{t}\) given \(x_{t-1}\), which we denote by \(p(x_{t}|x_{t-1})\), is that of a random variable \(\mathcal{N}(\sqrt{1-\beta_{t}}x_{t-1};\beta_{t}I)\), for any \(t\in\{1,\dots,T\}\).
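As a concrete illustration, one step of this chain takes only a few lines; the sketch below is a toy Python/NumPy version of Eq. (1), with an assumed linear schedule for the \(\beta_{t}\) and an arbitrary starting point standing in for a (flattened) image.

```python
import numpy as np

def forward_step(x_prev, beta_t, rng):
    """One step of the forward (noising) chain of Eq. (1):
    x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * z_t, with z_t ~ N(0, I)."""
    z_t = rng.standard_normal(np.shape(x_prev))
    return np.sqrt(1.0 - beta_t) * np.asarray(x_prev) + np.sqrt(beta_t) * z_t

# Toy usage: corrupt an arbitrary starting point x0 over T steps.
rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # an assumed schedule with beta_t << 1
x = x0 = rng.standard_normal(64)         # x0 stands for any data point (e.g. a flattened image)
for beta_t in betas:
    x = forward_step(x, beta_t, rng)     # x ends up approximately N(0, I) distributed
```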
Moreover, being the process Markovian, with these conditional distributions we can explicitly evaluate the distribution of the whole process, i.e.
\[p(x_{0},x_{1},\dots,x_{T})=p(x_{0})p(x_{1},\dots,x_{T}|x_{0})=p(x_{0})\prod_{t =1}^{T}p(x_{t}|x_{t-1}). \tag{2}\]
Another interesting and useful property of this kind of process is that, given \(x_{0}\) it is possible to evaluate the distribution of \(x_{t}\), for any \(t\in\{1,\dots,T\}\), as follows
\[\begin{split} x_{t}&=\sqrt{1-\beta_{t}}x_{t-1}+ \sqrt{\beta_{t}}z_{t}=\sqrt{1-\beta_{t}}(\sqrt{1-\beta_{t-1}}x_{t-2}+\sqrt{ \beta_{t-1}}z_{t-1})+\sqrt{\beta_{t}}z_{t}\\ &=x_{0}\prod_{s=1}^{t}\sqrt{1-\beta_{t}}+\sum_{s=1}^{t}\sqrt{ \beta_{s}}z_{s}\prod_{u=s+1}^{t}\sqrt{1-\beta_{u}}.\end{split} \tag{3}\]
Hence, introducing the notation \(\alpha_{t}=1-\beta_{t}\) and \(\overline{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\), we have \(p(x_{t}|x_{0})=\mathcal{N}(x_{0}\sqrt{\overline{\alpha}_{t}};(1-\overline{\alpha}_{t})I)\) (see Remark 2.3 for details). Let us then observe two very important features that this process has if we choose the values of \(\{\beta_{t}\}_{t=1}^{T}\) appropriately.
* if \(\overline{\alpha}_{t}\to 0\) as \(t\to\infty\), the limit distribution of \(x_{t}\) is a \(\mathcal{N}(0;I)\). This means that, under this hypothesis, if we take a sufficiently big \(T\), we can say that \(x_{T}\sim\mathcal{N}(0;I)\), independently of the starting point \(x_{0}\). Note that \(\overline{\alpha}_{t}\to 0\) as \(t\to\infty\) if, for example, the \(\beta_{s}\) are bounded below by a positive constant. This property is clear from the simulation results summarised in Figure 2 and from the short numerical check given after this list.
* if \(\beta_{t}<<1\) for any \(t\in\{1,\ldots,T\}\), as pointed out in [17] and in Remark 2.4, the process \(\{x_{t}\}_{t=0}^{T}\) admits a reverse process \(\{x_{t}\}_{t=T}^{0}\) such that the conditional distributions \(p(x_{t-1}|x_{t})\) have the same functional form as the forward ones \(p(x_{t}|x_{t-1})\), for any \(t\in\{1,\ldots,T\}\). This is really the key feature of this process in order to characterize the reverse process in the following sections.
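As a quick numerical check of the closed form \(p(x_{t}|x_{0})\) and of the first property above, the following sketch (our own toy code, using a schedule of the kind quoted in the caption of Figure 2) samples \(x_{t}\) directly from \(x_{0}\) and verifies that \(\overline{\alpha}_{T}\) is essentially zero.

```python
import numpy as np

T = 1000
betas = np.linspace(4e-4, 0.06, T)        # an assumed schedule similar to the 1-d toy model
alphas_bar = np.cumprod(1.0 - betas)      # \bar{alpha}_t for t = 1, ..., T

def sample_xt(x0, t, rng):
    """Draw x_t directly from x_0 using p(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I),
    i.e. Eq. (3), without simulating the intermediate steps (t is 1-indexed)."""
    z = rng.standard_normal(np.shape(x0))
    return np.sqrt(alphas_bar[t - 1]) * np.asarray(x0) + np.sqrt(1.0 - alphas_bar[t - 1]) * z

print(alphas_bar[-1])   # essentially zero, so x_T is numerically indistinguishable from N(0, I)
```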
**Remark 2.2**.: Notice that if we take all values of \(\{\beta_{t}\}_{t=1}^{T}\) close to \(1\), the process \(x_{t}\) converges very fast to the limit distribution \(\mathcal{N}(0,I)\). However, in this case we cannot assume that there exists a reverse diffusion process with the same functional form (see Remark 2.4), and hence that the model presented would be able to recover the reverse process.
**Remark 2.3**.: It can easily be proved by induction that the random variable \(\sum_{s=1}^{t}\sqrt{\beta_{s}}z_{s}\prod_{u=s+1}^{t}\sqrt{1-\beta_{u}}\sim\mathcal{N}(0,(1-\prod_{s=1}^{t}(1-\beta_{s}))I)=\mathcal{N}(0,(1-\overline{\alpha}_{t})I)\). Indeed, the sum of independent Gaussian random variables is still a Gaussian r.v. with mean equal to the sum of the means and variance equal to the sum of the variances. Moreover, the variance of \(\sqrt{\beta_{s}}z_{s}\prod_{u=s+1}^{t}\sqrt{1-\beta_{u}}\) is equal to
Figure 2: Distribution in time of the values of \(x_{t}\) in one dimension, obtained starting from a given distribution of \(x_{0}\) (in the top left plot) after simulating 2000 trajectories. The parameters \(\beta_{t}\) were chosen growing linearly in time from \(0.0004\) to \(0.06\). The convergence to a distribution close to a Gaussian random variable after a sufficiently big number of steps, as we proved analytically, is clear. Doing the same simulation with higher values of \(\{\beta_{t}\}_{t\in\{1,\ldots,T\}}\), the convergence to the Gaussian distribution would be much faster (immediate in the case \(\beta_{t}=1\) for any \(t\in\{1,\ldots,T\}\)). The notebook containing the simulations related to this toy model and other interesting remarks can be found at [https://github.com/stefanoscotta/1-d-generative-diffusion-model](https://github.com/stefanoscotta/1-d-generative-diffusion-model).
\(\beta_{s}\prod_{u=s+1}^{t}(1-\beta_{u})\) for any \(s\in\{1,\ldots,t\}\). Let us then show that the base case of the induction proof holds: for \(t=2\), the variance of \(\sum_{s=1}^{2}\sqrt{\beta_{s}}z_{s}\prod_{u=s+1}^{2}\sqrt{1-\beta_{u}}\) is equal to \(\beta_{1}(1-\beta_{2})+\beta_{2}=1-(1-\beta_{1})(1-\beta_{2})=1-\overline{\alpha}_{2}\). Therefore it remains only to show the inductive step to conclude the proof. Let us assume that for \(t=t^{\prime}\) it holds that the variance of \(\sum_{s=1}^{t^{\prime}}\sqrt{\beta_{s}}z_{s}\prod_{u=s+1}^{t^{\prime}}\sqrt{1-\beta_{u}}\) is \((1-\overline{\alpha}_{t^{\prime}})\); then the variance of \(\sum_{s=1}^{t^{\prime}+1}\sqrt{\beta_{s}}z_{s}\prod_{u=s+1}^{t^{\prime}+1}\sqrt{1-\beta_{u}}\) is equal to
\[\beta_{t^{\prime}+1}+ \sum_{s=1}^{t^{\prime}}\beta_{s}\prod_{u=s+1}^{t^{\prime}}(1- \beta_{u})(1-\beta_{t^{\prime}+1})=\beta_{t^{\prime}+1}+(1-\overline{\alpha}_ {t^{\prime}})(1-\beta_{t^{\prime}+1})\] \[=\beta_{t^{\prime}+1}+1-\beta_{t^{\prime}+1}-\overline{\alpha}_{ t^{\prime}}(1-\beta_{t^{\prime}+1})=1-\overline{\alpha}_{t^{\prime}+1}.\]
**Remark 2.4**.: We want to find the continuous version of (1) in order to use some important properties of this kind of processes. To do this we follow the computations made in Appendix B of [19]. Under the condition \(\beta_{t}<<1\), for any \(t\in\{1,\ldots,T\}\), we are able to show that we can approximate (1) as
\[x_{t+1}=\sqrt{1-\beta_{t+1}}x_{t}+\sqrt{\beta_{t+1}}z_{t+1}\approx\Big{(}1- \frac{\beta_{t+1}}{2}\Big{)}x_{t}+\sqrt{\beta_{t+1}}z_{t+1}. \tag{4}\]
Indeed, developing the Taylor series of \(\sqrt{1-\beta_{t+1}}\) around \(0\) we get that it is equal to \(1-\frac{\beta_{t+1}}{2}\) plus a term of order \(\beta_{t+1}^{2}\) that can be ignored.
Let us now introduce a new set of scaling parameters \(\{\tilde{\beta}_{t}=T\beta_{t}\}_{t=1}^{T}\) with which we can rewrite (4) as
\[x_{t+1}-x_{t}\approx-\frac{\tilde{\beta}_{t+1}}{2T}x_{t}+\sqrt{\frac{\tilde{ \beta}_{t+1}}{T}}z_{t+1}.\]
Then we rescale the time by a factor \(1/T\), defining \(t^{\prime}=t/T\). With this we introduce \(x(t^{\prime})=x(t/T)=x_{t}\), \(\beta(t^{\prime})=\beta(t/T)=\tilde{\beta}_{t}\), \(z(t^{\prime})=z(t/T)=z_{t}\) and we denote \(1/T\) by \(\Delta t^{\prime}\). Under these assumption we have that
\[x(t^{\prime}+\Delta t^{\prime})-x(t^{\prime})=-\frac{1}{2}\beta(t^{\prime}+ \Delta t^{\prime})\Delta t^{\prime}x(t^{\prime})+\sqrt{\beta(t^{\prime}+ \Delta t^{\prime})\Delta t^{\prime}}z(t^{\prime}+\Delta t^{\prime}).\]
For a big enough \(T\), denoting \(\Delta t^{\prime}\) by \(dt^{\prime}\), we can heuristically say that the previous equation could be approximated as follows
\[x(t^{\prime}+\Delta t^{\prime})-x(t^{\prime})=dx(t^{\prime})\approx-\frac{1}{ 2}\beta(t^{\prime})x(t^{\prime})dt^{\prime}+\sqrt{\beta(t^{\prime})}dw(t^{ \prime}) \tag{5}\]
where \(dw(t)\) is the standard white noise and where we used the following approximations holding for \(T\) big enough: since \(\beta(t^{\prime})<<1\), we can say that \(\beta(t^{\prime}+dt)\approx\beta(t^{\prime})\); \(z(t^{\prime}+dt^{\prime})\sqrt{dt^{\prime}}\approx dw(t^{\prime})\). The intuitive reason of the last approximation is that \(z(t^{\prime}+\Delta t^{\prime})\sim\mathcal{N}(0,I)\) and, with some abuse of notation, \(dw_{t}\sim\mathcal{N}(0,dt)\).
The formulation (5) for the dynamics of our model, thanks to the main theorem stated in [1], grants us that a reverse process exists and that it is a diffusion, as the forward process.
Now that we have stated and proved the main properties of the forward process, we would like to find the best way to build the diffusion model which represents the reverse process, starting from \(x_{T}\sim\mathcal{N}(0,I)\) and going to some \(x_{0}\) satisfying the properties of the starting real images used for the forward process. Let us denote by \(p_{\theta}(\cdot)\) the functions describing the distributions (or conditional distributions) of the backward process, depending on some set of unknown parameters \(\theta\). The goal is finding the latent variables describing the limiting probability \(p_{\theta}(x_{0})=\int p_{\theta}(x_{T},x_{T-1},\ldots,x_{1},x_{0})dx_{T}\ldots dx_{1}\). Now, since the reverse of a Markov process is still a Markov process (see [4] for a detailed analysis of that), it is true that
\[p_{\theta}(x_{T},x_{T-1},\ldots,x_{1},x_{0})=p_{\theta}(x_{T})\prod_{t=1}^{T}p _{\theta}(x_{t-1}|x_{t}), \tag{6}\]
which is the analogue of (2) but reversed in time. We already know that the \(p_{\theta}(x_{t-1}|x_{t})\) are Gaussian distributed (see Remark 2.4), so we can conclude that each of them is distributed like \(\mathcal{N}(\mu_{\theta}(x_{t}),\Sigma_{\theta}(x_{t}))\) for some unknown \(\mu_{\theta}(x_{t})\) and \(\Sigma_{\theta}(x_{t})\). To evaluate (6) it is then necessary to estimate \(\mu_{\theta}(x_{t})\) and \(\Sigma_{\theta}(x_{t})\) for each time \(t\in\{1,\ldots,T\}\).
As pointed out in [17], these optimal parameters are the ones that maximize the log likelihood averaged on the data that we have, which are given according to the distribution \(p(x_{0})\), being real images. This means that we have to find the \(\mu_{\theta}(x_{t})\) and \(\Sigma_{\theta}(x_{t})\) that maximize
\[\mathcal{L}=\int\log(p_{\theta}(x_{0}))p(x_{0})dx_{0}. \tag{7}\]
First, let us rewrite \(p_{\theta}(x_{0})\) as follows
\[\begin{split} p_{\theta}(x_{0})&=\int p_{\theta}(x_ {T},x_{T-1},\ldots,x_{1},x_{0})dx_{T}\ldots dx_{1}\\ &=\int\frac{p(x_{0},\ldots,x_{T}|x_{0})}{p(x_{0},\ldots,x_{T}|x_ {0})}p_{\theta}(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t})dx_{T}\ldots dx_{ 1}\\ &=\int p(x_{1},\ldots,x_{T}|x_{0})p_{\theta}(x_{T})\prod_{t=1}^{T }\frac{p_{\theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1},x_{0})}dx_{T}\ldots dx_{1}\\ &=\mathbb{E}_{p(x_{1},\ldots,x_{T}|x_{0})}\bigg{[}p_{\theta}(x_{T })\prod_{t=1}^{T}\frac{p_{\theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1},x_{0})}\bigg{]},\end{split} \tag{8}\]
where the second and third equality comes respectively from (6) and (2) plus the fact that \(p(x_{0},\ldots,x_{T}|x_{0})=p(x_{1},\ldots,x_{T}|x_{0})\). Hence, by logarithm properties and Jensen's inequality, (7) can be bounded as follows
\[\begin{split}\mathcal{L}&=\int\log(p_{\theta}(x_{0}))p(x_{0} )dx_{0}\\ &=\int\log\Bigg{(}\mathbb{E}_{p(x_{1},\ldots,x_{T}|x_{0})}\bigg{[}p_ {\theta}(x_{T})\prod_{t=1}^{T}\frac{p_{\theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1}, x_{0})}\bigg{]}\Bigg{)}p(x_{0})dx_{0}\\ &=\int\log\Bigg{(}\int p_{\theta}(x_{T})\prod_{t=1}^{T}\frac{p_{ \theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1},x_{0})}p(x_{1},\ldots,x_{T}|x_{0})dx_{ 1}\ldots dx_{T}\Bigg{)}p(x_{0})dx_{0}\\ &=\int\log\Bigg{(}\int p_{\theta}(x_{T})\prod_{t=1}^{T}\frac{p_{ \theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1},x_{0})}d\tilde{p}(x_{1},\ldots,x_{T}|x _{0})\Bigg{)}p(x_{0})dx_{0}\\ &\geq\int\Bigg{(}\int\log\Big{(}p_{\theta}(x_{T})\prod_{t=1}^{T }\frac{p_{\theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1},x_{0})}\Big{)}d\tilde{p}(x _{1},\ldots,x_{T})\Bigg{)}p(x_{0})dx_{0}\\ &=\int\log\Big{(}p_{\theta}(x_{T})\prod_{t=1}^{T}\frac{p_{\theta }(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1},x_{0})}\Big{)}p(x_{1},\ldots,x_{T}|x_{0})p(x _{0})dx_{0}dx_{1}\ldots dx_{T}\\ &=\int\log\Big{(}p_{\theta}(x_{T})\prod_{t=1}^{T}\frac{p_{\theta }(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1},x_{0})}\Big{)}p(x_{0},x_{1},\ldots,x_{T})dx_ {0}dx_{1}\ldots dx_{T}\\ &=\mathbb{E}_{p(x_{0},\ldots,x_{T})}\bigg{[}\log\Big{(}p_{\theta }(x_{T})\prod_{t=1}^{T}\frac{p_{\theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1},x_{0}) }\Big{)}\bigg{]}\\ &=\mathbb{E}_{p(x_{T})}\big{[}\log(p_{\theta}(x_{T}))\big{]}+ \mathbb{E}_{p(x_{0},\ldots,x_{T})}\bigg{[}\log\Big{(}\prod_{t=1}^{T}\frac{p_{ \theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1},x_{0})}\Big{)}\bigg{]},\end{split} \tag{9}\]
where we used the notation \(d\tilde{p}(x_{1},\ldots,x_{T}|x_{0})\) to denote the probability measure with density \(p(x_{1},\ldots,x_{T}|x_{0})\). Observe that the first term of the last bound above does not depend on the parameters \(\theta\), since we showed that \(x_{T}\) is distributed as a \(\mathcal{N}(0,I)\). Then the parameters that maximize this lower bound on \(\mathcal{L}\) are the same that maximize
\[\mathbb{E}_{p(x_{0},\ldots,x_{T})}\bigg{[}\log\Big{(}\prod_{t=1}^{T}\frac{p_{ \theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1},x_{0})}\Big{)}\bigg{]}.\]
Moreover, let us observe that the first factor of this term (\(t=1\)) is equal to
\[\mathbb{E}_{p(x_{0},x_{1})}\bigg{[}\log\Big{(}\frac{p_{\theta}(x_{0}|x_{1})}{p (x_{1}|x_{0})}\Big{)}\bigg{]}=\mathbb{E}_{p(x_{0},x_{1})}\big{[}\log(p_{\theta} (x_{0}|x_{1}))\big{]}-\mathbb{E}_{p(x_{0},x_{1})}\big{[}\log(p(x_{1}|x_{0})) \big{]}. \tag{10}\]
Now, thanks to Bayes Theorem we can write
\[p(x_{t}|x_{t-1},x_{0})=\frac{p(x_{t-1}|x_{t},x_{0})p(x_{t}|x_{0})}{p(x_{t-1}|x _{0})}\]
and, with this in mind, we rewrite all the factors of \(\tilde{\mathcal{L}}\) but the first (so all the ones involving \(t\geq 2\)) as
\[\begin{split}\sum_{t=2}^{T}&\int\log\bigg{(}\frac{p_{ \theta}(x_{t-1}|x_{t})}{p(x_{t}|x_{t-1},x_{0})}\bigg{)}p(x_{0},\ldots,x_{T})dx_ {0},\ldots dx_{T}\\ &=\sum_{t=2}^{T}\int\log\bigg{(}\frac{p_{\theta}(x_{t-1}|x_{t})}{p (x_{t-1}|x_{t},x_{0})}\frac{p(x_{t-1}|x_{0})}{p(x_{t}|x_{0})}\bigg{)}p(x_{0}, \ldots,x_{T})dx_{0},\ldots dx_{T}\\ &=\sum_{t=2}^{T}\int\log\bigg{(}\frac{p_{\theta}(x_{t-1}|x_{t})}{ p(x_{t-1}|x_{t},x_{0})}\bigg{)}p(x_{0},\ldots,x_{T})dx_{0},\ldots dx_{T}\\ &\qquad\qquad+\sum_{t=2}^{T}\int\log\bigg{(}\frac{p(x_{t-1}|x_{0 })}{p(x_{t}|x_{0})}\bigg{)}p(x_{0},\ldots,x_{T})dx_{0},\ldots dx_{T}.\end{split} \tag{11}\]
Let us focus first on the first term on the right hand-side of the last equality obtained above. Observe that, since each \(\log\bigg{(}\frac{p_{\theta}(x_{t-1}|x_{t})}{p(x_{t-1}|x_{t},x_{0})}\bigg{)}\) depends only on \(x_{0},x_{t-1},x_{t}\), we can rewrite
\[\begin{split}\sum_{t=2}^{T}&\int\log\bigg{(}\frac{p _{\theta}(x_{t-1}|x_{t})}{p(x_{t-1}|x_{t},x_{0})}\bigg{)}p(x_{0},\ldots,x_{T}) dx_{0},\ldots dx_{T}\\ &=\sum_{t=2}^{T}\int\log\bigg{(}\frac{p_{\theta}(x_{t-1}|x_{t})}{ p(x_{t-1}|x_{t},x_{0})}\bigg{)}p(x_{0},x_{t-1},x_{t})dx_{0},dx_{t-1},dx_{t}\\ &=\sum_{t=2}^{T}\int\log\bigg{(}\frac{p_{\theta}(x_{t-1}|x_{t})}{ p(x_{t-1}|x_{t},x_{0})}\bigg{)}p(x_{t-1}|x_{t},x_{0})p(x_{t},x_{0})dx_{0}dx_{t-1} dx_{t}\\ &=-\sum_{t=2}^{T}\mathbb{E}_{p(x_{0},x_{t})}\Big{[}D_{KL}\big{(} p(x_{t-1}|x_{t},x_{0})\ ||\ p_{\theta}(x_{t-1}|x_{t})\big{)}\Big{]};\end{split} \tag{12}\]
where \(D_{KL}\) denotes the Kullback-Leibler divergence (which is a function of \(x_{t}\) and \(x_{0}\) in this case), i.e. for any \(x_{t},x_{0}\),
\[D_{KL}\big{(}p(x_{t-1}|x_{t},x_{0})\ ||\ p_{\theta}(x_{t-1}|x_{t})\big{)}:= \int\log\bigg{(}\frac{p(x_{t-1}|x_{t},x_{0})}{p_{\theta}(x_{t-1}|x_{t})}\bigg{)} p(x_{t-1}|x_{t},x_{0})dx_{t-1}.\]
Now, we analyze the second term on the right hand-side of the last equality
in (11). Using the properties of logarithm it can be written as
\[\begin{split}\sum_{t=2}^{T}&\int\Big{(}\log(p(x_{t-1}|x_ {0}))-\log(p(x_{t}|x_{0}))\Big{)}p(x_{0},x_{t-1},x_{t})dx_{0},dx_{t-1},dx_{t}\\ &=\sum_{t=2}^{T}\Big{(}\mathbb{E}_{p(x_{t-1},x_{0})}\big{[}\log(p( x_{t-1}|x_{0}))\big{]}-\mathbb{E}_{p(x_{t},x_{0})}\big{[}\log(p(x_{t}|x_{0})) \big{]}\Big{)}\\ &=\mathbb{E}_{p(x_{1},x_{0})}\big{[}\log(p(x_{1}|x_{0}))\big{]}- \mathbb{E}_{p(x_{T},x_{0})}\big{[}\log(p(x_{T}|x_{0}))\big{]},\end{split} \tag{13}\]
where the last equality is due to the telescopic property of the sum obtained before. Observe now that the first term on the right hand-side of the last equality above cancels with the second on the right hand-side of (10). At the same time we can write the sum of the second term on the right hand-side of the last equality above plus the first on the right hand-side of the last equality in (9) as
\[\begin{split}\mathbb{E}_{p(x_{T})}&\big{[}\log(p_ {\theta}(x_{T}))\big{]}-\mathbb{E}_{p(x_{T},x_{0})}\big{[}\log(p(x_{T}|x_{0}) )\big{]}\\ &=\int\log\bigg{(}\frac{p_{\theta}(x_{T}))}{p(x_{T}|x_{0})}p(x_{ 0},x_{T})dx_{0}dx_{T}\\ &=\int\log\bigg{(}\frac{p_{\theta}(x_{T}))}{p(x_{T}|x_{0})}p(x_{ 0})dx_{0}dx_{T}\\ &=-\int\log\bigg{(}\frac{p(x_{T}|x_{0})}{p_{\theta}(x_{T}))}p(x_ {T}|x_{0})p(x_{0})dx_{0}dx_{T}\\ &=-\mathbb{E}_{p(x_{0})}\Big{[}D_{KL}\big{(}p(x_{T}|x_{0})\;||\; p_{\theta}(x_{T})\big{)}\Big{]}.\end{split} \tag{14}\]
Finally, summing all the term remained, we can conclude that
\[\begin{split}\mathcal{L}=\mathbb{E}_{p(x_{0},x_{1})}\big{[}\log( p_{\theta}(x_{0}|x_{1}))\big{]}-&\sum_{t=2}^{T}\mathbb{E}_{p(x_{0},x_{t})} \Big{[}D_{KL}\big{(}p(x_{t-1}|x_{t},x_{0})\;||\;p_{\theta}(x_{t-1}|x_{t}) \big{)}\Big{]}\\ &-\mathbb{E}_{p(x_{0})}\Big{[}D_{KL}\big{(}p(x_{T}|x_{0})\;||\;p _{\theta}(x_{T})\big{)}\Big{]}.\end{split} \tag{15}\]
Since we proved that, under the appropriate conditions on the parameters \(\{\beta_{t}\}_{t=1}^{T}\), \(p_{\theta}(x_{T})\) is a \(\mathcal{N}(0,I)\), it does not depend on the parameters \(\theta\). So, maximising (15) is equivalent to maximising
\[\tilde{\mathcal{L}}:=\mathbb{E}_{p(x_{0},x_{1})}\big{[}\log(p_{\theta}(x_{0}| x_{1}))\big{]}-\sum_{t=2}^{T}\mathbb{E}_{p(x_{0},x_{t})}\Big{[}D_{KL}\big{(}p(x_{t-1}| x_{t},x_{0})\;||\;p_{\theta}(x_{t-1}|x_{t})\big{)}\Big{]}, \tag{16}\]
Hence, the model is given by the solutions of the problem \(\hat{p}_{\theta}(x_{t-1}|x_{t})=\arg\max_{p_{\theta}(x_{t-1}|x_{t})}\tilde{ \mathcal{L}}\).
**Remark 2.5**.: It is possible to evaluate explicitly the value of \(p(x_{t-1}|x_{t},x_{0})\) as it is mentioned in [10] and explained in detail in the next Section.
#### 2.1.1 Analytic computation of \(p(x_{t-1}|x_{t},x_{0})\)
We know that \(p(x_{t-1}|x_{t},x_{0})\) is Gaussian distributed as a \(\mathcal{N}(\tilde{\mu}(x_{t},x_{0}),\tilde{\Sigma}(x_{t},x_{0}))\). In [10] the values of \(\tilde{\mu}(x_{t},x_{0})\) and \(\tilde{\Sigma}(x_{t},x_{0})\) are given and they are useful for the estimation procedure. In this brief Section we give a strategy for evaluating these values using the results of [3], Chapter 2.3.3, plus some computations.
For that purpose we use the same notation used in Chapter 2.3.3 of [3], i.e. we denote \(p(x_{t-1}|x_{t},x_{0})\) by \(p(x|y)\), \(p(x_{t}|x_{t-1},x_{0})=p(x_{t}|x_{t-1})\) by \(p(y|x)\) and \(p(x_{t-1}|x_{0})\) by \(p(x)\). By our previous computations we already know that \(p(y|x)\) and \(p(x)\) are respectively distributed like two Gaussian variables \(\mathcal{N}(\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}I)\) and \(\mathcal{N}(\sqrt{\overline{\alpha}_{t-1}}x_{0},(1-\overline{\alpha}_{t-1})I)\). Now, to simplify the notation, let us consider the one-dimensional case; the general case is totally analogous, since the covariance matrices are diagonal.
So, keeping in mind the relation \((1-\beta_{t})(1-\overline{\alpha}_{t-1})=1-\beta_{t}-\overline{\alpha}_{t}\) and applying (2.116) of [3] to our setting we get that:
\[\begin{split}\tilde{\Sigma}(x_{t},x_{0})=\tilde{\Sigma}_{t}= \left(\frac{1}{1-\overline{\alpha}_{t-1}}+(1-\beta_{t})\frac{1}{\beta_{t}} \right)^{-1}&=\left(\frac{\beta_{t}+(1-\beta_{t})(1-\overline{ \alpha}_{t-1})}{\beta_{t}(1-\overline{\alpha}_{t-1})}\right)^{-1}\\ &=\beta_{t}\frac{(1-\overline{\alpha}_{t-1})}{(1-\overline{ \alpha}_{t})}\end{split} \tag{17}\]
and
\[\begin{split}\tilde{\mu}(x_{t},x_{0})&=\tilde{ \Sigma}_{t}\bigg{(}\sqrt{1-\beta_{t}}\frac{1}{\beta_{t}}x_{t}+\frac{1}{1- \overline{\alpha}_{t-1}}\sqrt{\overline{\alpha}_{t-1}}x_{0}\bigg{)}\\ &=\frac{\sqrt{\alpha_{t}}(1-\overline{\alpha}_{t-1})}{1-\overline {\alpha}_{t}}x_{t}+\frac{\beta_{t}\sqrt{\overline{\alpha}_{t-1}}}{1-\overline {\alpha}_{t}}x_{0},\end{split} \tag{18}\]
as stated in [10].
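These two formulas translate directly into code; the following sketch (our own, with \(t\) 1-indexed and \(\overline{\alpha}_{0}=1\) by convention) computes \(\tilde{\mu}\) and \(\tilde{\Sigma}\) for a given pair \((x_{t},x_{0})\) and schedule \(\{\beta_{t}\}\).

```python
import numpy as np

def posterior_params(x_t, x0, t, betas):
    """Mean and (scalar) variance of p(x_{t-1} | x_t, x_0) from Eqs. (17)-(18)."""
    alphas = 1.0 - np.asarray(betas)
    abar_t = np.prod(alphas[:t])           # \bar{alpha}_t
    abar_prev = np.prod(alphas[:t - 1])    # \bar{alpha}_{t-1}, equal to 1 when t = 1
    var = betas[t - 1] * (1.0 - abar_prev) / (1.0 - abar_t)
    mean = (np.sqrt(alphas[t - 1]) * (1.0 - abar_prev) / (1.0 - abar_t)) * x_t \
         + (np.sqrt(abar_prev) * betas[t - 1] / (1.0 - abar_t)) * x0
    return mean, var
```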
### Estimate procedure
In this section we want to analyze further the problem of maximization of (16), making assumptions on the distributions \(p_{\theta}(x_{t-1}|x_{t})\). We already saw that \(p_{\theta}(x_{t-1}|x_{t})\) is distributed like a \(\mathcal{N}(\mu_{\theta}(x_{t}),\Sigma_{\theta}(x_{t}))\). Proceeding as it is done in [10], we assume that, for any \(t\in\{1,\dots,T\}\), \(\Sigma_{\theta}(x_{t})=\sigma_{t}^{2}I\) for some experimentally established value of \(\sigma_{t}^{2}\), so that it depends neither on the process itself nor on the parameters \(\theta\) (this is fundamental for the derivation of (19)). In [10] the authors suggested two possibilities: \(\sigma_{t}^{2}=\beta_{t}\) or \(\sigma_{t}^{2}=\beta_{t}\frac{1-\overline{\alpha}_{t-1}}{1-\overline{\alpha}_{t}}\).
Under these assumptions, thanks to the properties of Kullback-Leibler divergence of two multivariate normal distribution it is true that
\[D_{KL}\big{(}p(x_{t-1}|x_{t},x_{0})\mid\mid p_{\theta}(x_{t-1}|x_{t})\big{)}= \frac{1}{2\sigma_{t}^{2}}||\tilde{\mu}(x_{t},x_{0})-\mu_{\theta}(x_{t})||^{2}+C, \tag{19}\]
where \(C\) is some constant not depending on \(\theta\) and \(||\cdot||\) is the standard \(L^{2}\) norm. Moreover, recalling that we can express \(x_{t}\) as a function of \(x_{0}\) as
\[x_{t}(x_{0})=\sqrt{\overline{\alpha}_{t}}x_{0}+\sqrt{1-\overline{\alpha}_{t}}z_ {t} \tag{20}\]
for some \(z_{t}\sim\mathcal{N}(0,I)\), we can rewrite the second term on the right had-side of (16) as
\[\begin{split}\sum_{t=2}^{T}&\mathbb{E}_{p(x_{0})} \Big{[}\frac{1}{2\sigma_{t}^{2}}||\tilde{\mu}(x_{t}(x_{0}),x_{0})-\mu_{\theta} (x_{t}(x_{0}))||^{2}\Big{]}+C^{\prime}\\ &=\sum_{t=2}^{T}\mathbb{E}_{p(x_{0}),z_{t}}\Big{[}\frac{1}{2 \sigma_{t}^{2}}\Big{|}\Big{|}\tilde{\mu}\Big{(}x_{t}(x_{0}),\frac{1}{\sqrt{ \overline{\alpha}_{t}}}(x_{t}(x_{0})-\sqrt{1-\overline{\alpha}_{t}}z_{t}) \Big{)}-\mu_{\theta}(x_{t}(x_{0}))\Big{|}\Big{|}^{2}\Big{]}+C^{\prime},\end{split} \tag{21}\]
where \(C^{\prime}\) is a constant not depending on \(\theta\) and the subscript \(z_{t}\) in the second expectation means that we also have to average over the different possible values that \(z_{t}\sim\mathcal{N}(0,I)\) takes at each time step. Now, thanks to (18) we can rewrite the argument of the expectation in the last term above as
\[\begin{split}&\frac{1}{2\sigma_{t}^{2}}\Big{|}\Big{|}\frac{ \sqrt{\alpha_{t}}(1-\overline{\alpha}_{t-1})}{1-\overline{\alpha}_{t}}x_{t}( x_{0})+\frac{\beta_{t}\sqrt{\overline{\alpha}_{t-1}}}{1-\overline{\alpha}_{t}} \frac{1}{\sqrt{\overline{\alpha}_{t}}}(x_{t}(x_{0})-\sqrt{1-\overline{\alpha}_ {t}}z_{t})-\mu_{\theta}(x_{t}(x_{0}))\Big{|}\Big{|}^{2}\\ &=\frac{1}{2\sigma_{t}^{2}}\Big{|}\Big{|}\frac{\alpha_{t}(1- \overline{\alpha}_{t-1})+(1-\alpha_{t})}{\sqrt{\alpha_{t}}(1-\overline{\alpha }_{t})}x_{t}(x_{0})-\frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}\frac{1}{ \sqrt{\alpha}_{t}}z_{t}-\mu_{\theta}(x_{t}(x_{0}))\Big{|}\Big{|}^{2}\\ &=\frac{1}{2\sigma_{t}^{2}}\Big{|}\Big{|}\frac{1}{\sqrt{\alpha_{t }}}\Big{(}x_{t}(x_{0})-\frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}z_{t} \Big{)}-\mu_{\theta}(x_{t}(x_{0}))\Big{|}\Big{|}^{2}.\end{split} \tag{22}\]
From these computations it becomes clear that the best \(\mu_{\theta}(x_{t}(x_{0}))\) is the one which predicts \(\frac{1}{\sqrt{\alpha_{t}}}\big{(}x_{t}(x_{0})-\frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}z_{t}\big{)}\). Moreover, observe that \(x_{t}\) is actually an input of the model, and so we could take (as suggested in [10]):
\[\begin{split}\mu_{\theta}(x_{t}(x_{0}))&=\tilde{\mu }\big{(}x_{t}(x_{0}),\frac{1}{\sqrt{\overline{\alpha}_{t}}}(x_{t}(x_{0})- \sqrt{1-\overline{\alpha}_{t}}z_{\theta}(x_{t}))\big{)}\\ &=\frac{1}{\sqrt{\alpha_{t}}}\Big{(}x_{t}(x_{0})-\frac{\beta_{t}}{ \sqrt{1-\overline{\alpha}_{t}}}z_{\theta}(x_{t})\Big{)},\end{split} \tag{23}\]
where \(z_{\theta}(x_{t})\) is the function predicting \(z_{t}\) from the value of the input \(x_{t}\) and the equality comes from (18) plus some simple computations.
Hence, thanks to (22) and (23), we get that the expectation in (21) is equal to
\[\begin{split}\mathbb{E}_{p(x_{0}),z_{t}}&\Big{[} \frac{\beta_{t}^{2}}{2\sigma_{t}^{2}\alpha_{t}(1-\overline{\alpha}_{t})}\big{|} \big{|}z_{t}-z_{\theta}(x_{t})\big{|}\big{|}^{2}\Big{]}\\ &=\mathbb{E}_{p(x_{0}),z_{t}}\Big{[}\frac{\beta_{t}^{2}}{2 \sigma_{t}^{2}\alpha_{t}(1-\overline{\alpha}_{t})}\big{|}\big{|}z_{t}-z_{ \theta}(\sqrt{\overline{\alpha}_{t}}x_{0}+\sqrt{1-\overline{\alpha}_{t}}z_{t} )\big{|}\big{|}^{2}\Big{]},\end{split} \tag{24}\]
where the equality comes from (20). The function that we are looking for is therefore the \(z_{\theta}(\cdot)\) that minimizes the quantity
\[\sum_{t=2}^{T}\mathbb{E}_{p(x_{0}),z_{t}}\Big{[}\frac{\beta_{t}^{2}}{2\sigma_{t}^ {2}\alpha_{t}(1-\overline{\alpha}_{t})}\big{|}\big{|}z_{t}-z_{\theta}(\sqrt{ \overline{\alpha}_{t}}x_{0}+\sqrt{1-\overline{\alpha}_{t}}z_{t})\big{|}\big{|} ^{2}\Big{]}. \tag{25}\]
**Remark 2.6**.: In [10] the authors empirically showed that, in order to get the function \(z_{\theta}\), it is possible to minimize the following quantity instead of (25) and of the term analysed in Section 2.2.1:
\[\sum_{t=2}^{T}\mathbb{E}_{p(x_{0}),z_{t}}\Big{[}\big{|}\big{|}z_{t}-z_{\theta} (\sqrt{\overline{\alpha}_{t}}x_{0}+\sqrt{1-\overline{\alpha}_{t}}z_{t})\big{|} \big{|}^{2}\Big{]}. \tag{26}\]
Actually, they showed that minimizing this unweighted functional gives even better performance.
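In code, a Monte Carlo estimate of (26) can be sketched as follows; here `z_theta` is any callable standing in for the neural network, and the time step is drawn uniformly at random rather than summing over all \(t\), as is commonly done in practice. This is only an illustrative sketch, not the training loop of [10].

```python
import numpy as np

def simplified_loss(z_theta, x0_batch, alphas_bar, rng):
    """Monte Carlo estimate of the unweighted objective (26): for each data point draw a
    time step and a noise realisation, build x_t in closed form, and penalise the squared
    error of the predicted noise. `z_theta(x_t, t)` is a placeholder for the model."""
    T = len(alphas_bar)
    total = 0.0
    for x0 in x0_batch:
        t = int(rng.integers(1, T + 1))                  # a time step t in {1, ..., T}
        z = rng.standard_normal(np.shape(x0))
        x_t = np.sqrt(alphas_bar[t - 1]) * x0 + np.sqrt(1.0 - alphas_bar[t - 1]) * z
        total += np.sum((z - z_theta(x_t, t)) ** 2)
    return total / len(x0_batch)
```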
Notice that, thanks to the assumption that we made at the beginning of this Section, once we have estimated the function \(z_{\theta}(\cdot)\) we "have" the reverse process. Indeed, since \(p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}\bigg{(}\frac{1}{\sqrt{\alpha_{t}}}\Big{(}x_{t}-\frac{\beta_{t}}{\sqrt{1-\overline{\alpha}_{t}}}z_{\theta}(x_{t})\Big{)},\sigma_{t}^{2}I\bigg{)}\), we have that
\[x_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\Big{(}x_{t}-\frac{\beta_{t}}{\sqrt{1- \overline{\alpha}_{t}}}z_{\theta}(x_{t})\Big{)}+\sigma_{t}z_{t}. \tag{27}\]
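Equation (27) translates directly into an ancestral sampling loop. The sketch below is our own illustration: it uses \(\sigma_{t}^{2}=\beta_{t}\), one of the two choices mentioned above, and, as a common convention rather than something derived here, adds no noise at the very last step.

```python
import numpy as np

def sample_reverse(z_theta, shape, betas, rng):
    """Generate a sample by iterating Eq. (27) from x_T ~ N(0, I) down to x_0.
    `z_theta(x, t)` is the trained noise predictor and sigma_t^2 = beta_t."""
    T = len(betas)
    alphas = 1.0 - np.asarray(betas)
    alphas_bar = np.cumprod(alphas)
    x = rng.standard_normal(shape)                        # x_T
    for t in range(T, 0, -1):                             # t = T, ..., 1
        mean = (x - betas[t - 1] / np.sqrt(1.0 - alphas_bar[t - 1]) * z_theta(x, t)) \
               / np.sqrt(alphas[t - 1])
        noise = rng.standard_normal(shape) if t > 1 else np.zeros(shape)
        x = mean + np.sqrt(betas[t - 1]) * noise
    return x
```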
**Remark 2.7**.: The last computations (after (19)) are important to make clear that what we are doing in the whole estimation process is, in some way, estimating the noise to remove at each step in the reverse process. In practice they are not strictly necessary: in principle it is indeed possible simply to build some model able to find directly the \(\mu_{\theta}\) minimizing the left hand-side of (21). Then the estimated reverse process would simply be given by
\[x_{t-1}=\mu_{\theta}(x_{t})+\sigma_{t}z_{t}, \tag{28}\]
equivalent to (27), without all the computations shown above.
#### 2.2.1 Analysis of \(\mathbb{E}_{p(x_{0},x_{1})}\big{[}\log(p_{\theta}(x_{0}|x_{1}))\big{]}\)
The last step of the reverse process, determined by \(p_{\theta}(x_{0}|x_{1})\), is treated in section 3.3 of [10] in a particular way. Indeed, the authors proposed to work through the model with images which are combinations of pixel values (usually in the set \(\{0,1,\ldots,255\}\)) rescaled to combinations of elements in the interval \([-1,1]\). Then, within the forward process, Gaussian noise is added to the "scaled" images up to the limit distribution which, theoretically, lives in the space of combinations of elements in \((-\infty,+\infty)\). In particular, starting from a normally distributed noisy image \(x_{T}\), in principle the reverse process built goes up to an image \(x_{0}\) which still lives in the space of combinations of elements in \((-\infty,+\infty)\). For this reason the authors, in order to have as output of the reverse process an image in
the original pixel space (which can be deduced from the space of combinations of elements in \([-1,1]\)), define the last step \(p_{\theta}(x_{0}|x_{1})\) to be a discrete decoder depending on \(\mu_{\theta}(x_{1}(x_{0}))\) in such a way that, receiving as input an image \(x_{1}\) in the unbounded space, it returns as output an image \(x_{0}\) in the scaled space, which can then be converted to the pixel space.
This is a technicality necessary for the application of this kind of model but does not change the mathematical theory behind DPM which we presented in the previous sections. So, we refer the interested reader to the aforementioned section of [10], in which it is also pointed out that the solution proposed is neither unique nor optimal.
## 3 DPM on latent space - Stable Diffusion
The most important mathematical theory for DPM is the one presented in Section 2, which was first developed in [17] and which is behind the outstanding results of [10]. This kind of model is built in the pixel space. Indeed, for any \(t\in\{0,\ldots,T\}\) the values taken by the forward and backward process \(x_{t}\) are \(H\times W\times 3\) matrices, where \(H\) and \(W\) are respectively the height and the width of the images in pixels and \(3\) corresponds to the RGB values for each pixel (possibly rescaled, see Section 2.2.1). This feature translates into the fact that training, using and optimizing such models is computationally very expensive and therefore slow.
In [16] the authors proposed some important changes to the DPM that, using
Figure 3: Distributions obtained with the reverse process defined by (27), starting from 2000 samples from a normal distribution \(\mathcal{N}(0,1)\). The trajectories simulated to obtain the distributions in Figure 2 were used for the training of the model; indeed, it is clear that the limiting distribution of the reverse process in this picture is close to the initial distribution of the data there.
the same mathematical background that we explained in Section 2.1, reduce the dimension of the space in which the model is built, improving its performance. The main idea is to create a model which no longer works on the pixel space but on the space of some kind of compressed images. So, intuitively, they do not train a model that is able to denoise images, but a model that is able to denoise compressed representations of them. In this way, when the trained model generates a new compressed image, starting from the compressed representation of noise, it is possible to retrieve with some decoder the image corresponding to that representation. This leads to a huge advantage in terms of computational complexity. This kind of model is usually called stable diffusion (DS hereinafter).
In the next sections we present the principal ideas behind DS, without entering into the details of the neural architecture used to implement it. It is indeed well explained in the original paper [16] and it goes beyond the goal of this work, which aims mainly to give an intuitive idea of the theory behind generative diffusion models.
### From pixel to latent space
As we anticipated, in the model presented in [16] the authors propose to change the space in which the diffusion model is built, passing from the images described pixel by pixel, i.e. in the space \(\Gamma=C^{H\times W\times 3}\) where \(C\) depends on the number of colors considered (usually it is the space \(\{0,\ldots,255\}\)), to representations of these images in some other smaller space \(\tilde{\Gamma}=C^{h\times w\times 3}\), with \(h\) and \(w\) such that \(H/h=W/w=2^{m}\) for some \(m\in\mathbb{N}\). In their paper this is done using a perceptual compression model based on the work [6] that, given an image \(x\in\Gamma\), performs the transformation \(y=\mathcal{E}(x)\in\tilde{\Gamma}\), where \(\mathcal{E}\) represents the encoder. We denote by \(\mathcal{D}\) the decoder which is able to transform any element of the latent space \(\tilde{y}\in\tilde{\Gamma}\) back to an image in the pixel space, i.e. \(x^{\prime}=\mathcal{D}(\tilde{y})=\mathcal{D}(\mathcal{E}(\tilde{x}))\in\Gamma\) for some \(\tilde{x}\in\Gamma\).
The basic idea behind this kind of image compression is that it captures the "essence" of an image in a smaller representation, in such a way that all the important features of the original image are preserved in a smaller space. This not only gives an important advantage in terms of computational cost, but also allows the model to be trained only on the parts of the images that really "mean something", simply by minimizing the function (the equivalent of (25) for the DPM)
\[\sum_{t=2}^{T}\mathbb{E}_{y_{t}=\mathcal{E}(x_{t}),\tilde{z}_{t}}\Big{[}\big{|} \big{|}\tilde{z}_{t}-\tilde{z}_{\theta}(y_{t})\big{|}\big{|}^{2}\Big{]}, \tag{29}\]
where the \(x_{t}\) are distributed according to \(p(x_{t})\) for any \(t\in\{0,\ldots,T\}\), \(\tilde{z}_{t}\) is the noise in the latent space and \(\tilde{z}_{\theta}\) is its estimate. The whole model thus moves to the latent space, but this is not a problem: once the limit of the reverse process is recovered in this space, it can be transformed back into pixel space. Indeed, when the model is trained it is able to recover, from a latent variable \(y_{T}=\mathcal{E}(x_{T})\) for some \(x_{T}\sim\mathcal{N}(0,I)\), a limit distribution \(y_{0}\) which is the representation of some realistic image \(x_{0}=\mathcal{D}(y_{0})\) in the pixel space \(\Gamma\).
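To make the objective (29) concrete, the following is a minimal sketch of one stochastic estimate of it, with the noise added directly in the latent space as in the scheme of Figure 4. The encoder `E`, the noise-prediction network `eps_theta` and the 1-D tensor `alpha_bar` of cumulative products \(\overline{\alpha}_{t}\) are hypothetical stand-ins, not the actual architecture of [16].

```python
# A minimal sketch of a stochastic estimate of the latent objective (29);
# `E`, `eps_theta` and `alpha_bar` are hypothetical stand-ins.
import torch

def latent_diffusion_loss(x0, E, eps_theta, alpha_bar):
    y0 = E(x0)                                            # latent representation of the images
    t = torch.randint(1, len(alpha_bar), (y0.shape[0],))  # one random diffusion step per sample
    z = torch.randn_like(y0)                              # latent-space noise
    ab = alpha_bar[t].view(-1, *([1] * (y0.dim() - 1)))   # broadcast \bar\alpha_t over the latent shape
    y_t = ab.sqrt() * y0 + (1.0 - ab).sqrt() * z          # noised latent (forward process)
    return ((z - eps_theta(y_t, t)) ** 2).mean()          # squared error between true and predicted noise
```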
### Conditioning
Working in an abstract latent space not only makes the model lighter to train and use but, as pointed out in [16], also allows the image generation process to be conditioned on additional input given by the user via a prompt. It is indeed possible to train the model conditioned on (a large number of) user inputs, training it on a larger set of images, each associated with some representation of the corresponding user input.
A toy example makes this easier to understand: we can train the model on many images of cats and dogs associated, respectively, with the user inputs "cat" and "dog", with both images and inputs encoded in some latent variable space. Once trained on the (noised) representations of images and inputs, the model generates outputs depending on the prompt it receives. If we ask for a "cat", the whole reverse process is conditioned to converge to the latent representation of a realistic image of a cat. If the model is trained on many possible user inputs and many images for each of them, it translates any received input into a combination of those it was trained on, and then conditions the reverse process to converge to the representation of an image associated with this combination of inputs.
Hence, these inputs are additional variables on which the function \(\tilde{z}_{\theta}\) depends: before, this function depended only on time and the latent variable \(y_{t}\); now we add a dependence on some encoded input \(\iota\). The new function to minimize is
\[\sum_{t=2}^{T}\mathbb{E}_{y_{t}=\mathcal{E}(x_{t}),\tilde{z},\iota}\Big{[} \big{|}\big{|}\tilde{z}-\tilde{z}_{\theta}(y_{t},\iota)\big{|}\big{|}^{2}\Big{]}, \tag{30}\]
where \(\iota\) ranges over the space of all the encoded inputs on which the model is trained.
The equation above is not exactly what the neural architecture does; the real objective is more complex because it is necessary to include, in some way, any possible input \(\iota\). To achieve this, the authors of [16] define a projection \(\tau\) from the space of all possible \(\iota\) to a latent space which can be integrated into the model, being (in some sense) a combination of the inputs that the model received during training.
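Under the same assumptions as the previous sketch, the conditioned objective (30) only changes in that the network also receives the projected input \(\tau(\iota)\); `tau` below is a hypothetical stand-in for that projection.

```python
# Same sketch as above, with the prediction conditioned on the projected user
# input tau(iota); all callables are hypothetical stand-ins.
import torch

def conditioned_loss(x0, iota, E, tau, eps_theta, alpha_bar):
    y0, c = E(x0), tau(iota)                              # latent image and encoded condition
    t = torch.randint(1, len(alpha_bar), (y0.shape[0],))
    z = torch.randn_like(y0)
    ab = alpha_bar[t].view(-1, *([1] * (y0.dim() - 1)))
    y_t = ab.sqrt() * y0 + (1.0 - ab).sqrt() * z
    return ((z - eps_theta(y_t, t, c)) ** 2).mean()       # objective (30)
```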
**Remark 3.1**.: Clearly, such a model needs to be trained on a huge set of combinations of images and user inputs, larger than the one used for a DPM, which is usually focused on a specific type of images (for example celebrities, cats, churches...). Indeed, the DS can be seen as a combination of many DPMs, each associated with some kind of user input. Then, once the user asks for a specific kind of image, the DS model conditions itself to generate that kind of image, which could also be generated by a DPM trained on a set of images corresponding to that specific input.
## 4 Cold Diffusion: a deterministic generative model
In the recent paper [2] the authors found that the complex theory behind DPM and DS can be noticeably simplified by not treating the forward (noising) process as a random process, but by taking a completely deterministic one.
Indeed, they show empirically that similar models can be built using arbitrary degradation processes, not only the one used in the DPM that adds Gaussian noise at each step. Below we formalize this concept as it is done in [2].
Let us define the following degradation operator:
\[\begin{split} D:\Gamma\times\{0,\dots,T\}& \longrightarrow\Gamma\\ (x,t)&\longmapsto D(x,t),\end{split} \tag{31}\]
which associates to each image \(x\in\Gamma\) its degradation \(D(x,t)\) after \(t\) degradation steps. So, given a starting image \(x_{0}\), \(x_{t}=D(x_{0},t)\) is the analogue of (20) in
Figure 4: Scheme of the process described in Section 3.2. On the left, a pixel image \(x_{0}\) is encoded into a vector \(y_{0}\) in the latent space. This vector is then transformed, by adding noise at each step (diffusion process), into a noised vector \(y_{T}\). The reverse process starts by combining the conditioning input with the noised representation and then passes everything to the main neural architecture, which returns a denoised \(\tilde{y}_{0}\) (conditioned on \(\iota\)) that is finally decoded into a (new) pixel image \(\tilde{x}_{0}\).
the DPM, but with an arbitrary degradation transformation \(D\).
**Remark 4.1**.: Considering the (random) operator \(D\) defined on any \((x_{0},t)\in\Gamma\times\{0,\ldots,T\}\) by \(D(x_{0},t)=\sqrt{\overline{\alpha}_{t}}x_{0}+\sqrt{1-\overline{\alpha}_{t}}z_{t}\) we retrieve exactly the DPM.
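For concreteness, a minimal sketch of the operator of Remark 4.1 follows; `alpha_bar` is a precomputed array of the products \(\overline{\alpha}_{t}\) and `z` is a noise image drawn once and then kept fixed (both assumptions of this sketch).

```python
# A small sketch of the degradation operator of Remark 4.1: with a fixed noise
# image z, D reproduces the (deterministic variant of the) DPM forward process.
import numpy as np

def D_gaussian(x0, t, alpha_bar, z):
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * z
```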
Now it is necessary to define the equivalent of the reverse process of DPM and DS. In [2] this is done by defining an operator that restores the corrupted images, playing the role of the denoising process in the DPM. This operator is defined as
\[R:\Gamma\times\{0,\ldots,T\} \longrightarrow\Gamma \tag{32}\] \[(x,t) \longmapsto R(x,t),\]
such that, given a forward process \(x_{0},x_{1},\ldots,x_{t}\), \(R(x_{t},t)\) is an approximation of \(x_{0}\).
The operator \(R\) depends on some unknown parameters \(\theta\), so we denote it by \(R=R_{\theta}\). The goal of the neural model is to estimate this operator, so that it is able to restore, from some corrupted image \(x_{T}\), a realistic (and new) image \(\tilde{x}_{0}\), which, as we said, is an approximation of (but not identical to) the image \(x_{0}\) such that \(x_{T}=D(x_{0},t)\). This is the analogue of estimating the function \(z_{\theta}\) in order to denoise the image \(x_{T}\sim\mathcal{N}(0,I)\) in the DPM.
The loss function to be minimized in order to find the operator \(R_{\theta}\) proposed in [2] is simply
\[\mathbb{E}_{x\sim\mathcal{X},t}\big{[}||R_{\theta}(D(x,t),t)-x||\big{]} \tag{33}\]
where \(\mathcal{X}\) is the set of (not corrupted) images that we use in training and \(||\cdot||\) is the \(\ell_{1}\) norm, i.e. for any \(N\) and \(a=(a_{1},\ldots,a_{N})\in\mathbb{R}^{N}\), \(||a||=\sum_{i=1}^{N}|a_{i}|\).
**Remark 4.2**.: Notice an important difference between this kind of model and the DPM: here the output of the model is an operator which, in some sense, restores the image in one step, i.e. if \(x_{t}\) was corrupted \(t\) times via the operator \(D\), we can immediately recover \(x_{0}=R(x_{t},t)\); in the DPM, instead, this was done by denoising the noisy image one step at a time, going from \(x_{T}\) to \(x_{T-1}\), then to \(x_{T-2}\), and so on until reaching an \(x_{0}\) corresponding to a real image. In [2] the authors showed empirically that restoring the image in one step as \(x_{0}=R(x_{t},t)\) performs poorly compared to combining \(R\) in sequence with \(D\) to obtain a step-by-step restoration, as is done in Algorithm 1. The authors of [2] also presented a better restoration algorithm, but it goes beyond the goal of this paper, so we refer the interested reader to Sections 3.2 and 3.3 of that paper.
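A minimal sketch of this step-by-step restoration, in the spirit of Algorithm 1 of [2] and assuming generic callables `R` and `D`, is:

```python
# A sketch of the step-by-step restoration of Remark 4.2: the one-shot estimate
# R(x, s) is repeatedly re-degraded to the previous step before being restored again.
def restore_stepwise(x_t, t, R, D):
    x, x0_hat = x_t, x_t
    for s in range(t, 0, -1):
        x0_hat = R(x, s)          # one-shot estimate of the clean image
        x = D(x0_hat, s - 1)      # re-degrade to step s - 1 and continue
    return x0_hat
```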
In [2] the authors showed that this approach works, generating new images, for many different types of deterministic (and random) degradation processes such as:
* adding deterministic Gaussian noise, equivalent to the DPM but with fixed Gaussian noise, i.e. in (1) \(z_{t}=z\) is drawn just once and then used at every time step;
* blurring images;
* animorphosis, where human images are transformed into animal pictures;
*...
It is interesting not only that results similar to those of DPM and DS can be obtained with deterministic processes, but also that the limit distribution of the degraded images \(x_{T}\) plays a crucial role. Indeed, as is clear from [2], one thing that all three models presented here have in common is that, to generate a new image, it is necessary to start from a sample of the limit distribution obtained with the forward process. In the DPM and DS this limit distribution is simply \(\mathcal{N}(0,I)\), but only because the forward processes are Gaussian diffusions. For example, in [2] the limit distribution of the animorphosis transformation is completely different and is obtained empirically. An element is then sampled from this empirical distribution and restored with the operator \(R_{\theta}\) (which strongly depends on \(D\)) built as described above.
**Remark 4.3**.: The importance of the limit distribution is addressed precisely in [14], where the authors propose different kinds of (random) noising processes, each leading to a different limit distribution. The starting point of the inference algorithm is then always to sample an element from one of these limit distributions (obtained empirically or analytically) and "restore" it.
## 5 Assessing models' novelty and realism
In the context of DPM it is an interesting research question to define ways of assessing what levels of realism and novelty these models are able to achieve. We observe that this aspect is normally underestimated, and we try here to give a first hint of how the problem could be approached.
To make these considerations more rigorous, while avoiding overly philosophical questions, we should obviously start by defining what we mean by "existence of an image". For simplicity, we say that an image exists if there exists (or existed) an observation event (in the most general sense, including dreams, imaginations, hallucinations, etc.) performed by some observer for that image **and** the observer remembers the event to an extent that makes it possible for them to recognise the same image. Notice that even in this simple conceptualisation we can identify several critical elements which should be further explored and formalised, such as:
* The properties of the set of realistic images \(I\), the set of observers \(O\) and the set of observation events \(E\)
* The definition and properties of the image recognition/matching function \(M_{o}:I\times I\rightarrow[0,1]\) used by an observer \(o\in O\) to match two images2, where, given \(i,j\in I\), \(M_{o}(i,j)\) is a measure of similarity of \(i\) and \(j\) for the observer \(o\) (so that for \(i=j\), \(M_{o}(i,j)=1\)). Footnote 2: This function is particularly critical since it heavily depends on the specific observer, including many features such as the time at which the images are observed, the number of times they were observed, etc.
Therefore, we can say with a certain level of rigour that an image \(x\in I\) is "new" if the following condition holds:
\[\nu_{O}(x):=\sum_{o\in O}\sum_{i\in I_{o}}M_{o}(x,i)=0 \tag{34}\]
where \(I_{o}\subseteq I\) is the set of images for which observer \(o\) recalls an observation event. With (34) in mind, we define the set of new realistic images as \(J_{O}:=\{x\in I:\nu_{O}(x)=0\}\subset I\). Hence, given a randomly chosen image \(x\in I\), the probability that it is new, in the sense defined above, equals \(N_{I,O}=|J_{O}|/|I|\). This quantity accounts for the intrinsic novelty of \(I\) for the community of observers \(O\).
A model \(M\) is able to produce a certain set of images \(I_{M}=\hat{I}_{M}\cup\tilde{I}_{M}\), where \(\tilde{I}_{M}:=I\cap I_{M}\) and \(\hat{I}_{M}:=I^{C}\cap I_{M}\). Notice that, in general, \(I_{M}\nsubseteq I\): some images generated by \(M\) may not be "realistic". The problem of assessing the probability that a generative model \(M\) generates new images for the observers in \(O\) then boils down to estimating how large the set \(I_{N}\subseteq\tilde{I}_{M}\subseteq I\) of new images that the model can in principle generate is, and to evaluating \(N_{M,O}=|I_{N}|/|J_{O}|\). This quantity accounts for the absolute novelty that images generated by the model show to observers in \(O\). If we define \(C_{M}=|\tilde{I}_{M}|/|I|\) as the completeness of the model, then we have the following fundamental relation:
\[N_{M,O}:=C_{M}\frac{N_{M}}{N_{I,O}} \tag{35}\]
where \(N_{M}=|I_{N}|/|\tilde{I}_{M}|\) is the model's relative novelty rate.
It should now be clearer that the above definitions make sense only if we are really able to characterise the set \(I\). In practice, when using DPM implementations like [16], we cannot be sure whether the set \(I_{M}\) is actually a subset of \(I\) until we characterise \(I\). One way to do this is to provide an intensive (synthetic) characterisation \(c\), like "the images of all horses", and then consider \(I_{c}\subset I\), the set of realistic images corresponding to the characterisation \(c\), instead of the whole \(I\). To train \(M\), we then have to provide a sufficient number of horse pictures and challenge the model to generate new ones. However, many generated images would probably not resemble horses at all. Therefore, to measure the goodness of the model we should establish when an image \(x\in I_{M}\) is in \(I_{c}\). We say that this holds if at least one observer \(o\in O\) would classify it as "realistic" and belonging to the intensive concept \(c\) that generated \(I_{c}\). Then, defining the boolean classification function \(C_{c,o}:I_{M}\to\{0,1\}\) as
\[C_{c,o}(x):=\begin{cases}1&\text{if $o$ recognizes that $x$ is realistic and corresponding to $c$}\,;\\ 0&\text{otherwise}\end{cases},\]
an image \(x\in I_{c}\) only if
\[\rho(x):=\frac{1}{|O|}\sum_{o\in O}C_{c,o}(x)>0. \tag{36}\]
We can naturally define the overall model realism by:
\[R_{M}:=\frac{1}{|I_{M}|}\sum_{x\in I_{M}}\rho(x). \tag{37}\]
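As a toy illustration of how (36) and (37) could be evaluated in practice, the sketch below assumes a hypothetical boolean judgement `classify(o, x)` standing in for \(C_{c,o}(x)\); it only makes the averaging explicit and does not attempt to model real observers.

```python
# A toy sketch of the quantities (36)-(37); `classify(o, x)` is a hypothetical
# stand-in for the boolean judgement C_{c,o}(x) of observer o on image x.
def realism(images, observers, classify):
    def rho(x):                      # Eq. (36): fraction of observers accepting x
        return sum(classify(o, x) for o in observers) / len(observers)
    return sum(rho(x) for x in images) / len(images)   # Eq. (37): overall model realism
```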
Working with overly generic concepts like "all images", as systems like [16] do, introduces further complexity. An approximate approach would start by assuming that, for these generalist systems, \(I\) is the union of a very large number of subsets, each defined by some intensive concept derived by induction from a huge observation process of the available data, i.e. \(I=\bigcup_{c\in C}I_{c}\), where \(C\) is the set of these classes. Correspondingly, all quantities defined in (35), (36) and (37) should be appropriately extended to cover this case (e.g. by weighting the contributions of all classes).
### Relation to reinforcement learning
What we argued above can be related to reinforcement learning if we consider realism, novelty, or any other measurable property of generative models as rewards that a system may receive from the community of observers. Work has already started in this direction, e.g. [9], where the authors propose a general framework that automatically adapts the original user input to model-preferred prompts through reinforcement learning based on relevance and aesthetic-pleasantness feedback.
## 6 Conclusions
We wrote these notes trying to answer for ourselves the following questions:
* How is it possible that these models generate "real"/"realistic" images?
* What does it mean for a "real"/"realistic" image to be "new" and how to measure these concepts in practice?
Regarding the first question, we hope that this work is a good answer, helping the reader understand better why these models work and on which kind of theory they are based. In particular, we think that these notes present a complete analysis of the mathematical background of the DPM and contextualise the improvements brought by popular tools like Stable Diffusion. As such, they could help (as other similar works, see [13] for example) in understanding the way these systems work and the fundamental properties on which they rely: the unique limit distribution (not its form, only its existence and uniqueness) for the forward process and the functional form of the backward process. This is even clearer from the analysis of Cold Diffusion in Section 4. We therefore hope to have explained and made more understandable to the interested reader the state-of-the-art models for image generation.
Regarding the second question, we do not yet have a complete answer, and we think this can be seen as the main question that generative models are supposed to tackle. We made some considerations about this topic in Section 5, which we summarise below. We can in fact interpret the space of the \(x_{0}\) such that \(p_{\theta}(x_{0})>0\) as a subspace of the set of all realistic images \(I\), which is, by construction, far bigger than the set on which these models are trained. It is intuitively clear that this characterisation of the models' output includes more images than all those observed in a lifetime by any existing observer. In this respect, images generated by DPM can be considered really "generated" by them and really "new". With this paper we believe we have given a small hint towards characterising the space of "real"/"new" images, something which is still not understood. Our starting point is that we, as observers, are not able to say what defines an image as "real", even though we are immediately able to decide whether it is such by comparing the image against the criteria formed in our previous experience as observers. This is also the basis on which we are able to assess "novelty", i.e., by checking whether the presented images resemble anything already experienced in the past. Perhaps understanding and developing these models further will help in the future to understand this complex domain in more detail and to fully characterise it. We finally like to leave the reader with what may seem a paradox: according to the above interpretation, once an observer has seen an image generated by a DPM, this image instantly loses the status of "new" (recalling the criteria defined in (34)). Thus, if we make a large enough number of observers watch the results of a large enough run of image generations, could we eventually run out of "new" images, downscaling what now seems an almost magical ability of these systems to generate novelty to a bare capability of representing a distribution of "existing" data? |
2301.08186 | Charged particle beam transport in a flying focus pulse with orbital
angular momentum | We demonstrate the capability of Flying Focus (FF) laser pulses with $\ell =
1$ orbital angular momentum (OAM) to transversely confine ultra-relativistic
charged particle bunches over macroscopic distances while maintaining a tight
bunch radius. A FF pulse with $\ell = 1$ OAM creates a radial ponderomotive
barrier that constrains the transverse motion of particles and travels with the
bunch over extended distances. As compared to freely propagating bunches, which
quickly diverge due to their initial momentum spread, the particles
co-traveling with the ponderomotive barrier slowly oscillate around the laser
pulse axis within the spot size of the pulse. This can be achieved at FF pulse
energies that are orders of magnitude lower than required by Gaussian or Bessel
pulses with OAM. The ponderomotive trapping is further enhanced by radiative
cooling of the bunch resulting from rapid oscillations of the charged particles
in the laser field. This cooling decreases the mean square radius and emittance
of the bunch during propagation. | Martin Formanek, John P. Palastro, Marija Vranic, Dillon Ramsey, Antonino Di Piazza | 2023-01-19T17:30:53Z | http://arxiv.org/abs/2301.08186v2 | # Charged particle beam transport in a flying focus pulse with orbital angular momentum
###### Abstract
We demonstrate the capability of Flying Focus (FF) laser pulses with \(\ell=1\) orbital angular momentum (OAM) to transversely confine ultra-relativistic charged particle bunches over macroscopic distances while maintaining a tight bunch radius. A FF pulse with \(\ell=1\) OAM creates a radial ponderomotive barrier that constrains the transverse motion of particles and travels with the bunch over extended distances. As compared to freely propagating bunches, which quickly diverge due to their initial momentum spread, the particles co-traveling with the ponderomotive barrier slowly oscillate around the laser pulse axis within the spot size of the pulse. This can be achieved at FF pulse energies that are orders of magnitude lower than required by Gaussian or Bessel pulses with OAM. The ponderomotive trapping is further enhanced by radiative cooling of the bunch resulting from rapid oscillations of the charged particles in the laser field. This cooling decreases the mean square radius and emittance of the bunch during propagation.
## I Introduction
Charged particle beams are ubiquitous in physics experiments and applications. Their transport over macroscopic distances is necessary not only for devices like accelerators and electron microscopes, but also for compact radiation sources. The magnetic optics currently used for transport [1; 2] become progressively more expensive as the particle energy increases due to the need for higher magnetic field gradients, which must be generated, for example, by superconducting currents. Further, the achievable focal lengths of these optics may be too large for modern electron beam applications, such as inverse Compton scattering sources [3]. More specifically, the characteristic focusing lengths of magnetic optics range from tens of centimeters to meters and are limited by the physical size of the magnets. While permanent magnets designed for \(\sim\)100 MeV electron energies can be relatively compact (millimeter scale) allowing for \(\sim\)10 cm focal lengths [3], at higher electron energies, focusing at such short distances becomes technologically challenging.
All-optical setups for transporting charged particle beams have been proposed as an alternative that would circumvent the need for magnets [4]. In these schemes, the transverse intensity profile of a laser pulse that counterpropagates with respect to the beam is shaped to provide a confining ponderomotive potential. Such schemes can be especially advantageous when employed at high-intensity laser facilities, where laser pulses can be used both for creating ultra-relativistic particle beams and for their transport. Nevertheless, these schemes require large laser pulse energies, whether they employ conventional or axicon-focused Bessel pulses [5; 6; 7].
In this work, we introduce an all-optical setup for charged particle beam guiding that uses a flying focus (FF) to greatly reduce the required laser pulse energy. The FF refers to a class of optical techniques that provide spatiotemporal control over the trajectory of a focal point [8; 9]. The intensity peak formed by the moving focal point can travel at any arbitrary velocity independent of the laser group velocity over distances much longer than a Rayleigh range. The first experimental demonstration of a FF used chromatic focusing of a chirped laser pulse [9]. Alternate techniques employing space-time light sheets
Figure 1: Schematic of electron confinement in the ponderomotive potential of an \(\ell=1\) OAM flying focus (FF) pulse. The off-axis intensity peak of the FF pulse (yellow toroids) travels at the vacuum speed of light (\(v_{F}=-1\)) in the opposite direction of its phase fronts (\(v_{\phi}=1\)). Ultrarelativistic electrons (blue lines) travel in the same direction as the intensity peak. Electrons with low transverse momentum inside the intensity peak are confined and slowly oscillate in the radial direction, whereas electrons outside the peak are deflected. [The solid (dashed) blue lines represent the past (future) trajectory of the electrons with respect to the snapshot shown in the figure].
[10; 11], axiparabola-echelon mirrors [12], and nonlinear media [13; 14] have also been proposed. The spatiotemporal control enabled by FF pulses have presented a unique opportunity to revisit established schemes and investigate regimes in which FF pulses provide an advantage over traditional fixed-focus Gaussian pulses [15; 16; 17; 18; 19; 20; 21].
Here we show that FF pulses with \(\ell=1\) orbital angular momentum (OAM) and a focus that moves in the opposite direction of the phase fronts at the vacuum speed of light can transport charged particles over macroscopic distances. The ring-shaped transverse intensity profile of the \(\ell=1\) mode provides a ponderomotive potential barrier that confines charged particles in the transverse direction. Figure 1 illustrates the concept and problem geometry (the units \(\hbar=\varepsilon_{0}=c=1\) are used throughout). The ponderomotive confinement allows for transport of an electron bunch with a tight radius, smaller than the focal spot size of the pulse, with feasible laser pulse energies. As an example, a 10 pC, 500 MeV electron beam can be transported over 6 mm using a 200 J, 5 TW FF pulse, compared to the 2 MJ that would be needed in a conventional pulse. Axicon-focused Bessel pulses require even larger energies [7; 22]. The energy requirements to confine the electron beam are much less for a FF pulse because the peak intensity moves with the electron beam, which decouples the interaction length from the Rayleigh range. Confinement of the charged particles is aided by radiative cooling (radiation reaction [23; 24; 25; 26]), which decreases the emittance and mean-squared radius of the beam at the cost of its average energy.
The remainder of the article is organized as follows: Section II presents the four-potential of the \(\ell=1\) OAM FF beam and its properties. Section III describes the analytical model for charged particle motion in the \(\ell=1\) FF pulse. In Section III.1, oscillations in the ponderomotive potential are discussed and constraints on the transverse phase space of the trapped particles are derived. Section III.2 describes the evolution of the particle bunch radius. Longitudinal motion of the particles, including radiation energy loss, is addressed in Section III.3. The importance of space charge repulsion on particle bunches is discussed in Section III.4. Section IV compares the energy required in \(\ell=1\) FF pulses and conventional Laguerre-Gaussian pulses. Section V describes the simulation results. Electron confinement, radiation energy loss, oscillations in the ponderomotive potential, and transverse emittance behavior are covered in Section V.2, and the longitudinal delay of the electron bunch behind the FF intensity peak is covered in Section V.3. Technical details and explicit calculations are contained in six appendices.
## II FF beams and pulses with \(\ell=1\) OAM
A FF field with \(\ell=1\) OAM forms a transverse ponderomotive potential barrier that is capable of confining charged particles close to the propagation axis. Here, a FF pulse with an intensity peak that moves at the vacuum speed of light against the laser phase velocity in the negative \(z\)-direction (see Fig. 1) is considered. This special case admits simple, but exact, closed-form analytical expressions for the four-potential and the fields. This section presents the exact expressions of the four-potential, its cycle-averaged magnitude, and the extension to pulses with finite energy. A scheme for generating FF pulses with \(\ell=1\) OAM is described in Ref. [14].
An exact beam solution to the vacuum wave equation can be written in terms of the lightcone coordinates \(\eta=t+z\) and \(\phi=t-z\). The lightcone coordinate \(\eta\) describes the displacement from the moving focus (\(\eta=0\)) of the FF pulse, while \(\phi\) tracks the fast phase oscillations. The transverse part of the vector potential for the \(\ell=1\) Laguerre-Gaussian (LG10) mode reads [21]
\[\mathbf{A}_{\perp}(\eta,r,\theta,\phi)=\mathbf{\mathcal{A}}_{0}\frac{\sqrt{2}\sigma_{ 0}r}{\sigma_{\eta}^{2}}e^{-r^{2}/\sigma_{\eta}^{2}}\cos\Psi_{1}(0,0)\,, \tag{1}\]
where
\[\begin{split}\Psi_{1}(a,b)=\omega_{0}\phi-&\frac{r^ {2}}{\sigma_{\eta}^{2}}\frac{\eta}{\eta_{0}}+(1-a)\theta\\ &+(2+b)\arctan\left(\frac{\eta}{\eta_{0}}\right)\end{split} \tag{2}\]
is the phase, \(r=\sqrt{x^{2}+y^{2}}\) is the radial distance from the \(z\)-axis, \(\theta=\arctan(y/x)\) is the azimuthal angle in cylindrical coordinates, \(\omega_{0}=2\pi/\lambda_{0}\) is the laser angular frequency, and \(\lambda_{0}\) its wavelength. The spot size \(\sigma_{\eta}\) and Rayleigh range \(\eta_{0}\) equivalents for the FF beam are
\[\sigma_{\eta}=\sigma_{0}\sqrt{1+\frac{\eta^{2}}{\eta_{0}^{2}}},\quad\eta_{0}= \omega_{0}\sigma_{0}^{2}\,. \tag{3}\]
The effective duration of the moving intensity peak is equal to the Rayleigh range \(\eta_{0}\).
Upon imposing the Lorenz gauge condition \(\partial_{\mu}A^{\mu}=0\) and the constraint \(A_{+}=A^{0}+A^{z}=0\), one can evaluate \(A_{-}=A^{0}-A^{z}\) as
\[A_{-}(\eta,r,\theta,\phi)=-\int d\phi\mathbf{\nabla}_{\perp}\cdot\mathbf{A}_{\perp}( \eta,r,\theta,\phi)\,. \tag{4}\]
For a laser beam polarized along the \(y\)-axis \(\mathbf{\mathcal{A}}_{0}=\mathcal{A}_{0}\hat{\mathbf{y}}\)
\[\begin{split} A_{-}(\eta,r,&\theta,\phi)=\mathcal{ A}_{0}\frac{\sqrt{2}\sigma_{0}r}{\omega_{0}\sigma_{\eta}^{2}}e^{-r^{2}/ \sigma_{\eta}^{2}}\\ &\times\left[\frac{2y}{\sigma_{0}\sigma_{\eta}}\sin\Psi_{1}(0,1) -\frac{1}{r}\cos\Psi_{1}(1,0)\right]\,,\end{split} \tag{5}\]
where the initial condition at \(t=0\) was chosen so that the potential vanishes as \(|z|\to\infty\). The remaining Cartesian components can be evaluated as
\[A^{0}=-A^{z}=\frac{1}{2}A_{-}\,. \tag{6}\]
In this gauge, the Lorentz-invariant square of the four-potential is given by
\[A_{\mu}A^{\mu}=(A^{0})^{2}-(A^{y})^{2}-(A^{z})^{2}=-A_{\perp}^{2}\,, \tag{7}\]
where the Minkowski metric tensor is chosen as \(\eta_{\mu\nu}=\mathrm{diag}(+1,-1,-1,-1)\). As a result, the square of the transverse component \(|A_{\perp}|^{2}=-A_{\mu}A^{\mu}\) is also Lorentz invariant. Components of the electric and magnetic fields corresponding to this four-potential are presented in Appendix A.
The cycle-average of the invariant \(|A_{\mu}A^{\mu}|\) at focus (\(\eta=0\)) is proportional to intensity and equal to
\[\overline{|A_{\mu}A^{\mu}|}\Big{|}_{\eta=0}=\overline{A_{\perp}^{2}}|_{\eta=0 }=\frac{A_{0}^{2}r^{2}}{\sigma_{0}^{2}}e^{-2r^{2}/\sigma_{0}^{2}}\,, \tag{8}\]
where the overbar denotes a cycle-average. The cycle averaging procedure is defined in Appendix B. Figure 2 displays Eq. (8) and demonstrates that the peak intensity forms a ring surrounding the beam axis.
From the analytical beam solution, laser pulses with finite total energy can be approximately constructed by applying the pulse envelope function \(g(\phi)\) as a multiplicative factor on the electromagnetic fields. For the approximation to be accurate, the up and down ramps of \(g(\phi)\) should be much longer than \(\lambda_{0}\). For details of the implementation, see Appendix F and Refs. [19; 21].
The average power \(P_{\mathrm{ave}}\) of the FF pulse is given by (see Appendix B)
\[P_{\mathrm{ave}}\approx\frac{\pi}{4}\mathcal{A}_{0}^{2}\omega_{0}^{2}\sigma_{0 }^{2}\,. \tag{9}\]
To ensure that the FF pulse interacts with the particles for a time \(t_{\mathrm{int}}\), the total pulse energy must be
\[E_{\mathrm{tot}}\approx 2P_{\mathrm{ave}}t_{\mathrm{int}}\,. \tag{10}\]
Comparison with the energy required in a conventional LG10 pulse is presented in Section IV.
## III Charged particle motion in an \(\ell=1\) Oam FF pulse
The evolution of a charged particle (mass \(m\) and charge \(q\), respectively) interacting with an external electromagnetic field, including radiation reaction in the classical regime, is described by the Landau-Lifshitz equation of motion [27]:
\[\begin{split}\dot{u}^{\mu}&=\frac{q}{m}F^{\mu\nu}u_ {\nu}\\ &+\frac{2q}{3m}r_{q}\left[\frac{d}{d\tau}(F^{\mu\nu})u_{\nu}+ \frac{q}{m}P_{\nu}^{\mu}F^{\nu\alpha}F_{\alpha\beta}u^{\beta}\right],\end{split} \tag{11}\]
where \(r_{q}=q^{2}/(4\pi m)\) is the classical particle radius, \(\dot{u}^{\mu}\) denotes the proper-time derivative of the four-velocity \(u^{\mu}=(\gamma,\mathbf{u})=(\gamma,\gamma\beta)\), \(P_{\nu}^{\mu}=\delta_{\nu}^{\mu}-u^{\mu}u_{\nu}\) is the projection tensor, and \(F^{\mu\nu}=\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu}\) is the electromagnetic field tensor in terms of the vector four-potential.
For the analytic considerations in this section, radiation reaction is assumed to be negligible, allowing the term proportional to \(r_{q}\) to be omitted. This term has an effect of lowering the particle energy during the propagation and will be revisited in the context of the longitudinal motion (Section III.3) and simulation results (Section V).
In the absence of radiation reaction, the motion of a charged particle in the FF pulse can be described by the ponderomotive guiding center equation of motion [28]
\[\frac{d\overline{\mathbf{u}}}{dt}=-\frac{q^{2}}{2m^{2}\overline{\gamma}}\mathbf{ \nabla}\overline{A_{\perp}^{2}}\,, \tag{12}\]
where \(\overline{\gamma}=(1+\overline{u}_{z}^{2}+\overline{u}_{\perp}^{2}+q^{2} \overline{A_{\perp}^{2}}/m^{2})^{1/2}\) is the cycle-averaged gamma factor and \(t\) is laboratory time. Equation (12) has been derived in the Coulomb gauge. While there is no consensus on a covariant and gauge-independent formulation of the ponderomotive force [29; 30], Eq. (12) accurately predicts the motion independent of gauge as long as the cycle averaging procedure remains valid. In Appendix C, the transverse component of the ponderomotive force is derived in the Lorenz gauge for ultrarelativistic particles moving with the intensity peak (\(\eta\approx 0\)) and against the phase fronts of a FF pulse. The particles have an initial relativistic factor \(\gamma_{0}\gg 1\) and a small transverse velocity, such that \(\overline{\gamma}\approx\gamma_{0}\). Applying these conditions to Eq. (12) yields the same result as the derivation in Appendix C:
\[\frac{d\overline{\mathbf{u}}_{\perp}}{dt}\approx-\frac{q^{2}}{2m^{2}\gamma_{0}} \nabla_{\perp}\overline{A_{\perp}^{2}}|_{\eta=0}\,. \tag{13}\]
Near the focus (\(\eta\approx 0\)), \(\overline{A_{\perp}^{2}}\) is only a function of the radial coordinate [see Eq. (8)]. As a result, the motion in the transverse plane is approximately described by
\[r^{\prime\prime}-r(\theta^{\prime})^{2} =-\frac{q^{2}}{2m^{2}\gamma_{0}^{2}}\frac{d}{dr}\overline{A_{\perp }^{2}}|_{\eta=0}\,, \tag{14}\] \[\left(\gamma_{0}r^{2}\theta^{\prime}\right)^{\prime} =0\,, \tag{15}\]
Figure 2: Cycle-averaged invariant \(|A_{\mu}A^{\mu}|\) at the position of the focus (\(\eta=0\)).
where prime denotes a derivative with respect to time in the laboratory frame. The first equation describes the radial motion, and the second equation implies the conservation of relativistic angular momentum
\[L_{z}=m\gamma_{0}r^{2}\theta^{\prime}=\text{const}. \tag{16}\]
Using this constant of motion in Eq. (14) provides
\[r^{\prime\prime}-\frac{L_{z}^{2}}{m^{2}\gamma_{0}^{2}r^{3}}=-\frac{q^{2}}{2m^{ 2}\gamma_{0}^{2}}\frac{d}{dr}\overline{A_{\perp}^{2}}|_{\eta=0}\,. \tag{17}\]
Equation (17) can be re-expressed in the form of Newton's law for a particle with a mass \(m\gamma_{0}\) moving in a potential that depends only on the radial coordinate:
\[m\gamma_{0}r^{\prime\prime}=-\frac{d}{dr}V_{\text{eff}}(r)\,, \tag{18}\]
where the effective potential
\[\begin{split} V_{\text{eff}}(r)&=V_{P}(r)+V_{C}(r) \\ &=\frac{q^{2}}{2m\gamma_{0}}\overline{A_{\perp}^{2}}|_{\eta=0}+ \frac{L_{z}^{2}}{2m\gamma_{0}r^{2}}\end{split} \tag{19}\]
includes both the ponderomotive and centrifugal contributions.
### Transverse motion
To illustrate the approximate harmonic motion of charged particles in the FF pulse, Eq. (17) can be rewritten in terms of the Cartesian coordinates as
\[\frac{d^{2}\mathbf{x}_{\perp}}{dt^{2}}=-\Omega^{2}\left(1-2\frac{r^{2}}{\sigma_{0 }^{2}}\right)e^{-2r^{2}/\sigma_{0}^{2}}\mathbf{x}_{\perp}, \tag{20}\]
where \(\xi_{0}=|q|\mathcal{A}_{0}/m\) is the normalized field amplitude and
\[\Omega=\frac{2\pi}{T}=\frac{\xi_{0}}{\gamma_{0}\sigma_{0}} \tag{21}\]
is the angular frequency of oscillations in the harmonic approximation, i.e., when \(r\ll\sigma_{0}\). The actual oscillation frequency of confined particles is smaller due to the anharmonicity of the potential.
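As an illustration, Eq. (20) can be integrated directly; the following minimal sketch uses a leapfrog step and illustrative parameter values (lengths and times in units of the laser wavelength, \(c=1\)), not the simulation settings of Appendix F.

```python
# A minimal numerical sketch of the transverse equation of motion (20);
# parameter values are illustrative only.
import numpy as np

xi0, gamma0, sigma0 = 5.0, 1000.0, 3.0
Omega = xi0 / (gamma0 * sigma0)                    # harmonic frequency, Eq. (21)

def accel(x):                                      # right-hand side of Eq. (20)
    r2 = x @ x
    return -Omega**2 * (1.0 - 2.0 * r2 / sigma0**2) * np.exp(-2.0 * r2 / sigma0**2) * x

x = np.array([0.5 * sigma0, 0.0])                  # initial transverse position
v = np.array([0.0, 0.0])                           # initial transverse velocity
dt, traj = 0.01 / Omega, []
for _ in range(2000):                              # roughly three harmonic periods
    v += 0.5 * dt * accel(x)
    x = x + dt * v
    v += 0.5 * dt * accel(x)
    traj.append(x.copy())
```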
Multiplying Eq. (18) by the radial velocity \(r^{\prime}=\beta_{r}\) and integrating over time provides a conservation relation for the energy \(\mathcal{E}_{\perp}\) associated with the transverse motion:
\[\frac{1}{2}m\gamma_{0}\beta_{r}^{2}+V_{\text{eff}}(r)=\frac{1}{2}m\gamma_{0} \beta_{\perp}^{2}+V_{P}(r)=\mathcal{E}_{\perp}\,, \tag{22}\]
where
\[\beta_{\perp}=\sqrt{(r^{\prime})^{2}+r^{2}(\theta^{\prime})^{2}}=\sqrt{\beta_ {x}^{2}+\beta_{y}^{2}} \tag{23}\]
is the magnitude of the transverse velocity. In terms of the Cartesian velocities and positions, \(r^{\prime}=(x\beta_{x}+y\beta_{y})/r\).
Equation (22) can be used to determine the initial conditions of particles that will be bound in the FF potential. To begin, note that \(V_{\text{eff}}(r)\to\infty\) as \(r\to 0\) and \(V_{\text{eff}}(r)\to 0\) as \(r\to\infty\). Because there cannot be bound trajectories in regions of space where the potential is monotonically decreasing, \(dV_{\text{eff}}(r)/dr\geq 0\) provides a necessary condition for the existence of bound trajectories. Upon applying this inequality to Eq. (19), one obtains
\[\rho^{2}\leq\frac{r^{4}}{\sigma_{0}^{4}}\left(1-2\frac{r^{2}}{\sigma_{0}^{2}} \right)e^{-2r^{2}/\sigma_{0}^{2}}\lessapprox 0.02, \tag{24}\]
where \(\rho=L_{z}/(m\sigma_{0}\xi_{0})\) and \(0.02\) is a numerical upper bound on the RHS. Physically, a particle with too large of an angular momentum will not be bound in the ponderomotive potential.
Equality in Eq. (24) determines the local extrema of \(V_{\text{eff}}(r)\). For \(\rho\ll 1\), the position \(r_{\text{max}}\) of the local maximum of \(V_{\text{eff}}(r)\) can be approximated to leading order as
\[r_{\text{max}}=\frac{\sigma_{0}}{\sqrt{2}}\,. \tag{25}\]
The position \(r_{\text{min}}\) of the local minimum of \(V_{\text{eff}}(r)\) is obtained by assuming \(r_{\text{min}}\ll\sigma_{0}\). To leading order
\[r_{\text{min}}=\sigma_{0}\sqrt{\rho}\,. \tag{26}\]
Consistent with the expression for the effective potential, a non-zero initial angular momentum prevents the particle from penetrating the potential all the way to \(r=0\).
Using these values of the extrema, bound trajectories in the FF beam are determined by the constraints \(V_{\text{eff}}(r_{\text{min}})\leq\mathcal{E}_{\perp}\leq V_{\text{eff}}(r_{ \text{max}})\), i.e.,
\[4e\rho\leq\left(\frac{\beta_{\perp}}{\beta_{\perp,\text{max}}}\right)^{2}+ \left(\sqrt{e}\frac{r}{r_{\text{max}}}e^{-r^{2}/\sigma_{0}^{2}}\right)^{2}\leq 1\,, \tag{27}\]
where \(\beta_{\perp,\text{max}}=\xi_{0}/(\sqrt{2e}\gamma_{0})\) and \(e=2.7183\ldots\) is the Euler number. Note that for this derivation to be consistent \(\sqrt{\rho}\ll 1\) and \(4e\rho<1\).
### Evolution of the RMS electron bunch radius in the harmonic approximation
The previous section described the radial dynamics of individual particles. In this section, the dynamics of a particle bunch is described in terms of the root mean squared (RMS) radius of the bunch \(R=\sqrt{\langle r^{2}\rangle}\). An evolution equation for \(R\) can be derived by taking its second derivative with respect to the laboratory time
\[R^{\prime\prime}=-\frac{\langle rr^{\prime}\rangle^{2}}{R^{3}}+\frac{\langle r ^{\prime 2}\rangle}{R}+\frac{\langle rr^{\prime\prime}\rangle}{R}\,. \tag{28}\]
Substituting the harmonic approximation of the force from Eq. (20) in polar coordinates
\[r^{\prime\prime}=-\Omega^{2}r+r\theta^{\prime 2} \tag{29}\]
into the expression for \(R^{\prime\prime}\) provides
\[R^{\prime\prime}+\left(\Omega^{2}-\frac{\varepsilon_{\perp}^{2}}{\gamma^{2}R^{4}} \right)R=0\,, \tag{30}\]
where
\[\varepsilon_{\perp}=\gamma\sqrt{\langle r^{2}\rangle\langle r^{\prime 2}+r^{2} \theta^{\prime 2}\rangle-\langle rr^{\prime}\rangle^{2}} \tag{31}\]
is approximately the normalized transverse emittance of the bunch (see Appendix D). In the absence of energy spread and radiation reaction, \(\varepsilon_{\perp}\) is a constant of motion: in a conservative potential, the phase-space distribution maintains a constant area despite deformations of its boundaries [31].
Equation (30) has an exact analytical solution for the initial condition \(R(0)=R_{0}\) and \(R^{\prime}(0)=0\):
\[R(t)=R_{0}\sqrt{1+\left(\frac{\varepsilon_{\perp}^{2}}{\gamma^{2}R_{0}^{4} \Omega^{2}}-1\right)\sin^{2}(\Omega t)}\,. \tag{32}\]
In the absence of ponderomotive confinement, i.e., \(\Omega\to 0\), Eq. (32) demonstrates that the RMS radius increases without bound as \(R(t)=R_{0}\sqrt{1+(t/T_{s})^{2}}\), where \(T_{s}=\gamma R_{0}^{2}/\varepsilon_{\perp}\). With ponderomotive confinement, the RMS radius either oscillates with the angular frequency \(\Omega\) or remains constant. Setting the term in the round brackets to zero provides the condition for constant RMS radius
\[\frac{\sigma_{0}\varepsilon_{\perp}}{\xi_{0}R_{0}^{2}}=1\,, \tag{33}\]
where Eq. (21) has been used. Note that the dependence on the energy of the particles is still contained in the definition of the emittance. Although this formula applies only in the harmonic approximation, it provides a starting point for initializing the particle bunches in the simulations described in Section V.
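For reference, the solution (32) and the matching condition (33) can be evaluated with the short sketch below; any numbers passed to it are placeholders rather than the bunch parameters of Appendix F.

```python
# A small sketch of the RMS-radius solution (32) and the matched-beam condition (33).
import numpy as np

def rms_radius(t, R0, Omega, eps_perp, gamma):
    """Equation (32) for initial conditions R(0) = R0, R'(0) = 0."""
    a = (eps_perp / (gamma * R0**2 * Omega))**2 - 1.0
    return R0 * np.sqrt(1.0 + a * np.sin(Omega * t)**2)

# Matched bunch, Eq. (33): sigma0 * eps_perp / (xi0 * R0**2) = 1 gives a = 0 and R(t) = R0.
```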
### Longitudinal motion
Because the intensity peak of the FF pulse travels at the vacuum speed of light, the particles will gradually fall behind the peak intensity and experience weaker transverse confinement. There are several effects that can contribute to the rate at which the particles fall behind the peak. First, the initial velocity of each particle is less than the vacuum speed of light. This causes the light-cone variable \(\eta=t+z\) to grow linearly in time, but this growth is negligible for ultra-relativistic particles.
Second, the FF pulse can accelerate (or decelerate) the particles in the longitudinal direction. Fig. 3 displays a slice of the \(|A_{\mu}A^{\mu}|\) invariant in the polarization plane (\(yz\)-plane). In the focal region, this invariant varies weakly in the longitudinal direction, so that the ponderomotive force can be neglected. However, if the relativistic factor \(\gamma\) becomes comparable to the field strength \(\xi_{0}\), the increase in the effective mass of the particles due to transverse and longitudinal oscillations in the fields of the FF pulse can significantly reduce the time-averaged longitudinal velocity. This deceleration also causes the lightcone variable to grow linearly in time and can be neglected as long as \(\xi_{0}\ll\gamma_{0}\).
Finally, as was discussed in Ref. [21], a charged particle co-moving with the FF intensity peak continuously loses energy due to radiation reaction. The resulting deceleration becomes dominant in regions of high field intensity. Because the ultrarelativistic particles primarily move in the opposite direction of the phase fronts, the approximations \(u_{-}=\gamma-u_{z}\approx 2\gamma\) and \(\phi\approx 2t\) can be employed. The electron energy loss due to Landau-Lifshitz radiation reaction in plane wave fields is then given by [32]
\[\gamma(t)\approx\frac{\gamma_{0}}{1+\kappa(t)}\,, \tag{34}\]
where
\[\kappa(t)=\frac{4}{3}\gamma_{0}r_{q}\omega_{0}^{2}\int_{0}^{t}\xi^{2}(t^{\prime })dt^{\prime} \tag{35}\]
is the deceleration factor after a time \(t\). The integral is taken over the normalized field amplitude \(\xi(t^{\prime})\) along the particle trajectory. The deceleration factor increases with the initial gamma factor and with the field strength along the particle trajectory. For the \(\ell=1\) OAM pulses of interest here, the field intensity is lowest on axis and rises with radial distance (up to \(r_{\rm max}\) for confined particles). As a result, the particles predominantly radiate in the regions around the turning points.
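The energy loss (34)-(35) is straightforward to evaluate for a prescribed field history along the trajectory; the sketch below assumes the history is sampled on a time grid and uses trapezoidal quadrature (an assumption of this sketch, with units such that \(c=1\)).

```python
# A sketch of the radiative energy loss (34)-(35) for a sampled field history xi(t).
import numpy as np

def gamma_decay(t, xi, gamma0, r_q, omega0):
    """gamma(t) on the grid t, given the normalized field amplitude xi on the same grid."""
    xi2 = xi**2
    increments = 0.5 * (xi2[1:] + xi2[:-1]) * np.diff(t)       # trapezoid rule for Eq. (35)
    kappa = (4.0 / 3.0) * gamma0 * r_q * omega0**2 * np.concatenate(([0.0], np.cumsum(increments)))
    return gamma0 / (1.0 + kappa)                              # Eq. (34)
```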
For ultrarelativistic particles with small transverse velocity, the delay behind the intensity peak can be approximately evaluated as
\[\begin{split}\eta_{d}(t)=\int_{0}^{t}&[1-\beta_{z} (\widetilde{t})]d\widetilde{t}\\ &\approx\frac{1}{2}\int_{0}^{t}\left(\frac{1}{\gamma^{2}( \widetilde{t})}+\beta_{\perp}^{2}(\widetilde{t})\right)d\widetilde{t}\,.\end{split} \tag{36}\]
Figure 3: Cycle-averaged invariant \(|A_{\mu}A^{\mu}|\) for the \(\ell=1\) OAM FF field in the plane of the field polarization. Note the different scales for the axes.
Substituting the expression for \(\gamma(t)\) [Eq. (34)] and approximating \(\beta_{\perp}^{2}(t)\approx\beta_{\perp}^{2}(0)\), which neglects any effect of the fields on the transverse motion, one obtains
\[\begin{split}\eta_{d}(t)\approx\frac{1}{2}&\left( \frac{1}{\gamma_{0}^{2}}+\beta_{\perp}^{2}(0)\right)t\\ &+\frac{2}{3}\frac{r_{q}\omega_{0}^{2}\xi_{\rm eff}^{2}}{\gamma_ {0}}t^{2}+\frac{8}{27}r_{q}^{2}\omega_{0}^{4}\xi_{\rm eff}^{4}t^{3}\,,\end{split} \tag{37}\]
where \(\xi_{\rm eff}<\xi_{0}\) is the effective field strength along the particle trajectory up to time \(t\). The first term is the contribution from the particle moving at its initial subluminal velocity with non-zero transverse component, while the terms on the second line are the contributions from radiation energy loss. In order to keep the particle close to the focus, \(\eta_{d}(t)/\eta_{0}\ll 1\) needs to be satisfied during the whole interaction.
### Space-charge effects
To assess the impact of space-charge forces on the particle motion, consider a particle bunch with the charge density [33]
\[\rho(r,z)=qN\frac{1}{2\pi\sigma_{r}^{2}}e^{-\frac{r^{2}}{2\sigma_{r}^{2}}} \lambda_{L}(z)\,, \tag{38}\]
where \(N\) is the number of particles and \(\sigma_{r}\) the bunch width. The longitudinal distribution
\[\lambda_{L}(z)=\frac{1}{2L}\left[{\rm erf}\left(\frac{L-2z}{2\sqrt{2}\sigma_{ r}}\right)+{\rm erf}\left(\frac{L+2z}{2\sqrt{2}\sigma_{r}}\right)\right] \tag{39}\]
is parameterized by the length scale \(L\) and was chosen because it permits an analytical solution for the field [33]. Comparing the strength of the ponderomotive force to the repulsive fields of the particle bunch provides a condition for when space-charge effects can be neglected (see Appendix E):
\[\begin{split} N\ll N_{\rm sc}&=0.21\frac{4\pi m}{q _{e}^{2}}\frac{L\sigma_{r}}{\sigma_{0}}\xi_{0}^{2}\\ &=8\times 10^{7}[\mu{\rm m}]^{-1}\frac{L\sigma_{r}}{\sigma_{0}}\xi_{0}^{ 2}\,,\end{split} \tag{40}\]
where the numerical value is given for electrons. As an example, a typical electron bunch from laser wakefield acceleration (LWFA) [34] has \(\sim\)1 pC of charge (\(N=6.2\times 10^{6}\) for 1 pC), \(L=7\lambda_{0}\), and \(\sigma_{r}=3\lambda_{0}/(2\sqrt{2})\) (the same parameters used in the simulations presented below, see Appendix F). For a FF pulse with \(\xi_{0}=10\) and \(\sigma_{0}=3\lambda_{0}\), \(N_{\rm sc}=2\times 10^{10}\gg N\), thus space-charge forces are negligible. Note that the space-charge repulsion would be even less important in a mixed species electron-positron bunch [35].
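The quoted threshold can be reproduced with a few lines, assuming \(\lambda_{0}=1\)\(\mu\)m so that all lengths are expressed in microns (an assumption of this estimate).

```python
# A quick numerical check of the space-charge threshold (40) for the example above.
import numpy as np

lam0 = 1.0                                         # laser wavelength [micron] (assumption)
L, sigma_r, sigma0, xi0 = 7 * lam0, 3 * lam0 / (2 * np.sqrt(2)), 3 * lam0, 10.0
N_sc = 8e7 * (L * sigma_r / sigma0) * xi0**2       # Eq. (40), prefactor in inverse microns
print(f"N_sc = {N_sc:.1e}")                        # about 2e10, far above N = 6.2e6 for 1 pC
```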
## IV Required pulse energy: FF pulses vs other schemes
A FF laser pulse requires substantially less energy than a conventional LG10 laser pulse to confine a relativistic particle bunch. For a conventional laser pulse, the interaction time is limited by the Rayleigh range. Extending the interaction time requires increasing the Rayleigh range and the focal spot, which, in turn, requires increasing the power to maintain the strength of the ponderomotive force. In contrast, the intensity peak of a FF pulse co-propagates with the electron bunch, which decouples the interaction time from the Rayleigh range and the strength of the ponderomotive force.
Using Eq. (10), the energy required in a FF pulse for the interaction time \(t_{\rm int,F}\) is given by
\[E_{\rm F}=2P_{\rm ave}t_{\rm int,F}=\frac{\pi}{2}\mathcal{A}_{0,{\rm F}}^{2} \omega_{0}^{2}\sigma_{0,{\rm F}}^{2}t_{\rm int,F}\,, \tag{41}\]
where the subscript F denotes the parameters of the FF pulse. Similarly, for a conventional LG10 pulse, denoted by subscript C, the energy is
\[E_{\rm C}=\frac{\pi}{2}\mathcal{A}_{0,{\rm C}}^{2}\omega_{0}^{2}\sigma_{0,{\rm C }}^{2}t_{\rm int,C}\,. \tag{42}\]
Confinement of the relativistic particles depends on the strength of the ponderomotive force. For a fixed ponderomotive force
\[\frac{\mathcal{A}_{0,{\rm F}}^{2}}{\sigma_{0,{\rm F}}^{2}}=\frac{\mathcal{A}_ {0,{\rm C}}^{2}}{\sigma_{0,{\rm C}}^{2}}=K\,, \tag{43}\]
where \(K\) is proportional to the strength of the ponderomotive force. Substituting Eq. (43) into Eqs. (41) and (42) yields
\[E_{\rm F} =\frac{\pi}{2}K\omega_{0}^{2}\sigma_{0,{\rm F}}^{4}t_{\rm int,F}\,, \tag{44}\] \[E_{\rm C} =\frac{\pi}{2}K\omega_{0}^{2}\sigma_{0,{\rm C}}^{4}t_{\rm int,C}\,. \tag{45}\]
To ensure that the particles interact with the focus of the conventional pulse over the entire interaction time
\[t_{\rm int,C}=2\eta_{0,{\rm C}}=\omega_{0}\sigma_{0,{\rm C}}^{2}\,. \tag{46}\]
Now, two comparisons can be made:
a) For the same interaction time \(t_{\rm int,F}=t_{\rm int,C}=t_{\rm int}\) \[\frac{E_{\rm C}}{E_{\rm F}}=\frac{\sigma_{0,{\rm C}}^{4}}{\sigma_{0,{\rm F}}^ {4}}=\frac{t_{\rm int}^{2}}{\omega_{0}^{2}\sigma_{0,{\rm F}}^{4}}=\left(\frac{t _{\rm int}}{\eta_{0,{\rm F}}}\right)^{2}\,.\] (47) As an example, to confine an electron bunch with a radius of 2 \(\mu\)m over a distance \(L_{\rm int}=6\) mm (\(t_{\rm int}\) = 20 ps), \(E_{\rm C}/E_{\rm F}=1.1\times 10^{4}\) where \(\eta_{0,{\rm F}}=18\pi\)\(\mu\)m was used. Setting \(\xi_{0}=5\), \(E_{\rm F}=200\) J and \(E_{\rm C}\approx 2\) MJ (a numerical check of this estimate is sketched after this list).
b) For the same energy \(E_{\rm F}=E_{\rm C}\) \[t_{\rm int,F}=\left(\frac{t_{\rm int,C}}{\eta_{0,{\rm F}}}\right)^{2}t_{\rm int,C }\,.\] (48) Thus, a FF pulse is advantageous as long as the interaction time \(t_{\rm C}\) is longer than the Rayleigh range of the FF pulse. An electron bunch with a radius of 2 \(\mu\)m can be confined by a FF laser pulse with \(\eta_{0,{\rm F}}=18\pi\)\(\mu\)m for a distance \(L_{\rm int,F}=6\) mm (\(t_{\rm int,F}\) = 20 ps) compared to only \(L_{\rm int,C}=0.3\) mm (\(t_{\rm int,C}=0.9\) ps) for a conventional pulse, where both pulses have 200 J of energy.
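A quick numerical check of the estimate in comparison a), with the interaction time expressed as a length (\(c=1\)) and lengths in microns:

```python
# Numerical check of Eq. (47) for the example of comparison a).
import numpy as np

eta0_F = 18 * np.pi                            # Rayleigh-range equivalent of the FF pulse [micron]
L_int = 6000.0                                 # interaction length, 6 mm = c * 20 ps [micron]
ratio = (L_int / eta0_F)**2                    # Eq. (47)
print(f"E_C/E_F = {ratio:.2e}")                # about 1.1e4
print(f"E_C = {200.0 * ratio / 1e6:.1f} MJ")   # with E_F = 200 J, roughly 2 MJ
```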
Recently, an alternative scheme that employs a Bessel beam to guide a relativistic electron bunch has been proposed [7; 22]. In this scheme, the electrons counter-propagate with respect to a radially polarized Bessel beam created by an axicon lens. The axicon creates an extended longitudinal region of high intensity. However, to create a ponderomotive barrier of comparable strength [36] to the FF, the axicon must maintain a high intensity across the entire region for the full interaction time. Maintaining this high intensity requires very high energies. In contrast, FF pulses concentrate the energy density along the trajectory of the charged particles, greatly reducing the required energy.
## V Simulations
The motion of charged particles in the FF pulse was simulated using the classical Landau-Lifshitz equation of motion, Eq. (11). Locally, the electrons experience plane wave-like fields. As a result, the term proportional to the proper time derivative on the RHS of Eq. (11) is negligible compared to the other terms [37; 38] and has been omitted in the simulations (see Ref. [39] for an exception). The simulations were performed for electrons (mass \(m=m_{e}\), charge \(q=q_{e}<0\), classical radius \(r_{q}=r_{e}\), and normalized field strength \(\xi_{0}=|q_{e}|\mathcal{A}_{0}/m_{e}\)). However, because the ponderomotive force applies equally for positively and negatively charged particles, all of the results also describe the motion of positrons. For the list of simulation parameters see Appendix F.
### Electron bunch initialization
The electrons move predominantly in the negative \(z\) direction with the intensity peak of the FF pulse and against the phase fronts (see Fig. 1). The initial electron positions were randomly sampled from the charge distribution given by Eqs. (38) and (39), with a width and length representative of bunches produced in either laser wakefield accelerators or proposed conventional accelerators [40; 41]. Specifically, the initial variance in the radial position was chosen to be
\[\sigma_{r}(0)=\frac{r_{\rm max}}{2}=\frac{\sigma_{0}}{2\sqrt{2}}=\frac{3\pi}{ \sqrt{2}}k_{0}^{-1}=\frac{3}{2\sqrt{2}}\lambda_{0}\,. \tag{49}\]
The initial longitudinal spread of the bunch was set to
\[L(0)=14\pi k_{0}^{-1}=7\lambda_{0}\,. \tag{50}\]
With this choice, \(\sim\)99% of the simulated electrons are initialized within a longitudinal distance of \(5\lambda_{0}\) from the center of the bunch. The length of the electron bunch is therefore much shorter than the Rayleigh range \(18\pi\lambda_{0}\). As a result, electrons initialized within a longitudinal distance of \(5\lambda_{0}\) from the focus experience a ponderomotive force that is within 99% of the maximum.
The initial longitudinal components of the four-velocity were normally distributed with a standard deviation equal to 1% of the central value \(u_{z}(0)=-\sqrt{\langle\gamma_{0}\rangle^{2}-1}\). The gamma factors \(\langle\gamma_{0}\rangle\) used to generate the distribution were 1000, 200, and 100 for the three simulated cases. The transverse velocities were also normally distributed but with zero mean. The initial variance \(\sigma_{\beta_{\perp}}(0)\) and the normalized field strength \(\xi_{0}\) were chosen such that the condition in Eq. (33) was satisfied. See Table 1 in Appendix F for details. To capture the effect of electrons escaping the FF pulse, the values were also chosen to ensure that some electrons were initialized with a transverse velocity greater than \(\beta_{\perp,\rm max}\). With the longitudinal and transverse velocities known, the initial energy for each electron was fully determined. Each simulated bunch was composed of 1000 independent electrons, which was sufficient for calculating average quantities.
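A minimal sketch of this initialization is given below, with illustrative variable names; the flat-top profile (39) is sampled as a uniform segment of length \(L\) plus Gaussian jitter of width \(\sigma_{r}\), which reproduces the error-function profile of Eq. (39).

```python
# A minimal sketch of the bunch initialization described above: transverse
# positions from the Gaussian profile (38), longitudinal positions from (39),
# a 1% spread on u_z, and Gaussian transverse velocities.
import numpy as np

def init_bunch(n, sigma_r, L, gamma_mean, sigma_beta_perp, seed=0):
    rng = np.random.default_rng(seed)
    x, y = rng.normal(0.0, sigma_r, n), rng.normal(0.0, sigma_r, n)
    z = rng.uniform(-L / 2, L / 2, n) + rng.normal(0.0, sigma_r, n)
    uz0 = -np.sqrt(gamma_mean**2 - 1.0)
    uz = uz0 * (1.0 + 0.01 * rng.normal(size=n))          # 1% longitudinal momentum spread
    bx, by = rng.normal(0.0, sigma_beta_perp, n), rng.normal(0.0, sigma_beta_perp, n)
    return x, y, z, uz, bx, by
```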
### Electron confinement in the FF pulse and radiation cooling
Figure 4(a) demonstrates the rapid expansion of an electron bunch in the absence of external fields. Early in time the expansion is slower as a subset of electrons move towards the bunch axis. Quickly thereafter, the initial spread in transverse momenta cause the RMS radius to evolve as approximately [see discussion below Eq. (32)]
\[\left.R(t)\right|_{\xi_{0}=0}\approx R_{0}\frac{t}{T_{s}}\approx\sigma_{\beta_ {\perp}}(0)t\,. \tag{51}\]
As shown in Fig. 4(b), this expansion is contained by counter-propagating a flying focus pulse with the electron bunch. Even though the matching condition for a constant bunch radius, i.e., Eq. (33), was satisfied for the mean energy, the energy spread in the bunch and the anharmonicity of the ponderomotive potential result in oscillations of the RMS radius. Only confined electrons, defined as those chosen as having \(r(t)<0.75\sigma_{0}\) during the entire interaction, were used to calculate the RMS radius in Fig. 4(b).
Radiation reaction gradually decreases the RMS spot size and increases the oscillation frequency of the bunch [cf. \(\langle\gamma_{0}\rangle=1000\) cases in Fig. 4(b)]. As the electrons radiate and lose energy (Fig. 5), the ponderomotive force becomes stronger [Eq. (20)], which increases the oscillation frequency [Eq. (21)]. Consistent with Eq. (34), the radiative cooling of the bunch occurs more rapidly for higher values of \(\gamma_{0}\) and \(\xi_{0}\) (Fig. 5).
The reduction in the RMS spot size of the bunch and increase in its oscillation frequency due to radiation reaction mitigate the effect of anharmonicity. Figure 6 displays the oscillation periods of electrons as a function of initial radius without [Fig. 6(a)] and with [Fig. 6(b)] radiation reaction. The period of oscillations around an axis \(i\) (either \(x\) or \(y\)) was determined numerically as an average period over the interaction time
\[T_{i}=\frac{2(t_{i}^{(n_{i})}-t_{i}^{(1)})}{n_{i}-1}\,, \tag{52}\]
where \(n_{i}\) is the number of times the electron crosses the \(i^{\text{th}}\) axis. The first crossing happens at time \(t_{i}^{(1)}\) and last at time \(t_{i}^{(n_{i})}\). The arithmetic mean of \(T_{x}\) and \(T_{y}\) is plotted in Fig. 6.
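A short sketch of how this average period can be extracted from a sampled trajectory is given below; the sign-change detection of the axis crossings is an illustrative choice, since the text only specifies Eq. (52).

```python
import numpy as np

def average_period(t, xi):
    """Average oscillation period about one axis, following Eq. (52).

    t  : 1D array of sample times
    xi : 1D array of the coordinate (x or y) at those times
    """
    # Samples between which the coordinate changes sign, i.e. axis crossings;
    # the crossing time is approximated by the midpoint of the bracketing samples.
    idx = np.where(np.diff(np.sign(xi)) != 0)[0]
    t_cross = 0.5 * (t[idx] + t[idx + 1])
    if len(t_cross) < 2:
        return np.nan            # fewer than two crossings: period undefined
    # Successive crossings are half a period apart, hence the factor of 2.
    return 2.0 * (t_cross[-1] - t_cross[0]) / (len(t_cross) - 1)

# Quick check on a trajectory with a known period T = 10.
t = np.linspace(0.0, 100.0, 20001)
T_est = 0.5 * (average_period(t, np.sin(2 * np.pi * t / 10.0))
               + average_period(t, np.cos(2 * np.pi * t / 10.0)))
print(T_est)  # ~10.0
```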
Without radiation reaction [Fig. 6(a)], electrons initialized at small radii oscillate with a period close to that predicted by Eq. (21), marked by the horizontal dashed lines. In contrast, electrons initialized at larger radii undergo oscillations with a longer period due to the weakening of the ponderomotive potential with increasing radius. Figure 6(b) demonstrates the decrease in the oscillation period resulting from radiation reaction. The decrease in period is most pronounced for electrons initialized further from the \(z\)-axis in regions of high intensity, where radiation reaction is strongest.
The increase in the strength of the ponderomotive potential as electrons lose energy to radiation reaction results in the confinement of more electrons. This is illustrated in Fig. 7(a) which shows the initial phase space distributions of confined and unconfined electrons for \(\langle\gamma_{0}\rangle=1000\). The trapping boundary predicted by Eq. (27) is also plotted as a red dashed line. Consistent with the reduction in period shown in Fig. 6, the increase in trapping is most pronounced for electrons initialized in regions of high intensity [electrons well outside of the red dashed line in Fig. 7(a) are still confined].
In Figs. 7(b) and 7(c), \(\langle\gamma_{0}\rangle\) is lower, which reduces the effect of radiation reaction on the electron trajectories. The highest \(\xi_{0}/\gamma_{0}\) ratio is presented in Fig. 7(c). In this case the coupling between longitudinal and transverse motion becomes important, and Eq. (27) is no longer accurate, which can be observed as the lack of confinement within the red-dashed boundary.
In addition to reducing the RMS radius of the electron bunch and improving the transverse confinement, radiation reaction reduces the emittance of the electron beam (Fig. 8). However, all of these improvements in the quality of the electron bunch come at the expense of its average energy (Fig. 5). In fact, comparing the \(\langle\gamma_{0}\rangle=1000\)
Figure 4: Time evolution of the RMS radius for a) freely traveling electrons with no external field and b) electrons confined to the ponderomotive potential of a FF pulse with \(\sigma_{0}=6\pi k_{0}^{-1}=3\lambda_{0}\). The dotted line denotes the case of \(\langle\gamma_{0}\rangle=1000\) with no radiation reaction. For the parameters of the electron bunches see Appendix F.
Figure 5: Average relativistic gamma factor of electrons confined to the ponderomotive potential of the FF pulse with \(\sigma_{0}=6\pi k_{0}^{-1}=3\lambda_{0}\). The dotted line denotes the case of \(\langle\gamma_{0}\rangle=1000\) with no radiation reaction. For the parameters of the electron bunches see Appendix F.
cases in Figs. 5 and 8 shows that the relative change in the emittance \(\Delta\varepsilon_{\perp}=[\varepsilon_{\perp}(0)-\varepsilon_{\perp}(t_{\rm int })]/\varepsilon_{\perp}(0)\) is approximately equal to the relative change in the average energy \(\Delta\langle\gamma\rangle=[\langle\gamma_{0}\rangle-\langle\gamma(t_{\rm int })\rangle]/\langle\gamma_{0}\rangle\):
\[\Delta\varepsilon_{\perp}\approx\Delta\langle\gamma\rangle\,. \tag{53}\]
For the lower \(\langle\gamma_{0}\rangle\) cases, where radiation reaction is less important, the emittance remains relatively constant during the interaction.
In Fig. 8, the emittance was calculated using the electrons that remained confined to the FF pulse, i.e., those with \(r(t)<0.75\sigma_{0}\) during the entire interaction. The difference in the initial emittances of the beams with and without radiation in Fig. 8 was due to the different statistics of the confined electrons in the two cases. The jump in emittance during the ramp on and ramp off of the FF pulse results from the onset of electron oscillations in the fields: the statistical definition of emittance [Eq. (D1)] uses mechanical and not canonical transverse momentum.
### Longitudinal motion
Over the entire interaction length, the electron bunch remains in the vicinity of the intensity peak to within a small fraction of the Rayleigh range (Fig. 9). The longitudinal delay of the bunch with respect to the FF intensity peak is in excellent agreement with the predictions of Eq. (37). In Fig. 9, the average delays for the three simulated bunches are plotted as thick lines, while Eq. (37) is plotted as thin lines. For the purposes of Fig. 9, Eq. (37) was evaluated using average quantities of the bunch, i.e., \(\beta_{\perp}^{2}(0)\to\sigma_{\beta_{\perp}}^{2}(0)\) and \(\gamma_{0}\to\langle\gamma_{0}\rangle\), and the effective field strength along the particle trajectory was set to \(\xi_{\rm eff}=\xi_{0}/2\).
The increase in delay due to radiation reaction is only significant for the \(\langle\gamma_{0}\rangle=1000\) case. For the \(\langle\gamma_{0}\rangle=100\) and \(200\) cases, the delay results almost entirely from the initial subluminal velocity of the electrons. This is demonstrated by the similarity of the thick lines and the dotted lines which show the average delay in the absence of the fields of the FF pulse.
Any predictable delay can be eliminated by using a FF pulse with a focal velocity equal to that of the electrons. Closed form expressions for FF pulses with focal velocities \(\beta_{F}\neq 1\) have been derived in the paraxial approximation [20] and exactly [42]. However, this more general treatment is not necessary here, because the average electron delay \(\langle\eta-\eta(0)\rangle\) is much smaller than the Rayleigh range \(\eta_{0}\) over the entire interaction length.
## VI Summary and conclusions
A flying focus pulse with \(\ell=1\) OAM can prevent the spreading of relativistic particle bunches over macroscopic distances, providing an alternative to magnetic optics at high-power laser facilities. The peak intensity of the FF pulse travels at the vacuum speed of light in the direction opposite to its phase fronts. Charged particles traveling with the peak intensity experience a ponderomotive potential that confines their transverse motion over distances far greater than a Rayleigh range. Radiation reaction decreases the RMS radius and emittance of the particle bunch and improves the transverse confinement at the cost of a reduction in the average particle energy. Simulations demonstrated the confinement of 50 - 500 MeV electron bunches with 40 - 4 mrad beam divergences over 6 mm. The electron bunches maintained
Figure 6: Oscillation period in the ponderomotive potential as a function of the initial distance from the \(z\)-axis. The horizontal dashed lines indicate the period in the harmonic approximation, Eq. (21). The vertical dotted line marks \(r_{\rm max}=\sigma_{0}/\sqrt{2}\). a) radiation reaction is switched off (\(r_{e}=0\)). The gray crosses show the equivalent simulation runs with zero energy and momentum spread. b) radiation reaction is included. For the parameters of the electron bunches and field intensities see Appendix F.
a tight RMS radius of \(\sim 1\,\mu\)m.
All-optical confinement of a charged particle bunch with a FF could have utility in any situation where the bunch must be transported from its source to its target with a small RMS radius. For instance, the transverse size of the particle bunch determines the spatial resolution of probes based on secondary radiation sources, such as bremsstrahlung x-ray imaging. The bunch size also contributes to the Pierce parameter, which is critical to the performance of free-electron-lasers.
Flying focus pulses require much less energy to confine a charged particle bunch than either LG10 Gaussian or axicon-focused Bessel pulses. In contrast to these pulses, the peak intensity of the FF travels with the electron bunch, which decouples the interaction length from the Rayleigh range. For the simulated examples, the FF pulse had an energy and power of \(\sim\!200\) J and \(<5\) TW,
Figure 8: Time evolution of the normalized transverse emittance of confined electrons co-traveling with the intensity peak of a FF pulse, with \(\sigma_{0}=6\pi k_{0}^{-1}=3\lambda_{0}\). The \(\left\langle\gamma_{0}\right\rangle=1000\) case with no radiation reaction is shown as the dotted line. For the parameters of the electron bunches see Appendix F.
Figure 7: Initial transverse phase space of electrons co-travelling with the intensity peak of a FF pulse, with \(\sigma_{0}=6\pi k_{0}^{-1}=3\lambda_{0}\). The analytical approximation for the boundary between confined and not confined electrons (red dashed line) is given by Eq. (27). Each simulation evolved 1000 independent electron trajectories and \(n_{c}\) indicates the percentage of confined electrons. For the parameters of the electron bunches see Appendix F.
Figure 9: Average longitudinal displacement of confined electrons from the focus of the FF pulse with \(\sigma_{0}=6\pi k_{0}^{-1}=3\lambda_{0}\) (thick lines). Thin lines indicate the estimate of Eq. (37) for the longitudinal lag. The dotted lines show the results from simulations without the fields of the FF pulse (\(\xi_{0}=0\)). For the parameters of the electron bunches see Appendix F.
respectively, compared to the 2 MJ required for an LG10 Gaussian pulse.
The distance over which a particle bunch remains confined can be lengthened by using FF pulses with a peak intensity that travels at a velocity equal to that of the particles. Such pulses have been experimentally demonstrated and theoretically analyzed [9; 42]. The use of a velocity-matched FF pulse would provide an additional advantage over Gaussian or Bessel pulses.
The electron bunches considered in this work had parameters characteristic of the bunches created in LWFA. The short lengths (\(\sim 10\,\mu\)m) and high divergences (\(>1\) mrad) make these bunches ideal for confinement by a FF pulse. The short length also ensures that the bunch sits in a region of near-constant peak intensity. The high divergences ensure that the confinement afforded by the FF pulse has an impact on the transport. At high-intensity laser facilities, a laser pulse can be used for both LWFA and transport of the resulting bunches, without the need for magnetic optics. In contrast, conventional e-/e+ accelerators, such as those produced at SLAC, have much longer bunch lengths (\(\sim 1\) mm) and smaller divergences (\(<1\) mrad). However, shorter bunch lengths are expected for next generation e-/e+ colliders, such as the ILC or CLIC.
While the simulations were performed for electrons, the results are equally applicable to positrons. In fact, mixed electron-positron bunches [35] experience less Coulomb repulsion due to their lower net charge and would be easier to confine. This property could be exploited to guide the products of Breit-Wheeler pair production from the collision of a high-intensity laser pulse with hard photons. Moreover, transverse confinement in a FF pulse could provide an alternative to injecting electron beams for mitigating alignment sensitivity in wakefield and direct laser acceleration of positrons [43; 44].
###### Acknowledgements.
We would like to thank Konstantin Beyer, Dustin Froula, Yevgeny Gelfer, Ondrej Klimo, Francesco Schillaci, Stefan Weber, and Kathleen Weichman (ordered alphabetically) for enlightening discussions and Levi Schachter for correspondence. The work of MV is supported by the Portuguese Science Foundation grant FCT No. CEECIND/01906/2018 and PTDC/FISPLA/3800/2021. The work of JPP and DR is supported by the Office of Fusion Energy Sciences under Award Number DE-SC0019135 and DE-SC00215057, the Department of Energy National Nuclear Security Administration under Award Number DE-NA0003856, the University of Rochester, and the New York State Energy Research and Development Authority.
## Appendix A Electromagnetic fields of \(\ell=1\) Oam FF beam
Here, the electromagnetic field components of a FF beam are derived for the special case where the focal velocity is equal and opposite to the phase velocity at the vacuum speed of light. The vector four-potential \(A^{\mu}\), linearly polarized along \(y\)-direction, is fully determined by Eqs. (1), (5), and (6). The electromagnetic field components can be calculated using the standard formulas
\[\mathbf{E}=-\partial_{t}\mathbf{A}-\mathbf{\nabla}A^{0},\quad\mathbf{B}=\mathbf{\nabla}\times\mathbf{ A}, \tag{10}\]
which can be straightforwardly evaluated after a somewhat lengthy calculation. For the sake of conciseness, the common factor is taken out
\[\mathbf{E}=\sqrt{2}\frac{\mathcal{A}_{0}}{\sigma_{\eta}}e^{-r^{2}/\sigma_{\eta}^{ 2}}\mathbf{\mathcal{E}},\quad\mathbf{B}=\sqrt{2}\frac{\mathcal{A}_{0}}{\sigma_{\eta} }e^{-r^{2}/\sigma_{\eta}^{2}}\mathbf{\mathcal{B}}, \tag{11}\]
where the remaining dimensionless components of the electric field are
\[\mathcal{E}_{x} =\frac{r}{\omega_{0}\sigma_{\eta}^{2}}\left[\frac{2xy}{\sigma_{ \eta}\sigma_{0}}\sin\Psi_{1}(0,2)-\cos\Psi_{1}(2,1)\right], \tag{12}\] \[\mathcal{E}_{y} =\frac{r\sigma_{0}}{\sigma_{\eta}}T_{1}(2)+\frac{1}{\omega_{0} \sigma_{\eta}}T_{2},\] (13) \[\mathcal{E}_{z} =\frac{\sigma_{0}}{\sigma_{\eta}}\left[\frac{2xy}{\sigma_{\eta} \sigma_{0}}\cos\Psi_{1}(0,1)+\sin\Psi_{1}(1,0)\right]. \tag{14}\]
The phases are defined in Eq. (2). Similarly, the dimensionless components of the magnetic field are given by
\[\mathcal{B}_{x} =\frac{r\sigma_{0}}{\sigma_{\eta}}T_{1}(1)+\frac{1}{\omega_{0} \sigma_{\eta}}T_{2}, \tag{15}\] \[\mathcal{B}_{y} =-\frac{r}{\omega_{0}\sigma_{\eta}^{2}}\left[\frac{2xy}{\sigma_{ \eta}\sigma_{0}}\sin\Psi_{1}(0,2)-\cos\Psi_{1}(2,1)\right],\] (16) \[\mathcal{B}_{z} =-\frac{\sigma_{0}}{\sigma_{\eta}}\left[\frac{2rx}{\sigma_{\eta} \sigma_{0}}\cos\Psi_{1}(0,1)-\cos\Psi_{1}(1,0)\right], \tag{17}\]
where
\[T_{1}(j) =\left[(-1)^{j}\omega_{0}-\frac{r^{2}}{\eta_{0}\sigma_{\eta}^{2}} \right]\sin\Psi_{1}(0,0) \tag{18}\] \[+\frac{\sigma_{0}}{\eta_{0}\sigma_{\eta}}\left[\sin\Psi_{1}(0,1)- \frac{2r^{2}\eta}{\sigma_{\eta}^{2}\eta_{0}}\cos\Psi(0,1)\right],\]
\[T_{2}=\frac{2ry^{2}}{\sigma_{\eta}^{2}\sigma_{0}}\sin\Psi_{1}(0,2)-\frac{2y}{ \sigma_{\eta}}\cos\Psi_{1}(1,1)\,. \tag{19}\]
In order to derive these expressions the trigonometric angle addition formulas and identities
\[\sin\left[\arctan\left(\frac{\eta}{\eta_{0}}\right)\right] =\frac{\sigma_{0}\eta}{\sigma_{\eta}\eta_{0}}\,, \tag{20}\] \[\cos\left[\arctan\left(\frac{\eta}{\eta_{0}}\right)\right] =\frac{\sigma_{0}}{\sigma_{\eta}} \tag{21}\]
were frequently used.
## Appendix B Average \(\ell=1\) OAM FF beam power
In this appendix, a formula for the average beam power in the \(\ell=1\) OAM FF beam is obtained. The cycle average for a general function \(f(\eta,r,\theta,\phi)\) can be written as
\[\overline{f(\eta,r,\theta)}=\frac{1}{\phi_{A}}\int_{0}^{\phi_{A}}f(\eta,r,\theta,\phi)d\phi\,, \tag{10}\]
where the average is calculated over the phase interval \(\phi_{A}\). In this work, the averages are performed for an ultra-relativistic observer who is approximately comoving with the field focus \(\eta=t+z\approx 0\). At focus, the phase defined in Eq. (2) can be written as
\[\Psi_{1}(a)|_{\eta=0}=\omega_{0}\phi+(1-a)\theta\,. \tag{11}\]
The cycle averages of the following expressions [see definition Eq. (10)] are useful
\[\overline{\sin[\Psi_{1}(a_{1})]\sin[\Psi_{1}(a_{2})]}|_{\eta=0} =\frac{1}{2}\cos[(a_{1}-a_{2})\theta], \tag{12}\] \[\overline{\cos[\Psi_{1}(a_{1})]\cos[\Psi_{1}(a_{2})]}|_{\eta=0} =\frac{1}{2}\cos[(a_{1}-a_{2})\theta],\] (13) \[\overline{\sin[\Psi_{1}(a_{1})]\cos[\Psi_{1}(a_{2})]}|_{\eta=0} =\frac{1}{2}\sin[(a_{1}-a_{2})\theta]\,. \tag{14}\]
The average power transmitted through the \(xy\) plane at \(\eta=0\) is given by the cycle-averaged Poynting vector flux
\[P_{\rm ave}=\int dxdy\left.\overline{E_{x}B_{y}-E_{y}B_{x}}\right|_{\eta=0}. \tag{15}\]
In the simulations presented in Section V, \(\omega_{0}\sigma_{0}=6\pi\), thus only the leading order terms \(\propto(\omega_{0}\sigma_{0})^{n}\) are considered. The leading-order term has \(n=2\), there is no contribution with \(n=1\), and any terms with \(n\leq 0\) are neglected. Ultimately, the leading contribution to the beam power comes from
\[\left.\overline{T_{1}(2)T_{1}(1)}\right|_{\eta=0}\approx-\frac{1}{2}\omega_{0 }^{2}\,. \tag{16}\]
After performing the integration over the transverse coordinates the final expression for the average power reads
\[\begin{split} P_{\rm ave}=\frac{\pi}{4}A_{0}^{2}& \left[\omega_{0}^{2}\sigma_{0}^{2}+O(1)\right]\\ &\approx 21.5[{\rm GW}]\left(\xi_{0}\frac{\sigma_{0}}{\lambda_{0}} \right)^{2}\end{split} \tag{17}\]
and is identical to the conventional LG10 beam. The numerical value in the last expression is given for the field strength \(\xi_{0}\) scaled to electron (positron) mass and charge. In the cases considered here, \(\sigma_{0}=3\lambda_{0}\), which means that the pulses with \(t_{\rm int}=20\) ps and \(\xi_{0}=5\) have a power of about 5 TW and a total energy of about 200 J, see Eq. (10).
## Appendix C Transverse ponderomotive force
A formula for the transverse ponderomotive force acting on a charged particle in the FF pulse is derived in this appendix. Radiation-reaction is neglected for this derivation. In terms of the vector four-potential, the Lorentz equation is
\[\frac{d}{d\tau}(mu^{\mu})=q(\partial^{\mu}A^{\nu}-\partial^{\nu}A^{\mu})u_{\nu }\,. \tag{18}\]
Since \(u_{\nu}\partial^{\nu}=d/d\tau\) along the particle trajectory, the second term on the RHS can be combined with the proper time derivative on the LHS. The remaining product \(u_{\nu}A^{\nu}\) on the RHS can be expressed in lightcone coordinates, yielding
\[\begin{split}\frac{d}{d\tau}&(mu^{\mu}+qA^{\mu})\\ &=\frac{q}{2}\left(u_{-}\partial^{\mu}A_{+}+u_{+}\partial^{\mu}A_ {-}\right)-q\mathbf{u}_{\perp}\cdot\partial^{\mu}\mathbf{A}_{\perp}\,.\end{split} \tag{19}\]
In this expression the definition of the dot product of two four-vectors \(a^{\mu}\) and \(b^{\mu}\) in the lightcone coordinates
\[a_{\mu}b^{\mu}=\frac{1}{2}(a_{+}b_{-}+a_{-}b_{+})-\mathbf{a}_{\perp}\cdot\mathbf{b}_{ \perp}\,, \tag{20}\]
was used. Finally, the lightcone components \(a_{+}\) and \(a_{-}\) are defined as
\[a_{+}=a^{0}+a^{z},\quad a_{-}=a^{0}-a^{z}\,. \tag{21}\]
In the gauge used here \(A_{+}\) vanishes, and therefore the first term on the RHS of Eq. (19) does not contribute. From the constraint \(u^{2}=1\) on the four-velocity, \(u_{+}\) can be expressed as
\[u_{+}=\frac{1+\mathbf{u}_{\perp}^{2}}{u_{-}}\approx\frac{1}{2\gamma} \tag{22}\]
provided that the perpendicular velocity is small (\(\xi_{0}\ll\gamma\)), and the particle moves with ultra-relativistic velocity in the negative \(z\) direction (\(u_{-}\approx 2\gamma\)). This allows the second term on the RHS of Eq. (19) to be neglected compared to the third term. Employing this approximation, one finds
\[\frac{d}{d\tau}(mu^{\mu}+qA^{\mu})\approx-q\mathbf{u}_{\perp}\cdot\partial^{\mu} \mathbf{A}_{\perp}\,. \tag{23}\]
In the perpendicular direction, the ansatz
\[\mathbf{u}_{\perp}=-\frac{q}{m}\mathbf{A}_{\perp}+\delta\mathbf{u}_{\perp} \tag{24}\]
can be made, where the first term is the exact solution of perpendicular motion in a plane wave [27], and the second term represents a deviation from this motion due to the nontrivial transverse structure of the field. Upon substituting this ansatz into both sides of Eq. (23), the
leading-order correction to the perpendicular component of the four-velocity reads
\[\frac{d}{d\tau}\delta\mathbf{u}_{\perp}\approx-\frac{q^{2}}{2m^{2}}\mathbf{\nabla}_{\perp }A_{\perp}^{2}\,, \tag{104}\]
where the identity \(\mathbf{A}_{\perp}\cdot\nabla_{i}\mathbf{A}_{\perp}=\nabla_{i}A_{\perp}^{2}/2\) was used. In a plane wave, \(\mathbf{A}_{\perp}\) does not depend on the transverse coordinates and this correction vanishes as expected. Finally, the proper-time derivative can be written in terms of the derivative with respect to the laboratory time because \(d/d\tau=\gamma d/dt\) along the particle trajectory.
Upon applying the cycle-averaging procedure [defined in Eq. (101)] at focus (\(\eta=0\)) to Eq. (103), the oscillatory plane wave term vanishes [see its prescription in Eq. (1)] and one obtains
\[\overline{\mathbf{u}_{\perp}}=\overline{\delta\mathbf{u}_{\perp}}\,. \tag{105}\]
In order to carry out the cycle-averaging integration, the functions \(r(t)\) and \(\theta(t)\), corresponding to the polar coordinates of the charge at the time \(t\) on the \(xy\) plane, are considered to be changing on a much slower scale and effectively constant in the averaging interval \(\phi_{A}\). This result implies that the cycle-averaged perpendicular velocity is solely given by the term describing the deviation from the plane wave motion. Therefore performing a cycle average of Eq. (104) yields
\[\frac{d\overline{\mathbf{u}}_{\perp}}{dt}\approx-\frac{q^{2}}{2m^{2}\gamma_{0}}\bm {\nabla}_{\perp}\overline{A_{\perp}^{2}}|_{\eta=0}\,, \tag{106}\]
where it was assumed that the relativistic gamma factor of the particle remains approximately unchanged and can be taken out of the average, which is correct up to terms on the order of \(O(1/\gamma_{0})\)[45].
## Appendix D Normalized transverse emittance
The transverse emittance is defined as a quantity proportional to the phase-space area of a bunch in the transverse direction. For computational purposes, the statistical definition of normalized transverse emittance is more useful [46]. By generalizing the standard definition of the emittance to two-dimensional vectors, one obtains
\[\varepsilon_{\perp}=\frac{1}{m}\sqrt{\sigma_{\mathbf{r}}^{2}\sigma_{\mathbf{p}_{\perp }}^{2}-\sigma_{\mathbf{r},\mathbf{p}_{\perp}}^{4}}\,. \tag{107}\]
The variance \(\sigma_{\mathbf{r}}^{2}\) of the transverse position vector \(\mathbf{r}=(x,y)\) is defined as
\[\sigma_{\mathbf{r}}^{2}=\langle\mathbf{r}\cdot\mathbf{r}\rangle-\langle\mathbf{r}\rangle \cdot\langle\mathbf{r}\rangle\,, \tag{108}\]
where, in polar coordinates,
\[\langle\mathbf{r}\cdot\mathbf{r}\rangle=\langle r^{2}\rangle\,. \tag{109}\]
The relativistic transverse momentum is \(\mathbf{p}_{\perp}=m\gamma(\beta_{x},\beta_{y})\). Its variance \(\sigma_{\mathbf{p}_{\perp}}^{2}\) is
\[\sigma_{\mathbf{p}_{\perp}}^{2}=\langle\mathbf{p}_{\perp}\cdot\mathbf{p}_{\perp}\rangle- \langle\mathbf{p}_{\perp}\rangle\cdot\langle\mathbf{p}_{\perp}\rangle\,, \tag{110}\]
where
\[\mathbf{p}_{\perp}\cdot\mathbf{p}_{\perp}=m^{2}\gamma^{2}(r^{\prime 2}+r^{2}\theta^{ \prime 2})\,. \tag{111}\]
Finally, the cross variance \(\sigma_{\mathbf{r},\mathbf{p}_{\perp}}\) is given by
\[\sigma_{\mathbf{r},\mathbf{p}_{\perp}}^{2}=\langle\mathbf{r}\cdot\mathbf{p}_{\perp}\rangle- \langle\mathbf{r}\rangle\cdot\langle\mathbf{p}_{\perp}\rangle\,, \tag{112}\]
where
\[\langle\mathbf{r}\cdot\mathbf{p}_{\perp}\rangle=m\langle\gamma x\beta_{x}+\gamma y \beta_{y}\rangle=m\langle\gamma rr^{\prime}\rangle\,. \tag{113}\]
Now, since the average transverse position \(\langle\mathbf{r}\rangle\) and average transverse momentum \(\langle\mathbf{p}_{\perp}\rangle\) are approximately zero throughout the evolution of the bunch due to cylindrical symmetry, the normalized emittance can be re-written as
\[\varepsilon_{\perp}=\frac{1}{m}\sqrt{\langle\mathbf{r}\cdot\mathbf{r}\rangle\langle \mathbf{p}_{\perp}\cdot\mathbf{p}_{\perp}\rangle-\langle\mathbf{r}\cdot\mathbf{p}_{\perp} \rangle^{2}}\,. \tag{114}\]
After substitution from Eqs. (109), (111), and (113) one finds
\[\varepsilon_{\perp}=\sqrt{\langle r^{2}\rangle\langle\gamma^{2}r^{\prime 2}+ \gamma^{2}r^{2}\theta^{\prime 2}\rangle-\langle\gamma rr^{\prime}\rangle^{2}}\,. \tag{115}\]
Finally, the approximations that the motion is ultra-relativistic, that radiation reaction is neglected, and that the fields are relatively weak, i.e., \(\xi_{0}\ll\gamma_{0}\), can be made. With these approximations, the relativistic Lorentz factor \(\gamma\) is approximately constant, has very little spread, and can be taken out of the ensemble averages. This gives the normalized transverse emittance
\[\varepsilon_{\perp}\approx\gamma\sqrt{\langle r^{2}\rangle\langle r^{\prime 2 }+r^{2}\theta^{\prime 2}\rangle-\langle rr^{\prime}\rangle^{2}}\,, \tag{116}\]
which appears in Eq. (30) for the evolution of the RMS radius.
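For reference, the statistical emittance defined above can be evaluated directly from an ensemble of particles, as in the short sketch below (Cartesian components are used instead of polar ones, which leaves the ensemble averages unchanged; the numbers are illustrative and chosen to match the \(\langle\gamma_{0}\rangle=1000\) row of Table 1).

```python
import numpy as np

def normalized_emittance(x, y, ux, uy):
    """Normalized transverse emittance from ensemble averages.

    x, y   : transverse positions
    ux, uy : transverse four-velocity components (gamma * beta_x, gamma * beta_y)
    Assumes <r> ~ 0 and <p_perp> ~ 0, as justified by cylindrical symmetry.
    """
    r2 = np.mean(x**2 + y**2)         # <r . r>
    p2 = np.mean(ux**2 + uy**2)       # <p_perp . p_perp> / m^2
    rp = np.mean(x * ux + y * uy)     # <r . p_perp> / m
    return np.sqrt(r2 * p2 - rp**2)

rng = np.random.default_rng(1)
n, gamma0 = 100_000, 1000.0
x = rng.normal(0.0, np.sqrt(44.4 / 2), n)      # <x^2 + y^2> ~ 44.4 k0^-2
y = rng.normal(0.0, np.sqrt(44.4 / 2), n)
bx = rng.normal(0.0, 0.0021 / np.sqrt(2), n)   # <bx^2 + by^2> ~ 0.0021^2
by = rng.normal(0.0, 0.0021 / np.sqrt(2), n)
print(normalized_emittance(x, y, gamma0 * bx, gamma0 * by))  # ~14, cf. Table 1
```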
## Appendix E Coulomb repulsion among particles
In this appendix, an estimate of the Coulomb repulsion force is presented, and the conditions for which this force can be neglected compared to the ponderomotive force are established. The charged particle bunches are modeled analytically by the charge distribution in Eqs. (38) and (39). As was shown in Ref. [33], the electric field generated in the rest frame of this distribution can be computed exactly.
At \(z=0\) (in the middle of the bunch), only a purely radial field remains
\[\mathbf{E}(r,\theta,0)=E_{r}\hat{\mathbf{r}}=\frac{1}{4\pi}\frac{qN}{Lr}f(r)\hat{\mathbf{r} }\,, \tag{117}\]
where \(f(r)\) is given by
\[\begin{split} f(r)&=\frac{L}{\sqrt{L^{2}/4+r^{2}}} \mathrm{erf}\left(\frac{\sqrt{L^{2}/4+r^{2}}}{\sqrt{2}\sigma_{r}}\right)\\ &-2e^{-r^{2}/2\sigma_{r}^{2}}\mathrm{erf}\left(\frac{L}{2\sqrt{2} \sigma_{r}}\right)\,.\end{split} \tag{118}\]
In the laboratory frame, this transverse electric field is enhanced by a factor of \(\gamma_{0}\) because of the Lorentz transformation. A magnetic field in the azimuthal direction is also induced due to presence of a non-zero current in this frame. Performing the Lorentz transformation explicitly [47], the Cartesian components of the fields are
\[\mathbf{E}_{\text{lab}}(x,y,0) =\gamma_{0}E_{r}\left(\frac{x}{r}\hat{\mathbf{x}}+\frac{y}{r}\hat{\mathbf{ y}}\right)\,, \tag{100}\] \[\mathbf{B}_{\text{lab}}(x,y,0) =\gamma_{0}\beta_{0}E_{r}\left(-\frac{y}{r}\hat{\mathbf{x}}+\frac{x}{ r}\hat{\mathbf{y}}\right)\,. \tag{101}\]
Using these fields, the laboratory frame four-force is given by
\[\begin{split}\mathcal{F}^{\mu}&=qF^{\mu\nu}u_{\nu} =q\gamma_{0}\gamma E_{r}\\ &\times\left[\beta_{r},\frac{x}{r}(1-\beta_{0}\beta_{z}),\frac{y }{r}(1-\beta_{0}\beta_{z}),\beta_{0}\beta_{r}\right]\,.\end{split} \tag{102}\]
Assuming that the particle deviates only slightly from the ultra-relativistic straight-line motion, i.e., \(\beta_{0}\approx\beta_{z}\) and \(\gamma\approx\gamma_{0}=(1-\beta_{0}^{2})^{-1/2}\), the components of the force in the laboratory frame are
\[\begin{split}\mathcal{F}^{0}&=q\gamma_{0}^{2}E_{r} \beta_{r},\quad\mathcal{F}^{r}=qE_{r},\\ \mathcal{F}^{\theta}&=0,\quad\mathcal{F}^{z}=q\gamma_ {0}^{2}E_{r}\beta_{0}\beta_{r}\,.\end{split} \tag{103}\]
Under the same approximations (neglecting time derivatives of \(\gamma=\gamma_{0}\)), the equation for the radial motion in the absence of the fields of the FF pulse is
\[r^{\prime\prime}-\frac{L_{z}^{2}}{m^{2}\gamma_{0}^{2}r^{3}}=\frac{qE_{r}}{m \gamma_{0}^{2}}=\frac{1}{4\pi}\frac{q^{2}N}{mLr\gamma_{0}^{2}}f(r)\,. \tag{104}\]
Notice that the dependence on the gamma factor \(\gamma_{0}\) is the same as for the ponderomotive force Eq. (17).
Now, the acceleration arising from the ponderomotive force [Eq. (17)] and the acceleration due to Coulomb repulsion (104) can be compared. Since both of these forces are zero on axis and increase with radius up to a certain point, it makes sense to compare the accelerations at their respective maxima. For the ponderomotive force, this is at the radius
\[r_{P,\text{max}}=\frac{\sqrt{5-\sqrt{17}}}{2}r_{\text{max}}\,, \tag{105}\]
which can be derived by taking the second derivative of the ponderomotive potential in Eq. (19) and solving for the root in the interval \((0,r_{\text{max}})\). The acceleration due to ponderomotive force at its maximum reads
\[a_{P}(r=r_{P,\text{max}})=0.21\frac{\xi_{0}^{2}}{\gamma_{0}^{2}\sigma_{0}}\,. \tag{106}\]
The radius of the maximum Coulomb force \(r_{C,\text{max}}\) must be determined numerically. The electron bunch is typically much longer than it is wide \(L>\sigma_{r}\), which allows the error functions in Eq. (100) to be approximated by 1. The remaining function \(\sigma_{r}f(r)/r\) has an upper bound \(\sigma_{r}f(r)/r<1\) for any \(L>\sigma_{r}\). Substituting this in to the expression for the acceleration in Eq. (104), one finds that the Coulomb acceleration is less than
\[a_{C}(r=r_{C,\text{max}})<\frac{1}{4\pi}\frac{q^{2}N}{m\gamma_{0}^{2}L\sigma_ {r}}\,. \tag{107}\]
Thus, the ratio of the maximal accelerations is proportional to
\[\begin{split}\frac{a_{C}(r=r_{C,\text{max}})}{a_{P}(r=r_{P, \text{max}})}=&\frac{q^{2}}{4\pi m}\frac{\sigma_{0}}{L\sigma_{r}} \frac{N}{0.21\xi_{0}^{2}}\\ &=1.3\times 10^{-8}[\mu\text{m}]\frac{\sigma_{0}}{L\sigma_{r}} \frac{N}{\xi_{0}^{2}}\,,\end{split} \tag{108}\]
where the numerical factor is given for electrons. For the electron bunches considered here \(\sigma_{0}/[L(0)\sigma_{r}(0)]=0.4~{}\mu\text{m}^{-1}\) and the accelerations become comparable, for example, when \(\xi_{0}=10\) and \(N=2\times 10^{10}\), corresponding to a total bunch charge of 3 nC.
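A quick numerical check of this estimate, using the bunch geometry quoted above, is sketched below.

```python
# Ratio of the maximal Coulomb to ponderomotive accelerations, from the last
# expression above (all numbers are the ones quoted in the text).
prefactor = 1.3e-8   # q^2 / (4*pi*m*0.21) in micrometres, for electrons
geometry = 0.4       # sigma_0 / [L(0) * sigma_r(0)] in 1/micrometre
xi0 = 10.0           # normalized field strength
N = 2e10             # electrons per bunch (~3 nC)

ratio = prefactor * geometry * N / xi0**2
print(ratio)  # ~1: the two accelerations become comparable
```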
## Appendix F Simulation parameters
For the simulation results presented in this work, distances are measured in units of \(k_{0}^{-1}=\lambda_{0}/2\pi\) and time in units of \(\omega_{0}^{-1}=\lambda_{0}/2\pi c\). In these units, the classical electron radius is \(r_{e}=1.77\times 10^{-8}~{}k_{0}^{-1}\).
Numerical integration of the electron equations of motion was performed using a fourth-order Runge-Kutta scheme with a time step \(dt=0.05~{}\omega_{0}^{-1}\) and a total integration time \(t_{\text{int}}=4\times 10^{4}~{}\omega_{0}^{-1}=21.2\) ps. A fifth-order polynomial was employed to smoothly switch the fields on and off. The ramp time of the field envelope \(g(\phi)\) was set to \(\sim 0.4\) ps and the period of the laser pulse was about 3.3 fs, corresponding to \(\lambda_{0}=1~\mu\)m. Thus, the time scales were sufficiently disparate that the pulse envelope approximation and the expression for the laser power presented in Appendix B were valid.
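For orientation, a minimal sketch of a fixed-step fourth-order Runge-Kutta update of the kind described is given below; the actual force law (FF fields plus radiation reaction) is not reproduced, so `equations_of_motion` is only a placeholder.

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def equations_of_motion(t, y):
    # Placeholder: a real implementation would evaluate the FF fields of
    # Appendix A and the radiation-reaction force at the particle position.
    return np.zeros_like(y)

dt = 0.05                 # time step in units of 1/omega_0
y = np.zeros(6)           # (x, y, z, ux, uy, uz) of a single electron
t = 0.0
for _ in range(5):        # a few steps as a smoke test (the full run uses 4e4 steps)
    y = rk4_step(equations_of_motion, t, y, dt)
    t += dt
```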
For all simulations the FF spot size was set to \(\sigma_{0}=6\pi k_{0}^{-1}=3\lambda_{0}\), which corresponds to the Rayleigh range \(\eta_{0}=36\pi^{2}k_{0}^{-1}=18\pi\lambda_{0}\). Therefore the characteristic length in the longitudinal direction \(\eta_{0}\) is about 19 times longer than the characteristic length in the transverse direction \(\sigma_{0}\) (see discussion in Section III.3).
The electron bunch initialization is described in Section V.1. To summarize, the initial transverse position spread was set to \(\sigma_{r}(0)=3\pi/\sqrt{2}~{}k_{0}^{-1}=3\lambda_{0}/(2\sqrt{2})\) and the longitudinal length parameter to \(L=14\pi~{}k_{0}^{-1}=7\lambda_{0}\)
\begin{table}
\begin{tabular}{c|c c c|c} \hline \(\langle\gamma_{0}\rangle\) & \(\sigma_{\beta_{\perp}}(0)\) & \(R^{2}(0)/k_{0}^{-2}\) & \(\varepsilon_{\perp}(0)/\omega_{0}^{-1}\) & \(\xi_{0}\) \\ \hline \hline
1000 & 0.0021 & 44.4 & 13.9 & 5.9 \\
200 & 0.0054 & 45.2 & 7.10 & 3.0 \\
100 & 0.021 & 44.6 & 14.1 & 6.0 \\ \hline \end{tabular}
\end{table}
Table 1: Initial parameters for the electron bunches and normalized laser field amplitude \(\xi_{0}\). The parameter \(\xi_{0}\) was fixed so that the initial bunch satisfies the matching condition from Eq. (33).
The standard deviation of the initial longitudinal component of the four-velocity was chosen to be 1% of its central value \(u_{z}(0)=-\sqrt{\langle\gamma_{0}\rangle^{2}-1}\) with \(\langle\gamma_{0}\rangle=1000\), 200, and 100. Table 1 summarizes the spreads of the initial transverse velocities and the corresponding initial transverse emittances. Together with the initial mean squared radius, this determined the dimensionless field strength \(\xi_{0}\) required so that the condition from Eq. (33) was satisfied.
The beam divergences without the fields of the FF pulse [see Fig. 4(a) and Eq. (51)]
\[\begin{split}\Theta&=2\arctan\left(\frac{R(t_{\text{int}})-R(0)}{\langle\beta_{z}(0)\rangle t_{\text{int}}}\right)\\ &\approx 2\arctan\left(\frac{\sigma_{\beta_{\perp}}(0)}{\langle\beta_{z}(0)\rangle}\right)\end{split} \tag{11}\]
were 4.2, 11 and 42 mrad respectively. The electrons had a high enough transverse momentum so that some escaped the ponderomotive barrier of the FF pulse and also large enough beam divergences \(>1\) mrad to be relevant to LWFA-based electron sources [48].
|
2306.03074 | A General Perspective on Objectives of Reinforcement Learning | In this lecture, we present a general perspective on reinforcement learning
(RL) objectives, where we show three versions of objectives. The first version
is the standard definition of objective in RL literature. Then we extend the
standard definition to the $\lambda$-return version, which unifies the standard
definition of objective. Finally, we propose a general objective that unifies
the previous two versions. The last version provides a high level to understand
of RL's objective, where it shows a fundamental formulation that connects some
widely used RL techniques (e.g., TD$(\lambda)$ and GAE), and this objective can
be potentially applied to extensive RL algorithms. | Long Yang | 2023-06-05T17:50:29Z | http://arxiv.org/abs/2306.03074v1 | # A General Perspective on Objectives of Reinforcement Learning
###### Abstract
In this lecture, we present a general perspective on reinforcement learning (RL) objectives, where we show three versions of objectives. The first version is the standard definition of the objective in the RL literature. Then we extend the standard definition to the \(\lambda\)-return version, which unifies the standard definition of the objective. Finally, we propose a general objective that unifies the previous two versions. The last version provides a high-level understanding of RL's objective: it shows a fundamental formulation that connects some widely used RL techniques (e.g., TD(\(\lambda\)) and GAE), and this objective can potentially be applied to a wide range of RL algorithms.
###### Contents
* 1 Introduction
* 2 Markov Decision Process
* 2.1 Single-Step Transition Probability Matrix
* 2.2 Multi-Step State Transition Probability Matrix
* 2.3 Discounted State Distribution
* 2.4 Reward
* 2.5 Value Function
* 2.6 Bellman Equation
* 2.7 Objective of Reinforcement Learning
* 3 \(\lambda\)-Return Version of Objective
* 3.1 Bellman Operator
* 3.2 \(\lambda\)-Bellman Operator
* 3.3 \(\lambda\)-Version of Transition Probability Matrix
* 3.4 \(\lambda\)-Version of Reward
* 3.5 \(\lambda\)-Version of Discounted State Distribution
* 3.6 \(\lambda\)-Return Version of Objective
* 4 A General Version of Objective
* 4.1 Main Result
* 4.2 General Discussion
* 4.3 Application
* 5 Bibliographical Remarks
* 6 Conclusion
Introduction
Although reinforcement learning (RL) is widely applied across many fields, there is still a lack of work that builds the objective of RL starting from the Markov decision process, which is very unfriendly to beginners. To fill this gap, in this lecture we provide a self-contained, teachable technical introduction to the objectives of RL, where each section tackles one particular ingredient: the transition probability matrix of the Markov decision process, the reward, the Bellman equation, the discounted state distribution, and the objectives.
Concretely, this lecture provides three equivalent versions of objectives. The first version is presented in Theorem 2.3, where the objective is written as an expectation with respect to the random variable \((s,a,s^{{}^{\prime}})\). Theorem 2.3 illustrates all the random factors in the Markov decision process (MDP), and we refer to it as the _standard objective_ of the MDP. Furthermore, Theorem 3.3 extends and unifies the objective that appears in Theorem 2.3. Theorem 3.3 is traceable to TD(\(\lambda\)) (Sutton, 1984, 1988), and we present it as an expectation with respect to the random state \(s\), where \(s\) follows the \(\lambda\)-version of the discounted state distribution. Finally, we present a general objective that unifies the previous two versions (see Theorem 4.1), which provides a high-level understanding of RL's objective: it shows a fundamental formulation that connects some widely used RL techniques (e.g., TD(\(\lambda\)) and GAE), and this objective can potentially be applied to a wide range of RL algorithms. For example, Yang et al. (2022) apply the main technique of Theorem 4.1 to obtain the surrogate function with respect to GAE (Schulman et al., 2016). Although GAE has been widely used in RL, the related algorithms lack a theoretical analysis. Theorem 4.1 provides a possible way to support GAE and its empirical results with rigorous analysis. To clarify this view, we present a surrogate function with respect to GAE (see Section 4.3), which provides a theoretical foundation for policy optimization with GAE.
## 2 Markov Decision Process
Reinforcement learning (RL) (Sutton and Barto, 2018) is often formulated as a _Markov decision process_ (MDP) (Howard, 1960; Puterman, 2014). In this section, we review some necessary notation w.r.t. MDP.
An MDP is described as a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},\mathbb{P},r,\rho_{0},\gamma)\).
* \(\mathcal{S}\) is the state space;
* \(\mathcal{A}\) is the action space;
* \(\mathbb{P}(\cdot|\cdot,\cdot):\mathcal{S}\times\mathcal{A}\times\mathcal{S} \rightarrow[0,1]\), each \(\mathbb{P}(s^{{}^{\prime}}|s,a)\) denotes the probability of the state transition from \(s\) to \(s^{{}^{\prime}}\) when playing the action \(a\);
* \(r(\cdot|\cdot,\cdot):\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow \mathbb{R}\); each \(r(s^{{}^{\prime}}|s,a)\) denotes the reward of the state transition from \(s\) to \(s^{{}^{\prime}}\) when playing the action \(a\);
* \(\rho_{0}(\cdot):\mathcal{S}\rightarrow[0,1]\) is the initial state distribution;
* \(\gamma\in(0,1)\) is the discount factor.
The probability and reward satisfy the Markov property, i.e., \(\mathbb{P}(s^{{}^{\prime}}|s,a)\) and \(r(s^{{}^{\prime}}|s,a)\) depend only on the immediately preceding state \(s\) and action \(a\), not at all on earlier states and actions.
A stationary Markov policy \(\pi\) is a probability distribution defined on \(\mathcal{S}\times\mathcal{A}\), \(\pi(a|s)\) denotes the probability of playing \(a\) in state \(s\). We use \(\Pi\) to denote the set that collects all the stationary Markov policies. Let
\[\tau=\{s_{t},a_{t},r_{t+1}\}_{t\geq 0}\sim\pi \tag{1}\]
be the trajectory generated by \(\pi\), where
\[s_{0}\sim\rho_{0}(\cdot),\ a_{t}\sim\pi(\cdot|s_{t}),\ s_{t+1}\sim\mathbb{P}( \cdot|s_{t},a_{t}),\ \text{and}\ r_{t+1}=r(s_{t+1}|s_{t},a_{t}).\]
### Single-Step Transition Probability Matrix
Let \(\mathbf{P}_{\pi}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{S}|}\) be a state transition probability matrix, and their components are:
\[\mathbf{P}_{\pi}[s,s^{{}^{\prime}}]=\sum_{a\in\mathcal{A}}\pi(a|s)\mathbb{P}(s ^{{}^{\prime}}|s,a)=:\mathbb{P}_{\pi}(s^{{}^{\prime}}|s), \tag{2}\]
which denotes the one-step state transition probability from \(s\) to \(s^{{}^{\prime}}\) under \(\pi\). To better understand the one-step state transition under a policy \(\pi\), we illustrate it in Figure 1.
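As a concrete illustration, for a finite MDP stored as arrays, Eq. (2) amounts to a single contraction over actions. The sketch below uses a small, randomly generated MDP; the sizes and names are illustrative only, and the same setup is reused in the later sketches.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 4, 3  # |S| states and |A| actions in a toy MDP

# P[s, a, s'] = P(s'|s, a): a transition kernel, normalized over s'.
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)

# pi[s, a] = pi(a|s): a stationary Markov policy, normalized over a.
pi = rng.random((S, A))
pi /= pi.sum(axis=1, keepdims=True)

# Eq. (2): P_pi[s, s'] = sum_a pi(a|s) * P(s'|s, a).
P_pi = np.einsum('sa,sax->sx', pi, P)
assert np.allclose(P_pi.sum(axis=1), 1.0)  # every row is a probability distribution
```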
### Multi-Step State Transition Probability Matrix
We are interested in the state distribution induced by a policy. Recall the visitation sequence \(\tau=\{s_{t},a_{t},r_{t+1}\}_{t\geq 0}\) induced by \(\pi\); we use \(\mathbb{P}_{\pi}(s_{t}=s|s_{0})\) to denote the probability of visiting \(s\) after \(t\) time steps from the initial state \(s_{0}\) by executing \(\pi\). In particular, we notice that if \(t=0\) and \(s\neq s_{0}\), then \(\mathbb{P}_{\pi}(s_{t}=s|s_{0})=0\), i.e.,
\[\mathbb{P}_{\pi}(s_{t}=s|s_{0})=0,\ \ t=0\ \text{and}\ s\neq s_{0}. \tag{3}\]
In this lecture, to simplify the expressions, we also use \(\mathbb{P}_{\pi}^{(t)}\) to denote the notation \(\mathbb{P}_{\pi}(s_{t}=s^{{}^{\prime}}|s_{0}=s)\), where \(s_{0}\sim\rho_{0}(\cdot)\), i.e.,
\[\mathbb{P}_{\pi}^{(t)}(s^{{}^{\prime}}|s)=:\mathbb{P}_{\pi}(s_{t}=s^{{}^{ \prime}}|s_{0}=s).\]
Furthermore, we use \(\mathbf{P}_{\pi}^{(t)}\) to denote the \(t\)-step transition matrix collects all the probability of transition after \(t\) time steps by executing \(\pi\), i.e.,
\[\mathbf{P}_{\pi}^{(t)}[s,s^{{}^{\prime}}]=\mathbb{P}_{\pi}^{(t)}(s^{{}^{\prime} }|s).\]
In order to express the above stochastic process more vividly, we introduce a chain (induced by \(\pi\)) as follows,
\[s_{0}\xrightarrow{a_{0}\sim\pi(\cdot|s_{0})}\{r_{1},s_{1}\} \xrightarrow{a_{1}\sim\pi(\cdot|s_{1})}\{r_{2},s_{2}\}\xrightarrow{a_{2}\sim \pi(\cdot|s_{2})}\{r_{3},s_{3}\}\cdots. \tag{4}\]
In particular, for the chain (4) starting from the initial state \(s_{0}\), the following equality holds
\[\mathbb{P}_{\pi}(s_{t}=s|s_{0})=\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P} _{\pi}(s_{t}=s|s_{t-1}=s^{{}^{\prime}})\mathbb{P}_{\pi}(s_{t-1}=s^{{}^{\prime} }|s_{0}), \tag{5}\]
we will show it later. Due to the Markov property, we know \(\mathbb{P}_{\pi}(s_{t}=s|s_{t-1}=s^{{}^{\prime}})=\mathbb{P}_{\pi}(s|s^{{}^{ \prime}})\), then, we rewrite (5) as follows
\[\mathbb{P}_{\pi}(s_{t}=s|s_{0})=\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P} _{\pi}(s|s^{{}^{\prime}})\mathbb{P}_{\pi}(s_{t-1}=s^{{}^{\prime}}|s_{0}), \tag{6}\]
which can be rewritten as the following concise formulation,
\[\mathbb{P}_{\pi}^{(t)}(s|s_{0})=\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P }_{\pi}(s|s^{{}^{\prime}})\mathbb{P}_{\pi}^{(t-1)}(s^{{}^{\prime}}|s_{0}). \tag{7}\]
Eq.(7) is the Chapman-Kolmogorov equation for the MDP; we formally present it in Theorem 2.1 below, which illustrates the relationship between the single-step and multi-step state transition probabilities.
**Theorem 2.1** (Chapman-Kolmogorov Equation).: _Let \(\mathbb{P}_{\pi}^{(t)}(s^{{}^{\prime}}|s)\) be the probability of transition from state \(s\) to state \(s^{{}^{\prime}}\) after \(t\) additional steps by executing a stationary Markovian policy \(\pi\), and its corresponding \(t\)-step transition matrix is \(\mathbf{P}_{\pi}^{(t)}\). Then,_
\[\mathbb{P}_{\pi}^{(t)}(s|s_{0})=\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P} _{\pi}(s|s^{{}^{\prime}})\mathbb{P}_{\pi}^{(t-1)}(s^{{}^{\prime}}|s_{0}). \tag{8}\]
_Furthermore, we know_
\[\mathbf{P}_{\pi}^{(t)}=\mathbf{P}_{\pi}^{t}. \tag{9}\]
Proof.: We only need to show the result (5); we give a simple derivation of (5), following Weng (2018):
* For the case \(t=0\), \[\mathbb{P}_{\pi}(s_{t}=s|s_{0})=\mathbb{P}_{\pi}(s_{0}=s|s_{0})=1,\] which is a trivial fact.
* For the case \(t=1\), we know \[\mathbb{P}_{\pi}(s_{1}=s|s_{0})=\sum_{a\in\mathcal{A}}\pi(a|s_{0})\mathbb{P}(s|s_{ 0},a)=:\mathbb{E}_{a\sim\pi(\cdot|s_{0})}[\mathbb{P}(s|s_{0},a)],\] which is reduced to single state transition probability by executing \(\pi\), and it is same with the result of (5). In fact, since the chain (4) starts from the initial state \(s_{0}\), then we have \[\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}(s_{1}=s|s_{0}= s^{{}^{\prime}})\mathbb{P}_{\pi}(s_{0}=s^{{}^{\prime}}|s_{0})\] \[=\sum_{s^{{}^{\prime}}\in\mathcal{S}-\{s_{0}\}}\mathbb{P}_{\pi}(s _{1}=s|s_{0}=s^{{}^{\prime}})\underbrace{\mathbb{P}_{\pi}(s_{t-1}=s^{{}^{ \prime}}|s_{0})}_{=0,\text{ if }t=1;\tau\text{ starts from }s_{0}}+\mathbb{P}_{\pi}(s_{1}=s|s_{0}) \underbrace{\mathbb{P}_{\pi}(s_{t-1}=s_{0}|s_{0})}_{=1;\text{ if }t=1}.\]
* For the general case time \(t\), we can first travel from \(s_{0}\) to a middle point \(s^{{}^{\prime}}\) (any state can be a middle point), after \(t-1\) steps, and then go to the final state \(s\) during the last step. In this way, we are able to update the visitation probability recursively as (5).
Eq.(9) is a matrix version of Eq.(8).
### Discounted State Distribution
Let \(d_{\pi}^{s_{0}}(s)\) denote the normalized discounted weighting of the future state \(s\) encountered starting at \(s_{0}\) by executing \(\pi\),
\[d_{\pi}^{s_{0}}(s)=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}\mathbb{P}_{\pi}(s_{ t}=s|s_{0}). \tag{10}\]
Furthermore, since \(s_{0}\sim\rho_{0}(\cdot)\), we define
\[d_{\pi}^{\rho_{0}}(s)=\mathbb{E}_{s_{0}\sim\rho_{0}(\cdot)}[d_{\pi}^{s_{0}}(s )]=\sum_{s_{0}\in\mathcal{S}}\rho_{0}(s_{0})d_{\pi}^{s_{0}}(s)=\int_{s_{0}\in \mathcal{S}}\rho_{0}(s_{0})d_{\pi}^{s_{0}}(s)\mathrm{d}s_{0} \tag{11}\]
as the discounted state visitation distribution over the initial distribution \(\rho_{0}(\cdot)\).
We use \(\mathbf{d}_{\pi}^{\rho_{0}}\in\mathbb{R}^{|\mathcal{S}|}\) to collect all the normalized discounted state distributions, and its components are:
\[\mathbf{d}_{\pi}^{\rho_{0}}[s]=d_{\pi}^{\rho_{0}}(s),\ \ s\in\mathcal{S}.\]
Recall that \(\mathbf{P}_{\pi}\in\mathbb{R}^{|\mathcal{S}|\times|\mathcal{S}|}\) denotes the one-step state transition matrix induced by \(\pi\), and we use \(\mathbf{\rho}_{0}\in\mathbb{R}^{|\mathcal{S}|}\) to denote the initial state distribution vector; their components are:
\[\mathbf{P}_{\pi}[s,s^{\prime}]=\sum_{a\in\mathcal{A}}\pi(a|s)\mathbb{P}(s^{ \prime}|s,a),\ \ \mathbf{\rho}_{0}[s]=\rho_{0}(s).\]
Then, we rewrite \(\mathbf{d}_{\pi}^{\rho_{0}}\) as a matrix version as follows,
\[\mathbf{d}_{\pi}^{\rho_{0}}=(1-\gamma)\sum_{t=0}^{\infty}(\gamma\mathbf{P}_{\pi}^{\top})^{t}\mathbf{\rho}_{0}=(1-\gamma)(\mathbf{I}-\gamma\mathbf{P}_{\pi}^{\top})^{-1}\mathbf{\rho}_{0}. \tag{12}\]
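Numerically, Eq. (12) is a single linear solve. The sketch below (continuing the toy MDP of the earlier sketch, with illustrative names) also checks the closed form against the truncated series of Eqs. (10)-(11) and confirms that \(\mathbf{d}_{\pi}^{\rho_{0}}\) is a probability distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 4, 3, 0.9
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)
pi = rng.random((S, A))
pi /= pi.sum(axis=1, keepdims=True)
rho0 = np.full(S, 1.0 / S)                     # uniform initial distribution
P_pi = np.einsum('sa,sax->sx', pi, P)          # Eq. (2)

# Eq. (12): d = (1 - gamma) * (I - gamma * P_pi^T)^{-1} rho0.
d = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho0)

# Check against the series definition, Eqs. (10)-(11), truncated after T terms.
T = 2000
d_series = np.zeros(S)
p_t = rho0.copy()                              # distribution of s_t
for t in range(T):
    d_series += (1 - gamma) * gamma**t * p_t
    p_t = P_pi.T @ p_t                         # propagate one step under pi
assert np.allclose(d, d_series)
assert np.isclose(d.sum(), 1.0)
```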
### Reward
It is noteworthy that if the reward \(r_{t+1}\) depends on the state of the environment at the next time step, we use \(r(s_{t+1}|s_{t},a_{t})\) in place of \(r_{t+1}\) to denote the real value that the decision-maker receives at time \(t\) when the system is at state \(s_{t}\), action \(a_{t}\) is played, and the system transitions to the next state \(s_{t+1}\). Then, the expected reward at time \(t\) can be evaluated as follows,
\[\mathbb{E}[r_{t+1}]=R(s_{t},a_{t})=\sum_{s_{t+1}\in\mathcal{S}} \mathbb{P}(s_{t+1}|s_{t},a_{t})r(s_{t+1}|s_{t},a_{t}). \tag{13}\]
Under most notions of optimality, all of the information necessary to make a decision at time \(t\) is summarized in the expected reward \(R(s_{t},a_{t})\); however, under some criteria, we must use \(r(s_{t+1}|s_{t},a_{t})\) instead.
Furthermore, due to the Markov property in the MDP, for each \((s,a)\in\mathcal{S}\times\mathcal{A}\), we rewrite (13) as follows:
\[R(s,a)=\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}(s^{{}^{ \prime}}|s,a)r(s^{{}^{\prime}}|s,a). \tag{14}\]
Let \(\mathbf{r}_{\pi}\in\mathbb{R}^{|\mathcal{S}|}\) be the expected reward according to \(\pi\), i.e., their components are: \(\forall s\in\mathcal{S}\),
\[\mathbf{r}_{\pi}[s]=\sum_{a\in\mathcal{A}}\sum_{s^{{}^{\prime}} \in\mathcal{S}}\pi(a|s)\mathbb{P}(s^{{}^{\prime}}|s,a)r(s^{\prime}|s,a)=\sum_ {a\in\mathcal{A}}\pi(a|s)R(s,a)=:R_{\pi}(s). \tag{15}\]
Starting from state \(s\), the root node at the left, the agent can take any action from some set of actions (e.g., \(a\)); the environment then responds with one of several next states \(s^{{}^{\prime}}\), and we obtain the reward \(r(s^{{}^{\prime}}|s,a)\). Figure 2 shows the reward after the state transition from \(s\) to \(s^{{}^{\prime}}\) by executing \(\pi\), which also provides insight into the one-step expected reward \(R_{\pi}(s)\).
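In the array representation of the earlier sketch, Eqs. (14) and (15) are again single contractions (the names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 4, 3
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)
pi = rng.random((S, A))
pi /= pi.sum(axis=1, keepdims=True)
r = rng.random((S, A, S))  # r[s, a, s'] = r(s'|s, a)

# Eq. (14): R[s, a] = sum_s' P(s'|s, a) * r(s'|s, a).
R = np.einsum('sax,sax->sa', P, r)
# Eq. (15): r_pi[s] = R_pi(s) = sum_a pi(a|s) * R(s, a).
r_pi = np.einsum('sa,sa->s', pi, R)
```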
### Value Function
The _state value function_ of \(\pi\) is defined as
\[V_{\pi}(s)=\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{ t+1}\Big{|}s_{0}=s\right], \tag{16}\]
where \(\mathbb{E}_{\pi}[\cdot|\cdot]\) denotes a conditional expectation on actions which are selected by \(\pi\). Its _state-action value function_ is
\[Q_{\pi}(s,a)=\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t+1}\Big{|}s_ {0}=s,a_{0}=a\right], \tag{17}\]
and advantage function is
\[A_{\pi}(s,a)=Q_{\pi}(s,a)-V_{\pi}(s). \tag{18}\]
### Bellman Equation
Bellman equation illustrates the relationship between the states' values and actions, which plays a central role in MDP theory and reinforcement learning.
**Theorem 2.2** (Bellman Equation).: _The state value function \(V_{\pi}(s)\) and state-action value function \(Q_{\pi}(s,a)\) satisfy the following equation:_
\[V_{\pi}(s)= R_{\pi}(s)+\gamma\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}_{ \pi}(s^{{}^{\prime}}|s)V_{\pi}(s^{{}^{\prime}}), \tag{19}\] \[Q_{\pi}(s,a)= R(s,a)+\gamma\sum_{s^{{}^{\prime}}\in\mathcal{S}}\sum_{a^{{}^{ \prime}}\in\mathcal{A}}\mathbb{P}(s^{{}^{\prime}}|s,a)\pi(a^{{}^{\prime}}|s^{ {}^{\prime}})Q_{\pi}(s^{{}^{\prime}},a^{{}^{\prime}}). \tag{20}\]
_Proof._First, we notice
\[G_{t}=:r_{t+1}+\gamma r_{t+2}+\gamma^{2}r_{t+3}+\cdots\] \[=r_{t+1}+\gamma G_{t+1}.\]
Then we rewrite the state value function as follows
\[V_{\pi}(s)= \mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t+1}\Big{|} s_{0}=s\right]\] \[= \mathbb{E}_{\pi}\left[r_{1}+\gamma G_{1}|s_{0}=s\right].\]
For the first term, we know
\[\mathbb{E}_{\pi}[r_{1}|s_{0}=s]=\mathbb{E}_{a\sim\pi(\cdot|s),s^{{}^{\prime}} \sim\mathbb{P}(\cdot|s,a)}[r(s^{{}^{\prime}}|s,a)]=\sum_{a\in\mathcal{A}}\sum _{s^{{}^{\prime}}\in\mathcal{S}}\pi(a|s)\mathbb{P}(s^{{}^{\prime}}|s,a)r(s^{ {}^{\prime}}|s,a). \tag{21}\]
For the second term, we know
\[\mathbb{E}_{\pi}[G_{1}|s_{0}=s] =\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}(s^{{}^{ \prime}}|s)\mathbb{E}_{\pi}\left[G_{1}|s_{0}=s,s_{1}=s^{{}^{\prime}}\right]\] \[=\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}(s^{{}^{ \prime}}|s)\mathbb{E}_{\pi}\left[G_{1}|s_{1}=s^{{}^{\prime}}\right] \tag{22}\] \[=\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}(s^{{}^{ \prime}}|s)V_{\pi}(s^{{}^{\prime}})\]
\[= \sum_{a\in\mathcal{A}}\sum_{s^{{}^{\prime}}\in\mathcal{S}}\pi(a|s) \mathbb{P}(s^{{}^{\prime}}|s,a)V_{\pi}(s^{{}^{\prime}}), \tag{23}\]
where Eq.(22) holds due to the conditional independence as follows,
\[\mathbb{E}_{\pi}\left[G_{1}|s_{1}=s^{{}^{\prime}}\right]=\mathbb{E}_{\pi} \left[G_{1}|s_{1}=s^{{}^{\prime}},s_{0}=s\right].\]
Such a conditional independence property is due to the memoryless Markov property: the future behavior depends only on the current state.
Then, combining (21) and (22), we obtain the _Bellman equation_ as follows,
\[V_{\pi}(s)= \underbrace{\sum_{a\in\mathcal{A}}\sum_{s^{{}^{\prime}}\in \mathcal{S}}\pi(a|s)\mathbb{P}(s^{{}^{\prime}}|s,a)r(s^{{}^{\prime}}|s,a)}_{ \text{mean of current rewards}}+\underbrace{\gamma\sum_{a\in\mathcal{A}}\sum_{s^{{} ^{\prime}}\in\mathcal{S}}\pi(a|s)\mathbb{P}(s^{{}^{\prime}}|s,a)V_{\pi}(s^{ {}^{\prime}})}_{\text{mean of future rewards}} \tag{24}\] \[= R_{\pi}(s)+\gamma\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}(s^{{}^{ \prime}}|s)V_{\pi}(s^{{}^{\prime}}), \tag{25}\]
where the second equality uses the definitions in Eqs. (2) and (15).
Similarly, we know the state-action value function version of the Bellman equation.
Finally, we use \(\mathbf{v}_{\pi}\in\mathbb{R}^{|\mathcal{S}|}\) to collect all the state value functions, and each entry of \(\mathbf{v}_{\pi}\) is defined as
\[\mathbf{v}_{\pi}[s]=V_{\pi}(s),\]
then we rewrite the Bellman equation as the following matrix version:
\[\mathbf{v}_{\pi}=\mathbf{r}_{\pi}+\gamma\mathbf{P}_{\pi}\mathbf{v}_{\pi}=( \mathbf{I}-\gamma\mathbf{P}_{\pi})^{-1}\mathbf{r}_{\pi}. \tag{26}\]
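In matrix form, Eq. (26) says that \(\mathbf{v}_{\pi}\) is obtained by one linear solve; a minimal numerical check on the toy MDP of the earlier sketches is given below.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 4, 3, 0.9
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)
pi = rng.random((S, A))
pi /= pi.sum(axis=1, keepdims=True)
r = rng.random((S, A, S))
P_pi = np.einsum('sa,sax->sx', pi, P)          # Eq. (2)
r_pi = np.einsum('sa,sax,sax->s', pi, P, r)    # Eq. (15)

# Eq. (26): v_pi = (I - gamma * P_pi)^{-1} r_pi.
v_pi = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

# Bellman equation, Eq. (19): v_pi = r_pi + gamma * P_pi v_pi.
assert np.allclose(v_pi, r_pi + gamma * P_pi @ v_pi)
```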
### Objective of Reinforcement Learning
Recall \(\tau=\{s_{t},a_{t},r_{t+1}\}_{t\geq 0}\sim\pi\), according to \(\tau\), we define the expected return \(J(\pi|s_{0})\) by
\[J(\pi|s_{0})= \mathbb{E}_{\tau\sim\pi}[R(\tau)]=V_{\pi}(s_{0}), \tag{27}\]
where \(R(\tau)=\sum_{t\geq 0}\gamma^{t}r_{t+1}\), and the notation \(J(\pi|s_{0})\) is "conditional" on \(s_{0}\) to emphasize that the trajectory \(\tau\) starts from \(s_{0}\). Let
\[J(\pi)=\mathbb{E}_{s_{0}\sim\rho_{0}(\cdot)}[J(\pi|s_{0})]=\mathbb{E}_{s_{0} \sim\rho_{0}(\cdot)}[V_{\pi}(s_{0})]=\sum_{s_{0}\in\mathcal{S}}\rho_{0}(s_{0} )V_{\pi}(s_{0}). \tag{28}\]
The goal of reinforcement learning is to search for an optimal policy \(\pi_{\star}\) that satisfies
\[\pi_{\star}=\arg\max_{\pi}J(\pi). \tag{29}\]
To see the objective (28) clearly, we rewrite \(J(\pi)\) with respect to \(\mathbf{d}_{\pi}^{\rho_{0}}\).
**Theorem 2.3**.: _The objective \(J(\pi)\) admits the following equivalent versions:_
\[J(\pi)= \sum_{s_{0}\in\mathcal{S}}\rho_{0}(s_{0})V_{\pi}(s_{0})\] \[= \frac{1}{1-\gamma}\sum_{s_{0}\in\mathcal{S}}\rho_{0}(s_{0})\sum_{s \in\mathcal{S}}d_{\pi}^{s_{0}}(s)R_{\pi}(s) \tag{30}\] \[= \frac{1}{1-\gamma}\mathbb{E}_{s\sim d_{\pi}^{\rho_{0}}(\cdot),a \sim\pi(\cdot|s),s^{{}^{\prime}}\sim\mathbb{P}(\cdot|s,a)}\left[r(s^{{}^{\prime }}|s,a)\right]. \tag{31}\]
_Furthermore, the matrix version is_
\[J(\pi)=\boldsymbol{\rho}_{0}^{\top}\mathbf{v}_{\pi}=\boldsymbol{\rho}_{0}^{ \top}(\mathbf{I}-\gamma\mathbf{P}_{\pi})^{-1}\mathbf{r}_{\pi}=\frac{1}{1- \gamma}\left\langle\mathbf{d}_{\pi}^{\rho_{0}},\mathbf{r}_{\pi}\right\rangle. \tag{32}\]
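Before the proof, the equalities in Eq. (32) can be confirmed numerically on the toy MDP of the earlier sketches; the check below verifies that the value-function form and the discounted-state-distribution form of \(J(\pi)\) coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 4, 3, 0.9
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)
pi = rng.random((S, A))
pi /= pi.sum(axis=1, keepdims=True)
r = rng.random((S, A, S))
rho0 = np.full(S, 1.0 / S)
P_pi = np.einsum('sa,sax->sx', pi, P)          # Eq. (2)
r_pi = np.einsum('sa,sax,sax->s', pi, P, r)    # Eq. (15)

v_pi = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)                  # Eq. (26)
d_pi = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho0)  # Eq. (12)

J_value_form = rho0 @ v_pi                        # J(pi) = rho0^T v_pi
J_state_dist_form = d_pi @ r_pi / (1 - gamma)     # J(pi) = <d, r_pi> / (1 - gamma)
assert np.isclose(J_value_form, J_state_dist_form)
```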
_Proof._ Recalling the Bellman equation, we obtain
\[V_{\pi}(s_{0})= \sum_{a\in\mathcal{A}}\pi(a|s_{0})R(s_{0},a)+\gamma\sum_{s^{{}^{ \prime}}\in\mathcal{S}}\mathbb{P}_{\pi}(s_{1}=s^{{}^{\prime}}|s_{0})V_{\pi}(s^ {{}^{\prime}}), \tag{33}\]
and we unroll the expression of (33) repeatedly, then we have
\[V_{\pi}(s_{0})= R_{\pi}(s_{0})+\gamma\sum_{s^{{}^{\prime}}\in\mathcal{S}} \mathbb{P}_{\pi}(s_{1}=s^{{}^{\prime}}|s_{0})\underbrace{\left(R_{\pi}(s^{{}^{ \prime}})+\gamma\sum_{s^{{}^{\prime\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}(s_{ 2}=s^{{}^{\prime\prime}}|s_{1}=s^{{}^{\prime}})V_{\pi}(s^{{}^{\prime\prime}}) \right)}_{=V_{\pi}(s^{{}^{\prime}})}\] \[= R_{\pi}(s_{0})+\gamma\sum_{s^{{}^{\prime}}\in\mathcal{S}} \mathbb{P}_{\pi}(s_{1}=s^{{}^{\prime}}|s_{0})R_{\pi}(s^{{}^{\prime}})\] \[\qquad\qquad+\gamma^{2}\sum_{s^{{}^{\prime\prime}}\in\mathcal{S} }\underbrace{\left(\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}(s_{1} =s^{{}^{\prime}}|s_{0})\mathbb{P}_{\pi}(s_{2}=s^{{}^{\prime\prime}}|s_{1}=s^{ {}^{\prime}})\right)}_{=\mathbb{P}_{\pi}(s_{2}=s^{{}^{\prime\prime}}|s_{0})}V _{\pi}(s^{{}^{\prime\prime}})\] \[= R_{\pi}(s_{0})+\gamma\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}(s_{1 }=s|s_{0})R_{\pi}(s)+\gamma^{2}\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}(s_{2}=s| s_{0})V_{\pi}(s)\] \[= R_{\pi}(s_{0})+\gamma\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}(s_{1 }=s|s_{0})R_{\pi}(s)\] \[\qquad\qquad+\gamma^{2}\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}(s_{ 2}=s|s_{0})\left(R_{\pi}(s)+\gamma\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{ P}_{\pi}(s_{3}=s^{{}^{\prime}}|s_{2}=s)V_{\pi}(s^{{}^{\prime}})\right)\] \[= R_{\pi}(s_{0})+\gamma\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}(s_{1 }=s|s_{0})R^{\pi}(s)+\gamma^{2}\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}(s_{2}=s| s_{0})R_{\pi}(s)\] \[\qquad\qquad+\gamma^{3}\sum_{s^{{}^{\prime}}\in\mathcal{S}} \underbrace{\left(\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}(s_{2}=s|s_{0}) \mathbb{P}_{\pi}(s_{3}=s^{{}^{\prime}}|s_{2}=s)\right)}_{=\mathbb{P}_{\pi}(s_{3 }=s^{{}^{\prime}}|s_{0})}V_{\pi}(s^{{}^{\prime}})\] \[= R_{\pi}(s_{0})+\gamma\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}(s_{1 }=s|s_{0})R^{\pi}(s)+\gamma^{2}\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}(s_{2}=s| s_{0})R^{\pi}(s)\]
\[\mathcal{B}_{\pi}\mathbf{v}_{\pi}=\mathbf{v}_{\pi}. \tag{39}\]
### \(\lambda\)-Bellman Operator
Furthermore, we define \(\lambda\)-_Bellman operator_\(\mathcal{B}_{\pi}^{\lambda}\) as follows,
\[\mathcal{B}_{\pi}^{\lambda}=(1-\lambda)\sum_{t=0}^{\infty}\lambda^{t}(\mathcal{ B}_{\pi})^{t+1},\]
which implies
\[\mathcal{B}_{\pi}^{\lambda}:\mathbb{R}^{|\mathcal{S}|}\rightarrow \mathbb{R}^{|\mathcal{S}|}, \tag{40}\] \[v\mapsto \mathbf{r}_{\pi}^{(\lambda)}+\tilde{\gamma}\mathbf{P}_{\pi}^{( \lambda)}v, \tag{41}\]
where
\[\mathbf{P}_{\pi}^{(\lambda)}=(1-\gamma\lambda)\sum_{t=0}^{\infty}(\gamma \lambda)^{t}\mathbf{P}_{\pi}^{t+1},\ \ \mathbf{r}_{\pi}^{(\lambda)}=\sum_{t=0}^{\infty}(\gamma\lambda\mathbf{P}_{\pi})^ {t}\mathbf{r}_{\pi},\ \ \tilde{\gamma}=\frac{\gamma(1-\lambda)}{1-\gamma\lambda}. \tag{42}\]
**Remark 3.1** (\(\lambda\)-Return Version of Bellman Equation).: _According to the Bellman equation (39), \(\mathbf{v}_{\pi}\) is a fixed point of the \(\lambda\)-operator \(\mathcal{B}_{\pi}^{\lambda}\), i.e.,_
\[\mathbf{v}_{\pi}=\mathbf{r}_{\pi}^{(\lambda)}+\tilde{\gamma} \mathbf{P}_{\pi}^{(\lambda)}\mathbf{v}_{\pi}. \tag{43}\]
_Recall \(\tau=\{s_{t},a_{t},r_{t+1}\}_{t\geq 0}\sim\pi\), according to (43), the value function of initial state \(s_{0}\) is_
\[V_{\pi}(s_{0}) =\mathbf{v}_{\pi}[s_{0}]=\mathbf{r}_{\pi}^{(\lambda)}[s_{0}]+ \tilde{\gamma}\mathbf{P}_{\pi}^{(\lambda)}\mathbf{v}_{\pi}[s_{0}]\] \[=R_{\pi}^{(\lambda)}(s_{0})+\tilde{\gamma}\sum_{s^{{}^{\prime}} \in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s_{1}=s^{{}^{\prime}}|s_{0})V_{ \pi}(s^{{}^{\prime}}). \tag{44}\]
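The fixed-point property in Eq. (43) can be checked directly from the closed forms in Eq. (42), in which the infinite sums can be resummed as matrix inverses. The sketch below builds \(\mathbf{P}_{\pi}^{(\lambda)}\), \(\mathbf{r}_{\pi}^{(\lambda)}\), and \(\tilde{\gamma}\) for the toy MDP of the earlier sketches and verifies that \(\mathbf{v}_{\pi}\) is left unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, lam = 4, 3, 0.9, 0.7
P = rng.random((S, A, S))
P /= P.sum(axis=2, keepdims=True)
pi = rng.random((S, A))
pi /= pi.sum(axis=1, keepdims=True)
r = rng.random((S, A, S))
P_pi = np.einsum('sa,sax->sx', pi, P)          # Eq. (2)
r_pi = np.einsum('sa,sax,sax->s', pi, P, r)    # Eq. (15)
v_pi = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)   # Eq. (26)

# Eq. (42), with the geometric series resummed:
#   P^(lambda) = (1 - gamma*lam) (I - gamma*lam*P_pi)^{-1} P_pi,
#   r^(lambda) = (I - gamma*lam*P_pi)^{-1} r_pi,
#   tilde_gamma = gamma (1 - lam) / (1 - gamma*lam).
M = np.linalg.inv(np.eye(S) - gamma * lam * P_pi)
P_lam = (1 - gamma * lam) * M @ P_pi
r_lam = M @ r_pi
tilde_gamma = gamma * (1 - lam) / (1 - gamma * lam)

assert np.allclose(P_lam.sum(axis=1), 1.0)                    # still row-stochastic
assert np.allclose(v_pi, r_lam + tilde_gamma * P_lam @ v_pi)  # Eq. (43)
```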
### \(\lambda\)-Version of Transition Probability Matrix
Let
\[\mathbb{P}_{\pi}^{(\lambda)}(s^{{}^{\prime}}|s)=\mathbf{P}_{\pi}^ {(\lambda)}[s,s^{{}^{\prime}}]=:(1-\gamma\lambda)\sum_{t=0}^{\infty}(\gamma \lambda)^{t}\left(\mathbf{P}_{\pi}^{t+1}[s,s^{{}^{\prime}}]\right), \tag{45}\]
where \(\mathbf{P}_{\pi}^{t+1}[s,s^{{}^{\prime}}]\) is the \((s,s^{{}^{\prime}})\)-th component of matrix \(\mathbf{P}_{\pi}^{t+1}\), which is the probability of visiting \(s^{{}^{\prime}}\) after \(t+1\) time steps from the state \(s\) by executing \(\pi\), i.e.,
\[\mathbf{P}_{\pi}^{t+1}[s,s^{{}^{\prime}}]=\mathbb{P}_{\pi}(s_{t+1 }=s^{{}^{\prime}}|s). \tag{46}\]
Thus, we rewrite \(\mathbb{P}_{\pi}^{(\lambda)}(s^{{}^{\prime}}|s)\) (45) as follows
\[\mathbb{P}_{\pi}^{(\lambda)}(s^{{}^{\prime}}|s)=(1-\gamma\lambda) \sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{P}_{\pi}(s_{t+1}=s^{{}^{\prime}} |s),\ \ s\in\mathcal{S}. \tag{47}\]
**Remark 3.2**.: _Recall the visitation sequence \(\tau=\{s_{t},a_{t},r_{t+1}\}_{t\geq 0}\) induced by \(\pi\). In analogy with the probability \(\mathbb{P}_{\pi}(s_{t}=s^{{}^{\prime}}|s_{0})\), we introduce \(\mathbb{P}_{\pi}^{(\lambda)}(s_{t}=s^{{}^{\prime}}|s_{0})\) as the probability of transitioning from state \(s_{0}\) to state \(s^{{}^{\prime}}\) after \(t\) time steps under the transition matrix \(\mathbf{P}_{\pi}^{(\lambda)}\). Then, the following equality holds_
\[\mathbb{P}_{\pi}^{(\lambda)}(s_{t}=s|s_{0})=\sum_{s^{{}^{\prime}} \in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s_{t}=s|s_{t-1}=s^{{}^{\prime}}) \mathbb{P}_{\pi}^{(\lambda)}(s_{t-1}=s^{{}^{\prime}}|s_{0}). \tag{48}\]
### \(\lambda\)-Version of Reward
Similarly, let
\[R_{\pi}^{(\lambda)}(s)=:\mathbf{r}_{\pi}^{(\lambda)}[s]= \sum_{t=0}^{\infty}(\gamma\lambda\mathbf{P}_{\pi})^{t}\mathbf{r}_{ \pi}[s]=\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\left(\sum_{s^{{}^{\prime}}\in \mathcal{S}}\mathbb{P}_{\pi}(s_{t}=s^{{}^{\prime}}|s)R_{\pi}(s^{{}^{\prime}})\right)\] \[= \sum_{t=0}^{\infty}\sum_{s^{{}^{\prime}}\in\mathcal{S}}(\gamma \lambda)^{t}\mathbb{P}_{\pi}(s_{t}=s^{{}^{\prime}}|s)R_{\pi}(s^{{}^{\prime}}). \tag{49}\]
### \(\lambda\)-Version of Discounted State Distribution
Analogously to the normalized discounted state distribution \(d_{\pi}^{\rho_{0}}(s)\), we introduce the \(\lambda\)-return version of the discounted state distribution \(d_{\pi}^{\lambda}(s)\) as follows: \(\forall s\in\mathcal{S}\),
\[d_{\pi}^{s_{0},\lambda}(s) =(1-\tilde{\gamma})\sum_{t=0}^{\infty}\tilde{\gamma}^{t}\mathbb{ P}_{\pi}^{(\lambda)}(s_{t}=s|s_{0}), \tag{50}\] \[d_{\pi}^{\lambda}(s) =\mathbb{E}_{s_{0}\sim\rho_{0}(\cdot)}\left[d_{\pi}^{s_{0}, \lambda}(s)\right],\] (51) \[\mathbf{d}_{\pi}^{\lambda}[s] =d_{\pi}^{\lambda}(s), \tag{52}\]
where \(\mathbb{P}_{\pi}^{(\lambda)}(s_{t}=s|s_{0})\) is the \((s_{0},s)\)-th component of the matrix \(\left(\mathbf{P}_{\pi}^{(\lambda)}\right)^{t}\), i.e.,
\[\mathbb{P}_{\pi}^{(\lambda)}(s_{t}=s|s_{0})=:\left(\mathbf{P}_{\pi}^{(\lambda) }\right)^{t}[s_{0},s].\]
Similarly, \(\mathbb{P}_{\pi}^{(\lambda)}(s_{t}=s^{{}^{\prime}}|s)\) is the \((s,s^{{}^{\prime}})\)-th component of the matrix \(\left(\mathbf{P}_{\pi}^{(\lambda)}\right)^{t}\), i.e.,
\[\mathbb{P}_{\pi}^{(\lambda)}(s_{t}=s^{{}^{\prime}}|s)=:\left(\mathbf{P}_{\pi} ^{(\lambda)}\right)^{t}[s,s^{{}^{\prime}}].\]
Finally, we rewrite \(\mathbf{d}_{\pi}^{\lambda}\) in the following matrix form,
\[\mathbf{d}_{\pi}^{\lambda}=(1-\tilde{\gamma})\sum_{t=0}^{\infty}\left(\tilde{\gamma}\mathbf{P}_{\pi}^{(\lambda)}\right)^{t}\boldsymbol{\rho}_{0}=(1-\tilde{\gamma})\left(\mathbf{I}-\tilde{\gamma}\mathbf{P}_{\pi}^{(\lambda)}\right)^{-1}\boldsymbol{\rho}_{0}. \tag{53}\]
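The matrix form (53) is convenient to compute directly. The sketch below (the same kind of illustrative tabular setup as above, NumPy only) builds \(\mathbf{d}_{\pi}^{\lambda}\) and checks that it is a proper probability distribution and that it matches a truncated version of the series (50)-(52); the transpose in the code accounts for the row-to-column reading of \(\left(\mathbf{P}_{\pi}^{(\lambda)}\right)^{t}[s_{0},s]\).

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma, lam = 5, 0.9, 0.7

P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)           # P[s, s'] = P_pi(s'|s)
rho0 = rng.random(n_states)
rho0 /= rho0.sum()                          # initial state distribution

I = np.eye(n_states)
M = np.linalg.inv(I - gamma * lam * P)
P_lam = (1.0 - gamma * lam) * P @ M         # Eq. (42)
gamma_tilde = gamma * (1.0 - lam) / (1.0 - gamma * lam)

# Eq. (53): d_lam = (1 - gt) (I - gt * P_lam)^{-1} rho0.  P_lam[s0, s] maps a
# row index s0 to a column index s, so propagating a distribution written as a
# column vector uses the transpose.
gt = gamma_tilde
d_lam = (1.0 - gt) * np.linalg.solve(I - gt * P_lam.T, rho0)

# Cross-check against the series (50)-(52), truncated at a large horizon.
d_series = np.zeros(n_states)
Pt = np.eye(n_states)                       # (P_lam)^t, starting at t = 0
for t in range(2000):
    d_series += (1.0 - gt) * gt**t * (rho0 @ Pt)
    Pt = Pt @ P_lam

print(np.isclose(d_lam.sum(), 1.0), np.allclose(d_lam, d_series))  # True True
```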
### \(\lambda\)-Return Version of Objective
**Theorem 3.3**.: _The objective \(J(\pi)\) (28) can be rewritten as the following version:_
\[J(\pi)=\frac{1}{1-\tilde{\gamma}}\sum_{s\in\mathcal{S}}d_{\pi}^{\lambda}(s)R_ {\pi}^{(\lambda)}(s)=\frac{1}{1-\tilde{\gamma}}\mathbb{E}_{s\sim d_{\pi}^{ \lambda}(\cdot)}\left[R_{\pi}^{(\lambda)}(s)\right].\]
_Proof._ We unroll the expression in (44) repeatedly; then we have
\[V_{\pi}(s_{0})= R_{\pi}^{(\lambda)}(s_{0})+\tilde{\gamma}\sum_{s^{{}^{\prime}} \in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s_{1}=s^{{}^{\prime}}|s_{0}) \underbrace{\left(R_{\pi}^{(\lambda)}(s^{{}^{\prime}})+\tilde{\gamma}\sum_{s^ {{}^{\prime\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s_{2}=s^{{}^{ \prime\prime}}|s_{1}=s^{{}^{\prime}})V_{\pi}(s^{{}^{\prime\prime}})\right)}_{=V _{\pi}(s^{{}^{\prime}})}\]
\[= R_{\pi}^{(\lambda)}(s_{0})+\tilde{\gamma}\sum_{s^{{}^{\prime}}\in \mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s_{1}=s^{{}^{\prime}}|s_{0})R_{\pi}^{( \lambda)}(s^{{}^{\prime}})\] \[+\tilde{\gamma}^{2}\sum_{s^{{}^{\prime\prime}}\in\mathcal{S}} \underbrace{\left(\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}^{( \lambda)}(s_{1}=s^{{}^{\prime}}|s_{0})\mathbb{P}_{\pi}^{(\lambda)}(s_{2}=s^{ {}^{\prime\prime}}|s_{1}=s^{{}^{\prime}})\right)}_{\stackrel{{( \ref{eq:R_pi})}}{{=}}:\mathbb{P}_{\pi}^{(\lambda)}\left(s_{2}=s^{{}^{\prime \prime}}|s_{0}\right)}V_{\pi}(s^{{}^{\prime\prime}})\] \[= R_{\pi}^{(\lambda)}(s_{0})+\tilde{\gamma}\sum_{s\in\mathcal{S}} \mathbb{P}_{\pi}^{(\lambda)}(s_{1}=s|s_{0})R_{\pi}^{(\lambda)}(s)+\tilde{ \gamma}^{2}\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s_{2}=s|s_{0}) V_{\pi}(s)\] \[= R_{\pi}^{(\lambda)}(s_{0})+\tilde{\gamma}\sum_{s\in\mathcal{S}} \mathbb{P}_{\pi}^{(\lambda)}(s_{1}=s|s_{0})R_{\pi}^{(\lambda)}(s)\] \[+\tilde{\gamma}^{2}\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}^{( \lambda)}(s_{2}=s|s_{0})\left(R_{\pi}^{(\lambda)}(s)+\tilde{\gamma}\sum_{s^{{ }^{\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s_{3}=s^{{}^{\prime}}|s _{2}=s)V_{\pi}(s^{{}^{\prime}})\right)\] \[= R_{\pi}^{(\lambda)}(s_{0})+\tilde{\gamma}\sum_{s\in\mathcal{S}} \mathbb{P}_{\pi}^{(\lambda)}(s_{1}=s|s_{0})R_{\pi}^{(\lambda)}(s)+\tilde{ \gamma}^{2}\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s_{2}=s|s_{0})R _{\pi}^{(\lambda)}(s)\] \[+\tilde{\gamma}^{3}\sum_{s^{{}^{\prime}}\in\mathcal{S}} \underbrace{\left(\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s_{2}=s|s _{0})\mathbb{P}_{\pi}^{(\lambda)}(s_{3}=s^{{}^{\prime}}|s_{2}=s)\right)}_{ \stackrel{{(\ref{eq:R_pi})}}{{=}}:\mathbb{P}_{\pi}^{(\lambda)}(s_ {3}=s^{{}^{\prime}}|s_{0})}V_{\pi}(s^{{}^{\prime}})\] \[= R^{(\lambda)}(s_{0})+\tilde{\gamma}\sum_{s\in\mathcal{S}} \mathbb{P}_{\pi}^{(\lambda)}(s_{1}=s|s_{0})R_{\pi}^{(\lambda)}(s)+\tilde{ \gamma}^{2}\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s_{2}=s|s_{0})R _{\pi}^{(\lambda)}(s)\] \[+\tilde{\gamma}^{3}\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}^{( \lambda)}(s_{3}=s|s_{0})V_{\pi}(s)\] \[= \cdots\] \[= \sum_{s\in\mathcal{S}}\sum_{t=0}^{\infty}\tilde{\gamma}^{t} \mathbb{P}_{\pi}^{(\lambda)}(s_{t}=s|s_{0})R_{\pi}^{(\lambda)}(s) \tag{54}\] \[\stackrel{{(\ref{eq:R_pi})}}{{=}} \frac{1}{1-\tilde{\gamma}}\sum_{s\in\mathcal{S}}d_{\pi}^{s_{0}, \lambda}(s)R_{\pi}^{(\lambda)}(s). \tag{55}\]
According to (28) and (55), we have
\[J(\pi)= \sum_{s_{0}\in\mathcal{S}}\rho_{0}(s_{0})V_{\pi}(s_{0})\] \[\stackrel{{(\ref{eq:R_pi})}}{{=}} \frac{1}{1-\tilde{\gamma}}\sum_{s_{0}\in\mathcal{S}}\rho_{0}(s_{0}) \sum_{s\in\mathcal{S}}d_{\pi}^{s_{0},\lambda}(s)R_{\pi}^{(\lambda)}(s)\] \[= \frac{1}{1-\tilde{\gamma}}\sum_{s\in\mathcal{S}}\underbrace{\left( \sum_{s_{0}\in\mathcal{S}}\rho_{0}(s_{0})d_{\pi}^{s_{0},\lambda}(s)\right)}_{ =d_{\pi}^{\lambda}(s)}R_{\pi}^{(\lambda)}(s)\] \[= \frac{1}{1-\tilde{\gamma}}\sum_{s\in\mathcal{S}}d_{\pi}^{\lambda}( s)R_{\pi}^{(\lambda)}(s)\] \[= \frac{1}{1-\tilde{\gamma}}\mathbb{E}_{s\sim d_{\pi}^{\lambda}( \cdot)}\left[R_{\pi}^{(\lambda)}(s)\right]. \tag{56}\]
This concludes the proof of Theorem 3.3.
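Theorem 3.3 is easy to confirm numerically: on a small tabular example, \(J(\pi)=\sum_{s_{0}}\rho_{0}(s_{0})V_{\pi}(s_{0})\) computed directly agrees with \(\frac{1}{1-\tilde{\gamma}}\sum_{s}d_{\pi}^{\lambda}(s)R_{\pi}^{(\lambda)}(s)\). The sketch below is illustrative; only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, gamma, lam = 6, 0.9, 0.5

P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)            # P_pi
r = rng.random(n_states)                      # R_pi(s)
rho0 = np.full(n_states, 1.0 / n_states)      # initial distribution

I = np.eye(n_states)
v = np.linalg.solve(I - gamma * P, r)         # V_pi

# lambda-quantities (Eq. 42) via their closed forms.
M = np.linalg.inv(I - gamma * lam * P)
P_lam = (1.0 - gamma * lam) * P @ M
r_lam = M @ r
gt = gamma * (1.0 - lam) / (1.0 - gamma * lam)

# lambda-discounted state distribution, Eq. (53).
d_lam = (1.0 - gt) * np.linalg.solve(I - gt * P_lam.T, rho0)

J_direct = rho0 @ v                           # J(pi) = E_{s0 ~ rho0}[V_pi(s0)]
J_thm33 = (d_lam @ r_lam) / (1.0 - gt)        # Theorem 3.3
print(np.isclose(J_direct, J_thm33))          # True
```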
**Remark 3.4** (Unification).: _If \(\lambda\to 0\), then Theorem 3.3 reduces to Theorem 2.3._
## 4 A General Version of Objective
### Main Result
**Theorem 4.1** ([10]).: _For any function \(\varphi(\cdot):\mathcal{S}\to\mathbb{R}\), for any policy \(\pi\), and for any trajectory satisfying \(\tau=\{s_{t},a_{t},r_{t+1}\}_{t\geq 0}\sim\pi\), let_
\[\delta_{t}^{\varphi} =r(s_{t+1}|s_{t},a_{t})+\gamma\varphi(s_{t+1})-\varphi(s_{t}),\] \[\delta_{\pi,t}^{\varphi}(s) =\mathbb{E}_{s_{t}\sim\mathbb{P}_{\pi}(\cdot|s),a_{t}\sim\pi( \cdot|s_{t}),s_{t+1}\sim\mathbb{P}(\cdot|s_{t},a_{t})}\left[\delta_{t}^{ \varphi}\right],\]
_then, the objective \(J(\pi)\) (56) can be rewritten as the following version:_
\[J(\pi)= \mathbb{E}_{s_{0}\sim\rho_{0}(\cdot)}[\varphi(s_{0})]+\frac{1}{ 1-\tilde{\gamma}}\sum_{s\in\mathcal{S}}d_{\pi}^{\lambda}(s)\left(\sum_{t=0}^{ \infty}\gamma^{t}\lambda^{t}\delta_{\pi,t}^{\varphi}(s)\right) \tag{57}\] \[= \mathbb{E}_{s_{0}\sim\rho_{0}(\cdot)}[\varphi(s_{0})]+\frac{1}{ 1-\tilde{\gamma}}\mathbb{E}_{s\sim d_{\pi}^{\lambda}(\cdot)}\left[\sum_{t=0}^ {\infty}\gamma^{t}\lambda^{t}\delta_{\pi,t}^{\varphi}(s)\right]. \tag{58}\]
_We introduce a vector \(\mathbf{\delta}_{\pi,t}^{\varphi}\in\mathbb{R}^{|\mathcal{S}|}\) whose components are: for any \(s\in\mathcal{S}\),_
\[\mathbf{\delta}_{\pi,t}^{\varphi}[s]=:\delta_{\pi,t}^{\varphi}(s).\]
_Then, we rewrite the objective as the following vector version_
\[J(\pi)=\mathbb{E}_{s_{0}\sim\rho_{0}(\cdot)}[\varphi(s_{0})]+\frac{1}{1- \tilde{\gamma}}\sum_{t=0}^{\infty}\gamma^{t}\lambda^{t}\langle\mathbf{d}_{\pi }^{\lambda},\mathbf{\delta}_{\pi,t}^{\varphi}\rangle. \tag{59}\]
Proof.: We show it in the following three steps.
**Step 1: Rewrite the objective \(J(\pi)\) in Eq.(56).**
We rewrite the discounted distribution \(\mathbf{d}_{\pi}^{\lambda}\) (53) as follows,
\[\mathbf{\rho}_{0}-\frac{1}{1-\tilde{\gamma}}\mathbf{d}_{\pi}^{\lambda}+\frac{ \tilde{\gamma}}{1-\tilde{\gamma}}\mathbf{P}_{\pi}^{(\lambda)}\mathbf{d}_{\pi }^{\lambda}=\mathbf{0}. \tag{60}\]
Let \(\varphi(\cdot)\) be a real-valued function defined on the state space \(\mathcal{S}\), i.e., \(\varphi:\mathcal{S}\to\mathbb{R}\). Then we define a vector \(\mathbf{\phi}\in\mathbb{R}^{|\mathcal{S}|}\) that collects all the values \(\{\varphi(s)\}_{s\in\mathcal{S}}\); its components are
\[\mathbf{\phi}[s]=\varphi(s),\ \ s\in\mathcal{S}.\]
Now, taking the inner product of the vector \(\mathbf{\phi}\) with (60), we have
\[0 =\left\langle\mathbf{\rho}_{0}-\frac{1}{1-\tilde{\gamma}}\mathbf{d}_{ \pi}^{\lambda}+\frac{\tilde{\gamma}}{1-\tilde{\gamma}}\mathbf{P}_{\pi}^{( \lambda)}\mathbf{d}_{\pi}^{\lambda},\mathbf{\phi}\right\rangle\] \[=\left\langle\mathbf{\rho}_{0},\mathbf{\phi}\right\rangle-\frac{1}{1- \tilde{\gamma}}\left\langle\mathbf{d}_{\pi}^{\lambda},\mathbf{\phi}\right\rangle+ \frac{\tilde{\gamma}}{1-\tilde{\gamma}}\left\langle\mathbf{P}_{\pi}^{(\lambda )}\mathbf{d}_{\pi}^{\lambda},\mathbf{\phi}\right\rangle. \tag{61}\]
We express the first term \(\langle\mathbf{\rho}_{0},\mathbf{\phi}\rangle\) of (61) as follows,
\[\langle\mathbf{\rho}_{0},\mathbf{\phi}\rangle=\sum_{s\in\mathcal{S}}\rho_{0}(s)\varphi(s )=\mathbb{E}_{s\sim\rho_{0}(\cdot)}[\varphi(s)]. \tag{62}\]
We express the second term \(\langle\mathbf{d}_{\pi}^{\lambda},\mathbf{\phi}\rangle\) of (61) as follows,
\[-\frac{1}{1-\tilde{\gamma}}\langle\mathbf{d}_{\pi}^{\lambda},\mathbf{\phi}\rangle=- \frac{1}{1-\tilde{\gamma}}\sum_{s\in\mathcal{S}}d_{\pi}^{\lambda}(s)\varphi(s )=-\frac{1}{1-\tilde{\gamma}}\mathbb{E}_{s\sim d_{\pi}^{\lambda}(\cdot)}[ \varphi(s)]. \tag{63}\]
We express the third term \(\langle\tilde{\gamma}\mathbf{P}_{\pi}^{(\lambda)}\mathbf{d}_{\pi}^{\lambda}, \mathbf{\phi}\rangle\) of (61) as follows,
\[\frac{\tilde{\gamma}}{1-\tilde{\gamma}}\langle\mathbf{P}_{\pi}^{ (\lambda)}\mathbf{d}_{\pi}^{\lambda},\mathbf{\phi}\rangle= \frac{\tilde{\gamma}}{1-\tilde{\gamma}}\sum_{s^{{}^{\prime}}\in \mathcal{S}}\left(\mathbf{P}_{\pi}^{(\lambda)}\mathbf{d}_{\pi}^{\lambda}\right) [s^{{}^{\prime}}]\varphi(s^{{}^{\prime}}) \tag{64}\] \[= \frac{\tilde{\gamma}}{1-\tilde{\gamma}}\sum_{s^{{}^{\prime}}\in \mathcal{S}}\left(\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s^{{}^{ \prime}}|s)d_{\pi}^{\lambda}(s)\right)\varphi(s^{{}^{\prime}}). \tag{65}\]
According to Theorem 3.3, putting the results (56) and (61) together, we have
\[J(\pi)=\mathbb{E}_{s_{0}\sim\rho_{0}(\cdot)}[\varphi(s_{0})]+\frac{1}{1-\tilde{\gamma}}\sum_{s\in\mathcal{S}}d_{\pi}^{\lambda}(s)R_{\pi}^{(\lambda)}(s)-\frac{1}{1-\tilde{\gamma}}\sum_{s\in\mathcal{S}}d_{\pi}^{\lambda}(s)\varphi(s)+\frac{\tilde{\gamma}}{1-\tilde{\gamma}}\sum_{s^{{}^{\prime}}\in\mathcal{S}}\sum_{s\in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s^{{}^{\prime}}|s)d_{\pi}^{\lambda}(s)\varphi(s^{{}^{\prime}}), \tag{66}\]

where we add the zero quantity (61), written via (62)-(65), to the objective (56). Collecting terms,

\[J(\pi)=\mathbb{E}_{s_{0}\sim\rho_{0}(\cdot)}[\varphi(s_{0})]+\frac{1}{1-\tilde{\gamma}}\sum_{s\in\mathcal{S}}d_{\pi}^{\lambda}(s)\left(R_{\pi}^{(\lambda)}(s)+\tilde{\gamma}\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s^{{}^{\prime}}|s)\varphi(s^{{}^{\prime}})-\varphi(s)\right). \tag{67}\]

**Step 2: Rewrite the second term of Eq.(67).**
We consider the second term \(\tilde{\gamma}\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s^{{}^{\prime}}|s)\varphi(s^{{}^{\prime}})-\varphi(s)\) of (67) as follows,
\[\tilde{\gamma}\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s^{{}^{\prime}}|s)\varphi(s^{{}^{\prime}})-\varphi(s)=\gamma(1-\lambda)\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}(s_{t+1}=s^{{}^{\prime}}|s)\varphi(s^{{}^{\prime}})-\varphi(s) \tag{74}\]
\[=\gamma(1-\lambda)\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\sum_{s_{t}\in\mathcal{S}}\mathbb{P}_{\pi}(s_{t}|s)\sum_{s_{t+1}\in\mathcal{S}}\mathbb{P}_{\pi}(s_{t+1}|s_{t})\varphi(s_{t+1})-\varphi(s) \tag{75}\]
\[=\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\sum_{s_{t}\in\mathcal{S}}\mathbb{P}_{\pi}(s_{t}|s)\left(\gamma\sum_{s_{t+1}\in\mathcal{S}}\mathbb{P}_{\pi}(s_{t+1}|s_{t})\varphi(s_{t+1})-\varphi(s_{t})\right), \tag{76}\]
where (74) uses (47) together with \(\tilde{\gamma}(1-\gamma\lambda)=\gamma(1-\lambda)\), and (76) follows by splitting \(\gamma(1-\lambda)=\gamma-\gamma\lambda\), reindexing the sum over \(t\), and using \(\sum_{s_{0}}\mathbb{P}_{\pi}(s_{0}|s)\varphi(s_{0})=\varphi(s)\). Adding \(R_{\pi}^{(\lambda)}(s)\) in the form of (49), and expanding \(R_{\pi}(s_{t})=\sum_{a_{t}\in\mathcal{A}}\pi(a_{t}|s_{t})\sum_{s_{t+1}\in\mathcal{S}}\mathbb{P}(s_{t+1}|s_{t},a_{t})r(s_{t+1}|s_{t},a_{t})\), we obtain
\[R_{\pi}^{(\lambda)}(s)+\tilde{\gamma}\sum_{s^{{}^{\prime}}\in\mathcal{S}}\mathbb{P}_{\pi}^{(\lambda)}(s^{{}^{\prime}}|s)\varphi(s^{{}^{\prime}})-\varphi(s)\]
\[=\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\sum_{s_{t}\in\mathcal{S}}\mathbb{P}_{\pi}(s_{t}|s)\left(\sum_{a_{t}\in\mathcal{A}}\pi(a_{t}|s_{t})\sum_{s_{t+1}\in\mathcal{S}}\mathbb{P}(s_{t+1}|s_{t},a_{t})r(s_{t+1}|s_{t},a_{t})+\gamma\underbrace{\sum_{a_{t}\in\mathcal{A}}\pi(a_{t}|s_{t})\sum_{s_{t+1}\in\mathcal{S}}\mathbb{P}(s_{t+1}|s_{t},a_{t})}_{=\mathbb{P}_{\pi}(s_{t+1}|s_{t})}\varphi(s_{t+1})-\varphi(s_{t})\right)\]
\[=\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\sum_{s_{t}\in\mathcal{S}}\mathbb{P}_{\pi}(s_{t}|s)\sum_{a_{t}\in\mathcal{A}}\pi(a_{t}|s_{t})\sum_{s_{t+1}\in\mathcal{S}}\mathbb{P}(s_{t+1}|s_{t},a_{t})\left(r(s_{t+1}|s_{t},a_{t})+\gamma\varphi(s_{t+1})-\varphi(s_{t})\right) \tag{77}\]
\[=\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\mathbb{E}_{s_{t}\sim\mathbb{P}_{\pi}(\cdot|s),a_{t}\sim\pi(\cdot|s_{t}),s_{t+1}\sim\mathbb{P}(\cdot|s_{t},a_{t})}\left[r(s_{t+1}|s_{t},a_{t})+\gamma\varphi(s_{t+1})-\varphi(s_{t})\right]=\sum_{t=0}^{\infty}(\gamma\lambda)^{t}\delta_{\pi,t}^{\varphi}(s), \tag{78}\]
where the step from Eq.(74) to Eq.(75) holds since:
\[\mathbb{P}_{\pi}(s_{t+1}=s^{{}^{\prime}}|s)=\sum_{s_{t}\in\mathcal{S}}\mathbb{P}_{\pi}(s_{t}|s)\mathbb{P}_{\pi}(s_{t+1}=s^{{}^{\prime}}|s_{t}),\]
which is the Chapman-Kolmogorov equation for the transition matrix \(\mathbf{P}_{\pi}\) (the analogue of (48)).

**Step 3: Put the results together.**

Substituting (78) into (67), we obtain
\[J(\pi)=\mathbb{E}_{s_{0}\sim\rho_{0}(\cdot)}[\varphi(s_{0})]+\frac{1}{1-\tilde{\gamma}}\sum_{s\in\mathcal{S}}d_{\pi}^{\lambda}(s)\left(\sum_{t=0}^{\infty}\gamma^{t}\lambda^{t}\delta_{\pi,t}^{\varphi}(s)\right), \tag{79}\]
which is exactly (57); the expectation form (58) and the vector form (59) follow from the definition of \(\mathbf{\delta}_{\pi,t}^{\varphi}\). This completes the proof of Theorem 4.1.
### General Discussion
The objective shown in Theorem 4.1
\[J(\pi)= \mathbb{E}_{s_{0}\sim\rho_{0}(\cdot)}[\varphi(s_{0})]+\frac{1}{1- \tilde{\gamma}}\sum_{s\in\mathcal{S}}d_{\pi}^{\lambda}(s)\left(\sum_{t=0}^{ \infty}\gamma^{t}\lambda^{t}\delta_{\pi,t}^{\varphi}(s)\right) \tag{80}\] \[= \mathbb{E}_{s_{0}\sim\rho_{0}(\cdot)}[\varphi(s_{0})]+\frac{1}{1- \tilde{\gamma}}\mathbb{E}_{s\sim d_{\pi}^{\lambda}(\cdot)}\left[\sum_{t=0}^{ \infty}\gamma^{t}\lambda^{t}\delta_{\pi,t}^{\varphi}(s)\right] \tag{81}\]
unifies the previous results in the following way:
* if \(\varphi(s)=V_{\pi}(s)\), then Theorem 4.1 implies the objective shown in Theorem 3.3, i.e., \[J(\pi)=\frac{1}{1-\tilde{\gamma}}\sum_{s\in\mathcal{S}}d_{\pi}^{\lambda}(s)R_ {\pi}^{(\lambda)}(s)=\frac{1}{1-\tilde{\gamma}}\mathbb{E}_{s\sim d_{\pi}^{ \lambda}(\cdot)}\left[R_{\pi}^{(\lambda)}(s)\right];\] (82)
* if \(\varphi(s)=V_{\pi}(s)\) and \(\lambda\to 0\), then Theorem 4.1 implies the objective shown in Theorem 2.3, i.e., \[J(\pi)= \frac{1}{1-\gamma}\sum_{s_{0}\in\mathcal{S}}\rho_{0}(s_{0})\sum _{s\in\mathcal{S}}d_{\pi}^{s_{0}}(s)R_{\pi}(s)\] (83) \[= \frac{1}{1-\gamma}\mathbb{E}_{s\sim d_{\pi}^{\rho_{0}}(\cdot),a \sim\pi(\cdot|s),s^{{}^{\prime}}\sim\mathbb{P}(\cdot|s,a)}\left[r(s^{{}^{ \prime}}|s,a)\right].\] (84)
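The general statement can likewise be checked numerically for an arbitrary \(\varphi\). In the illustrative tabular sketch below (NumPy only), the per-state quantity \(\sum_{t}(\gamma\lambda)^{t}\delta_{\pi,t}^{\varphi}(s)\) is evaluated in closed form as \((\mathbf{I}-\gamma\lambda\mathbf{P}_{\pi})^{-1}(\mathbf{r}_{\pi}+\gamma\mathbf{P}_{\pi}\mathbf{\phi}-\mathbf{\phi})\), and Eq. (57)/(59) is compared against the direct value \(J(\pi)=\boldsymbol{\rho}_{0}^{\top}\mathbf{v}_{\pi}\).

```python
import numpy as np

rng = np.random.default_rng(2)
n_states, gamma, lam = 6, 0.9, 0.8

P = rng.random((n_states, n_states)); P /= P.sum(axis=1, keepdims=True)
r = rng.random(n_states)
rho0 = rng.random(n_states); rho0 /= rho0.sum()
phi = rng.standard_normal(n_states)           # an arbitrary function phi: S -> R

I = np.eye(n_states)
v = np.linalg.solve(I - gamma * P, r)         # V_pi
J_direct = rho0 @ v

# lambda-quantities and d_lam as in the earlier sketches.
M = np.linalg.inv(I - gamma * lam * P)
P_lam = (1.0 - gamma * lam) * P @ M
gt = gamma * (1.0 - lam) / (1.0 - gamma * lam)
d_lam = (1.0 - gt) * np.linalg.solve(I - gt * P_lam.T, rho0)

# delta^phi_{pi,t}(s) = E_{s_t ~ P_pi(.|s)}[ R_pi(s_t) + gamma*(P phi)(s_t) - phi(s_t) ],
# so the vector of deltas at lag t is P^t @ (r + gamma*P@phi - phi); summing the
# geometric series over t gives (I - gamma*lam*P)^{-1} @ (r + gamma*P@phi - phi).
td = r + gamma * (P @ phi) - phi
delta_sum = M @ td                            # sum_t (gamma*lam)^t * delta_t, per state

J_thm41 = rho0 @ phi + (d_lam @ delta_sum) / (1.0 - gt)   # Eq. (57)/(59)
print(np.isclose(J_direct, J_thm41))          # True
```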
### Application
In this section, we apply Theorem 4.1 to recover the policy optimization problem shown in (Schulman et al., 2016, Section 6.1). This policy optimization formally establishes an optimization problem with respect to GAE (generalized advantage estimation), which is widely used in modern reinforcement learning.
**Proposition 4.2**.: ([Yang et al., 2022, Proposition 1]) _For any two policies \(\pi\) and \(\pi^{{}^{\prime}}\), let_
\[\epsilon_{\pi,t}^{\varphi}=:\max_{s_{t}\in\mathcal{S}}\left\{\mathbb{E}_{a_{t}\sim\pi(\cdot|s_{t}),s_{t+1}\sim\mathbb{P}(\cdot|s_{t},a_{t})}[|\delta_{t}^{\varphi}|]\right\},\] \[\epsilon_{\pi}^{V}(\pi^{{}^{\prime}})=:\sup_{t\in\mathbb{N}^{+}}\{\epsilon_{\pi,t}^{\varphi}:\varphi=V_{\pi^{{}^{\prime}}}\},\] \[D_{\mathrm{TV}}(\pi,\pi^{{}^{\prime}})[s]=\frac{1}{2}\sum_{a\in\mathcal{A}}\left|\pi(a|s)-\pi^{{}^{\prime}}(a|s)\right|,\]
_then_
\[J(\pi)-J(\pi^{{}^{\prime}})\geq\frac{1}{1-\tilde{\gamma}} \mathbb{E}_{s\sim d_{\pi^{{}^{\prime}}}^{\lambda}(\cdot),a\sim\pi(\cdot|s)} \Bigg{[}A_{\pi^{{}^{\prime}}}^{\texttt{GAE}(\gamma,\lambda)}(s,a)\\ -\frac{2\tilde{\gamma}\left(\gamma\lambda(|\mathcal{S}|-1)+1 \right)\epsilon_{\pi}^{V}(\pi^{{}^{\prime}})}{(1-\tilde{\gamma})(1-\gamma \lambda)}D_{\mathrm{TV}}(\pi,\pi^{{}^{\prime}})[s]\Bigg{]}, \tag{85}\]
_where we consider the pair \((s,a)\) starting at time \(t\), i.e., \((s,a)=(s_{t},a_{t})\), and_
\[A_{\pi^{{}^{\prime}}}^{\texttt{GAE}(\gamma,\lambda)}(s_{t},a_{t})=\sum_{\ell=0}^{\infty}(\gamma\lambda)^{\ell}\mathbb{E}_{s_{t+\ell+1}}\left[\delta_{t+\ell}^{V_{\pi^{{}^{\prime}}}}\right], \tag{86}\]
_where \(\delta_{t}^{V}=r_{t+1}+\gamma V(s_{t+1})-V(s_{t})\) is the TD error, and \(V(\cdot)\) is an estimator of the value function._
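In practice \(A^{\texttt{GAE}(\gamma,\lambda)}\) is estimated from a sampled trajectory rather than evaluated exactly; the standard implementation applies the backward recursion \(\hat{A}_{t}=\delta_{t}+\gamma\lambda\hat{A}_{t+1}\) to empirical TD errors (Schulman et al., 2016). A minimal sketch, assuming a finite trajectory with a bootstrap value for the final state:

```python
import numpy as np

def gae_advantages(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Generalized advantage estimates for one trajectory.

    rewards[t] = r_{t+1}, values[t] = V(s_t) for t = 0..T-1, and
    last_value = V(s_T) is used to bootstrap the final TD error.
    Returns A_hat with A_hat[t] = sum_l (gamma*lam)^l * delta_{t+l}.
    """
    rewards = np.asarray(rewards, dtype=float)
    values = np.append(np.asarray(values, dtype=float), last_value)
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # TD error
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

# Tiny example with made-up numbers.
print(gae_advantages(rewards=[1.0, 0.0, 2.0], values=[0.5, 0.4, 0.9], last_value=0.2))
```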
Furthermore, we know,
\[\mathbb{E}_{s\sim d^{\lambda}_{\pi^{{}^{\prime}}}(\cdot)}\left[D_{ \mathrm{TV}}(\pi,\pi^{{}^{\prime}})[s]\right]\leq \mathbb{E}_{s\sim d^{\lambda}_{\pi^{{}^{\prime}}}(\cdot)}\left[\sqrt{ \frac{1}{2}\mathrm{KL}(\pi,\pi^{{}^{\prime}})[s]}\right]\leq\sqrt{\frac{1}{2} \mathbb{E}_{s\sim d^{\lambda}_{\pi^{{}^{\prime}}}(\cdot)}\left[\mathrm{KL}(\pi,\pi^{{}^{\prime}})[s]\right]}, \tag{87}\]
where \(\mathrm{KL}(\cdot,\cdot)\) is the KL-divergence, and
\[\mathrm{KL}(\pi,\pi^{{}^{\prime}})[s]=\mathrm{KL}(\pi(\cdot|s),\pi^{{}^{\prime }}(\cdot|s));\]
the first inequality follows from Pinsker's inequality (Csiszar and Korner, 2011) and the second follows from Jensen's inequality. The bound shown in (85) therefore still holds if we make the following substitution:
\[\mathbb{E}_{s\sim d^{\lambda}_{\pi^{{}^{\prime}}}(\cdot)}\left[D_{ \mathrm{TV}}(\pi,\pi^{{}^{\prime}})[s]\right]\leftarrow\sqrt{\frac{1}{2} \mathbb{E}_{s\sim d^{\lambda}_{\pi^{{}^{\prime}}}(\cdot)}\left[\mathrm{KL}( \pi,\pi^{{}^{\prime}})[s]\right]}.\]
Finally, according to trust region methods, we obtain the following policy optimization problem,
\[\max_{\pi\in\Pi} \mathbb{E}_{s\sim d^{\lambda}_{\pi_{k}}(\cdot),a\sim\pi(\cdot|s)} \left[A^{\mathsf{GAE}(\gamma,\lambda)}_{\pi_{k}}(s,a)\right],\] (88) s.t., \[\mathbb{E}_{s\sim d^{\lambda}_{\pi_{k}}(\cdot)}\left[\mathrm{KL}(\pi,\pi_ {k})[s]\right]\leq\delta, \tag{89}\]
which recovers (Schulman et al., 2016, Section 6.1) and provides a theoretical foundation for policy optimization with GAE.
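For concreteness, the sketch below shows how the objective (88) and the constraint (89) are typically estimated from a batch of sampled states, with the inner expectation over \(a\sim\pi(\cdot|s)\) importance-weighted by the behaviour policy \(\pi_{k}\). The tabular policies, advantages, and threshold used here are made-up stand-ins, not part of any particular implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_actions, batch = 4, 256

# Made-up per-state action distributions for pi_k (behaviour) and a candidate pi,
# plus GAE advantages A_{pi_k}(s, a) for the sampled states.
pi_k = rng.random((batch, n_actions)); pi_k /= pi_k.sum(axis=1, keepdims=True)
pi   = rng.random((batch, n_actions)); pi   /= pi.sum(axis=1, keepdims=True)
adv  = rng.standard_normal((batch, n_actions))
actions = np.array([rng.choice(n_actions, p=p) for p in pi_k])   # a ~ pi_k(.|s)

rows = np.arange(batch)
ratio = pi[rows, actions] / pi_k[rows, actions]       # importance weight pi/pi_k
surrogate = np.mean(ratio * adv[rows, actions])       # sample estimate of (88)

kl = np.mean(np.sum(pi * np.log(pi / pi_k), axis=1))  # E_s[ KL(pi, pi_k)[s] ], Eq. (89)
delta = 0.01
print(f"surrogate = {surrogate:.4f}, KL = {kl:.4f}, feasible = {kl <= delta}")
```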
## 5 Bibliographical Remarks
The objective presented in Theorem 2.3 has been widely used in the reinforcement learning literature (e.g., (Silver et al., 2014)) and is a fundamental way to understand the stochastic structure of the objective. Theorem 3.3 is parallel to Theorem 2.3, but considers the \(\lambda\)-version of the dynamics. The \(\lambda\)-return plays a central role in the proof of Theorem 3.3; the \(\lambda\)-return and its error-reduction properties were introduced by (Watkins, 1989). The properties of the \(\lambda\)-Bellman operator are well-documented (e.g., (Bertsekas, 2022)). Theorem 4.1 is a \(\mathrm{TD}(\lambda)\)-style version of the objective; \(\mathrm{TD}(\lambda)\) was introduced by Sutton (1984, 1988). Off-line \(\mathrm{TD}(\lambda)\) shares a natural structure with GAE (generalized advantage estimation) (Schulman et al., 2016), and Theorem 4.1 provides a possible way to formulate GAE; this idea has been utilized by (Yang et al., 2022).
## 6 Conclusion
This lecture presents a general perspective on RL objectives through three versions of the objective. The first is the standard definition; we then extend it to the \(\lambda\)-return version; and the final version unifies the previous two. The last version provides a theoretical foundation for policy optimization with GAE.
2310.19912 | Collisional Shaping of Nuclear Star Cluster Density Profiles | A supermassive black hole (SMBH) surrounded by a dense, nuclear star cluster
resides at the center of many galaxies. In this dense environment,
high-velocity collisions frequently occur between stars. About $10 \%$ of the
stars within the Milky Way's nuclear star cluster collide with other stars
before evolving off the main-sequence. Collisions preferentially affect
tightly-bound stars, which orbit most quickly and pass through regions of the
highest stellar density. Over time, collisions therefore shape the bulk
properties of the nuclear star cluster. We examine the effect of collisions on
the cluster's stellar density profile. We show that collisions produce a
turning point in the density profile which can be determined analytically.
Varying the initial density profile and collision model, we characterize the
evolution of the stellar density profile over $10$ Gyr. We find that old,
initially cuspy populations exhibit a break around $0.1$ pc in their density
profile, while shallow density profiles retain their initial shape outside of
$0.01$ pc. The initial density profile is always preserved outside of a few
tenths of parsec irrespective of initial conditions. Lastly, we comment on the
implications of collisions for the luminosity and color of stars in the
collisionally-shaped inner cluster. | Sanaea C. Rose, Morgan MacLeod | 2023-10-30T18:25:38Z | http://arxiv.org/abs/2310.19912v1 | # Collisional Shaping of Nuclear Star Cluster Density Profiles
###### Abstract
A supermassive black hole (SMBH) surrounded by a dense, nuclear star cluster resides at the center of many galaxies. In this dense environment, high-velocity collisions frequently occur between stars. About 10% of the stars within the Milky Way's nuclear star cluster collide with other stars before evolving off the main-sequence. Collisions preferentially affect tightly-bound stars, which orbit most quickly and pass through regions of the highest stellar density. Over time, collisions therefore shape the bulk properties of the nuclear star cluster. We examine the effect of collisions on the cluster's stellar density profile. We show that collisions produce a turning point in the density profile which can be determined analytically. Varying the initial density profile and collision model, we characterize the evolution of the stellar density profile over 10 Gyr. We find that old, initially cuspy populations exhibit a break around 0.1 pc in their density profile, while shallow density profiles retain their initial shape outside of 0.01 pc. The initial density profile is always preserved outside of a few tenths of parsec irrespective of initial conditions. Lastly, we comment on the implications of collisions for the luminosity and color of stars in the collisionly-shaped inner cluster.
Stellar dynamics; Galactic center; Star clusters; Stellar mergers
## 1 Introduction
Most galaxies harbor a supermassive black hole at their center (e.g. Ferrarese and Ford, 2005; Kormendy and Ho, 2013). A dense cluster of stars and stellar remnants surrounds these SMBHs (e.g., Morris, 1993; Schodel et al., 2003; Ghez et al., 2005, 2008; Gillessen et al., 2009, 2017; Neumayer et al., 2020). The proximity of the Milky Way's galactic nucleus (GN) presents a unique opportunity to study the populations of stars and compact objects surrounding a SMBH. The stellar density profile in particular can tell us about the dynamical history of the GN (e.g., Baumgardt et al., 2006; Lockmann and Baumgardt, 2008; Merritt, 2010; Bar-Or et al., 2013; Mastrobuono-Battisti et al., 2014).
Within the GN, stars trace orbits dominated by the gravity of the SMBH (e.g., Ghez et al., 2008; Genzel et al., 2010). A power law of the form \(\rho\propto r^{-\alpha}\) is often used to describe the stellar mass density as a function of distance \(r\) from the SMBH. In this dense environment, stars also experience weak gravitational interactions with one another. These interactions allow the stars to exchange energy over time. This process, called relaxation, redistributes the stellar orbits onto an equilibrium density profile. Theoretical models predict that an old, relaxed population should have a cuspy density profile with slope \(\alpha\) lying between 1 and 1.75, depending on what is assumed about the star formation history and relative abundances of stars and compact objects (e.g., Bahcall and Wolf, 1976; Aharon and Perets, 2016; Linial and Sari, 2022).
Contrary to expectations, however, the old stellar population in the GN, traced using bright evolved giants, does not appear to have a cuspy density profile within \(\sim 0.1\) pc of the SMBH (e.g., Genzel et al., 1996; Bailey and Davies, 1999; Buchholz et al., 2009; Do et al., 2009, 2013, 2013, 2014, 2018; Baumgardt et al., 2018; Habibi et al., 2019). These observations have prompted several proposed mechanisms to preferentially destroy red giants at these radii (e.g., Davies et al., 1998; Alexander, 1999; Bailey and Davies, 1999; Dale et al., 2009; Amaro-Seoane and Chen, 2014; Zajacek et al., 2020; Amaro-Seoane et al., 2020). It is therefore possible that the red giant density profile does not trace the distributions of the main
sequence stars or compact object populations. Recent observational campaigns suggest that the main-sequence stars lie on a cusp, albeit a shallower one with index \(\alpha\) between 1.1 and 1.4 (Gallego-Cano et al., 2018; Schodel et al., 2018). However, observations of the stellar cusp are challenging (e.g., Schodel et al., 2020), and there may be connections between the main-sequence stellar density profile and the products of their evolution (e.g., Rose et al., 2023). Specifically, the core-like profile of the bright evolved stars internal to \(\sim 0.1\) pc may have a dynamical origin (e.g., Merritt, 2010; Rose et al., 2023).
We explore the potential of one such dynamical process, direct collisions between stars, to shape the stellar density profile. In the dense environment of a nuclear star cluster, direct collisions between objects become possible (e.g., Dale and Davies, 2006; Dale et al., 2009; Rubin and Loeb, 2011; Mastrobuono-Battisti et al., 2021; Rose et al., 2020, 2023). These collisions have been studied in a variety of contexts in the literature, including AGN variability (e.g., Murphy et al., 1991; Torricelli-Ciamponi et al., 2000; Freitag and Benz, 2002), electromagnetic and gravitational wave signals (e.g., Dale and Davies, 2006; Balberg et al., 2013; Amaro Seoane, 2023), and the presence of young-seeming, bright stars (e.g., Sills et al., 1997, 2001; Lombardi et al., 2002; Rose et al., 2023). Several theoretical and computational studies have shown that destructive collisions can deplete the supply of stars near the SMBH (e.g., Duncan and Shapiro, 1983; Murphy et al., 1991; David et al., 1987, 1987; Rauch, 1999; Freitag and Benz, 2002; Rose et al., 2023). The frequency and outcomes of direct collisions depends on distance from the SMBH (e.g., Lai et al., 1993; Rauch, 1999; Rubin and Loeb, 2011; Hu and Loeb, 2021; Rose et al., 2023). This process may therefore have distinct effects on the stellar density profile (e.g., Rauch, 1999; Freitag and Benz, 2002).
We leverage a toy model developed by Rose et al. (2022, 2023) to examine the effects of stellar collisions on the density profile of a GN using the Milky Way's as an example (e.g., Ghez et al., 2005). We vary the initial density profile and the collision model to build a comprehensive picture of possible evolutions of the density profile, the circumstances under which a break in the profile arises, and regions of the nuclear star cluster in which the original profile is preserved.
This paper is organized as follows. In Section 2, we provide an overview of our model and the key dynamical processes considered. Section 3 provides an analytic framework to understand collisional shaping of the density profile and to aid in the interpretation of our simulated results. Section 4.2 presents and discusses the evolution of the density profile for different simulations, while Section 4.3 discusses implications for the luminosity profile. We conclude in Section 5.
## 2 A Nuclear Star Cluster Model
This section describes our semi-analytic approach to modeling the stellar surroundings of a SMBH. Our model adopts a simplified description of the stellar density profile and dynamics around the SMBH, but includes the effects of star-star collisions. We adopt conditions representative of the Milky Way's GN, but these models can be easily adapted to other GN.
### 2.1 Nuclear Star Cluster Properties
In our model nuclear star cluster, the stellar mass density is described by a power law:
\[\rho(r_{\bullet})=\rho_{0}\left(\frac{r_{\bullet}}{r_{0}}\right)^{-\alpha}\, \tag{1}\]
where \(\alpha\) is the slope and \(r_{\bullet}\) denotes distance from the SMBH. The density profile within the sphere of influence of the SMBH is normalized using \(\rho_{0}=1.35\times 10^{6}\,M_{\odot}/\mathrm{pc}^{3}\) at \(r_{0}=0.25\,\mathrm{pc}\), based on observations of this region (Genzel et al., 2010). In our simulations, we test three values of \(\alpha\), 1.25, 1.5, and 1.75, to encapsulate the range of theoretical predictions and observed density profiles.
The velocity dispersion within the cluster is also a function of distance from the SMBH:
\[\sigma(r_{\bullet})=\sqrt{\frac{GM_{\bullet}}{r_{\bullet}(1+\alpha)}}, \tag{2}\]
where \(\alpha\) is the slope of the density profile and \(M_{\bullet}\) is the mass of the SMBH (Alexander, 1999; Alexander and Pfuhl, 2014). We set \(M_{\bullet}\) equal to \(4\times 10^{6}\) M\({}_{\odot}\), the mass of the Milky Way's SMBH (e.g., Ghez et al., 2003). For a uniform mass cluster of 1 M\({}_{\odot}\) stars, the number density \(n\) is simply \(\frac{\rho(r_{\bullet})}{1M_{\odot}}\). Together, the density and velocity dispersion in the nuclear star cluster determine the frequency of interactions between stars and set key timescales for various dynamical processes.
### 2.2 Overview of Dynamical Processes
#### 2.2.1 Collision Rate
In dense star clusters, direct collisions between objects become possible (e.g., Dale and Davies, 2006; Dale et al., 2009; Rubin and Loeb, 2011; Mastrobuono-Battisti et al., 2021; Rose et al., 2022, 2023). The timescale for a direct collision can be estimated as, \(t_{\rm coll}^{-1}=n\sigma A\), where \(A\) is the cross-section of interaction, \(n\) is the number density, and \(\sigma\) is the velocity dispersion, here a measure of the relative velocity between objects. For a direct collision, the cross-section of interaction \(A\) is the physical
cross-section enhanced by gravitational focusing. The eccentricity of a star's orbit about the SMBH, \(e_{\bullet}\), can also affect the collision timescale; more eccentric orbits have shorter collision timescales compared to circular orbits with the same semimajor axes (e.g., Rose et al., 2020). Including the eccentricity dependence, the collision timescale can be written as:
\[t_{\rm coll}^{-1} = \pi n(a_{\bullet})\sigma(a_{\bullet}) \tag{3}\] \[\times \left(f_{1}(e_{\bullet})r_{c}^{2}+f_{2}(e_{\bullet})r_{c}\frac{2G (M_{\odot}+M)}{\sigma(a_{\bullet})^{2}}\right)\.\]
where \(f_{1}(e_{\bullet})\) and \(f_{2}(e_{\bullet})\) are given by Rose et al. (2020) equations 20 and 21, \(G\) is the gravitational constant, \(a_{\bullet}\) is the semimajor axis of the star's orbit, and \(r_{c}\) is the sum of the radii of the colliding stars. We plot the collision timescale in red in Figure 1 for the range of density profiles considered in this study. \(\alpha=1.75\) is shown in the solid line, while \(\alpha=1.25\) is shown in the dashed line. This timescale assumes a uniform population of solar mass stars. Therefore, \(r_{c}=2R_{\odot}\).
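To make the scalings concrete, the short sketch below evaluates Eq. (3) for equal-mass \(1\,M_{\odot}\) stars with \(r_{c}=2R_{\odot}\), in the simplifying limit \(f_{1}=f_{2}=1\) (the full eccentricity factors of Rose et al. 2020 are not reproduced here), using the density and velocity-dispersion profiles of Eqs. (1) and (2).

```python
import numpy as np

# Physical constants (SI).
G, M_sun, R_sun = 6.674e-11, 1.989e30, 6.957e8
pc, Gyr = 3.086e16, 3.156e16

M_bh = 4e6 * M_sun                 # Milky Way SMBH mass
rho0, r0, alpha = 1.35e6 * M_sun / pc**3, 0.25 * pc, 1.75   # Eq. (1) normalization

def t_coll(r, alpha=alpha, r_c=2 * R_sun, m_star=M_sun):
    """Collision time from Eq. (3) in the circular-orbit limit (f1 = f2 = 1)."""
    n = rho0 * (r / r0)**(-alpha) / m_star               # number density of 1 Msun stars
    sigma = np.sqrt(G * M_bh / (r * (1.0 + alpha)))       # Eq. (2)
    area = np.pi * (r_c**2 + r_c * 2.0 * G * (2.0 * m_star) / sigma**2)
    return 1.0 / (n * sigma * area)

for r in (0.01, 0.1, 1.0):
    print(f"r = {r:5.2f} pc : t_coll ~ {t_coll(r * pc) / Gyr:8.2f} Gyr")
```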
#### 2.2.2 Two-Body Relaxation
Even more frequent than direct physical collisions are weak gravitational interactions between passing stars. These interactions cause the orbital parameters to change slowly over time in a diffusion process. Over the so-called relaxation timescale, the star's orbital energy and angular momentum change by order of themselves. Like the collision timescale, the relaxation timescale \(t_{\rm rlx}\) depends on properties of the cluster such as the density and velocity dispersion, both functions of \(r_{\bullet}\). The timescale can be expressed as:
\[t_{\rm rlx}=0.34\frac{\sigma^{3}}{G^{2}\rho\langle M_{*}\rangle\ln\Lambda_{\rm rlx}}\, \tag{4}\]
where \(\langle M_{*}\rangle\) is the average mass of the objects in the cluster, here taken to be 1 M\({}_{\odot}\), and \(\ln\Lambda_{\rm rlx}\) is the coulomb logarithm (e.g., Binney & Tremaine, 2008; Merritt, 2013). We plot this timescale in blue in Figure 1 for a range of density profiles, spanning \(\alpha=1.25\) (dashed) to 1.75 (solid). The timescale is less than or comparable to the duration of our simulations, 10 Gyr, shown in grey in Figure 1. Additionally, while outside of 0.1 pc, the collision timescale is long compared to the total simulation time, relaxation processes can change the orbits of the stars and move them into regimes where collisions become likely. The reverse is also true: stars that begin in regions where collisions are common can migrate further from the SMBH. It is therefore important to account for this diffusion process in our semi-analytic models.
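A matching sketch for Eq. (4), again for a cluster of \(1\,M_{\odot}\) stars. The Coulomb logarithm is not specified above, so \(\ln\Lambda_{\rm rlx}\approx\ln(M_{\bullet}/\langle M_{*}\rangle)\simeq 15\) is adopted here purely as an illustrative assumption.

```python
import numpy as np

G, M_sun, pc, Gyr = 6.674e-11, 1.989e30, 3.086e16, 3.156e16
M_bh = 4e6 * M_sun
rho0, r0 = 1.35e6 * M_sun / pc**3, 0.25 * pc

def t_rlx(r, alpha=1.75, m_star=M_sun, ln_lambda=15.0):
    """Two-body relaxation time, Eq. (4); ln_lambda ~ ln(M_bh/m_star) is an assumption."""
    rho = rho0 * (r / r0)**(-alpha)                       # Eq. (1)
    sigma = np.sqrt(G * M_bh / (r * (1.0 + alpha)))       # Eq. (2)
    return 0.34 * sigma**3 / (G**2 * rho * m_star * ln_lambda)

for r in (0.01, 0.1, 1.0):
    print(f"r = {r:5.2f} pc : t_rlx ~ {t_rlx(r * pc) / Gyr:6.2f} Gyr")
```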
### 2.3 Semianalytic Model
We use a toy model developed by Rose et al. (2022, 2023) to simulate the effects of collisions on the star cluster. This model follows a subset of 1000 stars of varying masses, drawn from a Kroupa initial mass function (IMF), embedded in a fixed cluster of 1 M\({}_{\odot}\) stars. The index \(\alpha\) of the surrounding star cluster's density profile is treated as a free parameter with a default value of 1.75, the expectation for an old, dynamically relaxed population (e.g., Bahcall & Wolf, 1976). The orbital eccentricities of the stars in our sample are drawn from a thermal distribution, while their semimajor axes are drawn from a uniform distribution in log distance from the SMBH.
The code takes a statistical approach to collisions. It computes the probability of a collision occurring over a timestep \(\Delta t\), taken to be \(10^{6}\) yr, as \(\Delta t/t_{\rm coll}\). We then draw a random number between 0 and 1. If the number is less than or equal to the collision probability, we treat the star as having collided, and update its properties given the models described in Section 2.4. This prescription repeats until the code has reached the desired simulation time, 10 Gyr, or the star has reached the end of its main-sequence lifetime, whichever occurs first. We also simulate the effects of relaxation in our code. Over each timestep, we apply a small instantaneous velocity kick to the star, from which we calculate the new, slightly altered orbital parameters. The velocity kick is drawn from a Gaussian distribution with a standard deviation that depends on the ratio of the orbital period to the relaxation timescale for the star in question (e.g., Bradnick et al., 2017; Lu & Naoz, 2019; Rose et al., 2022, 2023; Naoz et al., 2022, see the latter for the full set of equations). This prescription allows us to account for diffusion in the orbital parameters over time from interactions with the surrounding stars.
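The per-star loop described above is simple to sketch. The fragment below reproduces its skeleton for the fully destructive limiting case of Section 2.4 only: mass updates, mass-dependent lifetimes, and the relaxation velocity kicks are omitted, the eccentricity factors are again set to unity, and all stars are treated as \(1\,M_{\odot}\). It illustrates the statistical prescription, not the full model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Constants (SI) and cluster model as in the earlier sketches.
G, M_sun, R_sun = 6.674e-11, 1.989e30, 6.957e8
pc, yr = 3.086e16, 3.156e7
M_bh, alpha = 4e6 * M_sun, 1.75
rho0, r0 = 1.35e6 * M_sun / pc**3, 0.25 * pc

def t_coll_yr(r_pc):
    """Eq. (3) in the circular-orbit limit (f1 = f2 = 1), for 1 Msun stars."""
    r = r_pc * pc
    n = rho0 * (r / r0)**(-alpha) / M_sun
    sigma = np.sqrt(G * M_bh / (r * (1.0 + alpha)))
    area = np.pi * ((2 * R_sun)**2 + (2 * R_sun) * 2 * G * (2 * M_sun) / sigma**2)
    return 1.0 / (n * sigma * area) / yr

dt, t_end = 1e6, 1e10                         # yr: timestep and total simulation time
a = 10**rng.uniform(-3, 0, size=1000)         # tracer semimajor axes, log-uniform in pc
alive = np.ones(a.size, dtype=bool)

for _ in range(int(t_end / dt)):
    p_coll = dt / t_coll_yr(a)                # collision probability this step
    alive &= rng.random(a.size) >= p_coll     # fully destructive limit: remove on hit
    # (the full model would instead update masses, apply relaxation kicks, etc.)

print(f"surviving fraction overall:   {alive.mean():.2f}")
print(f"surviving fraction < 0.05 pc: {alive[a < 0.05].mean():.2f}")
```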
### 2.4 Treatment of Collision Outcomes
If a collision occurs, we must adjust the mass of the star in our sample accordingly. The collision outcome depends in part on the speed of the impact. The velocity dispersion in the nuclear star cluster exceeds 100 km/s within about 0.1 pc of the SMBH. Heuristically, high velocity collisions should result in mass loss from the colliding stars. In extreme cases, when the relative velocity is larger than the escape speed from the star, one might expect the collision to fully unbind the star.
Rose et al. (2023) find that the collision outcomes can generally be understood in three regimes. Near the SMBH, the velocity dispersion exceeds the escape velocity from the stars, leading to destructive collisions with high mass loss. Between about 0.01-0.1 pc, collisions are
common and can lead to mergers. These merger products, more massive than their progenitor stars, evolve off the main-sequence over a shorter lifetime. Outside of 0.1 pc, collisions are less frequent, but always lead to mergers with little mass loss.
In more detail, collision outcomes depend on the gas dynamics of the collisions themselves and are dependent on several other conditions beyond the impact velocity, such as the impact parameter and the internal structure of the stars (e.g., Lai et al., 1993; Freitag and Benz, 2002; Rubin and Loeb, 2011). Previous studies have leveraged hydrodynamic simulations to understand collision outcomes at high velocities for different impact parameters (e.g., Lai et al., 1993; Rauch, 1999; Freitag and Benz, 2002). Lai et al. (1993) and Rauch (1999), in particular, provide fitting formulae based on their results. These formulae can be easily implemented in a code such as ours to estimate the mass lost from the stars and whether or not a merger occurred (Rose et al., 2023).
In this study, we compare three different recipes for determining the collision outcome. We begin by considering a limiting case in which collisions are fully destructive. In this prescription, the occurrence of a single collision terminates the code. Once a star in our sample experiences a collision, its mass is set to zero and it is removed from the population. While unphysical, this treatment of collisions allows us to build an intuition for the underlying physics of our models. We then proceed to include a more complicated treatment of the collision outcomes. These simulations utilize either fitting formulae from Rauch (1999) or Lai et al. (1993), discussed in more detail in Rose et al. (2023). Henceforth, we refer to simulations that use fitting formulae from these studies as "Rauch99" and "Lai+93", respectively. These fitting formulae allow us to calculate the mass loss from a given collision given the mass ratio of the colliding stars, the impact parameter, which is drawn statistically, and the relative velocity. Our simulations always assume that the relative velocity is equal to the velocity dispersion, a function of distance from the SMBH given by Eq. (2). As described in Rose et al. (2023), our Rauch99 simulations favor mergers with little mass loss, while collisions lead to higher mass loss in the Lai+93 simulations and mergers are less likely. Together, these prescriptions are selected to span the range of possible collision outcomes.
## 3 Characteristic Radii
As mentioned above, the relevance of various physical processes becomes clear when one compares their associated timescales to the duration of our simulations. We show two key timescales, relaxation in green and collision in red, in Figure 1 along with the simulation time 10 Gyr (grey). We can define characteristic radii in the nuclear star cluster by equating various timescales.
### 3.1 The Collision Radius
Collisions play a crucial role in shaping the stellar demographics where the collision timescale, \(t_{\rm coll}\), is less than the age of the population. Setting \(t_{\rm coll}\) equal to \(t_{\rm age}\) gives a critical radius, \(r_{\rm coll}\), within which the vast majority of the stars have collided. Outside of \(r_{\rm coll}\), a fraction of the stellar population will still experience collisions. This fraction can be estimated using \(t_{\rm coll}/t_{\rm age}\) (see also figure 1 in Rose et al., 2023). In addition to population age, \(r_{\rm coll}\) also depends on the steepness of the density profile. For example, the age of an old 10 Gyr population and collision timescale intersect closer to 0.04 pc for \(\alpha=1.25\), compared to \(\sim 0.1\) pc for \(\alpha=1.75\).
This analysis informs where in the nuclear star cluster we expect collisions to be an important process in shaping the cluster properties. Because collisions modify or destroy stars at high enough velocity, we predict that \(r_{\rm coll}\) will correspond to a break in the stellar density profile. Within \(r_{\rm coll}\), collisions are an important process in determining the stellar density profile. Outside of this critical radius, collisions are rare, and the density profile is not shaped by collisions. Over time, as the age of the population increases, the inflection point will move further from the SMBH.
Figure 1: Assuming a uniform population of 1 M\({}_{\odot}\) stars, we plot relevant timescales for a range of stellar density profiles, \(\alpha=1.25\) (solid line) to \(\alpha=1.75\) (dashed line), in the nuclear star cluster. The collision and relaxation timescales are in red and green, respectively. We also include a destruction timescale, approximately the time needed for the two stellar cores to collide, or the collision timescale (Eq. 3) calculated for \(r_{c}=2\times 0.33R_{\odot}\). Within about 0.01 pc of the SMBH, the kinetic energy is sufficiently high that a collision with small impact parameter can unbind the stars. To guide the eye, the grey line shows the total simulation time of 10 Gyr.
For an old 10 Gyr population, with properties similar to the Milky Way GN, \(r_{\rm coll}\) occurs at about 0.1 pc, shown in Figure 1.
### 3.2 The Destruction Radius
We can perform a similar analysis to determine where in the cluster collisions can effectively deplete the entire supply of stars. Within about 0.01 pc of the SMBH, the velocity dispersion, given by Eq. 2, exceeds the escape velocity from a Sun-like star. In this region, collisions have the potential to destroy the stars. About two thirds of the Sun's mass is concentrated in the inner third of its radius (e.g., Christensen-Dalsgaard et al., 1996). A collision will result in high mass loss when the impact parameter is small enough that the dense cores interact (e.g., Lai et al., 1993; Rauch, 1999; Rose et al., 2023). We define a characteristic timescale over which the stars will be destroyed by setting the impact parameter \(r_{c}\) equal to 0.33 R\({}_{\odot}\). Figure 1 shows this timescale in green. This timescale is consistent with Rose et al. (2023), who find the time needed to deplete the stellar population within 0.01 pc to be about a Gyr. Similar to \(r_{\rm coll}\), we define \(r_{\rm dest}\) as the radius at which the destruction timescale equals the population age. We stress that this definition is only valid in regions where the collision velocity is high enough to destroy the stars.
### 3.3 Generalizing to Other Galactic Nuclei
The break radius for any GN can be found by equating the population age and collision timescale: \(t_{\rm age}=(n\sigma A)^{-1}\). Similar to the Milky Way's GN, the SMBH of mass \(M_{\bullet}\) dominates the gravitational potential within the sphere of influence, and the relative velocity \(\sigma\) can be calculated using Eq. 2. The initial stellar density profile of the cluster must be calibrated to the mass of the central SMBH. Using the \(M\)-\(\sigma\) relation, the stellar density profile for a GN with a SMBH of arbitrary mass can be written as a power law:
\[\rho(r_{\bullet})=\frac{3-\alpha}{2\pi}\frac{M_{\bullet}}{r_{\bullet}^{3}} \left(\frac{G(M_{0}M_{\bullet})^{1/2}}{\sigma_{0}^{2}r_{\bullet}}\right)^{-3+ \alpha}\, \tag{5}\]
where \(M_{0}=1.3\times 10^{8}\,M_{\odot}\), \(\sigma_{0}=200\)kms\({}^{-1}\), and \(r_{\bullet}\) is the distance from the SMBH (Tremaine et al., 2002).
We can derive a scaling relation using \(r_{\rm coll}=0.09\) pc for the Milky Way's GN at 10 Gyr for a Bahcall-Wolf profile (\(\alpha=1.75\)). For a GN with arbitrary stellar density profile slope \(\alpha\), the break radius can be calculated using the following:
\[r_{\rm coll} = \left[1.49\times 10^{-6}\,(6.04)^{\alpha}\,\frac{(3-\alpha)^{2}}{1+ \alpha}\right]^{\frac{1}{1+2\alpha}} \tag{6}\] \[\times \left(\frac{t_{\rm age}}{10^{10}\ {\rm yr}}\right)^{\frac{2}{1+2 \alpha}}\left(\frac{M_{\bullet}}{4\times 10^{6}M_{\odot}}\right)^{\frac{\alpha}{1+2 \alpha}}{\rm pc}.\]
For values of \(\alpha\) between 1 and 2, the first term in brackets changes by a factor \(\sim 5.4\). The following equation can serve as an approximation in this range of \(\alpha\):
\[r_{\rm coll}\sim 0.07\left(\frac{t_{\rm age}}{10^{10}\ {\rm yr}}\right)^{\frac{2}{1+2\alpha}}\left(\frac{M_{\bullet}}{4\times 10^{6}\,M_{\odot}}\right)^{\frac{\alpha}{1+2\alpha}}{\rm pc}. \tag{7}\]
If the GN also has a Bahcall-Wolf profile, i.e., \(\alpha=1.75\), the full relation can instead be simplified to:
\[r_{\rm coll}=0.09\left(\frac{t_{\rm age}}{10^{10}\ {\rm yr}}\right)^{0.44} \left(\frac{M_{\bullet}}{4\times 10^{6}\ {\rm M_{\odot}}}\right)^{0.39}{\rm pc}\, \tag{8}\]
We use this last relation to calculate \(r_{\rm coll}\) in our fiducial models, setting \(M_{\bullet}=4\times 10^{6}\ {\rm M_{\odot}}\).
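As a quick numerical sketch of how these scaling relations are applied (the function name is ours, not from the simulation code; it simply encodes Eq. 6 and reproduces the Eq. 8 fiducial value for the Milky Way):

```python
import numpy as np

def r_coll_pc(t_age_yr, m_bh_msun, alpha):
    """Break radius in pc from Eq. 6, for a cusp slope alpha."""
    bracket = 1.49e-6 * 6.04**alpha * (3.0 - alpha)**2 / (1.0 + alpha)
    p = 1.0 / (1.0 + 2.0 * alpha)
    return (bracket**p
            * (t_age_yr / 1e10)**(2.0 * p)
            * (m_bh_msun / 4e6)**(alpha * p))

# Milky Way fiducial case (Bahcall-Wolf cusp): ~0.09 pc, matching Eq. 8
print(r_coll_pc(1e10, 4e6, 1.75))
```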
## 4 Numerical Results
Here, we present results from several simulations with various initial stellar density profiles and collision outcome prescriptions, reviewed in Section 2.4. We discuss these results in the context of the stellar density and luminosity profiles, and we compare the simulations to our predictions from Section 3.
### Collisional Shaping of Stars
Figure 2 shows the time evolution of the model population of "tracer" stars. We show four snapshots for two different simulations, both of which adopt \(\alpha=1.75\) for the density profile. The top row assumes that collisions are fully destructive, while the bottom row uses a Rauch99 prescription for collision outcomes. The grey points in the plots show the stellar masses at a given time due to stellar evolution alone. As can be seen in both rows of the figure, the vertical extent of the grey points decreases with time because the time elapsed has exceeded the main-sequence lifetime of stars above a mass threshold.
The red points show the stellar masses from our simulations, which account for the effects of collisions. We mark the location of \(r_{\rm coll}\), the distance within which the majority of the stars have collided, with a dashed red vertical line in each snapshot. Calculated from Eq. 8, \(r_{\rm coll}\) sweeps outward over time.
Depending on the collision prescription and the impact velocity, collisions can either add or remove mass from the impacting stars (as described in detail by Rose et al., 2023). In the top row, collisions carve out a starless region over time. The bottom row, on the other hand, incorporates a more complete and complex treatment of collision outcomes. Thus, the region inside \(r_{\rm coll}\) contains both collisionally-merged, more massive stars and collisionally-stripped, less massive stars as compared to their progenitors.
### Cluster Density Profile
We consider the evolution of the stellar density profile over time due to collisions, which can work to either destroy or merge stars. To generate a density profile, we slice up each snapshot, including those depicted in Figure 2, in distance. In each annulus of width \(\delta r\) that lies at a distance \(r\) from the SMBH, we divide the total mass in the red points by the total mass in the grey points, which gives the fractional change in mass at that distance at a particular time. We then convolve those fractions with the original density profile of the stars to obtain a collisionally-modified profile.
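The binning-and-rescaling step can be written as a short sketch (a minimal example of the procedure described above; the array and function names are ours and not from the simulation code):

```python
import numpy as np

def collisional_density_profile(r, m_sim, m_ref, rho0, r_edges):
    """Rescale an analytic profile rho0(r) by the simulated-to-reference mass
    ratio in each radial annulus, following the procedure in the text."""
    m_sim_binned, _ = np.histogram(r, bins=r_edges, weights=m_sim)  # red points
    m_ref_binned, _ = np.histogram(r, bins=r_edges, weights=m_ref)  # grey points
    frac = np.divide(m_sim_binned, m_ref_binned,
                     out=np.ones_like(m_sim_binned), where=m_ref_binned > 0)
    r_mid = 0.5 * (r_edges[1:] + r_edges[:-1])
    return r_mid, frac * rho0(r_mid)
```

Here `r` holds the distances of the tracer stars from the SMBH, `m_sim` and `m_ref` their simulated and collision-free masses, and `rho0` the unmodified density profile.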
We juxtapose the evolving density profiles of three simulations in Figure 3. Each simulation uses the same initial value for \(\alpha\), \(1.75\). They use different collision outcome prescriptions, noted in the plot titles. The upper panel of each plot shows the stellar density as a function of distance from the SMBH at a particular time. As indicated by the colorbar on the right, the redder curves correspond to older populations. The bottom
Figure 3: The evolution of the stellar density profile for different simulations. The title of each column states the collision outcome prescription used in the simulation shown below. The black lines represent the initial, unmodified density profile, with \(\alpha=1.75\). The colored lines show the profile at a given time, indicated by the colorbar on the right. The grey lines in the upper left corner of each plot show other examples of density profile slopes to guide the eye. The left column presents results for fully destructive collisions. These density profiles exhibit clear turning points coinciding with where the time elapsed, or population age, equals the collision timescale (see Figure 1). In the other columns, a more complex treatment of collision outcomes obscures the inflection point in the density profiles. However, a similar trend is present: the inflection point moves further from the SMBH with time, corresponding roughly to where the population age equals the collision timescale. The red arrows draw attention to this approximate distance from the SMBH for an old (\(\sim 10\) Gyr) population.
Figure 2: The stellar population at four different times over the course of two simulations. As indicated by the column labels, time increases left to right. The grey points show the masses of the stars at the given time as determined by main-sequence evolution alone. The red dots show the simulated masses of the stars in our sample, which can also change due to collisions. The top row corresponds to fully destructive collisions, meaning a star is removed from the sample when it undergoes a single collision. The bottom row shows results that use the mass loss prescription from Rauch (1999). Both simulations assume a Bahcall-Wolf (\(\alpha=1.75\)) profile for the surrounding stars. We mark \(r_{\rm coll}\) with a red dashed vertical line, where \(r_{\rm coll}\) is calculated from Eq. 8.
panel shows the fractional change in stellar density due to collisions compared to the expected value. The black line shows the original, reference density profile.
In each case, collisions deplete the stellar mass near the SMBH. This process causes the density profile to flatten within \(r_{\rm coll}\). Over time, this \(r_{\rm coll}\) moves further from the SMBH, shifting the break radius in the density profile. The fully destructive case in the left column of Figure 3 provides the clearest example of the break radius sweeping outward over time: the bluest density profiles diverge from the unmodified profile at smaller radii. Equating timescales suggests a break radius of \(\sim 0.1\) pc for a \(\sim 10\) Gyr population with these initial conditions (Eq. 8). The break radius of the fully destructive model falls slightly outside of this radius, closer to \(0.3\) pc for a \(7\) Gyr old population: collisions still affect a fraction of the population outside of \(r_{\rm coll}\), leading to mass loss for some percentage of the stars (see Figure 1 in Rose et al. 2023), so the break radii are gradual rollovers rather than abrupt transitions.
A more realistic mass loss and merger prescription gives a less distinct break radius, as seen in the second two columns of Figure 3. In these prescriptions, a single collision results in comparatively smaller fractional changes in total stellar mass. Compared to the Rauch99 model, the Lai+93 density curves show greater flattening within \(0.05\) pc. The greater flattening occurs because fitting formulae from the corresponding study Lai et al. (1993) give a higher fractional mass loss per collision in this region compared to Rauch (1999). Mergers can also contribute to a break in the density profile. While mergers result in fractional mass loss, generally between five and ten percent (see figure 2 in Rose et al., 2023), they also hasten the evolution of the stars off the main-sequence. As a result, there is less mass in main-sequence stars at late times within \(0.1\) pc compared to the outer region of the nuclear star cluster.
We also examine the dependence of the break radius on the initial density profile assumed for the population. We test three values of \(\alpha\): \(1.25\), \(1.5\), and \(1.75\). Figure 4 juxtaposes the evolving density profiles for three simulations with these different values of \(\alpha\). These simulations use the same collision outcome prescription, Rauch99. We find that a break in the density profile is ubiquitous, but its location depends on \(\alpha\), shifting from \(\sim 0.1\) pc for \(\alpha=1.75\) to \(\sim 0.04\) pc for \(\alpha=1.25\). The steeper the initial profile, the greater the fractional change in density, as seen in the bottom panels of the Figure. To illustrate these trends, we indicate the point at which the density at \(7\) Gyr is halved compared to the expected value using a grey vertical line in each panel. It shifts outward with increasing \(\alpha\). Regardless of initial conditions, we find that the density profile is always preserved, i.e. unmodified by collisions, outside about \(0.2\) pc. Observations of this region can therefore be used to constrain the underlying, original density distribution.
### Cluster Luminosity and Color
As we discuss in the previous section, mass loss due to collisions together with the accelerated evolution of merger products can lead to a discernible flattening of the stellar density profile over time. In this section, we briefly address the effect of collisions on the stellar luminosity. Mergers are the typical outcome of collisions outside of about \(0.01\) pc (Rose et al., 2023). Consistent with our previous treatment of the merger products, we assume that the properties of the merged star are similar to those of a typical main-sequence star of the same mass (e.g. Leiner et al., 2019). In this case, the luminosity of the stars should follow a simple mass-luminosity relation: \(L\propto M^{2.3}\) for \(M<0.43\,M_{\odot}\), \(\propto M^{4}\) for Sun-like
Figure 4: The stellar density profile with varying initial density profile of the stars. The title of each column indicates the value of \(\alpha\) used for the stellar density profile. Each plot has the same axes to facilitate a comparison between them. The form of the evolving density profiles suggests that the initial profile is always preserved outside of \(0.2\) pc, where collisions are more rare. A steeper initial profile, however, also results in a more pronounced break in the density profile at a further distance from the SMBH. To highlight this trend, we have added a dashed grey line marking where the mass density is halved due to collisions for the oldest population shown in the figure (darkest red curve, \(7\) Gyr).
stars, and \(\propto M^{3.5}\) for \(M>2\,M_{\odot}\) (e.g., Salaris and Cassisi, 2005; Duric, 2003). Additionally, we calculate the peak wavelength for the stars in our sample with Wien's displacement law and use it as a proxy for color. The general assumption here is that we consider the properties of these objects after a period of thermal relaxation, not when they are still cooling following collisional shock-heating (see discussion of thermal versus collision timescale in Rose et al., 2023).
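For concreteness, the broken power-law relation and Wien's law can be evaluated with a short sketch (the normalization constants of the piecewise relation are standard textbook values that the text does not quote explicitly, so they should be read as assumptions; the effective temperature is taken as an input rather than derived from the mass):

```python
import numpy as np

def luminosity_lsun(m_msun):
    """Piecewise mass-luminosity relation quoted in the text, L in Lsun.
    The prefactors (0.23, 1.0, 1.4) are standard textbook normalizations
    and are an assumption here."""
    m = np.asarray(m_msun, dtype=float)
    return np.where(m < 0.43, 0.23 * m**2.3,
                    np.where(m <= 2.0, m**4.0, 1.4 * m**3.5))

def wien_peak_nm(t_eff_k):
    """Wien's displacement law: lambda_peak = b / T with b = 2.898e-3 m K."""
    return 2.898e-3 / np.asarray(t_eff_k, dtype=float) * 1e9

print(luminosity_lsun([0.3, 1.0, 5.0]))   # ~[0.014, 1.0, 390] Lsun
print(wien_peak_nm(5772.0))               # ~502 nm for a Sun-like star
```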
We illustrate the effects of collisions on color and luminosity in Figure 5. This figure presents a snapshot of our sample stars at \(4.64\times 10^{9}\) yr. The left panel shows the expected population in the absence of collisions, equivalent to the grey points in Figure 2, while the right side shows the simulated population. We plot the peak wavelength of the stars versus their distance from the SMBH, both on a logarithmic scale. The y-axis is inverted so that more massive (bluer) stars are higher, facilitating a direct comparison with the mass versus radius plot in Figure 2. Additionally, we color-code the points by bolometric luminosity, calculated using the relations above.
As can be seen in the figure, mergers produce brighter and bluer stars than expected for a population of that age, like "blue stragglers" in a star cluster population (Leiner et al., 2019). Collisionally stripped stars are also present in Figure 5. These stars have undergone one or more high-speed collisions, leading to mass loss. Their new, lower masses would suggest that they are redder and less luminous. However, while we have plotted them here based on this assumption, their appearance is highly uncertain. Models of stripped stars in binary systems may provide clues as to their appearance, suggesting they are in fact more luminous than their progenitor stars (Gotberg et al., 2018). We therefore distinguish them with gold outlines in the figure.
Previously, we showed that collisions always produce a break in the stellar mass density profile. The break occurs because collisions can only ever reduce the mass contained in the stars; it cannot increase the mass. Luminosity, however, scales as mass to the 3.5 power. We therefore do not expect collisions to necessarily produce a decrease in the bolometric luminosity profile of the cluster within \(r_{\rm coll}\). The bluer, brighter merger products may in fact outshine the other stars in their vicinity. Figure 5 shows that the colors and luminosities of stars are affected by collisions in a way that varies systematically with radius. Luminous, blue merger remnants exist from 0.01-1 pc, around \(r_{\rm coll}\sim 0.1\) pc. Stripped stars from the highest-velocity collisions preferentially lie within 0.1 pc. Future work may examine the evolution of the luminosity profile, both bolometric and in specific bands, by leveraging comprehensive hydrodynamic and stellar evolution simulations in order to understand the dynamical and thermal evolution of merger products.
## 5 Conclusions
Collisions between main-sequence stars are common within a tenth of a parsec of the SMBH in the center of the Milky Way galaxy. The impact velocities of these collisions are often on the order of, if not larger than, the escape speed from the stars (e.g., Lai et al., 1993; Balberg et al., 2013; Rose et al., 2023). As a result, individual collisions can result in mass loss from the stars (e.g., Lai et al., 1993). On a population level, mass loss from collisions can affect the stellar density profile of the cluster, or feed the central SMBH (e.g. Rubin and Loeb, 2011). Some key findings of our work include:
1. Collisions affect the majority of stars inside the collision radius, \(r\lesssim r_{\rm coll}\), defined by \(t_{\rm coll}=t_{\rm age}\). In our GN, \(r_{\rm coll}\sim 0.1\) pc (equation 8).
2. As described in further detail by Rose et al. (2023), lower-velocity collisions lead to massive, "blue straggler" merger remnants, while the highest-velocity collisions lead to stripped, low-mass remnants. The occurrence of these products depends
Figure 5: The peak wavelength of each star in our sample \(4.64\times 10^{9}\) yr into our Rauch99, \(\alpha=1.75\) simulation. Peak wavelength, a proxy for color, is plotted versus distance from the SMBH. We also colorcode the stars in the sample by bolometric luminosity. The left panel shows the stars without collisions, the same as the grey points in the lower right plot of Figure 2, while the right panel shows the simulated population with collisions. Collision-induced mergers cause a population of brighter, bluer stars to form, stars which would not otherwise exist in a population of the same age. We also include the population of collisionally-stripped stars, though the luminosity, color, and general appearance of these stars is uncertain.
on radius within the cluster, with the majority at \(r\lesssim r_{\rm coll}\) (Figures 2 and 5).
3. Collisions always result in at least partial stellar mass loss. Additionally, many collisions can merge stars into more massive stars between 0.01 and 0.1 pc. These stars evolve off the main-sequence more quickly, creating a deficit in stellar mass compared to regions further from the SMBH. We examine the effect of collisions on the stellar mass density profile in Figures 3 and 4. Inside a break radius, \(\sim r_{\rm coll}\), density profiles decrease from their nominal values. We find that the location and slope of the stellar mass density profile inside the break radius depend most strongly on the collision model adopted and how much mass is expelled in high-velocity collisions.
Our results demonstrate a simple, intuitive relation between the location of \(r_{\rm coll}\) and the density profile and individual properties of stars in GN. Equation 7 highlights how these results can be extrapolated to other systems. Our findings highlight how future work could address key uncertainties by exploring the interplay of dynamics, hydrodynamics, and stellar evolution in this unique astrophysical setting.
We thank Smadar Naoz and Abraham Loeb for many helpful discussions. SR thanks the Dissertation Year Fellowship, Bhaumik Institute Fellowship, and CIERA Lintheimer Fellowship for partial support. SR acknowledges partial support from NASA ATP 80NSSC20K0505 and NSF-AST 2206428 grants. MM is grateful for support from a Clay Postdoctoral Fellowship at the Smithsonian Astrophysical Observatory.
|
2304.14371 | Neural Field Conditioning Strategies for 2D Semantic Segmentation | Neural fields are neural networks which map coordinates to a desired signal.
When a neural field should jointly model multiple signals, and not memorize
only one, it needs to be conditioned on a latent code which describes the
signal at hand. Despite being an important aspect, there has been little
research on conditioning strategies for neural fields. In this work, we explore
the use of neural fields as decoders for 2D semantic segmentation. For this
task, we compare three conditioning methods, simple concatenation of the latent
code, Feature Wise Linear Modulation (FiLM), and Cross-Attention, in
conjunction with latent codes which either describe the full image or only a
local region of the image. Our results show a considerable difference in
performance between the examined conditioning strategies. Furthermore, we show
that conditioning via Cross-Attention achieves the best results and is
competitive with a CNN-based decoder for semantic segmentation. | Martin Gromniak, Sven Magg, Stefan Wermter | 2023-04-12T15:04:37Z | 2023-04-12T15:04:37Z | http://arxiv.org/abs/2304.14371v1 | # Neural Field Conditioning Strategies for 2D Semantic Segmentation
###### Abstract
Neural fields are neural networks which map coordinates to a desired signal. When a neural field should jointly model multiple signals, and not memorize only one, it needs to be conditioned on a latent code which describes the signal at hand. Despite being an important aspect, there has been little research on conditioning strategies for neural fields. In this work, we explore the use of neural fields as decoders for 2D semantic segmentation. For this task, we compare three conditioning methods, simple concatenation of the latent code, Feature Wise Linear Modulation (FiLM), and Cross-Attention, in conjunction with latent codes which either describe the full image or only a local region of the image. Our results show a considerable difference in performance between the examined conditioning strategies. Furthermore, we show that conditioning via Cross-Attention achieves the best results and is competitive with a CNN-based decoder for semantic segmentation.
Keywords: neural fields, conditioning, semantic segmentation
## 1 Introduction
Lately, neural networks for semantic segmentation have been mostly based on the fully convolutional network (FCN) [11] paradigm. FCN models typically consist of an encoder and a decoder which are both built with stacked convolution layers. The purpose of the encoder is to extract features from the image. With increasing depth of the encoder, the features get more abstract and the resolution of the feature maps is progressively reduced. The decoder on the other hand takes the low-resolution feature map from the encoder as an input and upscales it to the resolution of the original image so that pixel-level classification can be performed.
While encoders in the form of convolutional neural networks (CNN) have been rigorously studied, considerably less research has been conducted on the decoder side of semantic segmentation networks. The main challenge for the
decoder is to upscale the feature map to the image's original resolution and simultaneously produce accurate region borders. In CNN-based decoders, typically upsampling or transposed convolution operators are used to progressively increase the spatial resolution of the feature maps. These operations introduce a particular kind of inductive bias. For example, transposed convolutions can create spectral artifacts in the upscaled feature maps [5]. Another apparent disadvantage of CNN decoders is that they struggle to capture long-range dependencies between different parts of the image, due to their locally connected structure.
In the last few years, neural fields, aka implicit neural representations or coordinate-based networks, have received much attention for learning a variety of different signals, for example, 1D audio signals [22], 2D images [4, 26] and 3D geometries [12, 24]. A neural field takes (spatial) coordinates \(x\in\mathbb{R}^{d}\) as input and maps them to a task-dependent signal \(y=\Phi(x)\) through a neural network. For example, a neural field representing an RGB image takes 2D image coordinates as input and produces three RGB values at each location. One interesting property of neural fields is that they represent signals as continuous functions on their respective spatial domain.
Inspired by the recent successes of neural fields, we explore the use of neural fields as decoders in semantic segmentation networks. In this regard, we hypothesize that (continuous) neural fields provide an inductive bias which can be better suited for reconstructing high-resolution semantic maps compared with (discrete) CNN-based decoders. In our work, we examine multiple conditioning strategies, which enable the neural field decoder to make use of the information in the latent feature map produced by the encoder. Through our comparative study, we aim to provide more insights into conditioning methods of neural fields, as research has been extremely sparse in this regard. Furthermore, we believe that 2D semantic segmentation provides a well-defined task for studying conditioning methods, as it has comprehensive metrics and the possibility for insightful visualizations of the learned geometries.
## 2 Related Work
**Semantic Segmentation** Encoder-decoder fully convolutional networks [11] have become the predominant approach for semantic segmentation. They share the challenge how to encode high-level features in typically low-resolution feature maps and subsequently upscale these feature maps to retrieve pixel-accurate semantic predictions. Multiple approaches [19, 1] have introduced skip-connections between the encoder and the decoder, which help the decoder to combine global with local information and therefore recover sharp object boundaries. One drawback of CNNs is that, because of their locally connected structure, they struggle to combine information which is spatially distributed across the feature maps. Research attempting to mitigate this drawback has proposed attention mechanisms over feature maps to selectively capture and combine information on a global scale [6]. Extending the concept of attention further, neural network ar
chitectures based fully on transformers have been proposed recently for semantic segmentation [25]. In our work, we utilize a CNN, which is more efficient than transformers, for extracting features and use attention in one of our conditioning methods.
**CNN Decoders** Research on decoders has been more sparse than research on neural network encoders, i.e. CNN backbones. Wojna et al. [28] compared different CNN-based decoders for several pixel-wise prediction tasks and observed significant variance in results between different types of decoders. Multiple works [14, 5] have provided evidence that upscaling using transposed convolution operators introduce artifacts in the feature maps and therefore the decoder's output. We aim to avoid any explicit or implicit discrete artifacts by using a continuous neural field decoder.
**Neural Fields** Neural fields were introduced in 2019 as a representation for learning 3D shapes [12, 15]. Following works extended neural fields by learning colored appearance of scenes and objects [24, 13]. Particularly NeRF [13] has attracted a lot of attention, as it is able to generate very realistic novel views of a scene, learning from images and associated poses. NeRF effectively overfits a neural network for one individual scene. This limits the usability as the neural field needs to be re-trained for every new scene. Some works have explored the use of neural fields for semantic segmentation. Vora et al. [27] built a 3D segmentation on top of the NeRF approach. Hu et al. [9] used neural fields in conjunction with CNNs for upsampling and aligning feature maps in the decoder of a semantic segmentation network.
**Neural Field Conditioning** When a neural field should share knowledge between different signals, it needs to be conditioned on a latent code which describes the signal at hand. Several conditioning approaches have been explored in the literature. Methods based on global conditional codes use one code to describe the whole signal [12, 23]. Methods based on local conditional codes [29, 4] use a different code for each spatial area in the signal. On top of these, there exist multiple methods how a neural field can actually consume a conditional code, which we describe in detail in Section 3.3. Rebain et al. [18] compared different methods for conditioning neural fields for 2D and 3D tasks, but did not consider global and local conditional codes. In the neural field community, there is a lack of comparative research on what conditioning strategies work well for which task. We attempt to shed more light on this by comparing different conditioning strategies for the well-defined task of 2D semantic segmentation.
## 3 Method
### Neural Network Architecture and Training Procedure
Our high-level architecture involves a CNN encoder and a neural field decoder (see Figure 1). We use a CNN to efficiently encode an image into a feature map with
size \(c\times h\times w\), where \(c\) is the number of channels, \(w\) is the spatial width and \(h\) is the spatial height. From this feature map, we calculate the conditional code for the neural field decoder in different ways, depending on the conditioning strategy. During training, for every image, we sample \(S\) random points within the image. At test time, the points are densely sampled so that there exists one point for each pixel. The point coordinates are normalized to the \([0,1]\) range, stacked, and fed to the neural field decoder as input. For every point, the decoder predicts the semantic class at that position in the image. We use a cross-entropy loss to train the whole setup in an end-to-end fashion. At test time, the class predictions per point are arranged into an image. Thereby, we can compare the predicted segmentation map with the ground-truth label map using standard image segmentation metrics.
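A minimal sketch of the point-sampling step during training (names and shapes are illustrative, not taken from the authors' code; PyTorch is assumed):

```python
import torch

def sample_points_and_labels(label_map, s):
    """Draw s random pixel coordinates per image and gather their class labels.
    label_map: (B, H, W) integer class map; returns coords in [0, 1] and labels."""
    b, h, w = label_map.shape
    ys = torch.randint(0, h, (b, s))
    xs = torch.randint(0, w, (b, s))
    coords = torch.stack([xs / (w - 1), ys / (h - 1)], dim=-1)  # (B, S, 2)
    labels = label_map[torch.arange(b)[:, None], ys, xs]        # (B, S)
    return coords.float(), labels
```

The decoder's per-point class logits can then be trained with a standard cross-entropy loss against `labels`.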
### Latent Code Source: Global vs. Local
First, we differentiate how the conditional code is calculated based on the feature map from the encoder. We consider a _global_ code and a _local_ code. The global code represents the content of the complete image. Naturally, it can capture the global context in the image well. However, due to its limited capacity, it might not be able to capture fine, local geometries. On the other hand, the local code represents a spatially limited area in the image. It can utilize its full capacity
Figure 1: Our high level neural network architecture. A CNN encoder encodes an image into a feature map. During training, \(S\) points per image are sampled within the image (red) and fed into the decoder. The decoder is a conditional neural field for which we use different conditioning strategies. For every point the decoder outputs a prediction of the semantic class at this position (purple).
for modeling the geometry in one area with high fidelity, however, it might lack global context. For example, the probability of detecting a car rises when a street is detected somewhere in the image.
We calculate the global code vector by applying a global average pooling operation. It averages all the entries in the feature map across the spatial dimensions (see the top path in Figure 2). This is a standard operation which is used, for example, in the ResNet classification head [8]. Through this procedure, we calculate one global code per image.
For calculating the local code, we utilize the point coordinates, in addition to the feature map. For every point, we "look up" the value of the feature map at this position. For this purpose, we normalize the feature map's spatial dimensions to the [0,1] range, and therefore effectively align it with the input image. We then perform a bilinear interpolation within the feature maps based on the point coordinate to calculate the local code vector (see the middle path in Figure 2). As a result, we have \(S\) local codes per image, one for every point.
In addition to using either a global or a local code, we also consider the combination of both to jointly exploit their individual advantages. We do this by concatenating both codes.
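The two code sources can be written compactly as follows (a sketch assuming a PyTorch feature map of shape `(B, C, H, W)` and point coordinates in `(x, y)` order normalized to `[0, 1]`; the helper names are ours):

```python
import torch.nn.functional as F

def global_code(feat):                       # feat: (B, C, H, W)
    return feat.mean(dim=(2, 3))             # global average pooling -> (B, C)

def local_code(feat, coords):                # coords: (B, S, 2), (x, y) in [0, 1]
    grid = coords * 2.0 - 1.0                # grid_sample expects [-1, 1]
    grid = grid.unsqueeze(2)                 # (B, S, 1, 2)
    out = F.grid_sample(feat, grid, mode="bilinear", align_corners=True)
    return out.squeeze(-1).permute(0, 2, 1)  # (B, S, C), one code per point
```

The combined variant simply concatenates the two codes along the feature dimension.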
### Conditioning the Neural Field Decoder
Conditioning a neural field enables it to effectively adapt the knowledge which is shared across all signals to the signal at hand.
Figure 2: A visualization of our conditioning strategies. We consider three conditioning methods: Concat conditioning, FiLM conditioning and Cross-Attention conditioning (right side). For Concat and FiLM conditioning, one feature vector is used, which can be calculated from global features (top path) or local features (mid path). The input to the Cross-Attention Transformer is the whole feature map, which is reshaped and treated as tokens (bottom path).
#### 3.3.2 Conditioning by Concatenation
In the simplest conditioning method, the conditional code is concatenated to the point coordinates and is jointly used as input to the neural field. We re-concatenate the conditional code to other hidden layers using skip connections. This approach is used by HyperNeRF [16]. It has the advantage of being conceptually simple, however, it is computationally inefficient [18], because it requires \(O(k(c+k))\) parameters for the fully connected layers in the neural field, where \(k\) is the hidden layer width and \(c\) is the size of the conditioning vector.
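A sketch of one such hidden block (with hidden width \(k\) and code size \(c\) as in the parameter count above; the class name is ours):

```python
import torch
import torch.nn as nn

class ConcatBlock(nn.Module):
    """Hidden layer that re-concatenates the conditional code z to its input."""
    def __init__(self, code_dim, hidden_dim):
        super().__init__()
        # O(k * (c + k)) parameters, the cost mentioned in the text
        self.fc = nn.Linear(hidden_dim + code_dim, hidden_dim)

    def forward(self, h, z):  # h: (B, S, hidden_dim), z: (B, S, code_dim)
        return torch.relu(self.fc(torch.cat([h, z], dim=-1)))
```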
#### 3.3.3 Feature-Wise Linear Modulation
Another way to condition a neural field is to use the latent code together with an MLP to regress the parameters of the neural field. When all parameters of the neural field are calculated in this way, the approach is known as hyper-networks [7]. Feature-wise Linear Modulation (FiLM) [17] is a more constrained subtype of hyper-networks where, instead of predicting all the weights, feature-wise modulations of activations in the neural field are predicted. This approach is used in Occupancy Networks [12] and piGAN [2].
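A minimal sketch of the idea (plain affine feature-wise modulation; the concrete architecture in this paper realizes it as conditional batchnorm, and the class name here is ours):

```python
import torch.nn as nn

class FiLM(nn.Module):
    """Predict a per-feature scale and shift from the conditional code z."""
    def __init__(self, code_dim, hidden_dim):
        super().__init__()
        self.to_gamma_beta = nn.Linear(code_dim, 2 * hidden_dim)

    def forward(self, h, z):  # h: (B, S, hidden_dim), z: (B, S, code_dim)
        gamma, beta = self.to_gamma_beta(z).chunk(2, dim=-1)
        return gamma * h + beta
```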
#### 3.3.4 Cross-Attention
Conditioning by Cross-Attention was introduced by Jiang et al. [10] and was extended in the Scene Representation Transformer [21]. The core idea is to selectively attend to features at different spatial positions, based on the point coordinates. A transformer architecture with Cross-Attention layers is used where the queries are derived from the point coordinates and the feature maps serve as a set of tokens. This approach does have an interesting connection with using local codes, as both approaches calculate a feature vector by weighting entries in the feature maps based on the current point coordinate. However, in contrast to the spatial "look up" of local codes, which can be performed for free, the Cross-Attention operation can flexibly query both local and global information as needed at the cost of more computation [18].
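A sketch of one such block (queries come from the embedded point coordinates, keys and values from the flattened feature-map tokens; this is a simplified stand-in for the Scene Representation Transformer block, with illustrative layer sizes):

```python
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """Point queries attend over the H*W feature tokens of the encoder."""
    def __init__(self, dim, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, queries, tokens):  # queries: (B, S, dim), tokens: (B, H*W, dim)
        attended, _ = self.attn(queries, tokens, tokens)
        h = queries + attended            # residual connection
        return h + self.mlp(h)
```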
## 4 Experiments
We evaluate seven conditioning strategies on a publicly available dataset for semantic segmentation. Concat conditioning and FiLM conditioning are used in conjunction with global, local and combined conditional codes each. The Cross-Attention Transformer uses the reshaped feature map as input (see Figure 2).
### Dataset
For our experiments, we used the Potsdam dataset, which is part of the ISPRS semantic labeling contest [20]. It consists of satellite images of the German city Potsdam together with dense label masks for six classes: Impervious surfaces,
Building, Low vegetation, Tree, Car and Clutter/background. The orthographic images have a sampling distance of \(0.05\) m/px. The total dataset consists of \(38\) tiles with a size of \(6000\times 6000\) px from which we use the same \(24\) tiles for training as in the original contest. From the remaining tiles, we use \(7\) for validation and \(7\) for testing. From the tiles, we randomly crop patches of \(256\times 256\) or \(512\times 512\) pixels.
### Encoder and Decoder Implementations
For the Concat and the FiLM decoder, we use a similar neural network architecture, which is based on Occupancy networks [12] (see Figure 3, left). We use either concatenation plus conventional batchnorm or conditional batchnorm at the designated places in the neural network architecture. For the Cross-Attention conditioning, we use a transformer architecture based on the Scene Representation Transformer [21] (see Figure 3, right). It uses one multi-head attention module per block. Keys and values are calculated from the feature tokens while the queries are calculated from the point coordinates. We can scale both neural network architectures by repeating the yellow blocks \(N\) times or increasing the width of the MLP layers. For all experiments we use a ResNet34 [8] backbone as the encoder, pre-trained on ImageNet. Its output feature map has a size of \(512\times 8\times 8\) for input images with size \(256\times 256\) pixels and \(512\times 16\times 16\) for input images with size \(512\times 512\) pixels respectively.
### Points Embedding
It has been shown that when coordinates are directly used as inputs, neural fields have a bias towards learning low-frequency signals. To counter this, we embed
Figure 3: Our neural network architectures used for the Concat and FiLM conditioning (left) and for the Cross-Attention Transformer (right). The yellow block can be repeated \(N\) times. For the Concat approach, the orange block denoted with an asterisk represents a concatenation followed by a batchnorm layer. For FiLM, the same block denotes a conditional batchnorm layer. Other batchnorm and layernorm layers have been omitted for clarity.
both image coordinates independently into a higher dimensional space by using Fourier features as it is commonly done with neural fields [26]:
\[\gamma(x)=(\sin(2^{0}\pi x),\sin(2^{1}\pi x),\ldots,\sin(2^{l}\pi x),\cos(2^{0}\pi x),\cos(2^{1}\pi x),\ldots,\cos(2^{l}\pi x)), \tag{1}\]
where \(x\) is an image coordinate and \(l\) controls the embedding size.
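A direct transcription of Eq. 1 as a sketch (each of the two image coordinates is embedded independently; the function name is ours):

```python
import math
import torch

def fourier_embed(x, l=4):
    """Map a coordinate x in [0, 1] to sin/cos features with frequencies 2^0..2^l pi (Eq. 1)."""
    freqs = (2.0 ** torch.arange(l + 1, dtype=torch.float32)) * math.pi  # 2^0 pi, ..., 2^l pi
    angles = x.unsqueeze(-1) * freqs                                     # (..., l + 1)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)     # (..., 2 * (l + 1))
```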
### Training Parameters
The influence of the parameters used in our experiments was evaluated in preliminary runs, based on the validation performance. For all experiments, we choose a fixed learning rate of \(1\times 10^{-4}\) for the Adam Optimizer and a batch size of 64. We use horizontal and vertical flipping as data augmentation and perform early stopping based on the IoU metric on the validation set. For all neural field architectures, 512 points are sampled per image and we choose \(l=4\) as the size of the points embedding. Empirically, we have found that the results are not sensitive to either of these parameters. We have explored scaling the neural field architectures by increasing the number of blocks and the MLP layers' width. With that approach, we use a hidden size of 512 for all MLP layers. One block is used within the Concat and FiLM conditioning network and two blocks are used within the Cross-Attention Transformer. For all architectures, we try to have approximately the same number of parameters to make a fair comparison.
## 5 Results
In Table 1 we show the Intersection over Union (IoU), F-Score and the number of parameters for all seven conditioning strategies and two different image sizes on the test set. We also compare our neural field decoder with the DeepLabV3+
| Decoder conditioning | Conditional code source | IoU (256) | F-score (256) | IoU (512) | F-score (512) | Params |
|---|---|---|---|---|---|---|
| Concatenation | global | 0.689 | 0.816 | 0.659 | 0.794 | 2.1M |
| Concatenation | local | 0.725 | 0.840 | 0.666 | 0.799 | 2.1M |
| Concatenation | global + local | 0.728 | 0.842 | 0.712 | 0.832 | 4.0M |
| FiLM | global | 0.695 | 0.820 | 0.660 | 0.795 | 2.1M |
| FiLM | local | 0.729 | 0.843 | 0.650 | 0.788 | 2.1M |
| FiLM | global + local | 0.729 | 0.843 | 0.707 | 0.829 | 3.7M |
| Cross-Attention | feature tokens | 0.758 | 0.862 | 0.754 | 0.860 | 2.6M |
| DeepLabV3+ [3] | – | 0.760 | 0.863 | 0.763 | 0.866 | 5.4M |

Table 1: Results for all examined decoder architectures. Image sizes 256 and 512 refer to \(256\times 256\) and \(512\times 512\) px inputs.
Figure 4: The predictions of all examined decoder architectures on three example images (\(512\times 512\) px) from the test set. For Concat and FiLM conditioning, \(g\) denotes a global code source, \(l\) denotes a local code source and \(g/l\) denotes a concatenation of global and local code. The class color code is: white=Impervious surfaces, blue=Building, cyan=Low vegetation, green=Tree, yellow=Car, red=Clutter/background. By comparing the predictions with the ground truth segmentation masks, it can be observed that the ability to represent details, e.g. distinct objects or angular corners, varies greatly between the approaches. Only the Cross-Attention and the DeepLabV3+ decoders are able to faithfully represent the segmentation masks, while the Concat and FiLM approaches tend to produce overly smooth geometries.
[3] fully convolutional neural network for semantic segmentation which also uses a ResNet34 backbone. In Figure 4 we show the predictions of all decoder architectures for three example images. From the results, we can make multiple key observations.
First, the Concat and FiLM decoders perform very similarly in all aspects, regardless of the conditional code source and the image size.
Second, conditioning via Cross-Attention works best amongst all neural field approaches. Furthermore, it performs similarly to the DeepLabV3+ FCN. Notably, the Cross-Attention decoder has half as many parameters and no access to the intermediate feature maps of the encoder.
Third, the performance of the Concat and FiLM approaches can be improved by using a combination of global and local features, particularly for larger images. In that case, the performance of both approaches is not much lower compared with the Cross-Attention decoder.
Fourth, the performance of the Concat and FiLM conditioning decreases with larger input images when using global codes. This can be expected, as it is harder to model more geometries in larger images with the same code length.
Fifth, when using local codes, the performance is also degraded when dealing with larger images. This is unexpected, as the sampling distance (meters per pixel) remains the same and therefore the size of the features should also remain the same. This could be an indication that the individual vectors in the feature map produced by the CNN encoder do not model purely local features, as stated by methods using this approach [4, 29]. This is further supported by the fact that modern CNN architectures have very large receptive fields so that one feature vector in the output feature map receives input from the complete input image. In our case, the ResNet34 encoder has a receptive field of 899 pixels which fully covers both our image sizes.
## 6 Conclusion
In this work, we performed a comparative study of neural field conditioning strategies and explored the idea of a neural field-based decoder for 2D semantic segmentation. Our results show that neural fields can have a competitive performance when compared with a classic CNN decoder while requiring even fewer parameters. We also showed that the performance of the neural field is considerably affected by the conditioning strategy. The best conditioning strategy likely depends on the task. For the task of 2D semantic segmentation, a Cross-Attention-based Transformer is superior to Concat and FiLM conditioning. However, also the combination of local and global conditional codes is a promising approach, as the performance is not much lower. Lastly, for local features, we showed an unexpected degradation in performance when increasing the image size. Further research is required to explain this observation and deduce consequences for local conditioning methods. |
2307.12250 | Spin polarization of heavy quarks in matter: predictions from effective
field theories | The spin polarization of heavy quarks in heavy-ion collisions at the LHC is
estimated from effective field theories (EFTs). One EFT is similar to the HQET
used at zero temperature. This gives a coupling of the heavy quark spin to
colour and electromagnetic fields in heavy-ion collisions. The second EFT
describes the interaction of heavy quarks and hydrodynamic modes, and gives the
coupling between the heavy quark spin and the local vorticity of the fireball.
Using these, we find that the measurement of polarization of the heavy quark
from small to moderate p_T at the LHC is predicted with a single free parameter
proportional to the vorticity. As a result, the heavy quark polarization is the
same whether it is derived from the spin alignment of heavy vector mesons or
the polarization of heavy baryons. We also predict that the parameter does not
differ much between charm and bottom quarks. | Sourendu Gupta | 2023-07-23T07:44:23Z | 2023-07-23T07:44:23Z | http://arxiv.org/abs/2307.12250v1 | # Spin polarization of heavy quarks in matter: predictions from effective field theories
###### Abstract
The spin polarization of heavy quarks in heavy-ion collisions at the LHC is estimated from effective field theories (EFTs). One EFT is similar to the HQET used at zero temperature. This gives a coupling of the heavy quark spin to colour and electromagnetic fields in heavy-ion collisions. The second EFT describes the interaction of heavy quarks and hydrodynamic modes, and gives the coupling between the heavy quark spin and the local vorticity of the fireball. Using these, we find that the measurement of polarization of the heavy quark from small to moderate \(p_{{}_{T}}\) at the LHC is predicted with a single free parameter proportional to the vorticity. As a result, the heavy quark polarization is the same whether it is derived from the spin alignment of heavy vector mesons or the polarization of heavy baryons. We also predict that the parameter does not differ much between charm and bottom quarks.
## I Introduction
Perhaps one of the most interesting recent observations in the study of heavy-ion physics has been the spin polarization of hadrons. Until now there have been measurements of the polarization of \(\Lambda\), \(\Xi\), and \(\Omega\) baryons [1; 2; 3] as well as spin alignment of \(\rho\) and \(K^{*}\) vector mesons [4; 5]. The spin polarization (or alignment) has to be due to an ordering axial vector field, which could be electromagnetic fields produced by the two charged nuclei travelling parallel to each other with relativistic speeds, or the overall angular momentum of the fireball.
In parallel, a substantial improvement has come about in the understanding of the interaction of heavy quarks with thermal QCD matter, largely by exploiting the hierarchy of scale between the heavy quark mass, \(m\), and the temperature, \(T\), of matter. On the theoretical side, there has been steady progress over time as details of heavy quark thermalization, and the relevant transport coefficients, were worked out in studies using weak coupling theory [6; 7], the lattice [8; 9; 10], and effective field theories (EFTs) [11; 12]. Experiments at the LHC have studied \(R_{AA}\), the ratio of cross sections in AA and pp collisions as functions of transverse momentum, \(p_{ T}\), and impact parameter at central rapidity, i.e., \(y<0.5\)-\(1\), for multiple nuclei. It turns out that both \(R_{AA}\) and the azimuthal flow, \(v_{2}\), of charm can be described in transport theory, and yield values for the charm quark momentum diffusion constant [13; 14] in agreement with lattice results. The momentum diffusion constant yields charm thermalization time in the range of 3-8 fm. Since this time scales as the heavy quark mass, the bottom thermalization time is expected to be about 10 fm or more. As a result, bottom quarks are not expected to be thermalized. Similarly, the momentum of charm quarks in non-central collisions at the LHC, and for high-\(p_{ T}\) even in central collisions, is also at best partially thermalized.
Here we suggest that spin polarization experiments with high \(p_{ T}\) heavy flavour hadrons, especially of bottom flavour, are of significant interest. The measurement is interesting in itself [15]. Moreover, by writing two bottom-up EFTs for the interaction of a heavy quark with matter, we demonstrate here that the measurement of polarization of heavy flavour hadrons would give theoretically controllable insights. EFTs are written as usual by choosing an appropriate cutoff, and using the symmetries of the problem to write all possible terms in the Lagrangian organized by mass dimension.
The light degrees of freedom in the fireball, gluons and light quarks, are characterised by the energy and momentum scale \(T\). In the early stages of the fireball one has \(T>T_{co}\), where \(T_{co}\approx\Lambda_{\overline{MS}}\). Realistically, \(T\) may be as large as 0.5 GeV at early times, but not much higher. One could take \(m=4.18\pm 0.03\) GeV for bottom quarks and \(m=1.280\pm 0.025\) GeV for charm [16]. By choosing a cutoff \(m>\Lambda\gg T\), one may write an EFT for the heavy quark which is not very different from the vacuum heavy quark effective theory (HQET). Clearly, the EFT is more likely to be under control for the bottom quark than the charm. Vacuum HQET finds quantitative use in computing the decays of hadrons with heavy quarks. For \(T>T_{co}\) there are no hadrons and HQET will be useful in finding the polarization of heavy hadrons. This polarization would carry through the hadronization, since heavy quark spin symmetry implies that dynamics at longer scales does not change it.
At distance larger than the transport length scale (in a weakly coupled medium this is the mean free path), matter can be described as a fluid. If \(g(T)\), the strong coupling at the scale \(T\), is small enough, then a power counting shows that the transport scale \(1/[Tg^{4}(T)\log g(T)]\) is longer than the magnetic screening length \(1/[Tg^{2}(T)]\), so the fluid would be colourless. However, the approximation \(g(T)\ll 1\) cannot be used, as becomes clear when we consider a dimensionless fluid variable called the liquidity, \(\ell\), which is the ratio of the transport length scale to the typical microscopic distance scale [17]. In the weak coupling regime \(\ell=1/[g^{4}(T)\log g(T)]\gg 1\). In this regime the fluid behaves like a gas, which has mean free path much larger than the interparticle spacing. However, one can also write \(\ell\approx\sqrt[3]{S/\eta}\). In the fireball one finds \(\ell\approx 1\), which means the fluid behaves like a liquid, so \(g(T)\) is not small, and the weak-coupling expansion is not applicable at energy scale of order \(T\). By choosing \(\Lambda\gg T\) in HQET we ensure that the weak coupling expansion is not pushed to a scale where it is not applicable.
However, experimental studies of heavy-ion collisions have a good description in terms of a colourless fluid [18; 19]. So at length scales where this fluid description works, one should use variables which describe fluid flow and its fluctuations i.e., phonons, as well as flavour current. Since QCD does not allow flavour mixing, the flavour current of
Figure 1: The major energy scales considered here are (lowest on the left) the inverse of the typical size or lifetime of the fireball, \(\mathcal{M}\), the temperature of the medium, \(T\), and the heavy quark mass, \(m\). Thermal HQET is defined with a UV cutoff \(\Lambda\) such that \(m>\Lambda\gg T\). HQSD is defined with a UV cutoff \(\overline{\Lambda}\) satisfying \(T>\overline{\Lambda}\gg\mathcal{M}\), so that the medium can be described by a colour neutral fluid.
the heavy quark has no direct current-current coupling to other flavour currents; at hydrodynamic scales the currents are only coupled via phonons, as a result of which the couplings appear at mass dimension greater than 6. Therefore, the leading couplings of a heavy quark current are to flow vectors and phonons. The momenta of phonons would be bounded, of course, with \(|{\bf k}|<\overline{\Lambda}\). Also, since the spin-symmetry of the heavy quark persists into the IR, a compact description of the heavy-flavour current and its spin is provided by a colour neutral heavy fermion field. So the hydrodynamic effective theory below the UV cutoff \(\overline{\Lambda}\) may be called Heavy Quark Sono-Dynamics (HQSD). Take the scale of the fireball (typical spatial size or lifetime) to be \(1/{\cal M}\). Since the Knudsen number \(\kappa={\cal M}/T\ll 1\), one may use a UV cutoff \(\overline{\Lambda}\) for HQSD, with \(T>\overline{\Lambda}\gg{\cal M}\).
In the Sections II and III we discuss the Lagrangians of HQET in matter and HQSD respectively. Applications to heavy quark spin thermalization and polarization in heavy-ion collisions at the LHC can be found in Section IV. Our conclusions and a summary of the observations which could give evidence of the physics discussed in this paper are in Section V. Technical material, such as details of the objects needed for the construction of EFTs, appears in the appendix.
## II Heavy quark effective theory in matter
In vacuum HQET [20; 21; 22] light degrees of freedom are integrated out to some scale \(\Lambda\) such that \(m>\Lambda\gg\Lambda_{\overline{MS}}\). The second inequality ensures that this can be done in weak coupling. The operator product expansion required for writing the EFT does not change in thermal matter, although the tensor composition of the local operators does (see Appendix A for details). Further, when \(\Lambda\gg T\), the weak coupling computations are also very similar. As a result, HQET in matter does not differ too much from the more familiar case of HQET in vacuum.
HQET in matter has two consequences. The first is heavy-quark velocity superselection, similar to that in the vacuum theory. The momentum of a heavy quark can be written in terms of its 4-velocity, \(v\), in the form \(p=mv+k\) where the momentum \(k\) comes from the soft kicks given to the heavy quark by the gluons in the medium. By demanding that \(v\gg\Lambda/m\), one controls these kicks when the transverse momentum of the heavy quark is larger than the cutoff, \(p_{T}\gg\Lambda\). Velocity superselection allows us to treat the heavy quark rest frame (HQRF) as an inertial frame and write Lorentz invariant quantities more simply in the HQRF (see Appendix A). The second is that choosing \(\Lambda\gg T\) allows the integration of hard modes to be done in weak coupling theory. This avoids the problem of having to use \(g(T)\), which, as we argued before is large. The choice of UV cutoff then allows us to match the LECs using weak coupling theory, at least for bottom.
For \(v\gg\Lambda/m\) one first factors the conserved quantity \(v\) by defining the Dirac field \(\chi=\exp(imv\cdot x)\psi\). Since the remainder of the heavy quark momentum is smaller than \(\Lambda\), it is useful to decompose \(\chi\) into large and small parts, the large part being \(Q=P_{+}\chi\) where \(P_{+}=(1+\not{v})/2\). Then using the methods of Appendix A, one finds that in the HQRF one can write
\[L_{HQET}=\frac{i}{2}\overline{Q}\partial_{t}Q-\frac{c_{1}}{2m}\overline{Q}{ \bf D}^{2}Q+\frac{c_{2}}{4m}\overline{Q}\Sigma_{\mu\nu}QgF^{\mu\nu}+{\cal O} \left(\frac{1}{m^{2}}\right) \tag{1}\]
where \(\partial_{t}\) is a time derivative, \({\bf D}\) is the spatial part of the covariant derivative, \(g\) is the strong coupling taken at the scale \(\Lambda\), and \(F^{\mu\nu}\) is the field strength tensor for gluons. The LECs \(c_{1,2}=1+{\cal O}(\alpha_{s})\) are computable in the weak-coupling expansion. Loop corrections to the heavy quark propagator can induce a mass correction of the form \(\Delta m={\cal O}(\alpha_{s})\). The form of eq. (1) is robust; when \(g\simeq 1\) it is the ability to compute the LECs in weak-coupling which is lost [23]. Note that the piece of the action with the spin operator, \(\Sigma_{\mu\nu}\), commutes with the leading kinetic piece of HQET, and gives rise to the emergent heavy quark spin symmetry. This property is retained in the presence of matter.
The mass dimension 5 interaction term between the heavy quark spin and the colour field is given by (see Appendix A),
\[L_{5}=\frac{c_{2}g}{2m}\times\begin{cases}\overline{Q}\mathbf{\sigma}\cdot{\bf B }Q&\text{(in HQRF)}\\ \gamma_{ CM}\overline{Q}({\bf v}\cdot{\bf E}+\mathbf{\sigma}\cdot{\bf B})Q&\text{(in CM frame)}\end{cases} \tag{2}\]
There is no surprise about the form of the interaction, but HQET gives the value of \(c_{2}\) to whatever precision is required. There is a source of large colour fields in the earliest stages of the collision. At this epoch the momenta of the gluons is mostly in the \(z\) direction, and a plasma mode called the Weibel instability is expected to produce strong colour fields in the medium [24; 25; 26; 27; 28]. Due to the Weibel instability, induced fields grow until non-Abelian effects stop the growth. An estimate in [26] is that the saturation value for mode number \(q\) is \(gB\simeq q^{2}\) (see however, [28]). At temperature \(T\) the most likely mode number \(q\simeq T\). So, by this argument, one would have \(gB\simeq T^{2}\). However, the direction of the colour magnetic field due to the Weibel instability is fixed by initial perturbations, and will be
different from one event to another. Since polarization cannot be measured event by event, unfortunately, this spin alignment cannot be used to yield experimental evidence of the Weibel instability. A different use of the effect of colour magnetic fields on the heavy quark spin is to estimate whether or not the spin degree of freedom can be thermalized independently of the momentum. We will present such an estimate in Section IV.
The coupling of heavy quarks to electromagnetic (EM) fields is described by simply adding the EM field tensor to eqs. (1) and (2). The EM field has coupling \(qe\) instead of \(g\), where \(q\) is the charge of the heavy quark. We do not add EM corrections to the LECs. This works since \(\alpha\simeq 0.007\), and with \(\alpha_{ S}\simeq 0.25\), the changes will be numerically significant only at the level of three-loop strong-interaction corrections.
In this discussion we have assumed that the cutoff \(\Lambda\) can be pushed to high enough values that weak coupling estimates are reliable. As we discuss in Section IV, this assumption may fail for the charm quark, since \(\Lambda\) has to be smaller than \(m\). It is useful therefore to note that a lattice computation of the LEC \(c_{2}\) is feasible, but hard. The spin polarization of the quark is clearly proportional to the magnetic polarization. So from a linear response perspective, one may simply define a magnetic susceptibility, \(\chi\), by the constitutive relation, \(M=\chi B_{ EM}\), where \(M\) is the magnetization of the heavy quark. Due to the analogy between an imaginary chemical potential for a fermion and an external U(1) magnetic field on the fermion, \(\chi\) is related to the quark number susceptibility measured in lattice QCD, but for a heavy quark. Introducing such a chemical potential for the heavy quark, matching the lattice computation of the susceptibility to one obtainable from eq. (2) would give a non-perturbative matching of \(c_{2}\). This last step requires a knowledge of the charm \(g-2\), relating the magnetic moment and the spin. This is parametrically of the order of \(\alpha_{ S}\) at the charm quark mass, and hence also requires a lattice computation. This is the hardest part of the non-perturbative matching of \(c_{2}\) using the lattice.
## III Heavy quark sono-dynamics
Next we turn to the extreme IR and the effective theory at momentum scales smaller than typical transport time scales. This is the regime of hydrodynamics. In this regime the heavy quark can only interact with flavour currents, the conserved energy-momentum tensor, and their fluctuations. A colour bleached heavy flavour current is easily constructed using (the large components of) a colour neutral Dirac field. QCD forbids any direct interaction between currents of different flavours. Nor can such interactions be enabled by exchanging hadronic excitations, since they all have effective masses of order \(T\), which lie beyond the cutoff \(\overline{\Lambda}\). However, there is always one low-energy excitation in all hydrodynamic theories, namely sound waves. They are the Goldstone field of a broken conformal symmetry and are described by a scalar field, since they have no polarization. The theory of a heavy quark coupled to phonons is what we call Heavy Quark Sono-Dynamics (HQSD). We note that HQSD is equally valid for charm and bottom quarks since \(\overline{\Lambda}\ll m\). In neither case can the matching of LECs be done in weak coupling.
In order to control the construction of the theory, the effect of the viscosity, \(\eta\), has to be understood. Its major effect on sound is to exponentially attenuate the waves in a typical length scale which is called the Stokes length, \(\ell_{\rm St}\). In a theory of phonons this exponential decay may be modelled as an effective phonon mass, \(m_{\rm St}=1/\ell_{\rm St}\). This mass scale can be written in terms of \(\eta\), the energy density of the fluid, \(\epsilon\), and the speed of sound, \(c_{s}\), for a wave of frequency \(\omega\) by the expression
\[m_{\rm St}=\frac{2\eta\omega^{2}}{3\epsilon c_{s}^{3}}\approx\left(\frac{\eta }{S}\right)\,\frac{\overline{\Lambda}^{2}}{T}. \tag{3}\]
The second expression is an order of magnitude estimate obtained by setting \(\epsilon\approx TS\), where \(S\) is the entropy density of the fluid, and taking the maximum possible value for \(\omega\approx\overline{\Lambda}\). Then choosing \(T=500\) MeV and \(\overline{\Lambda}=50\) MeV, as we argue in Section IV, we find that \(m_{\rm St}\) is not larger than about 5 MeV. As a result, the phonon effective mass due to viscosity can be neglected within the lifetime of the fireball, and it may be treated as a true Goldstone boson.
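As a quick numerical cross-check (not part of the derivation above), the short sketch below evaluates eq. (3) with the quoted values of \(T\) and \(\overline{\Lambda}\); the value of \(\eta/S\) is an illustrative assumption of order unity.

```python
# Rough numerical check of the Stokes-mass estimate of eq. (3),
# m_St ~ (eta/S) * Lambda_bar^2 / T, in natural units (MeV).
# The value of eta/S below is an illustrative assumption, not a fitted number.
eta_over_S = 1.0      # assumed upper end; a strongly coupled fluid has eta/S of order 0.1-1
T = 500.0             # MeV, initial temperature at LHC energies (see Section IV)
Lambda_bar = 50.0     # MeV, hydrodynamic UV cutoff

m_St = eta_over_S * Lambda_bar**2 / T
print(f"m_St ~ {m_St:.1f} MeV")   # about 5 MeV, negligible over the fireball lifetime
```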
Since HQSD cannot be obtained directly from the QCD Lagrangian, we will construct it bottom-up using the available symmetries. Taking into account the existence of two special frames, namely HQRF and the fluid local rest frame (FLRF), one can clearly write a Lorentz-invariant action using the projectors given in Appendix A. The simplest theory one can write is given in the HQRF by
\[L_{HQSD}=+\frac{1}{2}\left[\widetilde{\Pi}_{1}^{\mu\nu}+c_{s}^{2}\widetilde{ \Pi}_{2}^{\mu\nu}\right]\left(\partial_{\mu}\phi\right)\left(\partial_{\nu} \phi\right)+\Delta m\overline{Q}Q+\frac{i}{2}\overline{Q}\partial_{t}Q-\frac{ c_{5}}{2m}\overline{Q}\mathbf{\nabla}^{2}Q+\cdots \tag{4}\]
The first two terms are kinetic terms for the phonon (see Appendix A for an explanation of why this term splits), the next term accounts for all possible mass corrections in the theory, and the last two are kinetic terms for the heavy quark. All terms are of mass dimension 4, except the mass term, which has dimension 3, and the last, which has mass
dimension 5. As expected, there is no spin dependence here. Note also that there are no interaction terms between \(Q\) and \(\phi\) at dimension 4 since terms such as \(\overline{Q}Q\,\phi\) are not allowed for the Goldstone field \(\phi\).
However, in setting up the Lagrangian in eq. (4), we have neglected the crucial fact that the fluid will generally have a local vorticity pseudovector \(\mathbf{w}=\mathbf{\nabla}\times\mathbf{u}\). Since \(\mathbf{u}\) is dimensionless, \(\mathbf{w}\) has mass dimension 1. Before proceeding, it is useful to recall a formal property. The vorticity \(\mathbf{w}=\mathbf{\nabla}\times\mathbf{u}\) gives rise to a topological invariant called the vortex helicity [29]
\[C=\int d^{3}x\mathbf{u}\cdot\mathbf{w}=\int d^{3}x\epsilon^{ijk}u_{i}\partial_ {j}u_{k}. \tag{5}\]
From the last expression it is clear that \(C\) is similar to an Abelian Chern-Simons term. The Kelvin-Helmholtz theorem shows that \(C\) is a conserved quantity in a perfect fluid. More intuition for such a term comes from electrodynamics where the 4-volume integral of the term \(\epsilon^{\mu\nu\lambda\rho}F_{\mu\nu}F_{\lambda\rho}\) can be reduced to the form in eq. (5) with the vector potential taking the place of \(\mathbf{u}\) and the magnetic field of \(\mathbf{w}\) [30]. What this means is that a good generalization of the vorticity is an antisymmetric rank-2 tensor, \(\omega_{\mu\nu}\). The part of this which is analogous to \(\mathbf{B}\) in \(F_{\mu\nu}\) is \(\mathbf{w}\). The part that is analogous to \(\mathbf{E}\) we will denote \(\mathbf{\varpi}\).
This generalization of the vorticity vector agrees with that used in [31; 32], with the definition used here being exactly a factor of two larger than that used in those references. Furthermore, in the context of the polarization of hadrons with light or strange quarks, two related quantities, the thermal vorticity and the chiral vorticity, have been investigated. These lie above the UV cutoff of HQSD, and so do not appear in the systematic coarse graining used to construct this theory.
There is another critical issue to consider before constructing a theory of heavy quarks coupled to the vorticity. For \(\mathbf{w}\) to be non-vanishing, \(\mathbf{u}\) must change from one point to another. In that case the fluid element coupled to the quark is accelerating. How is it then possible to neglect technical issues connected to defining quantum field theories in accelerated frames, such as the Unruh effect? One cannot, in principle. Instead one can ask how important this effect is likely to be in the current context. Since one expects \(\mathbf{w}\) to be similar in magnitude to the expansion rate \(\mathcal{M}\), both being macroscopic scales in the fluid, the acceleration gives rise to an Unruh temperature of magnitude \(\mathcal{M}/(2\pi)\ll\overline{\Lambda}<T\). So, in the thermal environment of the fireball, one should be able to neglect the effect of acceleration and use an EFT.
Since \(\mathbf{w}\) has mass dimension unity, it can lead to a new dimension 4 term in the Lagrangian,
\[L_{w}=\frac{c_{4}}{2}\,\overline{Q}\Sigma^{\mu\nu}Q\omega_{\mu\nu}=c_{4}\times \begin{cases}\overline{Q}\mathbf{\sigma}\cdot\mathbf{w}Q&\text{(in HQRF)}\\ \gamma_{ CM}\overline{Q}(\mathbf{v}\cdot\mathbf{\varpi}+\mathbf{\sigma}\cdot\mathbf{w})Q &\text{(in CM frame)}\end{cases} \tag{6}\]
where \(c_{4}\) is a dimensionless LEC of order unity. This term captures the non-commutation of a spin with rotational fluid motion, for which the term spin-orbit coupling has become standard in the context of heavy-ion physics. The term has no suppression by the UV cutoff, and is therefore the same for both the charm and bottom quark. If the fireball has a net angular momentum, then the vorticity vectors in different parts of the fireball must sum up to a non-vanishing value. So such a coupling of the quark spin to the local vorticity can give rise to net spin polarization.
It is interesting to ask what effect higher dimensional terms will have. They are suppressed by powers of \(k/\overline{\Lambda}\) where \(\mathbf{k}\) is the momentum of the phonon in the HQRF. Since there is another vector, namely the fluid velocity \(\mathbf{u}\), it is possible to construct an axial vector \(\mathbf{q}=\mathbf{u}\times\mathbf{k}\). As a result, the simplest spin-dependent coupling to a phonon is \(\mathbf{\sigma}\cdot\mathbf{q}\). This dimension 5 piece is
\[L_{5}=\frac{c_{5}^{\prime}}{\overline{\Lambda}}\,\epsilon^{\mu\nu\lambda\rho}v_{\mu}\,\overline{Q}\gamma_{5}\gamma_{\nu}Q\,u_{\lambda}\partial_{\rho}\phi\stackrel{{\rm HQRF}}{{=}}\frac{c_{5}^{\prime}\gamma}{\overline{\Lambda}}\,\overline{Q}\sigma_{i}Q\,\epsilon^{ijk}u_{j}\nabla_{k}\phi. \tag{7}\]
Since the direction of \(\mathbf{k}\) is random, averaging the resulting polarization over all possible directions of \(\mathbf{k}\) gives a vanishing result. This argument also implies that corrections to \(L_{w}\) can only come from terms which involve \(|\mathbf{k}|^{2}\) multiplying \(\mathbf{\sigma}\cdot\mathbf{w}\). These are dimension-6 terms, and are suppressed by \(k^{2}/\overline{\Lambda}^{2}\). So, the universality between bottom and charm polarization due to the coupling to vorticity is expected to be numerically accurate.
## IV Application to heavy-ion collisions
We will work with PDG values for the heavy quark masses [16], namely \(m=4.18\pm 0.03\) GeV for bottom quarks and \(m=1.280\pm 0.025\) GeV for charm. The best estimates of the possible energy density in the early stages of a heavy-ion collision currently come from Bayesian fits [18; 19]. We can take the temperature reached at LHC energies to be about 0.5 GeV, and in the top beam energy at RHIC to be about 0.3 GeV. This gives almost an order of magnitude
separation between the bottom quark mass and both initial fireball temperatures, but significantly less for the charm quark. So HQET in matter could be quantitatively predictive for the bottom, but marginal for the charm. Since we would like to satisfy the double inequality \(m>\Lambda\gg T\), a safe choice is to take \(\Lambda\approx\sqrt{mT}\). This can be satisfied with \(\Lambda=2\) GeV for the bottom. At this scale the QCD coupling is \(\alpha_{ S}\approx 0.3\), which could allow computation of the LECs in the weak coupling expansion. Pushing the UV cutoff to smaller values in order to accommodate charm will cause us to lose this advantage. However, as discussed previously, it is possible in this case to do a lattice computation to match the LECs of HQET in matter.
For the construction of HQSD we need to consider one more time scale, the fireball lifetime. For both top RHIC energy and LHC, one can take the inverse of the expansion rate, \({\cal M}\), to be the fireball lifetime, i.e., \({\cal O}(10)\) fm, giving \({\cal M}\approx 20\) MeV. Since we are using this as an order of magnitude, it covers the range between 6 fm and 20 fm, which is wide enough to accommodate the fireball lifetime in this range of energies. With the values of initial \(T\) already quoted, this implies that the choice \(\overline{\Lambda}=50\)-100 MeV is workable. It is possible that the inverse of the expansion rate at an early stage of the fireball is different from the lifetime. However, unless the inverse rate is as low as 1-2 fm, this choice \(\overline{\Lambda}=100\) MeV still works at early times.
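The orders of magnitude entering this choice of scales can be checked with a few lines of arithmetic; the lifetimes used below are the ones quoted above, converted with \(\hbar c\approx 197\) MeV fm.

```python
# Conversion of the fireball lifetime to the expansion-rate scale M ~ 1/lifetime,
# using hbar*c = 197.3 MeV*fm; the 6-20 fm range is the one quoted in the text.
hbar_c = 197.3  # MeV fm
for lifetime_fm in (6.0, 10.0, 20.0):
    print(f"lifetime {lifetime_fm:4.0f} fm  ->  M ~ {hbar_c / lifetime_fm:5.1f} MeV")
# M of order 10-30 MeV, comfortably below the chosen cutoff Lambda_bar = 50-100 MeV.
```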
In this section we will concentrate on the polarization of high-\(p_{ T}\) heavy quarks at central rapidity. When \(\beta_{ CM}\gamma_{ CM}=p_{ T}/m\) is large, the speed of the heavy-quark is larger than the speed of sound in the fireball. As a result, the radial expansion of the fireball may be neglected, and the heavy quark taken to leave the fireball at a time which is no more than the nuclear radius \(R_{A}\simeq 6\) fm. Conditions in the fireball, such as the temperature and the vorticity will evolve during this time. The values we use below should be considered to be path-averaged. Due to the uncertainties in initial values of various parameters we do not perform an actual averaging in this first estimate.
During the time that the fast heavy quark spends in the medium, the coupling between its spin and thermal colour magnetic gluons, see eq. (2), will cause the spin to relax. We present only an order of magnitude prediction for the spin relaxation rate here, leaving a detailed calculation to a later paper. Thermal fluctuations of the gluon field can give rise to field strengths \(gB\simeq T^{2}\). Their interaction with the spin will cause it to relax in a typical time of order \(m/T^{2}\), taking \(c_{2}=1\). For \(T=0.5\) GeV, this leads to a bottom quark relaxation time of a little more than 3 fm, and the relaxation time approaches 6 fm for \(T=0.4\) GeV.
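A minimal arithmetic check of these numbers, converting \(m/T^{2}\) from GeV\(^{-1}\) to fm, reads as follows; the conversion factor \(\hbar c\) is standard, and \(c_{2}=1\) is the assumption made above.

```python
# Order-of-magnitude spin relaxation time tau ~ c2 * m / T^2 (with c2 = 1),
# converted from GeV^-1 to fm using hbar*c = 0.1973 GeV*fm.
hbar_c = 0.1973   # GeV fm
m_b = 4.18        # GeV, bottom quark mass (PDG value quoted above)

for T in (0.5, 0.4):                    # GeV
    tau = (m_b / T**2) * hbar_c         # fm
    print(f"T = {T} GeV: tau ~ {tau:.1f} fm")
# roughly 3 fm at T = 0.5 GeV and above 5 fm at T = 0.4 GeV
```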
The relaxation of the spin has two consequences at the LHC. First, at times of about 3 fm or so, one can use a thermal density matrix to a good approximation. Second, because of the rapid decrease of the relaxation rate as the temperature drops, the spin degree of freedom freezes out and becomes a thermal relic pretty early. For fast heavy quarks, then, one can use a thermal density matrix with a temperature of about 0.4-0.5 GeV after about 3 fm.
All the heavy-quark spin Hamiltonians that we will need to deal with are of the form \(H=a+b\mathbf{\sigma}\cdot\mathbf{\mathcal{A}}\) where \(a\) and \(b\) are scalars, and rotational symmetry is broken by the axial vector \(\mathbf{\mathcal{A}}\). In the leading order in the EFT we need only the single heavy-quark sector of Fock space. As a result, it is sufficient to consider the two dimensional Hilbert space spanning the spin states of the quark, and the treatment is an elementary exercise in statistical mechanics. Using a \(2\times 2\) density matrix of quark spin \(\rho=\exp(-H/T)\), with \(T\) the relic temperature, the polarization in a direction
Figure 2: In the CM frame of the colliding ions, the axes are oriented so that the beam is along the z-axis, and we align the x-axis so that the centers of the two colliding nuclei initially move along the xz plane. The initial magnetic field \(B_{ EM}\) is oriented in the \(y\) direction, as is the net angular momentum, \(\mathbf{J}\) of the fireball. As shown in the figure on the left, the heavy quark velocity, \(\mathbf{v}\) is orthogonal to the beam (since we assume it is at central rapidity) and at an angle with respect to \(B_{ EM}\). On the right we show a possible direction of the vorticity \(\mathbf{w}\) at the position of the heavy quark. Although \(\mathbf{w}\) may be oriented differently in different fluid elements, the heavy quark encounters a non-zero mean of \(w\) along its path if \(\mathbf{J}\) is non-vanishing.
specified by the unit 3-vector \({\bf n}\) is given by \(P={\rm Tr}(\mathbf{\sigma}\cdot{\bf n}\rho)/z\) where \(z={\rm Tr}\rho\). Quantizing along \({\bf n}\), one finds that
\[P=\frac{{\rm e}^{\Delta E/T}-{\rm e}^{-\Delta E/T}}{{\rm e}^{\Delta E/T}+{\rm e }^{-\Delta E/T}}=\tanh\left(\frac{\Delta E}{T}\right)\qquad{\rm where}\qquad \Delta E=2b{\bf n}\cdot\mathbf{\cal A}. \tag{8}\]
In heavy-ion collisions, we will take the Hamiltonian in the CM frame of the fireball, where the axes are chosen as shown in Figure 2. Since the initial magnetic field, \({\bf B}_{ EM}\), as well as the net angular momentum, \({\bf J}\), in non-central collisions is oriented in the \(y\) direction, this is the polarization predicted along this axis.
In the early stages of the fireball, it is estimated that \(cB_{ EM}=\zeta T_{co}^{2}\), where \(\zeta={\cal O}(1)\) in the CM frame of the colliding system (see [33; 34], for example), and points in the \(y\) direction. There is also an electric field in the \(x\) direction of similar magnitude; the numerical differences between the magnitudes of \(B_{ EM}\) and \(E_{ EM}\) can be captured in different values of \(\zeta\), with both of order unity. We can write
\[\Delta E=\frac{c\gamma_{ CM}T_{co}^{2}}{4m},\qquad{\rm where}\qquad c=c_{2}q\zeta, \tag{9}\]
where \(q=-1/3\) is the charge of the bottom quark and \(c_{2}=1+{\cal O}(\alpha_{ S})\). The biggest uncertainty is in the value of \(\zeta\). This large magnetic field will certainly polarize heavy quarks, but the subsequent relaxation will wipe out the initial polarization. Any observable polarization must be due to the magnetic field available at the time that the spin freezes out. If this happens at a time of around 3-4 fm, then we need estimates of \(\zeta\) at that time. It has been estimated that after the first fm of lifetime, the EM fields decay extremely slowly [34], and one should be able to use \(\zeta=0.01\). Then, with \(c={\cal O}(0.01)\), \(\Delta E\) is of the order of a few tens of keV for \(p_{ T}\) ranging from 5-50 GeV. For such a small value of \(\Delta E\), one expects \(P=\Delta E/T\) to good precision, which implies that the polarization is far below the level of a percent. We show detailed results in Figure 3 with the choice \(c=0.01\), for two values of \(T\). The bands of uncertainty shown come from uncertainties in \(m\) and \(T_{co}\).
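The order of magnitude quoted here is easy to reproduce; a minimal sketch, using eqs. (8)-(9) with the same parameter choices (\(c=0.01\), \(T_{co}=0.155\) GeV) and an assumed freeze-out temperature \(T=0.5\) GeV, is given below.

```python
import numpy as np

# Bottom-quark polarization from the remnant EM field, eqs. (8)-(9),
# with c = c2*q*zeta = 0.01 as chosen in the text; T is the spin freeze-out
# temperature (0.5 GeV assumed here for illustration).
m, T_co, T, c = 4.18, 0.155, 0.5, 0.01   # GeV, GeV, GeV, dimensionless

for pT in (5.0, 20.0, 50.0):             # GeV
    gamma = np.sqrt(1.0 + (pT / m)**2)
    dE = c * gamma * T_co**2 / (4.0 * m) # GeV
    P = np.tanh(dE / T)
    print(f"pT = {pT:4.0f} GeV: dE ~ {dE*1e6:5.0f} keV, P ~ {P:.1e}")
# P stays at the 1e-5 to 1e-4 level, far below a percent, as stated above.
```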
It is useful to note a difference between thermal averaging and averaging over events. The result of eq. (8) is obtained by thermal averaging. For the polarization due to colour fields produced by the Weibel instability, one has \(\Delta E\propto{\bf B}_{y}\). This gives a spin polarization \(P=\Delta E({\bf B}_{y})/T\) in one event. Since polarization measurements average over events, the observed polarization will be
\[P=\frac{1}{N_{evt}}\left(\frac{c_{2}g}{2mT}\right)\sum_{i=1}^{N_{evt}}{\bf B}_ {y}=0, \tag{10}\]
where the number of events is \(N_{evt}\). The sum vanishes because from one event to another the strong field \({\bf B}\) has independent orientations and magnitudes. In the language of statistical mechanics \({\bf B}\) is a quenched random variable, and its effects could become measurable only if event-by-event spin measurements were possible. Another example of quenched randomness could occur if the magnitude of \({\bf B}_{ EM}\) varied strongly from one event to another. In that case an averaging over \(\zeta\) would have to be performed. The result would still be non-zero because \({\bf B}_{ EM}\) always points in the same direction. However, this is an academic discussion, since the polarization due to EM fields is predicted to be so small.
Figure 3: The \(p_{ T}\) dependence of bottom quark polarization, \(P\), due to (a) the remnant magnetic field, \({\bf B}_{ EM}\), at a time of about 3 fm, and (b) the vorticity \({\bf w}\). In (a) we have taken \(|{\bf B}_{ EM}|=\zeta T_{co}^{2}\), and used \(c_{2}q\zeta=0.01\); the result is unlikely to be observable. The bands include the uncertainty in the bottom quark mass and \(T_{co}\), which has been taken as \(155\pm 3\) MeV. In (b) we have chosen \(2c_{4}|{\bf w}_{y}|=10\) MeV (full lines). The uncertainty comes from changing this combination from 3 to 30 MeV (dashed lines).
On the other hand, for the coupling of the spin to the vorticity we have
\[\Delta E=2c_{4}\gamma_{ CM}{\bf w}_{y}, \tag{11}\]
where we can take \(c_{4}\approx 1\), and the polarization is measured with respect to the \(y\) direction, as before. Typical values of \({\bf w}_{y}\) at the LHC are expected to be [32] 10 MeV or so, but can depend significantly on impact parameter and time. Given this, we assume a central value of \(2c_{4}{\bf w}_{y}=10\) MeV, and use a generous margin of uncertainty, 3-30 MeV. The results are shown in Figure 3. The polarization of the heavy quark is predicted to be large, and would definitely be observable. The value of \(\Delta E\) is expected to be similar for bottom and charm quarks, and differences between their polarizations, if any, would be evidence for a change in their spin-freezeout temperature, \(T\), which enters through eq. (8).
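The size of this prediction can again be checked directly from eqs. (8) and (11); the sketch below uses the central value \(2c_{4}{\bf w}_{y}=10\) MeV quoted above, with a freeze-out temperature of 0.45 GeV chosen for illustration within the quoted range.

```python
import numpy as np

# Polarization from the spin-vorticity coupling, eqs. (8) and (11).
# 2*c4*w_y = 10 MeV is the central value quoted in the text; the freeze-out
# temperature T = 0.45 GeV is an illustrative choice within the quoted range.
m, T = 4.18, 0.45            # GeV (bottom quark mass, spin freeze-out temperature)
two_c4_wy = 0.010            # GeV, i.e. 10 MeV

for pT in (5.0, 20.0, 50.0): # GeV
    gamma = np.sqrt(1.0 + (pT / m)**2)
    P = np.tanh(gamma * two_c4_wy / T)
    print(f"pT = {pT:4.0f} GeV: P ~ {P:.2f}")
# P grows with gamma_CM and reaches the level of tens of percent at high pT.
```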
## V Conclusions
In this paper we investigated heavy quark spin dynamics in strongly interacting matter using heavy quark effective field theories (EFT). These offer a controlled but extreme simplification of the problem by reducing the Fock space of heavy quarks to the Hilbert space of a single heavy quark in the leading order of the EFT. In particular, for the polarization, this means that we need only to work within the two dimensional space of a single heavy quark spin. We used two EFTs to investigate the coupling of a heavy quark spin to the fireball at two different energy and momentum scales.
The first is the analogue of HQET in matter. The HQET in vacuum is widely used and tested in heavy-hadron phenomenology. The operator product expansion that is used to write down the terms changes mildly in matter as long as the UV cutoff of HQET maintains the hierarchy \(m>\Lambda\gg T_{co}\) (since \(T_{co}\approx\Lambda_{\overline{MS}}\)). We discussed that this may be a reasonable approximation for bottom quarks, but is more problematic for charm. We also discussed how the HQET for charm would differ, and how its LECs can be matched to lattice QCD computations.
The second EFT is HQSD, and is valid at hydrodynamic scales, i.e., below a UV cutoff \(T>\overline{\Lambda}\gg{\cal M}\), where \({\cal M}\) is the inverse of a typical size (or time) scale of the fireball. This theory would be equally valid for charm and bottom quarks. We gave a bottom-up construction of HQSD which yielded a mass dimension-4 coupling between heavy quark spin and the local vorticity of the fluid. This term does not have an explicit dependence on the mass of the heavy quark, and should impart equal polarization to the charm and bottom. We checked that the contribution of dimension-5 terms to the polarization would vanish, so any corrections to the universality of charm and bottom would come at best at dimension-6, and hence are expected to be small.
We argued that HQET in matter leads to a thermal relaxation of the heavy quark spin even when the quark is fast enough that its momentum is not thermalized. An estimate of the relaxation time showed that a bottom quark spin could be in thermal equilibrium for a relatively short time. It is interesting to note that, depending on the value of the LEC, the relaxation time of a charm quark could be shorter. As a result, the spin of a fast charm quark might stay in thermal equilibrium for longer. We showed that a consequence of the long time scale of bottom quark spin relaxation is that the polarization due to the remnant EM field would be small, and below the threshold of measurement. The observable spin polarization of the heavy quarks would then be due to the fluid vorticity, and is expected to be large (see Figure 3).
We also discussed two semi-quantitative predictions. The most basic of these is that the spin polarization increases as \(\gamma_{ CM}\) as predicted by eq. (8). This dependence on \(\gamma_{ CM}\) indicates that the heavy quark spin is coupled to a material property that is specified in the CM frame of the collision, i.e., in the rest frame of the fireball, and is independent of heavy quark \(p_{ T}\). As one can see from the formalism of Appendix A, this is a kinematic argument that lies at the base of heavy quark EFTs, and constitutes a test of this approach.
The second prediction is our fairly robust estimate that the coupling of the fluid vorticity to the heavy quark produces a polarization of a few tens of percent. There is a single parameter that underlies the prediction, which is the combination \(2c_{4}{\bf w}_{y}\) of eq. (11). A test is that the heavy quark polarization derived from spin alignment of vector mesons such as \(D^{*}\) and \(B^{*}\) should agree with those derived from the spin polarization of bottom and charm baryons. Furthermore, by appropriate binning in collision centrality, one may further check that the centrality dependence of this parameter is the same as that expected of the angular momentum of the fireball.
Before concluding we would like to point out some directions which have not been discussed in this work. We have not explicitly discussed the role of three space-time symmetries which are interesting, namely C, P, and T. In this work we have taken the thermal matter part of the action to be symmetric in each. However, at finite chemical potential CP is violated, and this will affect HQET and HQSD. The violation of parity in the fireball has also attracted much attention. This symmetry breaking certainly would have consequences for both HQET and HQSD. Both effects can introduce new spin alignment terms, and will be of interest. We leave such a study for a future paper. Two aspects of hydrodynamics we have not remarked on are spin hydrodynamics and MHD. Both can affect observables. However,
their treatment requires detailed computation of flows, and lies beyond the scope of this paper. We plan to investigate them separately.
I would like to thank Saumen Datta, Subrata Pal, and Rishi Sharma for discussions.
## Appendix A Frames, matter, and HQET
Since the heavy quark 4-velocity, \(v\), is a constant timelike vector in EFTs of heavy quarks, we have a special inertial frame, the heavy quark rest frame (HQRF), in which one can write \(v=(1,{\bf 0})\). Many of the Lorentz invariant arguments that we use simplify in the HQRF. However, one can make their Lorentz invariance explicit by defining two projection operators
\[{\cal P}_{1}^{\mu\nu}=v^{\mu}v^{\nu},\qquad\text{and}\qquad{\cal P}_{2}^{\mu \nu}=g^{\mu\nu}-{\cal P}_{1}^{\mu\nu}. \tag{10}\]
We use the mostly negative metric \(g=\text{diag}(1,-1,-1,-1)\). One can use the above partition of the metric tensor to decompose any 4-vector, \(a\), into parts parallel and orthogonal to \(v\), namely the timelike piece \(a_{\nu}{\cal P}_{1}^{\mu\nu}=(a\cdot v)v^{\mu}=a^{0}v^{\mu}\), and the spacelike part \(a_{\nu}{\cal P}_{2}^{\mu\nu}=a^{\mu}-a^{0}v^{\mu}\). This implies that the decomposition \(a=(a^{0},{\bf a})\) in the HQRF is Lorentz invariant. The decomposition of a derivative operator in the HQRF is therefore also invariant
\[\partial\stackrel{{ HQRF}}{{=}}\left(\frac{\partial}{\partial t },\mathbf{\nabla}\right),\qquad\text{where}\qquad\frac{\partial}{\partial t}=v \cdot\partial,\qquad\text{and}\qquad\mathbf{\nabla}=v(v\cdot\partial)-\partial. \tag{11}\]
Furthermore, every scalar product of two 4-vectors, \(a\) and \(b\), can be invariantly decomposed into two pieces, namely \({\cal P}_{1}^{\mu\nu}a_{\mu}b_{\nu}=a^{0}b^{0}\) and \({\cal P}_{2}^{\mu\nu}a_{\mu}b_{\nu}={\bf a}\cdot{\bf b}\).
The heavy quark subspace \(Q=P_{+}\psi\) (where \(\psi\) is the full Dirac spinor) is easily seen to correspond to the "large components" of the spinor. The antiquark sits in the complementary subspace \(q=P_{-}\psi\) of the "small components". These are down by a factor \(1/M\). As a result, physics insights come easily by writing heavy-quark bilinears in the HQRF in the representation of the Dirac matrices in which
\[\gamma_{0}=\begin{pmatrix}\mathbb{I}&0\\ 0&-\mathbb{I}\end{pmatrix},\qquad\text{so that}\qquad\gamma_{5}=\begin{pmatrix} 0&\mathbb{I}\\ -\mathbb{I}&0\end{pmatrix}. \tag{12}\]
In this representation, we find
\[\overline{Q}\gamma_{5}Q=\overline{Q}\gamma_{i}Q=\overline{Q}\gamma_{5}\gamma_{0}Q=\overline{Q}\Sigma_{0i}Q=0,\] \[\overline{Q}\gamma_{0}Q=\overline{Q}Q,\quad\overline{Q}\gamma_{5}\gamma_{i}Q=\overline{Q}\sigma_{i}Q,\quad\overline{Q}\Sigma_{ij}Q=\epsilon_{ijk}\overline{Q}\sigma_{k}Q, \tag{13}\]
where \(i,\ j\) are spatial indices and \(\sigma_{i}\) are the Pauli matrices. The bilinears which vanish are those with (block) off-diagonal structure to the Dirac matrices, for example, \(\gamma_{5}\) as shown in eq. (12). Of course, physics does not depend on the representation of the Dirac matrices, as one can see from checking that these statements simply reflect whether or not \(P_{+}\Gamma P_{+}\) vanishes.
The identities in eq. (13) may be surprising at first, because they seem to imply that \(\overline{\psi}\not{\partial}\psi\stackrel{{ HQRF}}{{=}}\overline{Q}\gamma_{0}\partial_{t}Q=\overline{Q}\partial_{t}Q\). This is clearly an over-simplified form of the kinetic term for heavy quarks. The explanation is that \(\overline{Q}\not{\nabla}Q\) vanishes only to leading order in \(M\), so that the integration over diagrams containing virtual anti-quark propagators gives rise to the remainder of the kinetic term, namely \(\overline{Q}\nabla^{2}Q/(2M)\). Note that this is suppressed by one power of the UV cutoff \(M\) (see [22] for a detailed pedagogical discussion). Similarly, the other bilinears in eq. (13) get quantum corrections at higher orders in \(1/M\).
In the application to heavy-ion collisions, we will need to use the CM frame of the fireball. We use the notation \(u_{ CM}\) to denote the 4-velocity of the CM in any inertial frame. In the CM frame \(u_{ CM}=(1,{\bf 0})\), so that \(\not{u}_{ CM}=\gamma_{0}\). The boost between HQRF and the CM frame is \(v\cdot u_{ CM}=\gamma_{ CM}\). If the heavy quark is produced at central rapidity with transverse momentum \(p_{ T}\), then, since \(v=\gamma_{ CM}(1,{\bf v}_{ CM})\), one has \(\beta_{ CM}\gamma_{ CM}=p_{ T}/m\), which implies that \(\gamma_{ CM}=\sqrt{1+p_{ T}^{2}/m^{2}}\).
For applications, it will also be useful to write the Lagrangians in the CM frame rather than in the usual HQRF. Then, instead of directly writing the bilinears as in eq. (13), it is simpler to use commutators with \(P_{+}\). For our purposes, the most useful of these is
\[[P_{+},\Sigma_{\mu\nu}]F^{\mu\nu}=\gamma_{ CM}({\bf v} \cdot{\bf E}\gamma_{0}+\mathbf{\sigma}\cdot{\bf B}), \tag{14}\]
since it directly allows us to write spin interactions in the CM frame. A straightforward way to see this is to write out the definition of \(\Sigma_{\mu\nu}\) in terms of the Dirac matrices, and in the CM frame, write \(\gamma_{0}=\not{u}_{ CM}\), and \(\epsilon^{ijk}\gamma_{i}\gamma_{j}=\gamma_{5}\not{u}_{ CM}\gamma_{k}\), where \(i,\,j,\,k\) are spatial indices.
In Section IV we present a consideration of scales which shows that one may not be able to construct a useful thermal HQET for charm quarks using weak coupling theory. If physics at the scale of the charm quark has to be isolated from that of thermal matter, then one has to push the UV cutoff of HQET closer to the temperature of matter, and do the matching of the LECs through a lattice computation. In constructing this version of thermal HQET, another ingredient is needed. A material at thermodynamic equilibrium requires the introduction of a velocity vector, \(u^{\mu}\), of the heat bath with respect to the observer. This gives the new projection operators in the direction of, and transverse to, \(u\), namely
\[\widetilde{\Pi}^{\mu\nu}_{1}=u^{\mu}u^{\nu},\qquad\text{and}\qquad\widetilde{ \Pi}^{\mu\nu}_{2}=g^{\mu\nu}-\widetilde{\Pi}^{\mu\nu}_{1}. \tag{10}\]
Since \(u\) and \(v\) cannot be taken to be equal in general, the pair of projectors \(\mathcal{P}_{1,2}\) is distinct from \(\widetilde{\Pi}_{1,2}\). We introduce the invariant \(v\cdot u=\gamma\). This means that in the HQRF, since \(v=(1,\mathbf{0})\), one may write \(u=\gamma(1,\mathbf{u})\) with \(|\mathbf{u}|=\beta=\sqrt{1-1/\gamma^{2}}\). Similarly, in the FLRF, since \(u=(1,\mathbf{0})\), one can write \(v=\gamma(1,\mathbf{v})\), yielding \(|\mathbf{v}|=\beta\). Of course, \(\mathbf{u}\) and \(\mathbf{v}\) are oriented independently. As a result, one finds that \(\mathcal{P}_{a\;\nu}^{\;\mu}\,\widetilde{\Pi}^{\nu\lambda}_{b}\propto\gamma\) for \(a,b=1,\,2\).
\(\widetilde{\Pi}_{1,2}\) allow us to define, in an invariant way, two components of the field tensor \(F^{\mu\nu}\), which are
\[D^{\mu\nu}=\widetilde{\Pi}^{\mu\lambda}_{1}\widetilde{\Pi}^{\nu\rho}_{1}F_{ \lambda\rho},\qquad\text{and}\qquad H^{\mu\nu}=\widetilde{\Pi}^{\mu\lambda}_{2 }\widetilde{\Pi}^{\nu\rho}_{2}F_{\lambda\rho}. \tag{11}\]
The notation has been chosen to remind us that in the frame co-moving with the material, i.e., the fluid's local rest frame (FLRF), \(D\) corresponds to the electric field in matter, and \(H\) to the magnetic field. In other words, in the FLRF, since \(u=(1,\mathbf{0})\), the structure of the two tensors correspond exactly to the electric and magnetic components of \(F\). So \(D_{0i}=-D_{i0}\) are the only possible non-vanishing components of the tensor \(D\) and \(H_{ij}=-H_{ji}\) the only possible non-vanishing parts of \(H\), where \(i\) and \(j\) are spatial indices.
All this is perfectly in agreement with our knowledge of electrodynamics, and other gauge theories, in matter. Since there are two independent and orthogonal field tensors in matter, the Lagrangian of the gauge field in matter can be written using two independent coefficients, \(\epsilon\) and \(\mu\), as
\[L_{EM}=\frac{\epsilon}{2}D^{\mu\nu}D_{\mu\nu}+\frac{\mu}{2}H^{\mu\nu}H_{\mu \nu}. \tag{12}\]
Cross terms do not exist since the two tensors are orthogonal projections. The equations of motion from eq. (12) are easy to write, and give rise to a wave equation with the dispersion relation \(\omega^{2}=c^{2}|\mathbf{k}|^{2}\) with \(c^{2}=\mu/\epsilon\). It is also straightforward to use the equations of motion to show that the solutions of the wave equation come with three polarizations, two transverse and one longitudinal [35]. Using the microscopic theory of the medium, the coefficients \(\epsilon\) and \(\mu\) can be computed, as they have been for gauge theories.
The decomposition of eq. (11) has a consequence for HQET in matter. The dimension-5 term from vacuum HQET splits
\[c_{2}\overline{Q}\sigma_{\mu\nu}F^{\mu\nu}Q\longrightarrow c_{2}^{\prime}\overline{Q}\sigma_{\mu\nu}D^{\mu\nu}Q+c_{2}^{\prime\prime}\overline{Q}\sigma_{\mu\nu}H^{\mu\nu}Q, \tag{13}\]
with two different coefficients to the two gauge theory tensors. Each of the coefficients, \(c_{2}^{\prime}\) and \(c_{2}^{\prime\prime}\), may be computed in a thermal weak coupling expansion. The "magnetic field" in the HQRF, i.e., the spatial components of the gauge fields in that frame, will have contributions from both \(D\) and \(H\). In a weak-coupling computation for HQET in thermal matter, both the coefficients \(c_{2}^{\prime}\) and \(c_{2}^{\prime\prime}\) can be written as \(1+\mathcal{O}(\alpha_{ S})\).
|
2306.12179 | Quasi-Hermitian formulation of quantum mechanics using two conjugate
Schrödinger equations | In an amended version of non-Hermitian interaction picture we propose to work
with the states $\psi(t)$ in a dyadic representation. The control of evolution
via two conjugate Schr\"{o}dinger equations then renders the usual necessity of
the construction of the time-dependent inner-product-metric operator
$\Theta(t)$ redundant. The primary information about dynamics is assumed
carried by a non-Hamiltonian observable (say, $R(t)$). A specific realization
of phase transitions is then rendered possible via the Kato's exceptional-point
(EP) degeneracy of the eigenvalues of $R(t)$ at the EP time $t=t^{(EP)}$. For
illustration a cosmological model is proposed mimicking the unitary-evolution
birth of the Universe from an initial quantum Big Bang singularity. | Miloslav Znojil | 2023-06-21T11:18:40Z | http://arxiv.org/abs/2306.12179v1 | **Quasi-Hermitian formulation of quantum mechanics using two conjugate Schrodinger equations**
## Abstract
To the existing list of alternative formulations of quantum mechanics a new version of non-Hermitian interaction picture is added. What is new is that in contrast to the more conventional non-Hermitian model-building recipes, the primary information about the observable phenomena is provided not only by the Hamiltonian but also by an additional operator with real spectrum (say, \(R(t)\)) representing another observable. In the language of physics the information carried by \(R(t)\neq R^{\dagger}(t)\) opens the possibility of reaching the exceptional-point degeneracy of the real eigenvalues, i.e., a specific quantum phase transition. In parallel, the unitarity of the system remains guaranteed, as usual, via a time-dependent inner-product metric \(\Theta(t)\). From the point of view of mathematics, the control of evolution is provided by a pair of conjugate Schrodinger equations. This opens the possibility of an innovative dyadic representation of pure states by which the direct use of \(\Theta(t)\) is made redundant. The implementation of the formalism is illustrated via a schematic cosmological toy model in which the canonical quantization leads to the necessity of working with two conjugate Wheeler-DeWitt equations. From the point of view of physics, the "kinematical input" operator \(R(t)\) may represent either the radius of a homogeneous and isotropic expanding empty Universe or, if you wish, its Hubble radius, or the scale factor \(a(t)\) emerging in the popular Lemaitre-Friedmann-Robertson-Walker classical solutions, with the exceptional-point singularity of the spectrum of \(R(t)\) mimicking the birth of the Universe ("Big Bang") at \(t=0\).
## Keywords
quantum theory of unitary systems; non-Hermitian interaction representation; non-stationary physical inner products; dyadic representation of pure states; schematic quantum model of Big Bang;
## 1 Introduction
Around the turn of millennium it had widely been accepted that the various existing formulations of quantum mechanics (QM) "differ dramatically in mathematical and conceptual overview, yet each one makes identical predictions for all experimental results" [1]. In the cited review the authors emphasized the historical as well as methodical importance of the Heisenberg's _alias_ "matrix" formulation of QM (in which the states do not change in time) as well as the economy of the most common Schrodinger's _alias_ "wavefunction" formulation (which "shifts the focus from measurable quantity to state").
In _loc. cit._ the catalogue of formulations was not exhaustive. The authors did not mention the "universal" interaction picture (IP) in which the observables (say, \(\mathfrak{a}\)) and the states (say, \(\psi\)) are _both_ allowed to vary with time, \(\mathfrak{a}^{(IP)}=\mathfrak{a}^{(IP)}(t)\) and \(\psi^{(IP)}=\psi^{(IP)}(t)\). In the general Hermitian IP framework of conventional textbooks [2] one can easily re-derive both the Heisenberg-picture (HP) or Schrodinger-picture (SP) methodical extremes when setting \(\psi^{(HP)}(t)=\psi^{(HP)}(0)=\psi^{(HP)}\) or \(\mathfrak{a}^{(SP)}(t)=\mathfrak{a}^{(SP)}(0)=\mathfrak{a}^{(SP)}\), respectively.
The review also did not reflect the new quick developments in the field in the direction initiated by Bender with Boettcher [3]. The latter innovation turned attention to the overrestrictive role played by the Stone theorem [4]. By this theorem, indeed, the evolution described by Schrodinger equation
\[\mathrm{i}\frac{d}{dt}\,|\psi^{(SP)}(t)\!\!\succ\,=\mathfrak{h}\,|\psi^{(SP)} (t)\!\!\succ\,\,,\qquad|\psi^{(SP)}(t)\!\!\succ\,\in\mathcal{L}^{(SP)} \tag{1}\]
is unitary in \(\mathcal{L}^{(SP)}\) if and only if the Hamiltonian is self-adjoint in \(\mathcal{L}^{(SP)}\), \(\mathfrak{h}=\mathfrak{h}^{\dagger}\). In our present paper we intend to offer a further extension of the latter methodical developments in which the Hermiticity restrictions imposed by the Stone theorem were circumvented.
In introduction we have to remind the readers that the origin of the idea can in fact be traced back to the paper by Dyson [5]. Long before the turn of millennium this author revealed that the goal of having the theory non-Hermitian but still unitary can be achieved via a non-unitary time-independent preconditioning of the SP wavefunctions,
\[|\psi^{(SP)}(t)\!\!\succ\,\,\to\,\,|\psi_{(Dyson)}(t)\rangle=\Omega^{-1}_{( Dyson)}\,|\psi^{(SP)}(t)\!\!\succ\,\,\in\,\,\mathcal{H}_{(Dyson)}\,,\qquad \Omega^{-1}_{(Dyson)}\neq\Omega^{\dagger}_{(Dyson)}\,. \tag{2}\]
In applications, the non-unitarity of the stationary Dyson's map \(\Omega_{(Dyson)}\) led to an efficient description of correlations in various complicated many-body systems [5, 6, 7, 8].
Along this line the potentially user-unfriendly Hamiltonian \(\mathfrak{h}\) has been replaced by its user-friendlier isospectral avatar defined as acting in the new, potentially user-friendlier Hilbert space \(\mathcal{H}_{(Dyson)}\),
\[\mathfrak{h}\to H=\Omega^{-1}_{(Dyson)}\mathfrak{h}\,\Omega_{(Dyson)}\,. \tag{3}\]
The conventional Hermiticity gets lost (\(H\neq H^{\dagger}\)) so that Hilbert space \(\mathcal{H}_{(Dyson)}\) has to be declared unphysical. Importantly, due to the time-independence of \(\Omega_{(Dyson)}\), the loss is just formal, with the Hermiticity of \(\mathfrak{h}\) in \(\mathcal{L}^{(SP)}\) merely replaced by the Dieudonne's [9] metric-mediated quasi-Hermiticity
of \(H\) in \({\cal H}_{(Dyson)}\),
\[H^{\dagger}\,\Theta_{(Dyson)}=\Theta_{(Dyson)}\,H\,,\ \ \ \ \ \Theta_{(Dyson)}= \Omega^{\dagger}_{(Dyson)}\,\Omega_{(Dyson)}\neq I\,. \tag{4}\]
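Relation (4) follows from (3) by elementary algebra; a minimal numerical sketch (with randomly generated matrices chosen purely for illustration) confirms it for any invertible \(\Omega_{(Dyson)}\).

```python
import numpy as np

# Minimal numerical check of the quasi-Hermiticity relation (4): for a Hermitian h
# and any invertible Omega, the avatar H = Omega^{-1} h Omega of eq. (3) obeys
# H^dagger Theta = Theta H with Theta = Omega^dagger Omega.  The matrices below
# are random and purely illustrative.
rng = np.random.default_rng(0)
n = 4
h = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
h = h + h.conj().T                           # self-adjoint "textbook" Hamiltonian
Omega = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # non-unitary Dyson map

H = np.linalg.inv(Omega) @ h @ Omega         # quasi-Hermitian avatar, eq. (3)
Theta = Omega.conj().T @ Omega               # physical inner-product metric

print(np.allclose(H.conj().T @ Theta, Theta @ H))   # True
print(np.allclose(np.linalg.eigvals(H).imag, 0.0))  # the spectrum of H stays real
```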
On a more abstract quantum-theoretical level the isospectrality between a pre-selected, sufficiently user-friendly non-Hermitian Hamiltonian \(H\neq H^{\dagger}\) and its self-adjoint reconstructed partner \(\mathfrak{h}=\mathfrak{h}^{\dagger}\) opened multiple new model-building strategies, first of all, in quantum field theory [10, 11]. The possibility of reconstruction of the "missing" physical inner-product metrics \(\Theta\) from a given non-Hermitian Hamiltonian \(H\) led also, in the framework of relativistic QM, to a completion of the years-long efforts of a consistent probabilistic interpretation of the Klein-Gordon fields [12, 13, 14, 15] and/or of the Proca fields [16, 17].
A few other successful applications can be found mentioned in the recent review of the field by Mostafazadeh [18]. Still, the author had to admit there that after a tentative transition from the Klein-Gordon equation to a formally not too different Wheeler-DeWitt (WDW) equation of quantum gravity [19, 20], the applicability of the reconstruction of \(\Theta=\Theta(H)\) appears to be limited. In the Mostafazadeh's own words, "the lack of a satisfactory solution of this problem has been one of the major obstacles in transforming canonical quantum gravity and quantum cosmology into genuine physical theories" (cf. p. 1291 in _loc. cit._).
For this reason the review [18] of quasi-Hermitian QM did only marginally mention the WDW models. An analogous scepticism can also be found expressed in the quantum-gravity-dedicated monographs [21, 22]. The main mathematical obstacle can be seen in the fact that the operators "that arise in quantum cosmological models" have to be manifestly time-dependent and that such a choice "requires a more careful examination" [18].
More recently, the problem has been reopened and the latter challenge was re-addressed in [23]. Still, the main methodical and conceptual challenges were, from our present point of view, circumvented. For this reason we felt urged to complement the theory, in our present paper, by a new analysis in which the manifest \(t-\)dependence of the quasi-Hermitian operators would prove tractable in a more satisfactory manner.
As we already indicated in the Abstract, an important source of the inspiration of our present project was that the vast majority of the conventional applications of the non-Hermitian model-building recipes starts from the assumption of our knowledge of the Hamiltonian \(H\). In most cases, this operator is assumed observable, i.e., constrained by relation (4). At the same time, the more abstract theory of review [7] admits, as part of the "input information", the knowledge of at least one other, independent operator (i.e., in our present notation, of \(R(t)\)) with real spectrum.
In some sense (cf. [24]), an attempt of a feasible and, at the same time, purposeful incorporation of \(R(t)\) in the formalism was one of the main driving forces behind the present work. On the side of physics we decided to motivate it by the needs of quantum cosmology in which the notions like Hubble radius or scale factor play a key role in the classical-physics toy-model descriptions of the empty, homogeneous and isotropically expanding Universe. Still, for our present purposes we found it sufficient to speak just about an entirely schematic observable "radius of an expanding
toy-model Universe" \(r(t)\) which is allowed to vary with the so called cosmological time \(t\).
The presentation of our results starts in section 2 where we briefly review the existing stationary and non-stationary versions of the quasi-Hermitian quantum mechanics. In section 3 we then turn attention to the WDW equation and review and emphasize the recent progress in its study. A deeper insight in its role is then provided in section 4 in which we introduce our present highly schematic but instructive toy model of the quantum Universe.
For the sake of simplicity, just the radius \(r(t)\) will be considered quantized, i.e., represented, just shortly after Big Bang, by a (quasi-Hermitian) operator \(R(t)\). Subsequently, the related basic technical questions of the construction of the physical Hilbert-space metric \(\Theta(t)\) and of the evolution equations in the non-Hermitian interaction picture (NIP) are addressed and reviewed in section 5. Several conceptual aspects of the theory are finally discussed in section 6 and summarized in section 7.
## 2 Two quasi-Hermitian formulations of quantum theory
The introduction in quantum mechanics usually starts, in textbooks, from its formulation in Schrodinger "representation" _alias_ "picture" (SP, [2]). In this language the states are represented by the ket-vector elements \(|\psi^{(SP)}(t)\!\!\succ\) of a suitable Hilbert space \({\cal L}^{(SP)}\). The unitary evolution of the system is prescribed by Eq. (1), i.e., by Schrodinger equation in which the Hamiltonian is required self-adjoint, \(\mathfrak{h}=\mathfrak{h}^{\dagger}\). In such a setting a decisive simplification of the solution of Eq. (1) can be achieved, in principle at least, via the (usually, just numerical) diagonalization of the Hamiltonian.
### 2.1 Non-Hermitian Schrodinger picture (NSP)
In many realistic models the diagonalization of \(\mathfrak{h}^{(SP)}\) may happen to be prohibitively difficult. More than half a century ago, fortunately, Freeman Dyson [5] revealed that whenever the "maximal" simplification \(\mathfrak{h}\to\mathfrak{h}_{(diagonal)}\) remains, due to its complexity, unavailable, the underlying realistic Schrodinger Eq. (1) might still be made tractable via a "partial" simplification of the Hamiltonian. He just recommended that the "inaccessible" diagonalization is to be replaced by any other (i.e., just invertible) auxiliary time-independent isospectrality mapping \(\Omega:\mathfrak{h}\to H\) which need not even be required unitary.
One just has to redefine the states as well as, whenever necessary or useful, also the Hilbert space itself (cf. Eq. (2) above). The original conventional Schrodinger equation becomes replaced by its equivalent representation in \({\cal H}_{(Dyson)}\),
\[\mathrm{i}\,\frac{d}{dt}\,|\psi(t)\rangle=H\,|\psi(t)\rangle\,. \tag{5}\]
As long as \(H\neq H^{\dagger}\), it makes sense to introduce the "alternative ket vectors"
\[|\psi(t)\rangle\!\rangle=\Theta\,|\psi(t)\rangle\]
evolving, in \({\cal H}_{(Dyson)}\), according to a complementary Schrodinger equation,
\[{\rm i}\,\frac{d}{dt}\,|\psi(t)\rangle\!\rangle=H^{\dagger}\,|\psi(t)\rangle\! \rangle\,. \tag{6}\]
The main benefit of the resulting formalism using two Schrodinger equations and the two different Hamiltonians (viz., \(H\) and \(H^{\dagger}\neq H\)) is that for pure states, the predictions of the results of measurements have the fully analogous form in \({\cal L}\) and in \({\cal H}_{(Dyson)}\). Indeed, once one considers any stationary and self-adjoint operator \({\mathfrak{a}}^{(SP)}\) representing an observable in \({\cal L}^{(SP)}\), and once one defines its NSP avatar \(A=\Omega^{-1}\,{\mathfrak{a}}^{(SP)}\,\Omega\) in \({\cal H}_{(Dyson)}\), the validity of the elementary mathematical identity
\[\prec\!\!\psi^{(SP)}(t)|{\mathfrak{a}}^{(SP)}|\psi^{(SP)}(t)\!\!\succ\,=\langle \!\langle\psi(t)|A|\psi(t)\rangle \tag{7}\]
implies the coincidence of the predictions (i.e., of the probability densities) when computed via the single textbook Schrodinger Eq. (1) or via the conjugate pair (5) + (6).
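Identity (7) is also easy to verify numerically; the following sketch does so for a randomly generated observable, Dyson map and state (all of them illustrative assumptions).

```python
import numpy as np

# Numerical illustration of identity (7): the dyadic expectation value <<psi|A|psi>
# in H_Dyson coincides with the textbook one <psi^SP|a|psi^SP> in L^(SP).
# The operator, the Dyson map and the state are random, purely for illustration.
rng = np.random.default_rng(1)
n = 4
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
a = a + a.conj().T                                  # self-adjoint observable in L^(SP)
Omega = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
psi_SP = rng.normal(size=n) + 1j * rng.normal(size=n)

A = np.linalg.inv(Omega) @ a @ Omega                # NSP avatar of the observable
psi = np.linalg.inv(Omega) @ psi_SP                 # |psi>  = Omega^{-1} |psi^SP>
psi_dd = Omega.conj().T @ psi_SP                    # |psi>> = Omega^dagger |psi^SP>

lhs = psi_dd.conj() @ A @ psi                       # <<psi| A |psi>
rhs = psi_SP.conj() @ a @ psi_SP                    # <psi^SP| a |psi^SP>
print(np.allclose(lhs, rhs))                        # True
```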
### 2.2 Non-Hermitian interaction picture (NIP)
Once we admit the dependence of the Dyson's mapping on time, \(\Omega=\Omega(t)\), it is sufficient to follow the description of the necessary (i.e., NIP) amendments of the theory as described in [25, 26]. After such a generalization of the formalism the main changes result from the emergence, in both of the non-Hermitian Schrodinger equations, of the non-vanishing Coriolis-force term
\[\Sigma(t)={\rm i}\,\Omega^{-1}(t)\,\dot{\Omega}(t)\,,\ \ \ \ \dot{\Omega}(t)=\frac{d}{dt} \Omega(t)\,. \tag{8}\]
The non-stationary generalization \(H(t)\) of the observable non-Hermitian Hamiltonian (or, in the standard language of mathematics, of the quasi-Hermitian Hamiltonian) remains, by definition, isospectral with the self-adjoint (though now, admissibly, non-stationary) Hamiltonian \({\mathfrak{h}}(t)\) of textbooks. Still, the price to pay for the non-stationarity is that the non-stationary upgrade of the two NIP Schrodinger equations reads
\[{\rm i}\,\frac{d}{dt}\,|\psi(t)\rangle=G(t)\,|\psi(t)\rangle\,,\ \ \ \ \ G(t)=H(t)-\Sigma(t) \tag{9}\]
\[{\rm i}\,\frac{d}{dt}\,|\psi(t)\rangle\!\rangle=G^{\dagger}(t)\,|\psi(t) \rangle\!\rangle\,,\ \ \ \ \ G^{\dagger}(t)=H^{\dagger}(t)-\Sigma^{\dagger}(t) \tag{10}\]
where the generator of evolution contains the Coriolis term \(\Sigma(t)\) and ceases to be observable, therefore (cf., e.g., p. 1272 in [18]).
The admissibility of the non-stationarity of the prototype textbook Hamiltonian \({\mathfrak{h}}(t)\) is, in some sense, exceptional [27]. For all of the other non-Hermitian and non-stationary operators \(A(t)\) of observables defined, in \({\cal H}_{(Dyson)}\), by their pull-down from \({\cal L}^{(SP)}\), the formalism would become prohibitively complicated unless we assume that \({\mathfrak{a}}(t)={\mathfrak{a}}(0)={\mathfrak{a}}^{(SP)}\), i.e., unless we require that
\[A(t)=\Omega^{-1}(t)\,{\mathfrak{a}}^{(SP)}\,\Omega(t) \tag{11}\]
where the self-adjoint avatars of the non-Hamiltonian observables remain time-independent (see also [23] for a detailed discussion of this subtlety).
Under the latter assumption one can re-establish a complete parallelism between the conventional Hermitian quantum mechanics in interaction picture and its non-Hermitian alternative in which the two NIP Schrodinger Eqs. (9) and (10) control the evolution of states. Naturally, a complete picture is only obtained when one also takes into consideration the manifest and necessary time-dependence of the observables. The most straightforward guarantee of the internal consistency of the theory is then provided by the following result.
**Lemma 1**: _The time-dependence of observables (11) can be reconstructed from their initial values by the solution of Heisenberg equation_
\[{\rm i}\,\frac{\partial}{\partial t}\,A(t)=A(t)\,\Sigma(t)-\Sigma(t)\,A(t)\,. \tag{12}\]
**Proof.** Definition (11) is equivalent to relation \(\Omega(t)\,A(t)={\mathfrak{a}}^{(SP)}\,\Omega(t)\) which is easily differentiated,
\[\Omega^{-1}(t)\,\frac{\partial}{\partial t}\,\Omega(t)\,A(t)+\frac{\partial}{\partial t}\,A(t)=\Omega^{-1}(t)\,{\mathfrak{a}}^{(SP)}\,\frac{\partial}{\partial t}\,\Omega(t)\,.\]
In the light of definition (8) of the Coriolis force this immediately yields formula (12). \(\square\)
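The content of Lemma 1 can also be illustrated numerically; the sketch below uses a toy Dyson map \(\Omega(t)=\exp(tX)\) with a randomly chosen non-Hermitian generator \(X\) (an illustrative assumption) and checks eq. (12) by finite differences.

```python
import numpy as np
from scipy.linalg import expm

# Finite-difference check of the Heisenberg-type equation (12), i dA/dt = A Sigma - Sigma A,
# for a toy time-dependent Dyson map Omega(t) = exp(t X).  The generator X, the stationary
# observable a and the chosen time t are all illustrative assumptions.
rng = np.random.default_rng(2)
n, t, dt = 3, 0.7, 1e-6
X = 0.5 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
a = a + a.conj().T                                   # stationary observable in L^(SP)

def A_of(s):
    Om = expm(s * X)
    return np.linalg.inv(Om) @ a @ Om                # eq. (11)

Om = expm(t * X)
Sigma = 1j * np.linalg.inv(Om) @ (X @ Om)            # eq. (8), since dOmega/dt = X Omega(t)
dA_dt = (A_of(t + dt) - A_of(t - dt)) / (2.0 * dt)   # central-difference derivative
A = A_of(t)
print(np.allclose(1j * dA_dt, A @ Sigma - Sigma @ A, atol=1e-6))   # True
```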
One has to notice that the role of the NIP generator of evolution is played here by the Coriolis force. The solution of the equation specifies the (by definition, non-stationary) operator \(A(t)\) in a consistent manner. At the same time, our knowledge of this solution immediately opens the possibility of an ultimate evaluation of the matrix elements entering the following nonstationary upgrade of Eq. (7),
\[\langle\!\langle\psi(t_{f})|A(t_{f})|\psi(t_{f})\rangle=\prec\!\!\psi^{(SP)}(t _{f})|{\mathfrak{a}}^{(SP)}|\psi^{(SP)}(t_{f})\!\!\succ\ . \tag{13}\]
This formula expresses the probabilistic contents of the non-stationary theory and quantifies the prediction of the results of the measurement at time \(t=t_{f}\).
## 3 Samples of application
In the preface to the ambitious theoretical monograph [22] we read that "despite an enormous effort of work by a vast amount of physicists over the past 70 years, we still do not have a credible quantum general relativity theory" (QGR). "What we do have today are candidate theories;...for each of them one still has to show...that it reduces to the presently known...classical general relativity at low energies" [22].
### 3.1 Wheeler-DeWitt equation
The incompleteness of the candidate QGR theories is best illustrated by the canonical quantum gravity based on the WDW equation. In this field, incidentally, the progress is significant. On p. 1291 of [18], for example, it is emphasized that "in the 1960's the discovery of the Hamiltonian formulation of the General Theory of Relativity...provided the necessary means to apply Dirac's method of constrained quantization". We believe that in this context the NIP-based study of even a not too realistic WDW equation has not yet told us its last word.
The scepticism of the theoreticians is in a sharp contrast with the _experimental_ side of the QGR problem where the efforts of physicists were amazingly successful. For example, the age of our Universe is currently widely agreed to be finite and equal to ca. 13.8 billion years [28]. It is worth adding that the determination of the latter value belongs among the most impressive recent experimental results in physics. Under a self-explanatory name "Cosmic Background Radiation Anisotropy Satellite/Satellite for Measurement of Background Anisotropies" (COBRAS/SAMBA, [28]) the measurement was initiated around 1996 and operated by the European Space Agency between 2009 and 2013. The necessary sensitivity and resolution were further improved by the NASA Wilkinson Microwave Anisotropy Probe (WMAP).
This resulted in the data summarized in the so called Lambda cold dark matter (\(\Lambda\)CDM) model _alias_ the "standard" cosmological model [29]. In the acronym the first, Greek letter refers to the cosmological constant while the use of the word "standard" emphasizes that its parameters fit not only the expansion of the universe or the distribution laws of the atomic nuclei and/or galaxies but also the fairly contradictory hypothesis of existence of the initial point-like Big-Bang singularity.
Once we reopen the question of compatibility of the \(\Lambda\)CDM hypotheses with the basic principles of quantum theory we only have to return to a moderate scepticism. The applicability of the underlying classical-physics-based concepts finds its first natural limitation in a restriction to its far-from-Big-Bang verifications. The experiments remain persuasively compatible with the classical GR theory as the correct theory of gravity at macroscopic distances.
In this context we found our basic theoretical inspiration and encouragement in the comprehensive review paper [18]. We read there that in quantum cosmology "the relevant...second order differential equations" (i.e., WDW equations) resemble the Klein-Gordon equations and "have the following general form"
\[\frac{d^{2}}{dt^{2}}\,\psi(t)+D(t)\,\psi(t)=0 \tag{14}\]
(cf. Eq. Nr. 377 and the related comments in [18]). The symbol \(\psi(t)\) denotes here a "wave function of the Universe" which would be "void of a physical meaning" without "an appropriate inner product" \(\Theta\)[18].
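For orientation, a second-order equation of the form (14) can always be rewritten as a pair of first-order Schrodinger-type equations; the sketch below uses a generic two-component (Feshbach-Villars-like) reduction with a constant scalar \(D\), which is only an illustrative stand-in and not necessarily the specific construction of [18].

```python
import numpy as np
from scipy.linalg import expm

# Generic two-component reduction of psi'' + D psi = 0 to i dPhi/dt = H Phi,
# with Phi = (psi, psi') and H = i * [[0, 1], [-D, 0]].  D is constant and scalar
# here purely for illustration; note that H is manifestly non-Hermitian, which is
# why a nontrivial metric Theta is needed for a probabilistic reading.
D = 2.5
H = 1j * np.array([[0.0, 1.0], [-D, 0.0]])

psi0, dpsi0, t = 1.0, 0.3, 1.7
Phi_t = expm(-1j * H * t) @ np.array([psi0, dpsi0])

w = np.sqrt(D)                                   # exact solution for constant D > 0
psi_exact = psi0 * np.cos(w * t) + (dpsi0 / w) * np.sin(w * t)
print(np.allclose(Phi_t[0], psi_exact))          # True
```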
In the nearest future the quantum effects emerging at the singularities (as sampled by black holes or the hypothetical Big Bang) have to be re-analyzed. In other words, there is still a broad gap in our understanding of the correspondence between the well-confirmed classical singularities
and their internally consistent quantum analogues. A model-based description of their mechanism and dynamics is still, in the light of Eq. (14), one of the most important subjects of research and one of the sources of open questions which motivated also our forthcoming considerations.
In a way explained in review [18] (cf., in particular, section Nr. 9.2) a full formal analogy between the Klein-Gordon and WDW equations only exists when the operator part of Eq.(14) remains time-independent, \(D(t)=D(0)=D\). In this case, indeed, the Klein-Gordon-type equation can be transformed into its NSP equivalent (5) of section 2.1 (cf. also equation Nr. 378 in _loc. cit._). The second NSP Schrodinger Eq. (6) of section 2.1 is then easily written in terms of the conjugate Hamiltonian operator \(H^{\dagger}\). The correct physical (i.e., probabilistic) interpretation of the evolution then follows from the one-to-one correspondences (2) and (3) between the states and operators in the respective Hilbert spaces \({\cal L}^{(SP)}\) and \({\cal H}_{(Dyson)}\).
After a transition (of our present interest) to the genuine WDW version of Eq. (14) the operator \(D(t)\) must necessarily be kept manifestly time-dependent. This forces us to make use of the non-stationary NIP formalism of section 2.2. The main innovation is that _any_ quasi-Hermitian observable of interest (say, \(A\) in Eq. (7)) becomes, by definition, time-dependent (cf. Eq. (11)). As a consequence, the prediction of _any_ measurement (i.e., the evaluation of the overlap (13)) requires, suddenly, not only the solution of the two comparatively friendly Schrodinger-like evolution equations for the state (i.e., the construction of the two _vectors_ in \({\cal H}_{(Dyson)}\)) but, first of all, also the solution \(A(t)\) of another, maximally user-unfriendly Heisenberg-like evolution equation (12) for the _operator_.
### 3.2 Closed versus open quantum systems
Many authors were discouraged by the latter technical obstacles and so they have redirected their attention, typically, to the exactly solvable models (later, we will use Ref. [30] for illustration). Frequently, people also simplify the model-building process by giving up the unitarity. They declare their quantum system, in the spirit of Refs. [31, 32], "open". In some sense, the newly acquired freedom becomes abused because by definition of the open quantum systems (formulated, basically, in the Feshbach spirit [33, 34]) the resulting "effective" non-Hermitian descriptions are, in the sense of fundamental theory, incomplete [35, 36].
That being said, one should add that the use of effective operators of observables really enables one to pay more attention to the ever-present noise and fluctuations in quantum systems living in the real world (cf., e.g., [37, 38]). In this sense it will certainly be necessary, in the future, to try to move beyond the restrictive closed-system models.
In our present paper the consistent fundamental-theory approach is not abandoned. In its framework, nevertheless, even the operator-evolution nature of the Heisenberg-like Eq. (12) need not be the main technical problem. Indeed, difficulties also emerge in connection with the corresponding non-stationary upgraded doublet of Schrodinger-like equations (9) and (10). Since the generator \(G(t)\) of the evolution of the states is defined as the difference between
the Hamiltonian \(H(t)\) (which is, by definition, observable) and the Coriolis force \(\Sigma(t)\) (which is, in general, not observable), the generator \(G(t)\) itself is not observable either (once more, we may recall Theorem Nr. 2 in [18] for details). Moreover, many examples (cf., e.g., [39, 40]) show that the elements of its spectrum need not even form complex conjugate pairs. For this reason, it hardly makes any sense to try to simplify the model by imposing, upon this operator, the popular \({\cal PT}-\)symmetry constraint. Still, in the search for innovations this is what is often being done - see, e.g., [41, 42].
Recently, new light has been thrown upon these problems by the two studies of several solvable and manifestly time-dependent "wrong-sign" anharmonic oscillators [30, 43]. Surprisingly enough, it has been shown there that it may still make good sense to impose the \({\cal PT}-\)symmetry constraint directly upon the observable Hamiltonian \(H(t)\). At first sight, the motivation seems missing because this operator only enters the pair of the NIP Schrodinger equations in combination with the Coriolis force. Nevertheless, the toy-model studies reconfirmed that the main advantage of using the concept of \({\cal PT}-\)symmetry lies in its capability of a clear separation of the unitary dynamical regime (in which the symmetry remains unbroken) from its unphysical non-unitary complement (in which the \({\cal PT}-\)symmetry becomes spontaneously broken).
### Pure states in dyadic representation
A clarification of the slightly complicated NIP situation is obtained when one recalls that the Schrodinger, the Heisenberg and the Dirac intermediate _alias_ interaction pictures describe the same physics. All of them characterize the same evolution of a quantum system which is initiated by the preparation of a pure state \(\psi(t)\) at \(t=t_{i}=0\) and which finally leads to the prediction of the results of the measurement at \(t=t_{f}>0\). In the language of mathematics this means that once we complement the Dyson map (2) (or, more precisely, its non-stationary, time-dependent amendment) by the alternative replacement
\[|\psi^{(SP)}(t)\!\succ\ \rightarrow\ |\psi_{(Dyson)}(t)\rangle\!\rangle= \Omega^{\dagger}_{(Dyson)}(t)\,|\psi^{(SP)}(t)\!\succ\ \in\ {\cal H}_{(Dyson)}\,, \tag{15}\]
we may immediately deduce that \(|\psi_{(Dyson)}(t)\rangle\!\rangle=\Theta_{(Dyson)}(t)\,|\psi_{(Dyson)}(t)\rangle\). Thus, whenever we decide to work with the two _different_ state-vector elements \(|\psi_{(Dyson)}(t)\rangle\!\rangle\) and \(|\psi_{(Dyson)}(t)\rangle\) of the Hilbert space \({\cal H}_{(Dyson)}\), it appears sufficient to work just with the information about the metric encoded in the double-ket vector \(|\psi_{(Dyson)}(t)\rangle\!\rangle\).
The latter trick leads, in a way emphasized in review [23], to an enormous simplification of the NIP formalism. The statement may be also given a more compact mathematical form in which the pure state \(\psi(t)\) is represented, in \({\cal H}_{(Dyson)}\), by the rank-one (i.e., dyadic) projector
\[\pi_{\psi}(t)=|\psi^{(Dyson)}(t)\rangle\,\frac{1}{\langle\!\langle\psi^{(Dyson )}(t)|\psi^{(Dyson)}(t)\rangle\,\rangle}\,\langle\!\langle\psi^{(Dyson)}(t)|\,. \tag{16}\]
With (or even without) the conventional assumptions of the biorthonormality and bicompleteness
\[\langle\!\langle\psi^{(Dyson)}(t)|\phi^{(Dyson)}(t)\rangle=\delta_{\psi\phi} \,,\ \ \ \ \sum_{\psi}\,|\psi^{(Dyson)}(t)\rangle\langle\!\langle\psi^{(Dyson)}(t)|=I \tag{17}\]
this enables us to re-derive the fundamental measurement-prediction formula (13).
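As a purely illustrative numerical aside (not part of the original argument), the dyadic representation (16) is easy to test on a small example: with a randomly generated positive-definite matrix standing in for the metric \(\Theta_{(Dyson)}(t)\) at one fixed instant, the following Python sketch forms the double-ket \(|\psi\rangle\!\rangle=\Theta|\psi\rangle\) and verifies that the projector (16) is idempotent and has unit trace.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# A randomly generated positive-definite metric, standing in for Theta_(Dyson)(t) at a fixed t.
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Theta = B.conj().T @ B + N * np.eye(N)

# An ordinary ket |psi> and its double-ket partner |psi>> = Theta |psi> (cf. the text below Eq. (15)).
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi_dd = Theta @ psi                      # components of |psi>>

# The rank-one (dyadic) projector of Eq. (16).
norm = psi_dd.conj() @ psi                # <<psi|psi>
pi = np.outer(psi, psi_dd.conj()) / norm

print(np.allclose(pi @ pi, pi))           # idempotency: True
print(np.isclose(np.trace(pi), 1.0))      # unit trace:  True
```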
## 4 Quantum gravity in a toy model
In the preceding context it is worth adding that in spite of a certain resemblance of our state-representing formulae to the Aharonov-Vaidman time-symmetric two-state-vector formalism [44, 45], the parallels are purely formal because the present approach remains safely traditional and time-asymmetric (see also a few other comments in section 6.1 below). This means that we just stay in the framework of a traditional non-relativistic quantum cosmology in which the \(t-\)dependent state (say, a pure state) of the Universe would have to be prepared at a suitable time \(t=t_{initial}\). One can expect that the state of the Universe evolves and gets measured at another instant \(t=t_{final}>t_{initial}\). Now, our task is to explain how one might realize such an evolution scenario, in principle at least, when the pure states are represented by the rank-one projectors (16).
### Classical singularities
There are not too many results in which one would really succeed in making the space-time background of QGR quantized (i.e., represented by an operator) and, simultaneously, time-dependent. In our present considerations we decided to emphasize, therefore, just a few preselected methodical aspects of quantum gravity, with our attention paid, predominantly, to the requirement of the background independence of the theory in which even the measurements of the distances in an empty Universe would be of a strictly quantum, probabilistic nature [46].
We will test our ideas in the non-covariant kinematical regime in which the time \(t\) will still be treated as a parameter [47]. Moreover, even the strictly quantum Universe will be assumed simplified and existing just in a very small vicinity of its classical Big-Bang singularity. After such a specification of the simplified dynamical regime we will add several further, methodically motivated reductions of the picture.
* The classical space-time geometry of the Universe has to remain "next to trivial". We will employ the not exceedingly revolutionary kinematics working with the non-covariant concept of absolute time. The quantum-theory-controlled evolution of the Universe will be then assumed unitary, i.e., unitary in the language of the more or less conventional quantum mechanics of the so called closed systems.
* In both the classical and quantum settings, the naively physical non-relativistic parameter of time \(t\) will be assumed positive and set equal to zero at Big Bang. On the classical non-relativistic level also the 3D spatial coordinates will be assumed time-dependent, therefore, \(x=x(t)\), \(y=y(t)\) and \(z=z(t)\).
* All this would lead to a still nontrivial version of the background independence because the observable values of the spatial nodes \(x=x(t)\), \(y=y(t)\) and \(z=z(t)\) (i.e., say, point-particle positions [47]) have to be defined (i.e., prepared and/or measured) as eigenvalues of operators, in principle at least.
* The last three operators have to be self-adjoint in a physical Hilbert space \({\cal H}_{phys}\) in which the inner product has the property of being time-dependent and degenerating at \(t=0\). In other words, a "non-Hermitian" NIP version of QM will have to be used.
For the sake of simplicity of our toy-model-based considerations we will assume that the kinematics of the expanding Universe will be just one-parametric. The purpose of such a choice is twofold. In the context of mathematics a maximal simplicity of our methodical considerations has to be achieved. In this sense one can simply speak about a homogeneous and isotropic, centrally symmetric expanding empty Universe characterized, say, by its volume or radius \(r(t)\). Thus, in our present minimal project, just such a real function would have to be reinterpreted as an eigenvalue of an _ad hoc_ operator \(R(t)\).
In parallel, in the context of physics the interpretation of the parameter \(r(t)\) might be made more sophisticated, with the details to be found, e.g., in the dedicated monograph [48]. Thus, for example, one could identify its \(t-\)dependent value with the scale factor \(a(t)\) emerging in the popular Lemaitre-Friedmann-Robertson-Walker classical solvable model, or with the closely related function of \(t\) called Hubble radius, etc.
In any case, the reduction of the description of the classical dynamics to a single real parameter implies that in the centrally symmetric picture with \(r(t)=\sqrt{x^{2}(t)+y^{2}(t)+z^{2}(t)}\) we will have to replace, firstly, the three spatial Cartesian coordinates \(x=x(t)\), \(y=y(t)\) and \(z=z(t)\) by the equivalent spherical coordinates \(r=r(t)\), \(\theta=\theta(t)\) and \(\phi=\phi(t)\). Secondly, for the sake of simplicity this will enable us to assume, in another reasonable approximation, the stationarity \(\theta(t)=\theta(0)\) and \(\phi(t)=\phi(0)\) of the angles. Moreover, we will treat the latter two values fixed and not quantized. Thus, both of the spherical angular coordinates will be kept "frozen" and "irrelevant", i.e., classical and time-independent.
All of the latter simplifications have just a methodical motivation. In contrast, the radius of the Universe \(r(t)\) itself will be defined, after quantization, as one of the available real eigenvalues of a time-dependent "dynamical-geometry" operator \(R(t)\). At any suitable Hilbert-space dimension \(N\leq\infty\) we will have to write \(r(t)=r_{n}(t)\) where the "multiverse-counting" quantum number \(n=1,2,\ldots,N\) specifies the hypothetical "prepared" pure quantum state of the Universe.
The quantum radius of the Universe \(r_{n}(t)\) must be time-dependent. At Big Bang we have to guarantee the existence of an "unavoidable" degeneracy (also called "exceptional point", EP, [49]), \(\lim_{t\to 0}r_{n}(t)=0\) for all \(n\). Our time-dependent "dynamical-geometry" operator \(R(t)\) must be, in our working Hilbert space \({\cal H}_{math}\), non-Hermitian but Hermitizable _alias_ quasi-Hermitian, i.e., such that
\[R^{\dagger}(t)\,\Theta(t)=\Theta(t)\,R(t)\,. \tag{18}\]
In the literature, interested readers may find a number of generic methodical comments on the latter equation (cf., e.g., [7, 18, 50]). In what follows, we intend to work just with an illustrative family of certain analytically (i.e., non-numerically) tractable \(N\) by \(N\) matrices \(R(t)=R^{(N)}(t)\). This will also enable us to keep the related discussion sufficiently short and specific.
### The radius of the Universe in a solvable toy model
A mathematical inspiration of our present project of the realization of a schematic quantum model of the Universe dates back to the unpublished preprint [51]. We considered there, in an entirely different context, a one-parametric family of non-Hermitian (but Hermitizable, quasi-Hermitian) matrices with the real spectra which represented the discrete bound-state energies. The purpose of the preprint (to be cited as PI in what follows) was a study of the slow, adiabatic unitary-evolution process resulting in a fall of the \(N-\)level systems into an exceptional-point singularity (EPN, [49]).
Not the same but analogous matrices will be used here in another role, viz., in the role of a non-stationary operator \(R(t)\) with the eigenvalues \(r_{n}(t)\) representing the observable instantaneous radii of the Universe. For the sake of simplicity we will assume that the dimension \(N\) of our schematic Hilbert space \({\cal H}_{(Dyson)}\) is finite. We will consider \(N=2,3,\ldots\) and postulate that our "kinematical", geometric-background-representing input matrices \(R^{(N)}(t)\) have the following respective forms,
\[R^{(2)}(t)=\left[\begin{array}{cc}-1+\sigma^{(2)}(t)&\tau^{(2)}(t)\\ -\tau^{(2)}(t)&1+\sigma^{(2)}(t)\end{array}\right]\,,\quad R^{(3)}(t)=\left[\begin{array}{ccc}-2+\sigma^{(3)}(t)&\sqrt{2}\,\tau^{(3)}(t)&0\\ -\sqrt{2}\,\tau^{(3)}(t)&\sigma^{(3)}(t)&\sqrt{2}\,\tau^{(3)}(t)\\ 0&-\sqrt{2}\,\tau^{(3)}(t)&2+\sigma^{(3)}(t)\end{array}\right]\,,\]
\[R^{(4)}(t)=\left[\begin{array}{cccc}-3+\sigma^{(4)}(t)&\sqrt{3}\,\tau^{(4)}(t)&0&0\\ -\sqrt{3}\,\tau^{(4)}(t)&-1+\sigma^{(4)}(t)&2\,\tau^{(4)}(t)&0\\ 0&-2\,\tau^{(4)}(t)&1+\sigma^{(4)}(t)&\sqrt{3}\,\tau^{(4)}(t)\\ 0&0&-\sqrt{3}\,\tau^{(4)}(t)&3+\sigma^{(4)}(t)\end{array}\right]\,,\ \ldots\,. \tag{19}\]
Figure 1: The “multiverse” eigenvalues of the toy-model operator \(R^{(4)}(t)\) of Eq. (19) representing the eligible instantaneous size of the quantized Universe expanding after Big Bang.
Here, \(\tau=\tau^{(N)}(t)\) and \(\sigma=\sigma^{(N)}(t)\) are suitable real and smooth functions of time \(t\). Thus, for illustration we may choose the shift \(\sigma^{(N)}(t)=2N\,\sqrt{1-[\tau^{(N)}(t)]^{2}}\), the dimension \(N=4\) and the parameter \(\tau^{(4)}(t)=1-t\,\). This would yield the spectrum \(\{r_{n}(t)\}\) as displayed in Figure 1. We may see that the model is "realistic" in the sense that at any quantum number \(n\) our toy-model empty Universe exhibits a point-like singularity (Big Bang) at \(t=0\) and a quick expansion at \(t>0\).
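For readers who wish to reproduce Figure 1, the following short Python sketch (an added illustration; the specific sample values of \(t\) are arbitrary) evaluates the eigenvalues \(r_{n}(t)\) of \(R^{(4)}(t)\) of Eq. (19) with the above choice \(\sigma^{(4)}(t)=8\sqrt{1-[\tau^{(4)}(t)]^{2}}\) and \(\tau^{(4)}(t)=1-t\).

```python
import numpy as np

def R4(t):
    """Toy-model operator R^(4)(t) of Eq. (19), with sigma = 8*sqrt(1 - tau^2) and tau = 1 - t."""
    tau = 1.0 - t
    sig = 8.0 * np.sqrt(max(1.0 - tau**2, 0.0))
    s3 = np.sqrt(3.0)
    return np.array([[-3 + sig,  s3 * tau,   0.0,       0.0],
                     [-s3 * tau, -1 + sig,   2.0 * tau, 0.0],
                     [0.0,       -2.0 * tau, 1 + sig,   s3 * tau],
                     [0.0,        0.0,      -s3 * tau,  3 + sig]])

for t in (0.0, 0.1, 0.3, 0.6, 1.0):
    r_n = np.sort(np.linalg.eigvals(R4(t)).real)
    print(f"t = {t:.1f}:  r_n(t) = {np.round(r_n, 4)}")
# At t = 0 all four eigenvalues coincide at r = 0 (the exceptional-point Big Bang degeneracy);
# for t > 0 they are real, positive and spread apart, in line with Figure 1.
```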
A number of comments are to be made in advance. First, we have to keep in mind that once we start from the hypothetical knowledge of the kinematics, it need not be easy to combine the underlying space-evolution ansatzes (i.e., in our toy-model case, the specification of the parameters in Eq. (19)) with the requirements of the dynamics (sampled, in our case, by the WDW Eq. (14)).
## 5 The consistent model-building process
Our model-building philosophy is based on the Big-Bang-admitting ansatz (19). Thus, our very first task is to make the corresponding choice of the kinematics (i.e., of the time-dependent matrix \(R^{(N)}(t)\)) compatible with the unitarity of the evolution (in this sense we assume that the quantum system under consideration is a closed system). This means that we have to take into account, first of all, Dieudonne's Hermitizability constraint (18).
### The first step: The construction of the metric
From the point of view of physics all of the sufficiently ambitious quantum-Big-Bang-related models have to mimic a quasi-static phase transition. Naturally, the concept of the phase transition itself is very broad (see, e.g., the comprehensive review paper [52] where the authors list more than 400 further relevant references). In comparison, the range of physics behind our present Big-Bang-related project is perceivably narrower. In a way discussed more thoroughly in papers [53, 54], we will only deal here with the more specific philosophy of quantum phase transitions whose realization is based on the presence, in the space of parameters, of a suitable Kato's [49] exceptional point. Moreover, just a marginal attention will be paid to the energy levels and to the Hamiltonians. Our study will be redirected to the background-representing operator (or rather non-Hermitian \(N\)-by-\(N\)-matrix) \(R^{(N)}(t)\). This matrix depends on a real time-simulating parameter \(t\in(0,1)\) (with the Big-Bang value \(t^{(Big\,Bang)}=0\)) and precedes the case of a more realistic time-dependent (i.e., non-relativistic) _triplet_ of \(N\) by \(N\) matrices \(X_{1}^{(N)}(t)\), \(Y_{2}^{(N)}(t)\), \(Z_{3}^{(N)}(t)\) representing a _dynamical, fully quantized_ (and, at finite \(N<\infty\), just discretized) three-dimensional space-time-grid background.
We will require that the spectra of all of these matrices are complex at the negative times \(t<0\) (this has to reflect the unobservable status of the space before Big Bang), real but EPN-degenerate at \(t=0\) (i.e., at the hypothetical non-relativistic Big-Bang instant) and real and non-degenerate at \(t>0\) (for pragmatic reasons we will just keep in mind the not too large times, i.e., say, \(t\leq 1\)). Moreover, in the three-dimensional space of the hypothetical expanding Universe we will also reparametrize the coordinates and replace their time-dependent and system-dependent Cartesian grid \(\{x(t),y(t),z(t)\}\) by the spherical triplet \(\{r(t),\theta(t),\phi(t)\}\). For the sake of simplicity, the angular coordinates will be assumed fixed, and just the radial one will be treated as the expanding-Universe spatial background and quantized, i.e., treated as one of the eligible eigenvalues of an _ad hoc_, kinematical-input operator \(R^{(N)}(t)\).
For methodical purposes (as well as for the sake of definiteness) we will assume that the latter, geometry-representing operator is given in advance, having a form closely resembling the Hamiltonians of PI. Thus, once we abbreviate \(t=t(\tau)=1-\tau\) or introduce a new variable \(\tau=\tau(t)=1-t\), the parallels become complete.
All of our toy-model matrices will be chosen real, non-Hermitian and, whenever \(t=1-\tau(t)>0\), Hermitizable. This means that at any preselected matrix dimension \(N\) there exists an inner-product-metric operator \(\Theta=\Theta^{(N)}(t)\) (which need not be unique, see [7]) such that our spatial-grid-simulating operator \(R=R^{(N)}(t)\) satisfies the quasi-Hermiticity condition (18). In PI we worked with the close analogues of our present matrices (19) so that we can just recall and modify Theorem Nr 1 of PI and formulate the following result.
**Theorem 2**: _At every finite Hilbert-space dimension \(N<\infty\) the metric \(\Theta^{(N)}(t)\) compatible with the respective radii (19) may be sought in the following generic form_
\[\Theta^{(N)}(t)=\sum_{j=1}^{N}\,{\cal M}^{(N)}(j)\,[-\tau(t)]^{j-1} \tag{20}\]
_containing the sparse-matrix coefficients_
\[{\cal M}^{(N)}(1)=\left[\begin{array}{ccccc}\alpha_{11}(1)&0&\ldots&0\\ 0&\alpha_{12}(1)&\ddots&\vdots\\ \vdots&\ddots&\ddots&0\\ 0&\ldots&0&\alpha_{1N}(1)\end{array}\right]\,, \tag{21}\]
\[{\cal M}^{(N)}(2)=\left[\begin{array}{ccccc}0&\alpha_{11}(2)&0&\ldots&\ldots &0\\ \alpha_{21}(2)&0&\alpha_{12}(2)&0&\ldots&0\\ 0&\alpha_{22}(2)&0&\alpha_{13}(2)&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\ddots&0\\ 0&\ldots&0&\alpha_{2,N-2}(2)&0&\alpha_{1,N-1}(2)\\ 0&\ldots&\ldots&0&\alpha_{2,N-1}(2)&0\end{array}\right]\,, \tag{22}\]
\[{\cal M}^{(N)}(3)=\left[\begin{array}{cccccccc}0&0&\alpha_{11}(3)&0&\cdots& \cdots&0\\ 0&\alpha_{21}(3)&0&\alpha_{12}(3)&0&\cdots&0\\ \alpha_{31}(3)&0&\alpha_{22}(3)&0&\alpha_{13}(3)&\ddots&\vdots\\ 0&\alpha_{32}(3)&\ddots&\ddots&\ddots&\ddots&0\\ 0&\ddots&\ddots&0&\alpha_{2,N-3}(3)&0&\alpha_{1,N-2}(3)\\ \vdots&\cdots&0&\alpha_{3,N-3}(3)&0&\alpha_{2,N-2}(3)&0\\ 0&\cdots&\cdots&0&\alpha_{3,N-2}(3)&0&0\end{array}\right]\;, \tag{23}\]
_etc._
In PI we also recommended arranging the non-vanishing matrix elements into the \(k\) by \((N-k+1)\) arrays,
\[\alpha(k)=\left[\begin{array}{ccccc}\alpha_{11}(k)&\alpha_{12}(k)&\alpha_{ 13}(k)&\ldots&\alpha_{1,N-k+1}(k)\\ \alpha_{21}(k)&\alpha_{22}(k)&\alpha_{23}(k)&\ldots&\alpha_{2,N-k+1}(k)\\ \vdots&\vdots&\vdots&&\vdots\\ \alpha_{k1}(k)&\alpha_{k2}(k)&\alpha_{k3}(k)&\ldots&\alpha_{k,N-k+1}(k)\end{array} \right]\;. \tag{24}\]
The values of these matrix elements had to be computed as solutions of Eq. (18); for the first few Hilbert-space dimensions \(N\), the results may be found in PI.
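Alternatively, the same matrix elements can be generated numerically. The following Python sketch (an illustration added here, not the construction used in PI) treats the quasi-Hermiticity condition (18) as a linear problem for \(\Theta\) and solves it for the smallest matrix \(R^{(2)}(t)\) of Eq. (19); the dimension of the obtained solution space reflects the \(N\)-parameter freedom discussed in the next subsection, and the positivity of any particular representative still has to be checked separately.

```python
import numpy as np

def metric_family(R, tol=1e-10):
    """Basis of all solutions Theta of Eq. (18), R^T Theta = Theta R, obtained as the
    null space of the associated Sylvester-type map acting on vec(Theta)."""
    N = R.shape[0]
    I = np.eye(N)
    L = np.kron(I, R.T) - np.kron(R.T, I)
    _, s, Vt = np.linalg.svd(L)
    null = Vt[np.sum(s > tol):]                      # rows span the null space
    return [v.reshape(N, N) for v in null]

# Input: R^(2)(t) of Eq. (19) at some t > 0, with sigma = 4*sqrt(1 - tau^2) and tau = 1 - t.
t = 0.3
tau = 1.0 - t
sig = 4.0 * np.sqrt(1.0 - tau**2)
R2 = np.array([[-1 + sig, tau], [-tau, 1 + sig]])

basis = metric_family(R2)
print(len(basis))                                    # a 2-parameter family at N = 2

Theta = sum(basis)                                   # one particular choice of the free parameters
Theta = (Theta + Theta.T) / 2
print(np.allclose(R2.T @ Theta, Theta @ R2))         # Eq. (18) is satisfied
print(np.linalg.eigvalsh(Theta))                     # positivity must be verified separately
```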
### Coriolis force and the evolution equations
At a fixed Hilbert-space dimension \(N\) the general inner-product-metric solution of Eq. (18) is non-unique. It varies with an \(N-\)plet of free parameters, the variability of which is only restricted by the condition of the necessary positivity of the metric. In [54] we studied a special "zero-spectral-shift" sub-family of our present time-dependent models (19) with \(\sigma^{(N)}(t)=0\). At arbitrary \(N\), we described there certain "optimal" solutions (24) which exhibited a number of desirable features. From our present point of view the most important one was that the positivity of the metric (i.e., of all of its time-dependent eigenvalues \(\theta_{k}^{(N)}(t)\)) was guaranteed at all of the dimensions \(N>0\) and at all of the times \(t>0\) of interest.
The latter result can easily be re-adapted to our present needs. Irrespective of the radius-positivity-guaranteeing _ad hoc_ shift parameters \(\sigma^{(N)}(t)>0\), the following Theorem can be proved by mathematical induction.
**Theorem 3**: _All of the time-dependent eigenvalues \(\theta_{k}^{(N)}(t)\) of the optimal-radius-dependent inner-product metric \(\Theta^{(N)}(t)\) are given, at any matrix-dimension \(N\), by the following closed formula,_
\[\theta_{k}^{(N)}(t)=\sum_{m=1}^{N}\,C_{km}^{(N)}\left[\tau(t)\right]^{m-1}, \hskip 28.452756ptk=1,2,\ldots,N\]
_where \(C_{1n}^{(N)}=\left(\begin{array}{c}N-1\\ n-1\end{array}\right)\), \(C_{2n}^{(N)}=\left(\begin{array}{c}N-2\\ n-1\end{array}\right)-\left(\begin{array}{c}N-2\\ n-2\end{array}\right)\) and, in general,_
\[C_{kn}^{(N)}=\sum_{p=1}^{k}\,(-1)^{p-1}\,\left(\begin{array}{c}k-1\\ p-1\end{array}\right)\,\left(\begin{array}{c}N-k\\ n-p\end{array}\right)\,,\hskip 14.226378ptk,n=1,2,\ldots,N\,.\]
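The closed formula of Theorem 3 is straightforward to evaluate; the following Python sketch (added as a numerical cross-check only) computes the coefficients \(C_{kn}^{(N)}\) and the eigenvalues \(\theta_{k}^{(N)}\) at \(N=4\), confirming that they remain positive for \(0<\tau<1\) and that they numerically coincide with the compact products \((1-\tau)^{k-1}(1+\tau)^{N-k}\).

```python
import numpy as np
from math import comb

def C_coef(N, k, n):
    """Coefficient C_{kn}^{(N)} of Theorem 3 (binomials with out-of-range arguments count as zero)."""
    return sum((-1)**(p - 1) * comb(k - 1, p - 1) * comb(N - k, n - p)
               for p in range(1, k + 1) if 0 <= n - p <= N - k)

def theta_eigs(N, tau):
    """Metric eigenvalues theta_k^(N) of Theorem 3 at a given value of tau = tau(t)."""
    return np.array([sum(C_coef(N, k, n) * tau**(n - 1) for n in range(1, N + 1))
                     for k in range(1, N + 1)])

N = 4
for tau in (0.1, 0.5, 0.9):
    th = theta_eigs(N, tau)
    factorized = [(1 - tau)**(k - 1) * (1 + tau)**(N - k) for k in range(1, N + 1)]
    print(tau, np.round(th, 6), np.allclose(th, factorized))
# all theta_k remain positive on 0 < tau < 1, and they agree with (1-tau)^(k-1) * (1+tau)^(N-k)
```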
The main consequence of this result is that all of the eigenvalues of the metric are positive. This means that we may recall Eq. (4) (in which we reconstructed the "unknown" metric \(\Theta\) as a product of the two "known" Dyson maps) and that we may try to invert the recipe (assuming that the metric is "known" and that the Dyson map \(\Omega(t)\) is to be reconstructed, say, in the form of a real square root of \(\Theta\)[54]). This means that we can factorize the metric into the product
\[\Theta(t)=\Omega^{\dagger}(t)\,\Omega(t) \tag{25}\]
representing a time-dependent generalization of the stationary factorization formula of Eq. (4).
This enables us to treat also the Coriolis-force matrix (as defined by Eq. (8)) as known. One can conclude that the construction of the toy model is almost completed. Indeed, any \(J-\)plet of its other observable features can be represented by the respective operators (say, \(\Lambda_{j}(t)\), with, if needed, \(j=0\) assigned to the energy-representing Hamiltonian). All of these operators must be, in terms of the same "correct and physical" Hilbert-space metric, quasi-Hermitian,
\[\Lambda_{j}^{\dagger}(t)\,\Theta(t)=\Theta(t)\,\Lambda_{j}(t)\,,\hskip 14.226378ptj =0,1,\ldots,J\,. \tag{26}\]
Secondly, any one of them (and, in particular, the Hamiltonian \(H(t)=\Lambda_{0}(t)\)) can be used to define the basis which can be biorthonormalized (cf. [55] or Eq. (17)). The purpose may be served, in the real-spectrum dynamical regime, by the doublet of eigenvalue problems
\[H(t)\,|m(t)\rangle=E_{m}(t)\,|m(t)\rangle\,,\hskip 14.226378ptH^{\dagger}(t)\,|m (t)\rangle\hskip-14.226378pt\rangle=E_{m}(t)\,|m(t)\rangle\hskip-14.226378pt \rangle\,,\hskip 14.226378ptm=1,2,\ldots,N\,. \tag{27}\]
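A concrete (and purely illustrative) picture of the doublet (27) and of the biorthonormalization (17) may be obtained numerically; in the following Python sketch the small quasi-Hermitian matrix \(R^{(2)}(t)\) of Eq. (19) is used as a stand-in for the operator \(H(t)\) at one instant, its right and left eigenvectors are paired and rescaled so that \(\langle\!\langle m|n\rangle=\delta_{mn}\), and the completeness relation is verified.

```python
import numpy as np

# R^(2)(t) of Eq. (19) as a stand-in for a quasi-Hermitian operator with a real spectrum
# (here at t = 0.3, i.e. tau = 0.7 and sigma = 4*sqrt(1 - tau^2)).
tau = 0.7
sig = 4.0 * np.sqrt(1.0 - tau**2)
H = np.array([[-1 + sig, tau], [-tau, 1 + sig]])

E, V = np.linalg.eig(H)          # H |n>  = E_n |n>,   right eigenvectors as columns of V
Ed, W = np.linalg.eig(H.T)       # H^T |n>> = E_n |n>>, the double-kets of Eq. (27) (H is real)
E, Ed = E.real, Ed.real
W = W[:, np.argsort(Ed)[np.argsort(np.argsort(E))]]   # pair up equal eigenvalues

# Biorthonormalization (17): rescale so that <<m|n> = delta_{mn}.
W = W / np.diag(W.T @ V)

print(np.allclose(W.T @ V, np.eye(2)))                                            # <<m|n> = delta_{mn}
print(np.allclose(sum(np.outer(V[:, n], W[:, n]) for n in range(2)), np.eye(2)))  # completeness
```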
Thirdly, the knowledge of the metric also facilitates the search for the other candidates for the observables (denoted, say, as \(\Lambda(t)\) without a subscript). Indeed, whenever \(\Lambda(t)\) is quasi-Hermitian in the sense of Eq. (26), the product \(\widetilde{\Lambda}(t)=\Theta(t)\,\Lambda(t)\) is Hermitian, \(\widetilde{\Lambda}(t)=\widetilde{\Lambda}^{\dagger}(t)\). Conversely, _any_ Hermitian "input-information" matrix \(\widetilde{\Lambda}(t)\) can be treated as a set of free parameters defining a quasi-Hermitian operator \(\Lambda(t)=\Theta^{-1}(t)\,\widetilde{\Lambda}(t)\) eligible as an observable.
The observables of the latter type may be required to correspond to their conventional SP avatars \(\lambda^{(SP)}\) which are stationary, conserved and time-independent. In such a case the process of the definition of the operator (at all times) can be facilitated and replaced by the definition of the operator just at a single instant \(t=t_{initial}\), with the completion of the construction of \(\Lambda(t)\) (at all times) provided by the solution of the corresponding Heisenberg Eq. (12).
In the last step of our considerations we may preselect a Hermitian matrix \(A(t)\) and use it as the parameters defining the energy-representing observable Hamiltonian \(H(t)=\Theta^{-1}(t)\,A(t)\). Then we may immediately reconstruct the generator \(G(t)=H(t)-\Sigma(t)\) of the evolution of the states which enters, finally, the two conjugate Schrodinger Eqs. (9) and (10). The construction of the model is completed.
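The following Python sketch summarizes this final step numerically (it is an added illustration: a randomly generated positive-definite matrix plays the role of \(\Theta(t)\) at one instant and a random Hermitian matrix plays the role of the input \(A(t)\); the reconstruction of the Coriolis force \(\Sigma(t)\) via Eq. (8) is not repeated here). It checks the factorization (25) with \(\Omega=\Theta^{1/2}\), the quasi-Hermiticity (26)/(28) of \(H=\Theta^{-1}A\), and the reality of its spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4

# A positive-definite stand-in for the metric Theta(t) at a fixed instant.
M = rng.normal(size=(N, N))
Theta = M @ M.T + N * np.eye(N)

# Factorization (25) with Omega chosen as the real symmetric square root of Theta.
w, V = np.linalg.eigh(Theta)
Omega = V @ np.diag(np.sqrt(w)) @ V.T
print(np.allclose(Omega.T @ Omega, Theta))           # Eq. (25)

# A Hermitian "input-information" matrix A defines the observable Hamiltonian H = Theta^{-1} A.
A = rng.normal(size=(N, N)); A = (A + A.T) / 2
H = np.linalg.solve(Theta, A)

print(np.allclose(H.T @ Theta, Theta @ H))            # quasi-Hermiticity, Eqs. (26)/(28)
print(np.allclose(np.linalg.eigvals(H).imag, 0.0))    # real spectrum
```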
## 6 Discussion
In a way summarized in reviews [11, 18, 56] the recent theoretical developments in quantum mechanics threw new light on many traditional model-building strategies. The main idea of the innovation lies in an extension of the concept of the so called observable from its traditional form (i.e., from its self-adjoint representation) to an unconventional alternative which is non-Hermitian but which happens to be Hermitizable. The Hermitization is still needed, mediated by an amended Hilbert-space inner-product metric \(\Theta\) which, "if it exists" [7], varies with our choice of the observable.
In applications one often works with an observable Hamiltonian. Whenever its most standard self-adjoint SP version \(\mathfrak{h}\neq\mathfrak{h}(t)\) happens to be user-unfriendly, the desirable user-friendliness can be recovered after transition to its suitable non-Hermitian avatar. A full compatibility of the resulting hiddenly Hermitian NSP reformulation of quantum mechanics is achieved when, with a suitable \(\Theta=\Theta(H)\), the new, non-Hermitian Hamiltonian \(H\neq H^{\dagger}\) remains \(\Theta-\)quasi-Hermitian,
\[H^{\dagger}=\Theta\,H\,\Theta^{-1}\,. \tag{28}\]
We already reminded the readers that in section 9.2 of review [18] it has been pointed out that the NSP-based construction of the stationary inner-product metric \(\Theta=\Theta(H)\) plays a particularly important role in relativistic quantum mechanics (with \(H\neq H^{\dagger}\) being the Klein-Gordon operator) and in the various versions of application of the Dirac's method of constrained quantization to gravity (with \(H\neq H^{\dagger}\) being the Wheeler-DeWitt operator). It is only desirable to add now that after a transition to the more advanced NIP version of the theory in which one decides to work with the time-dependent Hilbert-space metrics \(\Theta(t)\), most of the above-cited statements must be thoroughly reformulated. In particular, the most general non-stationary version of the Klein-Gordon operator of the relativistic QM cannot remain consistently identified with the observable operator \(H(t)\) anymore.
In our present paper the same change of paradigm has been described and shown necessary also in the NIP approach to the genuine, non-stationary Wheeler-DeWitt equation of quantum gravity. For the purpose, naturally, multiple technical simplifying assumptions had to be accepted.
### Conventional time-asymmetric QM concept of the evolution
The description of a quantum system in which the observables are represented by operators is, certainly, richer than the description of its classical limit [57]. One of the related paradoxes is that a substantial part of the success of quantum theory is, in some sense, serendipitous, based on a lucky choice of one of many eligible "quantizations". In this sense we are currently not too lucky when trying to quantize Einstein's general relativity (see, e.g., Isham's foreword to Thiemann's monograph [22]).
One of the problems is, in Thiemann's words, that the "quantum theory of the non-gravitational interactions...completely ignores General Relativity" while the latter classical theory _alias_ geometry "completely ignores quantum mechanics" (see p. 9 in _loc. cit._). In our present paper, in this sense, we tried to stay, firmly, in the framework of non-relativistic quantum mechanics.
After such a simplification the survival of the concept of time \(t\) enables one to order the evolution in a strictly causal manner. Incidentally, the "fixed-frame" restriction of such a type can be softened by a change of perspective working with another, "non-time" evolution parameter [46]. In an extreme case as presented and discussed in methodical study [47], one can even quantize the time itself, i.e., one can treat \(t\) as a "pure-state" eigenvalue of a "quantum clock" operator.
An idea of this type is presented also in Rovelli's monograph [21]. We can read there that only in the conventional approaches one believes that "the Schrodinger picture is only viable for theories where there is a global observable time variable \(t\)". Naturally, "this conflicts with GR [general relativity], where no such variable exists" (cf. pp. 10 and 11 in _loc. cit._). One has to conclude that a properly covariant formulation of the unitary quantum evolution near Big Bang is still not too well understood at present, especially because after the replacement of quantum mechanics by quantum field theory (QFT) one reveals that also "most of the conventional machinery of perturbative QFT is profoundly incompatible with the general-relativistic framework" [21]. Thus, only the traditional, perturbation-approximation-based pragmatic approaches to predictive cosmology seem to be available at present [48].
In this sense we proposed here that one of the possible schematic keys to the puzzle might be sought in the quantization of the classical GR singularities (like Big Bang) using, on quantum level, the Kato's [49] concept of the exceptional-point degeneracy of the schematic, non-covariant Universe at \(t=0\).
### More realistic frameworks like loop quantum gravity
The current progress in experimental astronomy is amazing: we already mentioned the measurements of the cosmic microwave background [28], which confirmed the Big Bang hypothesis experimentally. In parallel, its mathematically singular nature also motivated an intensification of the efforts of making Einstein's classical general relativity (GR) compatible with the first principles of quantum mechanics (QM) [21, 22].
The recent progress in this direction is remarkable. We already mentioned the studies of the conventional canonical recipes aimed, according to Wheeler [19] and DeWitt [20], at the constructions of a "wave-function of the Universe". Among the more recent related theoretical results one must mention also the formalism of the so called loop quantum gravity (LQG, [58]). In this setting, one is really able to work with the modified QM called "relational", with some basic details mentioned in section Nr. 5.6 of monograph [21]. Still, we read there that the relational reformulation of QM "has aspects that need to be investigated further" (cf. p. 367 in [21]).
On these grounds our interest in the problem was born. During one of the seminars on the subject (dedicated to the description of the quantum Big Bang) we noticed that people very
often come to the quick conclusion that the classical GR singularities (like, typically, the Big-Bang-mediated "abrupt" birth of the Universe) must _necessarily_ get, according to the conventional wisdom, "smeared" (i.e., in the mathematical sense, "regularized") after quantization.
For a long time, the latter intuitive expectation had been widely accepted. A replacement of the Big-Bang singularity by the so called Big Bounce was advocated by the broad LQG community [59, 60]. Only very recently has the assertion been reconsidered and opposed [61]. This means that the competition between the Big Bang and Big Bounce hypotheses may currently be considered reopened.
In our present toy model the quantum Big Bang instant remains singular. Counterintuitive as such a possibility may seem, one can find its multiple analogues, say, in the physics of phase transitions. Naturally, many forms of the description of the conventional phase transitions are more or less standard, not requiring the use of the sophisticated mathematics of the LQG approach. At the same time, the newly emerging undecided status of the quantum Big Bang hypothesis represents a challenge. We believe that our present paper also provides new forms of insight into it.
### A broader physical context
One of the main formal supports of optimism may be seen in the fact that one of the key features of our present NIP theory is its richer representation of quantum dynamics. Indeed, in the conventional version of QM the flexibility of the model-building processes is strongly restricted by the fact that the (pure) state of a unitary quantum system of interest is merely represented by a ket-vector element \(|\psi^{(SP)}\!\succ\) of a preselected and time-independent Hilbert space \({\cal L}^{(SP)}\). In contrast, the mathematical and phenomenological roles of the ket-vectors in the NIP Hilbert space \({\cal H}_{(Dyson)}\) become separated. The amended theory works with the two non-equivalent versions of the latter space, viz., with \({\cal H}_{math}\) (where the inner product is elementary but unphysical) and with \({\cal H}_{phys}\). In the latter case one can say that either the definition of the correct, physical inner product contains the operator of the metric, or that the operation of the physical Hermitian conjugation is realized as the less conventional antilinear map \(|\psi\rangle\ \rightarrow\ \langle\!\langle\psi|\). This, indeed, simplifies the formalism because the mathematically user-friendly space \({\cal H}_{math}\) (which must be declared "unphysical") can also serve as a representation space for \({\cal H}_{phys}\).
From such a perspective the NIP approach comes with the new possibility of making the family of the gravity-related quantum field theories "background-independent"(cf. p. 22 in [21], or the more detailed comments in [22]). From a purely pragmatic point of view this simply means that in the conventional models (i.e., say, in the point-particle wave functions \(\psi(\vec{x})\)) even the parameters (i.e., in this case, the coordinates \(\vec{x}\)) have to be perceived as eigenvalues of a suitable operator (let us note that many of the associated technical problems are discussed in the framework of the so called non-commutative-geometry [62]).
In our present paper, an innovative realization of the background-independence requirement
has been achieved by making the time-dependent radius of the expanding Universe quantized, i.e., identified, in the pure-state multiverse-philosophy spirit, with one of the eigenvalues of an _ad hoc_ quasi-Hermitian operator \(R(t)\).
## 7 Conclusions
At present, the use of non-Hermitian operators in quantum theory is remarkably diversified, ranging from the traditional and pragmatic effective-operator descriptions of the open and resonant quantum systems [31] up to the new horizons opened by the studies of the abstract mathematical aspects of the formalism [56].
In a narrower domain of the description of the closed (i.e., unitary) quantum systems using non-Hermitian operators the main division line is the one which separates the stationary and non-stationary theories. In the former subdomain the Coriolis forces vanish so that \(H=G\). There emerge no problems with calling the Schrodinger-equation generator a Hamiltonian [18].
In the latter, non-stationary-theory subdomain the situation is different. We have to work there with the less elementary relation
\[H(t)=G(t)+\Sigma(t) \tag{29}\]
(called, by some authors, the time-dependent Dyson equation [41, 63] - [67]). The term "Hamiltonian" must be then allocated, interpreted and used with much more care [68].
In the stationary NSP setting the idea of acceptability of the various non-Hermitian forms of quantum Hamiltonians has its origin in the Dyson's paper [5]. The knowledge of a standard stationary self-adjoint Hamiltonian \(\mathfrak{h}\) of textbooks (which is, by definition, safely self-adjoint in \({\cal L}^{(SP)}\)) was simply complemented there by a tentative, "trial and error" choice of \(\Omega\). Via the isospectrality constraint (3) one was immediately able to define a preconditioned, friendlier stationary representation \(H\) of the conventional Hamiltonian. This made the innovative "Dyson's picture" of QM complete.
The encouraging experience with the \(\Omega-\)mediated simplifications of multiple conventional Schrodinger equations (say, in nuclear physics [6]) inspired Scholtz et al [7] to invert the paradigm. They assumed that what we are given are just the "tractable" time-independent operators of the observables (including, first of all, the Hamiltonian \(H\)) which are non-Hermitian but which possess the real spectra. The core of the idea (i.e., of the "quasi-Hermitian" reformulation of quantum mechanics called non-Hermitian Schrodinger picture (NSP)) was that once we recall the respective quasi-Hermiticity constraints (cf., e.g., Eqs. (4) or (18) above), we may reconstruct (not always uniquely) and factorize (also not always uniquely) the correct physical Hilbert-space metric \(\Theta=\Omega^{\dagger}\,\Omega\) "if it exists" (cf. p. 74 in [7]). The resulting "quasi-Hermitian-input" version of the NSP formalism is then again a consistent theory.
The authors of paper [7] were well aware of the main weaknesses of their NSP recipe. They identified them as lying, in the sufficiently realistic models, not only in the ambiguity of the
assignment of \(\Theta\) to a given Hamiltonian \(H\) but also in the technically rather complicated nature of an explicit construction of any such metric (cf. also a few related comments in [18]). Fortunately, a way out of the dead end was found by Bender and coauthors [3, 11] who proposed to narrow the class of the eligible non-Hermitian stationary Hamiltonians \(H\). The more user-friendly subfamily of Hamiltonians was required to be \({\cal PT}-\)symmetric, i.e., such that \(H^{\dagger}{\cal PT}={\cal PT}\,H\). Originally, the symbol \({\cal P}\) denoted here just the operator of parity while the antilinear operator \({\cal T}\) mediated the time reversal. Later, it became clear that after a suitable generalization of these concepts, the physics-motivated property of the \({\cal PT}-\)symmetry of \(H\) can also be perceived as mathematically equivalent to the self-adjointness of \(H\) with respect to a suitable pseudo-metric, i.e., as the self-adjointness of \(H\) in Krein space [69, 70].
The success of the \({\cal PT}-\)symmetric models was enormous [11]. Paradoxically, it also appeared to have two not entirely pleasant consequences. The first one was that around the year 2007 the mainstream research left the rather narrow area of quantum physics. Beyond this area (i.e., typically, in classical optics) the idea of \({\cal PT}-\)symmetry found a large number of new and exciting applications (for reviews see [71] or the recent monographs [72, 73]). The second paradox connected with the deep appeal of the idea of the \({\cal PT}-\)symmetry of \(H\) can be seen in the above-mentioned narrowing of the scope and perspective. In the words written on p. 1198 of review [18], "the adopted terminology is rather unfortunate" because the "\({\cal PT}-\)symmetric QM is an example of a more general class of theories...in which \({\cal PT}-\)symmetry does not play a basic role".
As another unwanted consequence of the reduction of the scope of the \({\cal PT}-\)symmetric version of the theory there emerged (and, for a long time, survived) several "no-go" theorems (sampled, e.g., by Theorem Nr. 2 in [18]) which claimed the impossibility of a sufficiently satisfactory non-stationary extension of the quasi-Hermitian quantum mechanics. It took several years before the consistent non-stationary extension of the quasi-Hermitian quantum mechanics as described in [25, 26] was finally accepted as correct (cf., e.g., [74]). The process of acceptance was also slowed down by certain purely terminological misunderstandings (cf., e.g., their brief account in [23]). At present, fortunately, the situation seems clarified. Different groups of authors (using still very different notation conventions, cf., e.g., papers [75] or [42]) ultimately accepted the same (or at least practically the same) interpretation of the non-stationary NIP theory.
The related developments enriched the field by a number of the new and highly relevant applications. Virtually all of them can be characterized by the role played by the time-dependent Dyson equation (29) (cf., e.g., section Nr. 5 in [43]). The build-up of the theory may then start either from the knowledge of \(H(t)\) (so that one can speak about a "dynamical-information" (DI) input), or from the knowledge of \(\Sigma(t)\) (one then relies upon a purely kinematical or "Coriolis-force" (CF) input information), or, finally, from \(G(t)\) (let us call this option a "Schrodinger-generator" (SG) input knowledge).
In all of these alternative approaches their users decided to call their preferred preselected component of Eq. (29) "the Hamiltonian". In fact, the above-cited words that "the adopted terminology is rather unfortunate" applied again. The main reason is that even in the unitary
evolution dynamical regime the spectra of \(\Sigma(t)\) and/or of \(G(t)\) need not be real or even form complex-conjugate pairs in general [39, 40, 68]. In this sense, calling the generator \(G(t)\) a Hamiltonian (which was, originally, the proposal of one of my PhD students [76, 77]) is far from optimal because only the spectrum of the observable-energy component \(H(t)\) of \(G(t)=H(t)-\Sigma(t)\) can consistently be assumed real.
On these grounds the most natural implementation of the NIP approach seems to be provided by its DI model-building realization. In our recent paper [43] such a conjecture has been tested using the exactly solvable wrong-sign-oscillator model of Fring and Tenney [30]. We came to the not quite expected conclusion that for the model in question, by far the most convenient and efficient construction strategy appeared to be the innocent-looking "kinematical" CF approach.
This observation can be perceived as one of the sources of inspiration of our present paper. It forced us to reconsider the theory and to re-read one of the oldest studies in the field, viz., paper [7] in which the authors always kept in mind the need of working with a complete set of observables rather than just with a Hamiltonian. We imagined that precisely this idea offers also the "missing source" of a deeper understanding of the non-stationary NIP theory.
The return to the roots helped us to resolve at least some of the paradoxes. For example, once one starts thinking about the unitary systems characterized by more than one observable [7, 24], the build-up of the theory starting from the mere single operator \(H(t)\) appears to be conceptually less satisfactory. During the build-up of a more satisfactory theory one must keep in mind both the dynamics (i.e., the influence of \(H(t)\) upon the states \(\psi(t)\) as mediated by Schrodinger equation(s)) and the kinematics (due to the fact that \(H(t)\) only appears in Schrodinger equation(s) in combination with Coriolis force).
In our present paper we managed to show that the initial choice of a "non-dynamical" observable (i.e., in our present notation, of \(R(t)\)) simplifies the constructions significantly. This is, after all, our main methodical message here. We saw that our innovative strategy not only simplifies, decisively, the "introductory-step" reconstruction of the kinematics (i.e., of the metric as well as of the Dyson map and of \(\Sigma(t)\) from \(R(t)\)), but also leaves an entirely unrestricted space for the subsequent choice of the "dynamics", i.e., for an independent specification of the instantaneous energy \(H(t)\), etc.
We may only add that our other, serendipitous, physicists-addressing message is that the independence of the initial choice of the non-dynamical observable \(R(t)\) might very well serve the purpose of extending the applicability of the unitary NIP quantum theory to the "exotic", exceptional-point-related dynamical regimes. This is sampled, in our schematic cosmological toy model, by the demonstration of the possibility of an internal consistency of the hypothetical point-like Big Bang singularity even after the quantization.
2305.13737 | Symplectic Symmetry and Radial Symmetry Either Persistence or Breaking of Incompressible Fluid | The incompressible Navier-Stokes equations are considered. We find that these equations have symplectic symmetry structures. Two linearly independent symplectic symmetries form a moving frame. The velocity vector possesses a symplectic representation in this moving frame. The symplectic representation of the two-dimensional Navier-Stokes equations exhibits radial symmetry persistence. On the other hand, we establish some results of radial symmetry either persistence or breaking for the symplectic representations of the three-dimensional Navier-Stokes equations. Thanks to radial symmetry persistence, we construct infinitely many non-trivial solutions of the static Euler equations with a given boundary condition. | Yongqian Han | 2023-05-23T06:44:13Z | http://arxiv.org/abs/2305.13737v3 | # Symplectic symmetry and radial symmetry either persistence or breaking of incompressible fluid
###### Abstract.
The incompressible Navier-Stokes equations are considered. We find that these equations have symplectic symmetry structures. Two linearly independent symplectic symmetries form a moving frame. The velocity vectors possess symplectic representations in this moving frame. The symplectic representations of the two-dimensional Navier-Stokes equations exhibit radial symmetry persistence. On the other hand, we establish some results of radial symmetry either persistence or breaking for the symplectic representations of the three-dimensional Navier-Stokes equations. Thanks to radial symmetry persistence, we construct infinitely many non-trivial solutions of the static Euler equations with a given boundary condition. Therefore the randomness and turbulence of incompressible fluid appear provided the Navier-Stokes flow converges to a static Euler flow.
Key words and phrases:Incompressible Navier-Stokes Equation, Symplectic Symmetry, Radial Symmetry Persistence, Radial Symmetry Breaking, Randomness and Turbulence of Incompressible Fluid 2000 Mathematics Subject Classification: 35Q30, 76D05, 76F02, 37L20
## 1. Introduction
The Navier-Stokes equations in \(\mathbb{R}^{3}\) with initial data are given by
\[u_{t}-\nu\Delta u+(u\cdot\nabla)u+\nabla P=0, \tag{1.1}\]
\[\nabla\cdot u=0, \tag{1.2}\]
\[u|_{t=0}=u_{0}, \tag{1.3}\]
where \(u=u(t,x)=\left(u^{1}(t,x),u^{2}(t,x),u^{3}(t,x)\right)\) and \(P=P(t,x)\) stand for the unknown velocity vector field of the fluid and its pressure, \(u_{0}=u_{0}(x)=\left(u_{0}^{1}(x),u_{0}^{2}(x),u_{0}^{3}(x)\right)\) is the given initial velocity vector field satisfying \(\nabla\cdot u_{0}=0\), and \(\nu>0\) is the coefficient of viscosity. Here \(\partial_{x_{j}}\) is denoted by \(\partial_{j}\) (\(j=1,2,3\)).
For the mathematical setting of this problem, we introduce Hilbert space
\[H(\mathbb{R}^{3})=\left\{u\in\left(L^{2}(\mathbb{R}^{3})\right)^{3}\middle| \nabla\cdot u=0\right\}\]
endowed with the \(\left(L^{2}(\mathbb{R}^{3})\right)^{3}\) norm (resp. scalar product, denoted by \((\cdot,\cdot)\)). For simplicity of presentation, the space \(\left(H^{m}(\mathbb{R}^{3})\right)^{3}\) is denoted by \(H^{m}\), where \(m\geq 0\). In what follows we use the usual convention and sum over repeated indices.
The Fourier transform of \(u(t,x)\) with respect to \(x\) is denoted by \(\hat{u}(t,\xi)\). Then
\[\xi_{j}\hat{u}^{j}=\xi_{1}\hat{u}^{1}+\xi_{2}\hat{u}^{2}+\xi_{3}\hat{u}^{3}=0. \tag{1.4}\]
That is, \(\hat{u}(t,\xi)\) is perpendicular to \(\xi\), which we denote by \(\xi\bot\hat{u}\); equivalently, \(\hat{u}(t,\xi)\in T_{\xi}\mathbb{S}^{2}\).
Let \(A=(a,b,c)\in\mathbb{R}^{3}-\{0\}\) and \(\xi\in\mathbb{S}^{2}\). The vector \(\{A\times\xi\}\in T_{\xi}\mathbb{S}^{2}\) is called the 1st order incompressible symplectic symmetry. Take a \(3\times 3\) matrix \(M=\left(m^{ij}\right)\). The vector \(\{\xi\times(M\xi)\}\in T_{\xi}\mathbb{S}^{2}\) is called the 2nd order incompressible symplectic symmetry. Generally, let \(T=\big{(}T^{i_{1}\cdots i_{n}}\big{)}\) be an \(n\)th order tensor and \(T(\xi\cdots\xi)=T^{i_{1}i_{2}\cdots i_{n}}\xi_{i_{2}}\cdots\xi_{i_{n}}\). The vector \(\{\xi\times T(\xi\cdots\xi)\}\in T_{\xi}\mathbb{S}^{2}\) is called the \(n\)th order incompressible symplectic symmetry.
Let vectors \(A\in\mathbb{R}^{3}-\{0\}\) and \(B\in\mathbb{R}^{3}-\{0\}\) be linearly independent. Then \(A\times\xi\) and \(B\times\xi\) are linearly independent for any \(\xi\in\mathbb{R}^{3}-\{0\}\), and form a basis of the space \(T_{\xi}\mathbb{S}^{2}\), the so-called moving frame.
\[\hat{u}(\xi)=\hat{\phi}(\xi)\cdot\{A\times\xi\}+\hat{\psi}(\xi)\cdot\{B\times \xi\},\ \ \forall\hat{u}(\xi)\in T_{\xi}\mathbb{S}^{2}.\]
Therefore, for any velocity vector \(u\) satisfying the equation (1.2), there exist linearly independent vectors \(A,B\in\mathbb{R}^{3}-\{0\}\) and real scalar functions \(\phi\) and \(\psi\) such that
\[u(t,x)=\{A\times\nabla\}\phi(t,x)+\{B\times\nabla\}\psi(t,x). \tag{1.5}\]
We say that the formulation (1.5) is the (1,1)-symplectic representation of the velocity vector \(u\).
Similarly, we say that the following formulation (1.6)
\[u(t,x)=\{(A\times\nabla)\times\nabla\}\phi(t,x)+\{(B\times\nabla)\times\nabla \}\psi(t,x), \tag{1.6}\]
is the (2,2)-symplectic representation of the velocity vector \(u\), and the following formulation (1.7)
\[u(t,x)=\{A\times\nabla\}\phi(t,x)+\{(B\times\nabla)\times\nabla\}\psi(t,x), \tag{1.7}\]
is called the (1,2)-symplectic representation of the velocity vector \(u\).
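All three representations are divergence-free by construction. The following symbolic sketch (Python/SymPy; an added illustration, not part of the original text) verifies this for generic constant vectors \(A,B\) and arbitrary smooth scalar fields \(\phi,\psi\).

```python
import sympy as sp

x = sp.symbols('x1:4', real=True)
A = sp.symbols('a1:4', real=True)
B = sp.symbols('b1:4', real=True)
phi = sp.Function('phi')(*x)
psi = sp.Function('psi')(*x)
eps = sp.LeviCivita

def op1(C, f):
    """First-order symmetry {C x nabla} applied to f."""
    return [sum(eps(i, j, k) * C[j] * sp.diff(f, x[k]) for j in range(3) for k in range(3))
            for i in range(3)]

def op2(C, f):
    """Second-order operator {(C x nabla) x nabla} applied to f."""
    return [sum(eps(i, j, k) * eps(j, l, m) * C[l] * sp.diff(f, x[m], x[k])
                for j in range(3) for k in range(3) for l in range(3) for m in range(3))
            for i in range(3)]

def div(v):
    return sp.simplify(sum(sp.diff(v[i], x[i]) for i in range(3)))

u_11 = [a + b for a, b in zip(op1(A, phi), op1(B, psi))]   # representation (1.5)
u_22 = [a + b for a, b in zip(op2(A, phi), op2(B, psi))]   # representation (1.6)
u_12 = [a + b for a, b in zip(op1(A, phi), op2(B, psi))]   # representation (1.7)
print(div(u_11), div(u_22), div(u_12))                     # prints: 0 0 0
```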
There is a large literature studying the incompressible Navier-Stokes equations. In 1934 Leray [31] proved that there exists a global weak solution to the problem (1.1)-(1.3) with initial data in \(L^{2}\). In 1951 Hopf [21] extended this result to bounded smooth domains. Moreover, Leray-Hopf weak solutions satisfy the energy inequality [50]
\[\|u(t,\cdot)\|_{L^{2}}^{2}+2\int_{0}^{t}\|\nabla u(\tau,\cdot)\|_{L^{2}}^{2}d \tau\leq\|u_{0}\|_{L^{2}}^{2},\ \ \ \forall t>0. \tag{1.8}\]
The uniqueness and regularity of Leray-Hopf weak solutions is a famous open question. Numerous regularity criteria have been proved [12, 15, 26, 27, 29, 39, 47, 53].
Local existence and uniqueness of \(H^{m}\) solutions can be established by using the analytic semigroup approach [35] with initial data in \(H^{m}(\mathbb{R}^{3})\), \(m\geq 1\). This result is stated as follows.
**Proposition 1.1** (Local \(H^{m}\) Solution).: _Let \(u_{0}\in H^{m}(\mathbb{R}^{3})\cap H(\mathbb{R}^{3})\) and \(m\geq 1\). Then there exist \(T_{max}=T_{max}\big{(}\|u_{0}\|_{H^{m}}\big{)}>0\) and a unique solution \(u\) of the problem (1.1)-(1.3) such that \(u\in C\big{(}[0,T_{max});H^{m}(\mathbb{R}^{3})\cap H(\mathbb{R}^{3})\big{)}\)._
Local existence and uniqueness of mild solutions or strong solutions were established [4, 17, 23, 24, 25, 56] with initial data in \(L^{p}(\mathbb{R}^{3})\), \(p>3\). The main result is as follows.
**Proposition 1.2** (Local Mild Solution).: _Let \(u_{0}\in L^{p}(\mathbb{R}^{3})\) satisfy (1.2) in distribution and \(p>3\). Then there exist \(T_{max}=T_{max}\big{(}\|u_{0}\|_{L^{p}}\big{)}>0\) and a unique solution \(u\) of the problem (1.1)-(1.3) such that \(u\in C([0,T_{max});L^{p}(\mathbb{R}^{3}))\)._
The uniqueness in Propositions 1.1 and 1.2 ensures that the symplectic symmetries corresponding to the initial data \(u_{0}\) are preserved by the solution \(u\).
Besides the local well-posedness, lower bounds of possible blowup solutions were considered [8, 9, 11, 15, 31, 40]. The concentration phenomenon of possible blowup solutions was studied [32].
It is well-known that the equations (1.1) (1.2) are scaling-invariant in the sense that if \(u\) solves (1.1) (1.2) with initial data \(u_{0}\), so does \(u_{\lambda}(t,x)=\lambda u(\lambda^{2}t,\lambda x)\) with initial data \(\lambda u_{0}(\lambda x)\). A space \(X\) defined on \(\mathbb{R}^{3}\) is said to be critical provided \(\|u_{0}\|_{X}=\|\lambda u_{0}(\lambda\cdot)\|_{X}\) for any \(\lambda>0\). \(L^{3}(\mathbb{R}^{3})\) is one of the critical spaces. For initial data in critical spaces, the well-posedness of global solutions of the equations (1.1)-(1.3) has been obtained [5, 10, 28, 38] for small initial data. The regularity criterion was established [12, 13, 26, 33, 46]. On the other hand, the ill-posedness was shown [2, 14, 54, 57].
Solutions of the problem (1.1)-(1.3) have also been studied in various function spaces [6, 16, 18, 22, 28, 49]. Partial regularity of suitable weak solutions was established [3, 30, 34, 42, 43, 52, 55]. Non-existence of self-similar solutions was proved [37, 51]. Decay of the solutions can be found in [7, 36, 44, 45], etc.
In this paper, we study the radial symmetry of symplectic representation for the solutions of the problem (1.1)- (1.3).
Firstly we consider two-dimensional Navier-Stokes equations. Here \(x=(x_{1},x_{2})\in\mathbb{R}^{2}\). The main result is as follows.
**Theorem 1.3** (Radial Symmetry Persistence).: _For the problem (1.1)-(1.3), let \(x=(x_{1},x_{2})\in\mathbb{R}^{2}\), and assume that the velocity vectors \(u\) and \(u_{0}\) admit, respectively, the following (1,1)-symplectic representations_
\[u(t,x)= \big{(}-\partial_{2}\phi(t,x),\partial_{1}\phi(t,x),0\big{)}, \tag{1.9}\] \[u_{0}(x)= \big{(}-\partial_{2}\phi_{0}(x),\partial_{1}\phi_{0}(x),0\big{)}. \tag{1.10}\]
_Assume that the initial data \(\phi_{0}(x)\) is a radially symmetric function and regular enough. Then there exists a unique global solution \(u\) of the problem (1.1)-(1.3) such that \(u\) satisfies (1.9) and_
\[\phi(t,x)= \frac{1}{4\pi\nu t}\int_{\mathbb{R}^{2}}\exp\{-\frac{|y|^{2}}{4 \nu t}\}\phi_{0}(x-y)dy. \tag{1.11}\]
_Moreover, the function \(\phi\) is a radially symmetric function._
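As a numerical illustration of this persistence statement (added here; the bump-shaped profile below is merely one admissible stand-in for a radially symmetric, sufficiently regular \(\phi_{0}\)), one may evaluate formula (1.11) by a plain Riemann sum and compare the values of \(\phi(t,\cdot)\) at several points with the same \(|x|\):

```python
import numpy as np

def phi0(x1, x2, Ra=1.0):
    """A radially symmetric, compactly supported initial profile (a stand-in for Phi(r/Ra))."""
    r2 = (x1**2 + x2**2) / Ra**2
    out = np.zeros_like(r2)
    inside = r2 < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - r2[inside]))
    return out

def phi(t, x, nu=0.01, L=3.0, n=600):
    """Formula (1.11), evaluated by a plain Riemann sum over y in [-L, L]^2."""
    y1, y2 = np.meshgrid(np.linspace(-L, L, n), np.linspace(-L, L, n))
    dA = (2.0 * L / (n - 1))**2
    kernel = np.exp(-(y1**2 + y2**2) / (4.0 * nu * t)) / (4.0 * np.pi * nu * t)
    return float(np.sum(kernel * phi0(x[0] - y1, x[1] - y2)) * dA)

# Radial symmetry persistence: points with the same |x| give the same value of phi(t, .).
r = 0.7
for angle in (0.0, 1.0, 2.5):
    print(phi(0.5, (r * np.cos(angle), r * np.sin(angle))))
```

Up to the quadrature error, the three printed values coincide, in agreement with the radial symmetry of \(\phi(t,\cdot)\).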
By the proof of Theorem 1.3, equation (2.3) means that \(u_{0}\) defined by (1.10) is a static Euler flow provided \(\phi_{0}(x)\) is a radially symmetric function. Let \(\Phi(s)\in C^{\infty}_{c}(0,1)\) and \(\phi_{0}(x)=\phi_{0}(r)=\Phi(r/R_{a})\) for any \(R_{a}>0\), where \(x\in\mathbb{R}^{2}\) and \(r^{2}=x\cdot x\). Then \(u=\big{(}-\partial_{2}\phi_{0}(r),\partial_{1}\phi_{0}(r),0\big{)}\) is a solution of the static two-dimensional Euler equations
\[(u\cdot\nabla)u+\nabla P=0,\] \[\nabla\cdot u=0, \tag{1.12}\]
with Dirichlet boundary condition
\[u|_{\partial B^{2}(r<R_{a})}=0. \tag{1.13}\]
It is obvious that the number of solutions of the static two-dimensional Euler equations (1.12) (1.13) is infinite.
Considering the two-dimensional Navier-Stokes equations (1.1)-(1.3) with the symplectic representation (1.9) (1.10), the boundary condition (1.13) and the initial data \(\phi_{0}=\Phi(r/R_{a})\) for a given \(R_{a}\), we conclude that there exists a unique solution \(u\) of this problem.
Provided the Navier-Stokes flow (1.1)-(1.3) (1.13) converges to the static Euler flow (1.12) (1.13) as \(t\to\infty\) and \(\nu\to 0\), the randomness and turbulence of the incompressible fluid formally appear, due to the existence of infinitely many non-trivial solutions of the static two-dimensional Euler equations (1.12) with the Dirichlet boundary condition (1.13).
For instance, there exists a unique Navier-Stokes flow
\[\begin{split} u(t,x)=&\big{(}-\partial_{2}\phi(t,x), \partial_{1}\phi(t,x),0\big{)},\\ \phi(t,x)=&\frac{1}{4\pi\nu t}\int_{\mathbb{R}^{2}} \exp\{-\frac{|y|^{2}}{4\nu t}\}\Phi\big{(}|x-y|/R_{a}\big{)}dy.\end{split} \tag{1.14}\]
Assume that
\[\lim_{t\to\infty,\nu\to 0}t\nu=w,\ \ w\ \text{is a random number}. \tag{1.15}\]
We solve the limit of Navier-Stokes flow defined by (1.14)
\[\begin{split} u_{e}(r)=&\lim_{t\to\infty,\nu\to 0}u_{ns}(t,x)\\ =&\big{(}-\partial_{2}\phi_{c}(r),\partial_{1}\phi_ {c}(r),0\big{)},\\ \phi_{c}(r)=&\phi_{c}(x)=\frac{1}{4\pi w}\int_{ \mathbb{R}^{2}}\exp\{-\frac{|y|^{2}}{4w}\}\Phi\big{(}|x-y|/R_{a}\big{)}dy.\end{split} \tag{1.16}\]
It is obvious that the random \(u_{e}\) is a solution of the static Euler equations (1.12).
Similarly let \(\Phi_{j}(r)=\alpha\sin(jr)+\beta\cos(jr)\), \(j=1,2,\cdots\), \(\alpha\) and \(\beta\) be any constants. Then
\[u(x)= \big{(}-\partial_{2}\Phi_{j}(r),\partial_{1}\Phi_{j}(r),0\big{)}\]
is a solution of the static two-dimensional Euler equations (1.12) with the symplectic representation
\[u(x)= \big{(}-\partial_{2}\phi(x),\partial_{1}\phi(x),0\big{)},\]
and periodic boundary condition
\[\phi(r+2\pi)=\phi(r),\ \ \forall r\geq 0 \tag{1.17}\]
for any \(j=1,2,\cdots\).
Considering the two-dimensional Navier-Stokes equations (1.1)-(1.3) with the symplectic representation (1.9) (1.10), the periodic boundary condition (1.17) and the initial data \(\phi_{0}=\Phi_{k}\) for given \(k\), \(\alpha\) and \(\beta\), we conclude that there exists a unique solution \(u\) of this problem.
Provided the Navier-Stokes flow (1.1)-(1.3) (1.17) converges to a static Euler flow (1.12) (1.17) as \(t\to\infty\) and \(\nu\to 0\), the randomness and turbulence of the incompressible fluid formally appear, because the static two dimensional Euler equations (1.12) with periodic boundary condition (1.17) admit infinitely many non-trivial solutions.
For instance, there exists a unique Navier-Stokes flow
\[\begin{split} u(t,x)=&\big{(}-\partial_{2}\phi_{k} (t,r),\partial_{1}\phi_{k}(t,r),0\big{)},\\ \phi_{k}(t,r)=&\phi_{k}(t,x)=\frac{1}{4\pi\nu t} \int_{\mathbb{R}^{2}}\exp\{-\frac{|y|^{2}}{4\nu t}\}\Phi_{k}(|x-y|)dy.\end{split} \tag{1.18}\]
Provided assumption (1.15) is satisfied, we compute the limit of the Navier-Stokes flow defined by (1.18)
\[\begin{split} u_{e}(r)=&\lim_{t\to\infty,\nu\to 0}u_{ ns}(t,x)\\ =&\big{(}-\partial_{2}\phi_{k}(r),\partial_{1}\phi_ {k}(r),0\big{)},\\ \phi_{k}(r)=&\phi_{k}(x)=\frac{1}{4\pi w}\int_{ \mathbb{R}^{2}}\exp\{-\frac{|y|^{2}}{4w}\}\Phi_{k}(|x-y|)dy.\end{split} \tag{1.19}\]
It is obvious that the random field \(u_{e}\) is a solution of the static Euler equations (1.12).
For example, take \(\phi_{0}(r)=r^{-\frac{2}{p}+1}\). By Theorem 1.3, there exists a unique global solution \(u\) although \(\|u_{0}\|_{L^{p}(\mathbb{R}^{2})}=\infty\) for any \(1\leq p\leq\infty\). The question is whether this solution \(u\) depends continuously on the initial data or not.
In contrast to the radial symmetry persistence of the two-dimensional Navier-Stokes equations, more complicated and more interesting phenomena appear for the three-dimensional Navier-Stokes equations.
We find that the following two equations
\[\frac{1}{r}\partial_{r}\psi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}-\frac{1}{r}\partial_{r}\phi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}=0, \tag{1.20}\] \[\frac{1}{r}\partial_{r}\phi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}+\frac{1}{r}\partial_{r}\psi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}=0, \tag{1.21}\]
play a key role in solving the three-dimensional Navier-Stokes equations and the static three dimensional Euler equations.
**Theorem 1.4** (Radial Symmetry Either Persistence or Breaking).: _For the problem (1.1)-(1.3), assume that the velocity vectors \(u\) and \(u_{0}\neq 0\) respectively hold the following (1,2)-symplectic representation_
\[u(t,x)= (A\times\nabla)\phi(t,x)+\{(A\times\nabla)\times\nabla\}\psi(t,x), \tag{1.22}\] \[u_{0}(x)= (A\times\nabla)\phi_{0}(x)+\{(A\times\nabla)\times\nabla\}\psi_{0}(x), \tag{1.23}\]
_where vector \(A\in\mathbb{R}^{3}-\{0\}\). Let \(\phi_{0}(x)\) and \(\psi_{0}(x)\) be regular enough._
_Assume that \((\phi,\psi)=\big{(}\Phi(r),\Psi(r)\big{)}\) satisfies equations (1.20) (1.21)._
(I) (Static Euler Flow) \(u=(A\times\nabla)\Phi+\{(A\times\nabla)\times\nabla\}\Psi\) _satisfies the static three dimensional Euler equations (1.12)._
(II) (Radial Symmetry Persistence) _Provided \((\phi_{0},\psi_{0})=\big{(}\Phi(r),\Psi(r)\big{)}\). Let us define_
\[\phi(t,x)=\phi(t,r)=e^{-\nu\Delta t}\phi_{0}=e^{-\nu\Delta t}\Phi,\] \[\psi(t,x)=\psi(t,r)=e^{-\nu\Delta t}\psi_{0}=e^{-\nu\Delta t}\Psi. \tag{1.24}\]
_Then \(u\) defined by (1.22) (1.24) is a solution of the problem (1.1)-(1.3)._
(III) (Radial Symmetry Breaking) _Provided \((\phi_{0},\psi_{0})\neq\big{(}\Phi(r),\Psi(r)\big{)}\), \(u\) defined by (1.22) is a solution of the problem (1.1)-(1.3), and \([0,T_{max})\) is the maximal existence interval of \(t\) for this solution \(u\). Assume that there exists \(t_{0}\in(0,T_{max})\) such that \(\big{(}\phi(t,x),\psi(t,x)\big{)}\) defined by (1.22) is radial with respect to \(x\) at \(t=t_{0}\). Then \(T_{max}=\infty\), and there exists \(T_{r}\in(0,t_{0}]\) such that this vector \(\big{(}\phi(t,x),\psi(t,x)\big{)}\) is radial with respect to \(x\) for any \(t\geq T_{r}\), but it is not radial with respect to \(x\) for any \(t\in(0,T_{r})\). Otherwise this vector \(\big{(}\phi(t,x),\psi(t,x)\big{)}\) cannot be radial with respect to \(x\) for any \(t\in(0,T_{max})\) although the initial data \(\phi_{0}(x)\) and \(\psi_{0}(x)\) are radially symmetric functions._
**Corollary 1.5**.: _Let us define_
\[\Phi_{\lambda\alpha\beta}(r) =\lambda\Psi_{\lambda\alpha\beta}(r),\] \[\Psi_{\lambda\alpha\beta}(r) =\alpha\frac{1}{r}\sin(\lambda r)+\beta\frac{1}{r}\cos(\lambda r), \tag{1.25}\]
_where \(\lambda,\alpha\) and \(\beta\) are any real constants. Then \((\phi,\psi)=\Big{(}\Phi_{\lambda\alpha\beta}(r),\Psi_{\lambda\alpha\beta}(r)\Big{)}\) satisfies equations (1.20) (1.21). \(u=(A\times\nabla)\Phi_{\lambda\alpha\beta}+\{(A\times\nabla)\times\nabla\}\Psi_{\lambda\alpha\beta}\) is a solution of the static three dimensional Euler equations (1.12)._
_Let us take_
\[\begin{split}&\phi(t,x)=\phi(t,r)=e^{-\nu\lambda^{2}t}\Phi_{\lambda \alpha\beta}=\lambda\psi(t,x),\\ &\psi(t,x)=\psi(t,r)=e^{-\nu\lambda^{2}t}\Psi_{\lambda\alpha\beta}.\end{split} \tag{1.26}\]
_Then \(u\) defined by (1.22) (1.26) is a solution of the problem (1.1)-(1.3)._
In Corollary 1.5, let \(\lambda=j=1,2,\cdots\). Then
\[u=(A\times\nabla)\Phi_{j\alpha\beta}+\{(A\times\nabla)\times\nabla\}\Psi_{j \alpha\beta}\]
is a solution of the static three dimensional Euler equations (1.12) with symplectic representation
\[u=(A\times\nabla)\phi+\{(A\times\nabla)\times\nabla\}\psi \tag{1.27}\]
and periodic boundary condition
\[\{\xi\phi(\xi)\}\big{|}_{\xi=r+2\pi}=r\phi(r),\ \ \{\xi\psi(\xi)\}\big{|}_{ \xi=r+2\pi}=r\psi(r),\ \ \forall r\geq 0 \tag{1.28}\]
for any \(j=1,2,\cdots\).
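Note that \(\big{(}\Phi_{j\alpha\beta},\Psi_{j\alpha\beta}\big{)}\) indeed satisfies the periodic boundary condition (1.28), since by (1.25)

\[r\Psi_{j\alpha\beta}(r)=\alpha\sin(jr)+\beta\cos(jr),\qquad r\Phi_{j\alpha\beta}(r)=j\big{(}\alpha\sin(jr)+\beta\cos(jr)\big{)}\]

are \(2\pi\)-periodic functions of \(r\) for every integer \(j\geq 1\).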
Considering the three dimensional Navier-Stokes equations (1.1)-(1.3) with symplectic representation (1.22) (1.23), periodic boundary condition (1.28) and initial data \((\phi_{0},\psi_{0})=\Big{(}\Phi_{k\alpha\beta},\Psi_{k\alpha\beta}\Big{)}\) for given \(k\), \(\alpha\) and \(\beta\), there exists a unique solution \(u\) of this problem.
Provided the Navier-Stokes flow (1.1)-(1.3) (1.28) converges to a static Euler flow (1.12) (1.28) as \(t\to\infty\) and \(\nu\to 0\), the randomness and turbulence of the incompressible fluid formally appear, because the static three dimensional Euler equations (1.12) with periodic boundary condition (1.28) admit infinitely many non-trivial solutions.
For instance, provided assumption (1.15) is satisfied, we compute the limit of the Navier-Stokes flow defined by (1.22) (1.26)
\[\begin{split} u_{e}(r)=&\lim_{t\to\infty,\nu\to 0 }u_{ns}(t,x)\\ =&(A\times\nabla)\phi_{\lambda\alpha\beta}(r)+\{(A \times\nabla)\times\nabla\}\psi_{\lambda\alpha\beta}(r),\\ \phi_{\lambda\alpha\beta}(r)=& e^{-\lambda^{2}w} \Phi_{\lambda\alpha\beta}(r),\\ \psi_{\lambda\alpha\beta}(r)=& e^{-\lambda^{2}w} \Psi_{\lambda\alpha\beta}(r).\end{split} \tag{1.29}\]
It is obvious that the random field \(u_{e}\) is a solution of the static Euler equations (1.12).
**Remark 1.6**.: _The equation (1.27) is a so-called Bäcklund transformation which changes equations (1.20) (1.21) into the static three dimensional Euler equations (1.12)._
_Provided \(\big{(}\Phi(r),\Psi(r)\big{)}\) satisfies equations (1.20) (1.21), that is,_
\[\frac{1}{r}\partial_{r}\Psi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Phi\Big{)}-\frac{1}{r}\partial_{r}\Phi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Psi\Big{)}=0, \tag{1.30}\] \[\frac{1}{r}\partial_{r}\Phi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Phi\Big{)}+\frac{1}{r}\partial_{r}\Psi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Delta\Psi\Big{)}=0. \tag{1.31}\]
_Let_
\[\phi(t,x)=e^{-\nu\Delta t}\Phi, \tag{1.32}\]
\[\psi(t,x)=e^{-\nu\Delta t}\Psi.\]
_The equations (1.22) (1.32) are also Bäcklund transformations which change equations (1.30) (1.31) into the Navier-Stokes equations (1.1) (1.2)._
_More Bäcklund transformations can be found in [1, 41]._
**Theorem 1.7** (Radial Symmetry Breaking).: _Let \(u\) be solution of the problem (1.1)-(1.3). Assume that the velocity vectors \(u\) and \(u_{0}\) respectively hold the following (1,2)-symplectic representation_
\[u(t,x)= (A\times\nabla)\phi(t,x)+\{(B\times\nabla)\times\nabla\}\psi(t,x), \tag{1.33}\] \[u_{0}(x)= (A\times\nabla)\phi_{0}(x)+\{(B\times\nabla)\times\nabla\}\psi_{0}(x), \tag{1.34}\]
_where the vectors \(A\in\mathbb{R}^{3}-\{0\}\) and \(B\in\mathbb{R}^{3}-\{0\}\) are perpendicular to each other, \(A\bot B\). Let \(\phi_{0}(x)\) and \(\psi_{0}(x)\) be regular enough. Then the functions \(\phi(t,x)\) and \(\psi(t,x)\) cannot be radially symmetric functions of \(x\) for any \(t>0\) although the initial data \(\phi_{0}(x)\) and \(\psi_{0}(x)\) are radially symmetric functions, except that \(\phi_{0}=f_{2}r^{2}+f_{0}\), \(\psi_{0}=g_{4}r^{4}+g_{2}r^{2}+g_{0}\) and \(f_{2}g_{4}=0\). Here \(f_{j}\) and \(g_{j}\) (j=0, 2, 4) are arbitrary constants._
_In particular, provided \(u_{0}\neq 0\) and \(u_{0}\in L^{2}(\mathbb{R}^{3})\), the functions \(\phi(t,x)\) and \(\psi(t,x)\) cannot be radially symmetric functions of \(x\) for any \(t>0\) although the initial data \(\phi_{0}(x)\) and \(\psi_{0}(x)\) are radially symmetric functions._
**Theorem 1.8** (Radial Symmetry Breaking).: _Let \(u\) be solution of the problem (1.1)-(1.3). Assume that the velocity vectors \(u\) and \(u_{0}\neq 0\) respectively hold the following (1,1)-symplectic representation_
\[u(t,x)= (A\times\nabla)\phi(t,x)+(B\times\nabla)\psi(t,x), \tag{1.35}\] \[u_{0}(x)= (A\times\nabla)\phi_{0}(x)+(B\times\nabla)\psi_{0}(x), \tag{1.36}\]
_where the vectors \(A\in\mathbb{R}^{3}-\{0\}\) and \(B\in\mathbb{R}^{3}-\{0\}\) are linearly independent. Let \(\phi_{0}(x)\) and \(\psi_{0}(x)\) be regular enough. Then the functions \(\phi(t,x)\) and \(\psi(t,x)\) cannot be radially symmetric functions of \(x\) for any \(t>0\) although the initial data \(\phi_{0}(x)\) and \(\psi_{0}(x)\) are radially symmetric functions, except that \(\phi_{0}=f_{2}r^{2}+f_{0}\) and \(\psi_{0}=g_{2}r^{2}+g_{0}\). Here \(f_{j}\) and \(g_{j}\) (j=0, 2) are arbitrary constants._
_In particular, provided \(u_{0}\neq 0\) and \(u_{0}\in L^{2}(\mathbb{R}^{3})\), the functions \(\phi(t,x)\) and \(\psi(t,x)\) cannot be radially symmetric functions of \(x\) for any \(t>0\) although the initial data \(\phi_{0}(x)\) and \(\psi_{0}(x)\) are radially symmetric functions._
**Theorem 1.9** (Radial Symmetry Breaking).: _Let \(u\) be solution of the problem (1.1)-(1.3). Assume that the velocity vectors \(u\) and \(u_{0}\) respectively hold the following (2,2)-symplectic representation_
\[u(t,x)= \{(A\times\nabla)\times\nabla\}\phi(t,x)+\{(B\times\nabla)\times\nabla\}\psi(t,x), \tag{1.37}\] \[u_{0}(x)= \{(A\times\nabla)\times\nabla\}\phi_{0}(x)+\{(B\times\nabla)\times\nabla\}\psi_{0}(x), \tag{1.38}\]
_where the vectors \(A\in\mathbb{R}^{3}-\{0\}\) and \(B\in\mathbb{R}^{3}-\{0\}\) are linearly independent. Let \(\phi_{0}(x)\) and \(\psi_{0}(x)\) be regular enough. Then the functions \(\phi(t,x)\) and \(\psi(t,x)\) cannot be radially symmetric functions of \(x\) for any \(t>0\) although the initial data \(\phi_{0}(x)\) and \(\psi_{0}(x)\) are radially symmetric functions, except that \(\phi_{0}=f_{4}r^{4}+f_{2}r^{2}+f_{0}\), \(\psi_{0}=g_{4}r^{4}+g_{2}r^{2}+g_{0}\) and \(f_{2}g_{4}=f_{4}g_{2}\). Here \(f_{j}\) and \(g_{j}\) (j=0, 2, 4) are arbitrary constants._
_In particular, provided \(u_{0}\neq 0\) and \(u_{0}\in L^{2}(\mathbb{R}^{3})\), the functions \(\phi(t,x)\) and \(\psi(t,x)\) cannot be radially symmetric functions of \(x\) for any \(t>0\) although the initial data \(\phi_{0}(x)\) and \(\psi_{0}(x)\) are radially symmetric functions._
Here radial symmetry breaking is a kind of structural singularity. Appropriate orthogonal transformations are the main ingredient in the proofs of these theorems.
The plan of this paper is as follows. Section 2 is devoted to studying the radial symmetry persistence of the two-dimensional Navier-Stokes equations. Section 3 is devoted to showing radial symmetry either persistence or breaking of the three-dimensional Navier-Stokes equations with the (1,2)-symplectic representation. Section 4 is devoted to establishing radial symmetry breaking of the three-dimensional Navier-Stokes equations with the (1,1)-symplectic representation. Section 5 is devoted to investigating radial symmetry breaking of the three-dimensional Navier-Stokes equations with the (2,2)-symplectic representation.
## 2. (1,1)-Symplectic Representation and Radial Symmetry Persistence in \(\mathbb{R}^{2}\)
In this section, we assume that \(x=(x_{1},x_{2})\in\mathbb{R}^{2}\) and the velocity vector \(u\) holds the following (1,1)-symplectic representation
\[\begin{split}& u(t,x)=u(t,x_{1},x_{2})=\big{(}u^{1}(t,x_{1},x_{2}),u ^{2}(t,x_{1},x_{2}),0\big{)},\\ & u^{1}(t,x_{1},x_{2})=-\partial_{2}\phi(t,x_{1},x_{2}),\\ & u^{2}(t,x_{1},x_{2})=\partial_{1}\phi(t,x_{1},x_{2}).\end{split} \tag{2.1}\]
Putting together (1.1) and (2.1), we have
\[\begin{split}\partial_{t}\Delta\phi-\nu\Delta^{2}\phi=& -\partial_{1}(u\cdot\nabla)u^{2}+\partial_{2}(u\cdot\nabla)u^{1}\\ =&-\{\phi,\Delta\phi\},\end{split} \tag{2.2}\]
where \(\Delta=\partial_{1}^{2}+\partial_{2}^{2}\), Poisson bracket \(\{\cdot,\cdot\}\) is given by
\[\{f,g\}=\partial_{1}f\partial_{2}g-\partial_{2}f\partial_{1}g.\]
The equation (2.2) is the well-known Hasegawa-Mima equation ([19, 20]).
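For the reader's convenience, we recall how (2.2) arises. With the representation (2.1) the flow is automatically divergence free, \(\nabla\cdot u=-\partial_{1}\partial_{2}\phi+\partial_{2}\partial_{1}\phi=0\), and the scalar vorticity is \(\omega=\partial_{1}u^{2}-\partial_{2}u^{1}=\Delta\phi\). Taking the two dimensional curl of (1.1) eliminates the pressure and yields the vorticity equation

\[\partial_{t}\Delta\phi-\nu\Delta^{2}\phi+u\cdot\nabla\Delta\phi=0,\qquad u\cdot\nabla\Delta\phi=\partial_{1}\phi\,\partial_{2}\Delta\phi-\partial_{2}\phi\,\partial_{1}\Delta\phi=\{\phi,\Delta\phi\},\]

which is exactly (2.2).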
Now we assume that \(\phi\) is a radially symmetric function with respect to the space variable \(x\in\mathbb{R}^{2}\), that is, \(\phi(t,x)=\phi(t,r)\) with \(r^{2}=x_{1}^{2}+x_{2}^{2}\). Then we have
\[\{\phi,\Delta\phi\}=\frac{x_{1}}{r}\partial_{r}\phi\cdot\frac{x_{2}}{r} \partial_{r}\Delta\phi-\frac{x_{2}}{r}\partial_{r}\phi\cdot\frac{x_{1}}{r} \partial_{r}\Delta\phi=0, \tag{2.3}\]
and the equation (2.2) is equivalent to
\[\partial_{t}\phi-\nu\Delta\phi= w_{0}(t), \tag{2.4}\]
where \(w_{0}\) is any function of \(t\).
Equation (2.3) means that \(u=(-\partial_{2}\phi,\partial_{1}\phi,0)\) is a solution of the two dimensional static Euler equations (1.12).
There exists a global solution of equation (2.4)
\[\phi(t,x)=\frac{1}{4\pi\nu t}\int_{\mathbb{R}^{2}}\exp\{-\frac{|y|^{2}}{4\nu t }\}\phi_{0}(x-y)dy+\int_{0}^{t}w_{0}(s)ds, \tag{2.5}\]
where \(\phi_{0}(r)=\phi(t,r)|_{t=0}\).
\(\phi(t,x)\) is a radially symmetric function of \(x\) since \(\phi_{0}\) is radial.
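This radial symmetry can be verified directly: for any rotation \(R\) of \(\mathbb{R}^{2}\), the change of variables \(y=Rz\) in (2.5), together with \(|Rz|=|z|\) and \(\phi_{0}(R(x-z))=\phi_{0}(x-z)\), gives

\[\phi(t,Rx)=\frac{1}{4\pi\nu t}\int_{\mathbb{R}^{2}}\exp\{-\frac{|z|^{2}}{4\nu t}\}\phi_{0}(x-z)dz+\int_{0}^{t}w_{0}(s)ds=\phi(t,x),\]

since the term \(\int_{0}^{t}w_{0}(s)ds\) does not depend on \(x\).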
Because \(w_{0}(t)\) makes no contribution to the velocity \(u\), we select \(w_{0}=0\).
For any point \(\xi\in\mathbb{S}^{1}\), the unit tangent vector \(v\in T_{\xi}\mathbb{S}^{1}\) corresponding to the given positive direction is unique. Then the moving frame on \(T\mathbb{S}^{1}\) is also unique, and the solution \(u\) is unique.
Theorem 1.3 is proved.
## 3. (1,2)-Symplectic Representation and Radial Symmetry Either Persistence or Breaking in \(\mathbb{R}^{3}\)
In this section, we assume that the velocity vector \(u\) holds the following (1,2)-symplectic representation
\[u(t,x)= (A\times\nabla)\phi(t,x)+\{(B\times\nabla)\times\nabla\}\psi(t,x), \tag{3.1}\]
where vectors \(A=(a_{1},a_{2},a_{3})\in\mathbb{R}^{3}-\{0\}\) and \(B=(b_{1},b_{2},b_{3})\in\mathbb{R}^{3}-\{0\}\).
Let
\[\begin{split}\omega(t,x)=&\nabla\times u(t,x)\\ =&-\{(A\times\nabla)\times\nabla\}\phi(t,x)+(B \times\nabla)\Delta\psi(t,x),\end{split} \tag{3.2}\]
Taking curl with equation (1.1), we have
\[\omega_{t}-\nu\Delta\omega+(u\cdot\nabla)\omega-(\omega\cdot\nabla)u=0. \tag{3.3}\]
Thanks to the following observations
\[\begin{split}(B\times\nabla)\cdot u(t,x)&=(A\times \nabla)\cdot(B\times\nabla)\phi(t,x)\\ &=\big{\{}(A\cdot B)\Delta-(A\cdot\nabla)(B\cdot\nabla)\big{\}} \phi(t,x),\end{split} \tag{3.4}\]
\[\begin{split}(A\times\nabla)\cdot\omega(t,x)&=(A \times\nabla)\cdot(B\times\nabla)\Delta\psi(t,x)\\ &=\big{\{}(A\cdot B)\Delta-(A\cdot\nabla)(B\cdot\nabla)\big{\}} \Delta\psi(t,x),\end{split} \tag{3.5}\]
taking the scalar product of \(B\times\nabla\) with equation (1.1), we have
\[\big{\{}(A\cdot B)\Delta-(A\cdot\nabla)(B\cdot\nabla)\big{\}}\{\phi_{t}-\nu \Delta\phi\}+(B\times\nabla)\cdot\{(u\cdot\nabla)u\}=0. \tag{3.6}\]
Taking the scalar product of \(A\times\nabla\) with equation (3.3), we derive
\[\big{\{}(A\cdot B)\Delta-(A\cdot\nabla)(B\cdot\nabla)\big{\}}\Delta\{\psi_{t }-\nu\Delta\psi\}+(A\times\nabla)\cdot\{(u\cdot\nabla)\omega-(\omega\cdot \nabla)u\}=0. \tag{3.7}\]
Now we assume that \(\phi\) and \(\psi\) are radially symmetric functions with respect to the space variable \(x\in\mathbb{R}^{3}\), that is, \(\phi(t,x)=\phi(t,r)\), \(\psi(t,x)=\psi(t,r)\) with \(r^{2}=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\). Then we have
\[\begin{split} u(t,x)=&(A\times x)\frac{1}{r}\partial _{r}\phi+(B\cdot x)x\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi \Big{)}\\ &+B\frac{1}{r}\partial_{r}\psi-B\Delta\psi,\end{split} \tag{3.8}\]
\[\begin{split}\omega(t,x)=&-(A\cdot x)x\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}-A\frac{1}{r}\partial_{r }\phi\\ &+A\Delta\phi+(B\times x)\frac{1}{r}\partial_{r}\Delta\psi,\end{split} \tag{3.9}\]
\[\begin{split}(u\cdot\nabla)u=&\Big{\{}\frac{1}{r} \partial_{r}\phi\{(A\times x)\cdot\nabla\}+(B\cdot x)\frac{1}{r}\partial_{r }\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\{x\cdot\nabla\}\\ &+\frac{1}{r}\partial_{r}\psi\{B\cdot\nabla\}-\Delta\psi\{B \cdot\nabla\}\Big{\}}\Big{\{}(A\times x)\frac{1}{r}\partial_{r}\phi\\ &+(B\cdot x)x\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r }\psi\Big{)}+B\frac{1}{r}\partial_{r}\psi-B\Delta\psi\Big{\}}\end{split} \tag{3.10}\]
\[= \{A\times(A\times x)\}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\phi\] \[-x\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+2(A\times x)(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+(A\times x)(B\cdot x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+2x(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial _{r}\psi\Big{)}\] \[+x(B\cdot x)^{2}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi \Big{)}\cdot\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\Big{\}}\] \[+B(B\cdot x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)} \cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-B(B\cdot x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)} \cdot\partial_{r}\Delta\psi\] \[+(A\times B)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_ {r}\psi\] \[+(A\times x)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+x(B\cdot B)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+B(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+x(B\cdot x)^{2}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[+B(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-B(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_ {r}\Delta\psi\] \[-(A\times B)\Delta\psi\cdot\frac{1}{r}\partial_{r}\phi\] \[-(A\times x)(B\cdot x)\Delta\psi\cdot\frac{1}{r}\partial_{r}\Big{(} \frac{1}{r}\partial_{r}\phi\Big{)}\] \[-x(B\cdot B)\Delta\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1} {r}\partial_{r}\psi\Big{)}\] \[-B(B\cdot x)\Delta\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\psi\Big{)}\] \[-B(B\cdot x)\Delta\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\psi\Big{)}\] \[+B(B\cdot x)\Delta\psi\cdot\frac{1}{r}\partial_{r}\Delta\psi,\]
\[(B\times\nabla)\cdot\{(u\cdot\nabla)u\}\] \[= (B\times\nabla)\cdot\Big{\{}\{A(A\cdot x)-x(A\cdot A)\}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\phi\Big{\}}\] \[-(B\times\nabla)\cdot\Big{\{}x\{(A\times B)\cdot x\}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi \Big{)}\Big{\}}\] \[+2(B\times\nabla)\cdot\Big{\{}(A\times x)(B\cdot x)\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi \Big{)}\Big{\}}\] \[+(B\times\nabla)\cdot\Big{\{}(A\times x)(B\cdot x)\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\cdot\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\Big{\}}\] \[+2(B\times\nabla)\cdot\Big{\{}x(B\cdot x)^{2}\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[+(B\times\nabla)\cdot\Big{\{}x(B\cdot x)^{2}\partial_{r}\Big{(} \frac{1}{r}\partial_{r}\psi\Big{)}\cdot\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\Big{\}}\] \[+(B\times\nabla)\cdot\Big{\{}B(B\cdot x)\partial_{r}\Big{(}\frac{ 1}{r}\partial_{r}\psi\Big{)}\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}\] \[-(B\times\nabla)\cdot\Big{\{}B(B\cdot x)\partial_{r}\Big{(}\frac{ 1}{r}\partial_{r}\psi\Big{)}\cdot\partial_{r}\Delta\psi\Big{\}}\] \[+(B\times\nabla)\cdot\Big{\{}(A\times B)\frac{1}{r}\partial_{r} \phi\cdot\frac{1}{r}\partial_{r}\psi\Big{\}} \tag{3.11}\] \[+(B\times\nabla)\cdot\Big{\{}(A\times x)(B\cdot x)\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\] \[+(B\times\nabla)\cdot\Big{\{}x(B\cdot B)\frac{1}{r}\partial_{r} \psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[+(B\times\nabla)\cdot\Big{\{}B(B\cdot x)\frac{1}{r}\partial_{r} \psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[+(B\times\nabla)\cdot\Big{\{}B(B\cdot x)\frac{1}{r}\partial_{r} \psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[-(B\times\nabla)\cdot\Big{\{}B(B\cdot x)\frac{1}{r}\partial_{r} \psi\cdot\frac{1}{r}\partial_{r}\Delta\psi\Big{\}}\] \[-(B\times\nabla)\cdot\Big{\{}(A\times B)\Delta\psi\cdot\frac{1}{r} \partial_{r}\phi\Big{\}}\] \[-(B\times\nabla)\cdot\Big{\{}(A\times x)(B\cdot x)\Delta\psi \cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[-(B\times\nabla)\cdot\Big{\{}x(B\cdot B)\Delta\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[-(B\times\nabla)\cdot\Big{\{}B(B\cdot x)\Delta\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[-(B\times\nabla)\cdot\Big{\{}B(B\cdot x)\Delta\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[-(B\times\nabla)\cdot\Big{\{}B(B\cdot x)\Delta\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[-(B\times\nabla)\cdot\Big{\{}B(B\cdot x)\Delta\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[+(B\times\nabla)\cdot\Big{\{}B(B\cdot x)\Delta\psi\cdot\frac{1}{r} \partial_{r}\Delta\psi\Big{\}}\]
\[(u\cdot\nabla)\omega= \Big{\{}\frac{1}{r}\partial_{r}\phi\{(A\times x)\cdot\nabla\}+(B \cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\{x \cdot\nabla\} \tag{3.12}\] \[+\frac{1}{r}\partial_{r}\psi\{B\cdot\nabla\}-\Delta\psi\{B\cdot \nabla\}\Big{\}}\] \[\Big{\{}-(A\cdot x)x\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}-A\frac{1}{r}\partial_{r}\phi+A\Delta\phi+(B\times x) \frac{1}{r}\partial_{r}\Delta\psi\Big{\}}\]
\[= -(A\times x)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+(B\times(A\times x))\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Delta\psi\] \[-2x(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_ {r}\phi\Big{)}\] \[-x(A\cdot x)(B\cdot x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\cdot\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\Big{\}}\] \[-A(B\cdot x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)} \cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+A(B\times x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)} \cdot\partial_{r}\Delta\phi\] \[+(B\times x)(B\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\cdot\frac{1}{r}\partial_{r}\Delta\psi\] \[+(B\times x)(B\cdot x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}\] \[-x(A\cdot B)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r }\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-B(A\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-x(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r }\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\] \[-A(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+A(B\times x)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r }\partial_{r}\Delta\phi\] \[+x(A\cdot B)\Delta\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1} {r}\partial_{r}\phi\Big{)}\] \[+B(A\cdot x)\Delta\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1} {r}\partial_{r}\phi\Big{)}\] \[+x(A\cdot x)(B\cdot x)\Delta\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+A(B\cdot x)\Delta\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1} {r}\partial_{r}\phi\Big{)}\] \[-A(B\cdot x)\Delta\psi\cdot\frac{1}{r}\partial_{r}\Delta\phi\] \[-(B\times x)(B\cdot x)\Delta\psi\cdot\frac{1}{r}\partial_{r}\Big{(} \frac{1}{r}\partial_{r}\Delta\psi\Big{)}\]
\[(\omega\cdot\nabla)u= \Big{\{}-(A\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\{x\cdot\nabla\}-\frac{1}{r}\partial_{r}\phi\{A\cdot\nabla\} \tag{3.13}\] \[+\Delta\phi\{A\cdot\nabla\}+\frac{1}{r}\partial_{r}\Delta\psi\{( B\times x)\cdot\nabla\}\Big{\}}\] \[\Big{\{}(A\times x)\frac{1}{r}\partial_{r}\phi+(B\cdot x)x\frac{ 1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}+B\frac{1}{r} \partial_{r}\psi-B\Delta\psi\Big{\}}\]
\[= -(A\times x)(A\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\cdot\frac{1}{r}\partial_{r}\phi\] \[-(A\times x)(A\cdot x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-2x(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\] \[-x(A\cdot x)(B\cdot x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\cdot\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\Big{\}}\] \[-B(A\cdot x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)} \cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+B(A\cdot x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)} \cdot\partial_{r}\Delta\psi\] \[-(A\times x)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-x(A\cdot B)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{ r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-A(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{ r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-x(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r }\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi \Big{)}\Big{\}}\] \[-B(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{ r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+B(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{ r}\Delta\psi\] \[+(A\times x)(A\cdot x)\Delta\phi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+x(A\cdot B)\Delta\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1} {r}\partial_{r}\psi\Big{)}\] \[+A(B\cdot x)\Delta\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\psi\Big{)}\] \[+A(A\times(B\times x))\frac{1}{r}\partial_{r}\Delta\psi\cdot\frac{ 1}{r}\partial_{r}\phi\] \[+(B\times x)(B\cdot x)\frac{1}{r}\partial_{r}\Delta\psi\cdot\frac{ 1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\]
\[= (A\times x)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-A(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{ r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+A(B\cdot x)\Delta\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\psi\Big{)}\] \[-2x(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial _{r}\psi\Big{)}\] \[+x(A\cdot x)(B\cdot x)\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi \Big{)}\Big{\}}\] \[-x(A\cdot B)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+x(A\cdot B)\Delta\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\psi\Big{)}\] \[-x(A\cdot B)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_ {r}\Delta\psi\] \[+B(A\cdot x)\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-B(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_ {r}\Delta\psi\] \[+(B\times x)(B\cdot x)\frac{1}{r}\partial_{r}\Delta\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)},\]
\[(A\times\nabla)\cdot\{(u\cdot\nabla)\omega-(\omega\cdot\nabla)u\}\] \[= (A\times\nabla)\cdot\Big{\{}-2(A\times x)(A\cdot x)\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi \Big{)}\] \[+A(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{ r}\Delta\psi\] \[+A(B\cdot x)\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-A(B\cdot x)\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_ {r}\Delta\phi\] \[+A(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-A(B\cdot x)\Delta\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{ 1}{r}\partial_{r}\psi\Big{)}\] \[+x(A\cdot x)(B\cdot x)\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi \Big{)}\Big{\}}\] \[-x(A\cdot x)(B\cdot x)\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi \Big{)}\Big{\}} \tag{3.14}\] \[-x(A\cdot B)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+x(A\cdot B)\Delta\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\phi\Big{)}\] \[+x(A\cdot B)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-x(A\cdot B)\Delta\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\psi\Big{)}\] \[-B(A\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+B(A\cdot x)\Delta\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\phi\Big{)}\] \[-B(A\cdot x)\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+B(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_ {r}\Delta\psi\] \[-(B\times x)(B\cdot x)\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}\Big{\}}\]
\[= -(A\cdot A)(A\cdot x)\frac{4}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-\{(A\cdot A)r^{2}-(A\cdot x)^{2}\}(A\cdot x)\frac{2}{r}\partial_{ r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\Big{\}}\] \[+\{(A\times B)\cdot x\}(A\cdot x)\frac{2}{r}\partial_{r}\psi \cdot\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\Big{\}}\] \[-\{(A\times B)\cdot x\}(A\cdot x)\frac{2}{r}\partial_{r}\phi \cdot\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\Big{\}}\] \[+\{(A\times B)\cdot x\}(A\cdot x)\frac{1}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\Big{\}}\] \[-\{(A\times B)\cdot x\}(A\cdot x)\frac{1}{r}\partial_{r}\Big{\{} \Delta\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)} \Big{\}}\] \[+\{(A\times B)\cdot x\}(A\cdot x)\frac{1}{r}\partial_{r}\Big{\{} \frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\Big{\}}\] \[-\{(A\times B)\cdot x\}(A\cdot x)\frac{1}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Delta\psi\Big{\}}\] \[-(A\cdot B)(B\cdot x)\frac{4}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}\] \[-\{(A\cdot B)(B\cdot x)-(B\cdot B)(A\cdot x)\}\frac{2}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \Delta\psi\Big{)}\] \[-\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(B\cdot x)\frac{2}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}\Big{\}}.\]
Putting (3.11) into (3.6), we have
\[\begin{split}&\Big{\{}(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\Big{)}-(A\cdot B)\{\frac{2}{r}\partial_{r}+r \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Big{)}\}\Big{\}}\big{\{}\phi_{t}- \nu\Delta\phi\big{\}}\\ =&\{(A\times B)\cdot x\}(A\cdot x)\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\phi \Big{)}\\ &+\{(A\cdot B)(B\cdot x)-(B\cdot B)(A\cdot x)\}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi \Big{)}\\ &+4(A\cdot B)(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &+2\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(B\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &+2(A\cdot B)(B\cdot x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r }\phi\Big{)}\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &+\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(B\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)} \cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &+\{(A\cdot B)(B\cdot x)-(B\cdot B)(A\cdot x)\}\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \psi\Big{\}}\\ &+2(A\cdot B)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &+\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(B\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &-\{(A\cdot B)(B\cdot x)-(B\cdot B)(A\cdot x)\}\frac{1}{r} \partial_{r}\Big{\{}\Delta\psi\cdot\frac{1}{r}\partial_{r}\phi\Big{\}}\\ &-2(A\cdot B)(B\cdot x)\Delta\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(B\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\Delta\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\Big{\}}.\end{split} \tag{3.15}\]
Putting (3.14) into (3.7), we derive
\[\begin{split}&\Big{\{}(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\Big{)}-(A\cdot B)\{\frac{2}{r}\partial_{r}+r \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Big{)}\}\Big{\}}\Delta\{\psi_{t}- \nu\Delta\psi\}\\ =&-(A\cdot A)(A\cdot x)\frac{4}{r}\partial_{r}\phi \cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-\{(A\cdot A)r^{2}-(A\cdot x)^{2}\}(A\cdot x)\frac{2}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+\{(A\times B)\cdot x\}(A\cdot x)\frac{2}{r}\partial_{r}\psi \cdot\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\Big{\}}\\ &-\{(A\times B)\cdot x\}(A\cdot x)\frac{2}{r}\partial_{r}\phi \cdot\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\Big{\}}\\ &+\{(A\times B)\cdot x\}(A\cdot x)\frac{1}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\Big{\}}\\ &-\{(A\times B)\cdot x\}(A\cdot x)\frac{4}{r}\partial_{r}\psi \cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}\\ &-\{(A\cdot B)(B\cdot x)-(B\cdot B)(A\cdot x)\}\frac{2}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \Delta\psi\Big{)}\\ &-\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(B\cdot x)\frac{2}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}\Big{\}}.\end{split} \tag{3.16}\]
Proof of Theorem 1.4.: Firstly we consider the case in which the vectors \(A\) and \(B\) are linearly dependent. Without loss of generality, we assume \(A=B\). The equations (3.15) (3.16) are as follows
\[\begin{split}&\Big{\{}(A\cdot x)^{2}\frac{1}{r}\partial_{r}-(A \cdot A)\{2+r\partial_{r}\}\Big{\}}\frac{1}{r}\partial_{r}\{\phi_{t}-\nu \Delta\phi\}\\ =&-(A\cdot x)\Big{\{}(A\cdot x)^{2}\frac{1}{r} \partial_{r}-(A\cdot A)\{2+r\partial_{r}\}\Big{\}}\frac{2}{r}\partial_{r}\phi \cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-(A\cdot x)\Big{\{}(A\cdot x)^{2}\frac{1}{r}\partial_{r}-(A \cdot A)\{2+r\partial_{r}\}\Big{\}}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-(A\cdot x)\Big{\{}(A\cdot x)^{2}\frac{1}{r}\partial_{r}-(A \cdot A)\{2+r\partial_{r}\}\Big{\}}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &+(A\cdot x)\Big{\{}(A\cdot x)^{2}\frac{1}{r}\partial_{r}-(A \cdot A)\{2+r\partial_{r}\}\Big{\}}\Delta\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ =&\Big{\{}(A\cdot x)^{2}\frac{1}{r}\partial_{r}-(A \cdot A)\{2+r\partial_{r}\}\Big{\}}\frac{1}{r}\partial_{r}\Big{(}(A\cdot x) \cdot\\ &\int_{0}^{r}s\Big{\{}\frac{2}{s}\partial_{s}\psi\cdot\frac{1}{s} \partial_{s}\Big{(}\frac{1}{s}\partial_{s}\phi\Big{)}-\frac{2}{s}\partial_{s }\phi\cdot\frac{1}{s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s}\psi\Big{)} \Big{\}}ds\Big{)},\end{split} \tag{3.17}\]
\[\Big{\{}(A\cdot x)^{2}\frac{1}{r}\partial_{r}-(A\cdot A)\{2+r \partial_{r}\}\Big{\}}\frac{1}{r}\partial_{r}\Delta\{\psi_{t}-\nu\Delta\psi\}\] \[= \Big{\{}(A\cdot x)^{2}\frac{1}{r}\partial_{r}-(A\cdot A)\{2+r \partial_{r}\}\Big{\}}\frac{1}{r}\partial_{r}\Big{(}(A\cdot x)\cdot\] \[\int_{0}^{r}s\Big{\{}\frac{2}{s}\partial_{s}\phi\cdot\frac{1}{s} \partial_{s}\Big{(}\frac{1}{s}\partial_{s}\phi\Big{)}+\frac{2}{s}\partial_{s} \psi\cdot\frac{1}{s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s}\Delta\psi\Big{)} \Big{\}}ds\Big{)}, \tag{3.18}\]
where we have used the following facts
\[(A\times\nabla)(A\cdot x)=A\times A=0, \tag{3.19}\] \[(A\times\nabla)\cdot(A\times\nabla)\varphi(r)\] \[= \Big{\{}(A\cdot x)^{2}\frac{1}{r}\partial_{r}-(A\cdot A)\{2+r\partial_{r}\}\Big{\}}\frac{1}{r}\partial_{r}\varphi(r) \tag{3.20}\]
for any radial function \(\varphi(r)\). The equations (3.17) (3.18) imply that
\[\phi_{t}-\nu\Delta\phi\] \[= (A\cdot x)\int_{0}^{r}s\Big{\{}\frac{2}{s}\partial_{s}\psi\cdot\frac{1}{s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s}\phi\Big{)}-\frac{2}{s}\partial_{s}\phi\cdot\frac{1}{s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s}\psi\Big{)}\Big{\}}ds, \tag{3.21}\] \[\Delta\{\psi_{t}-\nu\Delta\psi\}\] \[= (A\cdot x)\int_{0}^{r}s\Big{\{}\frac{2}{s}\partial_{s}\phi\cdot\frac{1}{s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s}\phi\Big{)}+\frac{2}{s}\partial_{s}\psi\cdot\frac{1}{s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s}\Delta\psi\Big{)}\Big{\}}ds. \tag{3.22}\]
Let us select an orthogonal transformation \(\rho\) as follows
\[y=\rho x=x\left(\begin{array}{ccc}0&0&1\\ 1&0&0\\ 0&1&0\end{array}\right)=(x_{2},x_{3},x_{1}). \tag{3.23}\]
Then \(r^{2}=y\cdot y=\rho x\cdot\rho x=x\cdot x\).
Applying the orthogonal transformation (3.23) to the equations (3.21) (3.22), we obtain
\[\phi_{t}-\nu\Delta\phi\] \[= (A\cdot y)\int_{0}^{r}s\Big{\{}\frac{2}{s}\partial_{s}\psi\cdot\frac{1}{s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s}\phi\Big{)}-\frac{2}{s}\partial_{s}\phi\cdot\frac{1}{s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s}\psi\Big{)}\Big{\}}ds, \tag{3.24}\] \[\Delta\{\psi_{t}-\nu\Delta\psi\}\] \[= (A\cdot y)\int_{0}^{r}s\Big{\{}\frac{2}{s}\partial_{s}\phi\cdot\frac{1}{s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s}\phi\Big{)}+\frac{2}{s}\partial_{s}\psi\cdot\frac{1}{s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s}\Delta\psi\Big{)}\Big{\}}ds, \tag{3.25}\]
where \(y=\rho x\). Employing the equations (3.21) (3.24), we get
\[(A\cdot(\rho x-x))\int_{0}^{r}s\Big{\{}\frac{2}{s}\partial_{s}\psi\cdot\frac{1} {s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s}\phi\Big{)}-\frac{2}{s} \partial_{s}\phi\cdot\frac{1}{s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s} \psi\Big{)}\Big{\}}ds=0. \tag{3.26}\]
Similarly using (3.22) (3.25), we derive
\[(A\cdot(\rho x-x))\int_{0}^{r}s\Big{\{}\frac{2}{s}\partial_{s}\phi\cdot\frac{1 }{s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s}\phi\Big{)}+\frac{2}{s} \partial_{s}\psi\cdot\frac{1}{s}\partial_{s}\Big{(}\frac{1}{s}\partial_{s} \Delta\psi\Big{)}\Big{\}}ds=0. \tag{3.27}\]
Given \(r\), since \(x\in\mathbb{S}_{r}^{2}\) is arbitrary and the factor \(A\cdot(\rho x-x)\) does not vanish identically on \(\mathbb{S}_{r}^{2}\), the equations (3.26) and (3.27) imply that
\[\frac{1}{r}\partial_{r}\psi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}-\frac{1}{r}\partial_{r}\phi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}=0, \tag{3.28}\] \[\frac{1}{r}\partial_{r}\phi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}+\frac{1}{r}\partial_{r}\psi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}=0. \tag{3.29}\]
Putting (3.28) into (3.21) and putting (3.29) into (3.22), we derive that
\[\phi_{t}-\nu\Delta\phi=0, \tag{3.30}\]
\[\Delta\{\psi_{t}-\nu\Delta\psi\}=0. \tag{3.31}\]
Let \((\phi,\psi)=\big{(}\Phi(r),\Psi(r)\big{)}\) be a solution of (3.28) (3.29). Then
\[u=(A\times\nabla)\Phi+\{(A\times\nabla)\times\nabla\}\Psi\]
satisfies the static three dimensional Euler equations
\[(u\cdot\nabla)u+\nabla P=0,\] \[\nabla\cdot u=0\]
by all calculations of (3.1)-(3.29).
Result (I) is proved.
Provided
\[(\partial_{r}\phi,\partial_{r}\psi)=(0,0), \tag{3.32}\]
then equations (3.28)-(3.31) are satisfied. Here the velocity \(u=0\). This is trivial.
Provided \((\partial_{r}\phi,\partial_{r}\psi)\neq(0,0)\), then the equation (3.28) means that
\[\phi=h(t)\psi, \tag{3.33}\]
where \(h(t)\) is any function of \(t\). Putting (3.33) into (3.29), we have
\[h^{2}(t)\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}+\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}=0. \tag{3.34}\]
Provided \(\partial_{r}\phi=0\) and \(\partial_{r}\psi\neq 0\), the equation (3.28) is satisfied. The equation (3.29) means that
\[\Big{(}\partial_{r}\phi,\partial_{r}(\frac{1}{r}\partial_{r}\Delta\psi)\Big{)} =(0,0). \tag{3.35}\]
Provided \(\partial_{r}\phi\neq 0\) and \(\partial_{r}\psi=0\), the equation (3.28) is satisfied. The equation (3.29) means that
\[\Big{(}\partial_{r}(\frac{1}{r}\partial_{r}\phi),\partial_{r}\psi\Big{)}=(0,0). \tag{3.36}\]
There exists a unique solution
\[\phi(t,r)=e^{-\nu\Delta t}\phi_{0}(r),\] \[\psi(t,r)=e^{-\nu\Delta t}\psi_{0}(r), \tag{3.37}\]
of equations (3.30) (3.31) with initial data
\[\phi(t,r)|_{t=0}=\phi_{0}(r),\] \[\psi(t,r)|_{t=0}=\psi_{0}(r).\]
**Lemma 3.1**.: _Let \(V=f(t)r^{2}+g(t)\), where \(f(t)\) and \(g(t)\) are any functions of \(t\). Then \(V\) is a solution of the following equation_
\[\partial_{r}\Big{(}\frac{1}{r}\partial_{r}V\Big{)}=0. \tag{3.38}\]
_Moreover we have_
\[\tilde{V}=e^{-\nu\Delta t}\{f(0)r^{2}+g(0)\}=f(0)r^{2}+6\nu f(0)t+g(0),\]
_and \(\tilde{V}\) is also a solution of the equation (3.38)._
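A direct check of Lemma 3.1: since \(\frac{1}{r}\partial_{r}(fr^{2}+g)=2f\) does not depend on \(r\), the function \(V\) satisfies (3.38); moreover, \(\Delta r^{2}=6\) in \(\mathbb{R}^{3}\), so

\[\partial_{t}\tilde{V}-\nu\Delta\tilde{V}=6\nu f(0)-\nu f(0)\Delta r^{2}=0,\]

and \(\tilde{V}\) is again of the form \(fr^{2}+g\) with \(f=f(0)\) and \(g=6\nu f(0)t+g(0)\), hence it also satisfies (3.38).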
Take \((\phi_{0},\psi_{0})=\big{(}\Phi(r),\Psi(r)\big{)}\). Then
\[\begin{split}\phi(t,r)&=e^{-\nu\Delta t}\phi_{0}(r)=e ^{-\nu\Delta t}\Phi(r),\\ \psi(t,r)&=e^{-\nu\Delta t}\psi_{0}(r)=e^{-\nu\Delta t }\Psi(r).\end{split} \tag{3.39}\]
Since \((\phi,\psi)=(\phi_{0},\psi_{0})\) satisfies equations (3.28) (3.29), this vector satisfies equation (3.32) or equations (3.33) (3.34) with \(h(t)=h(0)\) or (3.35) or (3.36). By using Lemma 3.1 and the following fact
\[(\lambda-\Delta)e^{-\nu\Delta t}=e^{-\nu\Delta t}(\lambda-\Delta),\ \ \lambda\ \text{ is constant},\]
we derive that \(\big{(}\phi(t,r),\psi(t,r)\big{)}\) defined by (3.39) also satisfies the respective equation among (3.32)-(3.36). Thus \(\big{(}\phi(t,r),\psi(t,r)\big{)}\) satisfies equations (3.28) (3.29). This means that \(\big{(}\phi(t,r),\psi(t,r)\big{)}\) is a solution of the equations (3.17) (3.18).
Result (II) is proved.
On the other hand, let \((\phi_{0},\psi_{0})\neq\big{(}\Phi(r),\Psi(r)\big{)}\) and let \((\phi_{0},\psi_{0})\) be regular enough if necessary. By the theory of local well-posedness of the Navier-Stokes equations, there exist \(T_{max}>0\) and a unique solution \(\big{(}\phi(t,x),\psi(t,x)\big{)}\) of equations (3.6) (3.7) with \(A=B\) such that
\[\phi(t,x)\in C([0,T_{max});H^{m}),\] \[\psi(t,x)\in C([0,T_{max});H^{m+1}),\]
where \(m>0\) is large enough.
Assume that there exists a sequence \(t_{n}>0\) such that \(t_{n}\to 0\) as \(n\to\infty\) and \(\big{(}\phi(t_{n},x),\psi(t_{n},x)\big{)}\) is a radial function of \(x\). Then \(\big{(}\phi(t_{n},x),\psi(t_{n},x)\big{)}\) satisfies equations (3.28) (3.29) for any \(t_{n}\). There exists a unique solution
\[\begin{split}\phi^{\prime}(t,x)&=\phi^{\prime}(t,r) =e^{-\nu\Delta t}\phi(t_{n},x)=e^{-\nu\Delta t}\phi(t_{n},r),\ \ \forall t\geq 0,\\ \psi^{\prime}(t,x)&=\psi^{\prime}(t,r)=e^{-\nu \Delta t}\psi(t_{n},x)=e^{-\nu\Delta t}\psi(t_{n},r),\ \ \forall t\geq 0\end{split}\]
of equations (3.6) (3.7) with initial data \(\big{(}\phi(t_{n},x),\psi(t_{n},x)\big{)}\). This solution \((\phi^{\prime},\psi^{\prime})\) is global and radial. Thanks to the uniqueness of the solution \(\big{(}\phi(t,x),\psi(t,x)\big{)}\), we have
\[\begin{split}\phi(t_{n}+t,x)&=\phi(t_{n}+t,r)=e^{- \nu\Delta t}\phi(t_{n},x)=e^{-\nu\Delta t}\phi(t_{n},r),\ \ \forall t\geq 0,\\ \psi(t_{n}+t,x)&=\psi(t_{n}+t,r)=e^{-\nu\Delta t} \psi(t_{n},x)=e^{-\nu\Delta t}\psi(t_{n},r),\ \ \forall t\geq 0.\end{split} \tag{3.40}\]
Therefore this solution \(\big{(}\phi(t,x),\psi(t,x)\big{)}\) is global with respect to \(t>0\) and radial with respect to \(x\) for any \(t\geq t_{n}\).
Letting \(n\to\infty\) and \(t_{n}\to 0\), we derive that \((\phi,\psi)=(\phi_{0},\psi_{0})\) satisfies equations (3.28) (3.29), since \(\big{(}\phi(t,x),\psi(t,x)\big{)}\) is continuous with respect to \(t\geq 0\). This is a contradiction.
In summary, provided there exists \(t_{0}\in(0,T_{max})\) such that the solution \(\big{(}\phi(t,x),\psi(t,x)\big{)}\) is radial with respect to \(x\) at \(t=t_{0}\), then \(T_{max}=\infty\) and this solution is radial with respect to \(x\) for any \(t\geq t_{0}\). Let \(T_{r}=\min\{t_{0}\}\). Thus this solution is radial with respect to \(x\) for any \(t\geq T_{r}\), but this solution is not radial with respect to \(x\) for any \(t\in(0,T_{r})\). Otherwise this solution cannot be radial with respect to \(x\) for any \(t\in(0,T_{max})\).
Theorem 1.4 is proved.
Proof of Corollary 1.5.: Now we consider the following equations
\[\phi=\lambda\psi, \tag{3.41}\] \[\lambda^{2}\psi+\Delta\psi=0, \tag{3.42}\]
where \(\lambda\) is any real constant. Equations (3.41) (3.42) are a special case of equations (3.33) (3.34).
Equation (3.42) can be written
\[\lambda^{2}(r\psi)+\partial_{r}^{2}(r\psi)=0. \tag{3.43}\]
There exists a solution
\[\psi=\Psi_{\lambda\alpha\beta}(r)=\alpha\frac{1}{r}\sin(\lambda r)+\beta\frac{ 1}{r}\cos(\lambda r)\]
of equation (3.43) for any real constants \(\lambda,\alpha,\beta\).
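Indeed, writing \(r\psi=\alpha\sin(\lambda r)+\beta\cos(\lambda r)\) one has

\[\partial_{r}^{2}(r\psi)=-\lambda^{2}\big{(}\alpha\sin(\lambda r)+\beta\cos(\lambda r)\big{)}=-\lambda^{2}(r\psi),\]

so (3.43) holds; equivalently, using \(\Delta\psi=\frac{1}{r}\partial_{r}^{2}(r\psi)\) for radial functions in \(\mathbb{R}^{3}\), \(\Delta\Psi_{\lambda\alpha\beta}=-\lambda^{2}\Psi_{\lambda\alpha\beta}\).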
Let \(\Phi_{\lambda\alpha\beta}(r)=\lambda\Psi_{\lambda\alpha\beta}(r)\). Then \((\phi,\psi)=\Big{(}\Phi_{\lambda\alpha\beta}(r),\Psi_{\lambda\alpha\beta}(r) \Big{)}\) satisfies equations (3.28) (3.29).
By Theorem 1.4, this corollary is proved.
Proof of Theorem 1.7.: Now we consider the case in which the vectors \(A\) and \(B\) are perpendicular, \(A\bot B\). The equations (3.15) (3.16) are as follows
\[\begin{split}&(B\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\Big{)}\{\phi_{t}-\nu\Delta\phi\}\\ =&\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(B\cdot B)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_ {r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-(B\cdot B)\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{r }\phi\cdot\frac{1}{r}\partial_{r}\psi\Big{\}}\\ &+(B\cdot B)\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{ r}\phi\cdot\Delta\psi\Big{\}}\\ &-(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{2}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}\\ &+(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{2}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}},\end{split} \tag{3.44}\]
\[(B\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \Big{)}\Delta\{\psi_{t}-\nu\Delta\psi\}\] \[= -(A\cdot A)\frac{4}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-\{(A\cdot A)r^{2}-(A\cdot x)^{2}\}\frac{2}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\Big{\}}\] \[+\{(A\times B)\cdot x\}\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{ r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi \Big{)}\Big{\}}\] \[-\{(A\times B)\cdot x\}\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{ r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi \Big{)}\Big{\}} \tag{3.45}\] \[+\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r }\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\] \[-\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\Big{\{}\frac{2}{r }\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}\] \[-\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r }\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Delta\psi\Big{\}}\] \[+(B\cdot B)\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r }\Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}\] \[+(B\cdot x)^{2}\frac{2}{r}\partial_{r}\Big{\{}\frac{1}{r}\partial _{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Delta\psi \Big{)}\Big{\}}.\]
Choose the orthogonal transformation \(\rho_{b}\) whose rotation axis is the vector \(B\), such that
\[B\cdot(\rho_{b}x)=(\rho_{b}^{t}B)\cdot x=B\cdot x, \tag{3.46}\]
where \(\rho_{b}^{t}\) is the adjoint operator of \(\rho_{b}\). For example
\[y=\rho_{b}x=xM_{b}\left(\begin{array}{ccc}0&1&0\\ -1&0&0\\ 0&0&1\end{array}\right)M_{b}^{t}, \tag{3.47}\]
\[M_{b}=\left(\begin{array}{ccc}0&-\frac{b_{2}^{2}+b_{3}^{2}}{|B|^{2}}&\frac{ b_{1}}{|B|}\\ \frac{b_{3}}{|B|}&\frac{b_{1}b_{2}}{|B|^{2}}&\frac{b_{2}}{|B|}\\ -\frac{b_{2}}{|B|}&\frac{b_{1}b_{2}}{|B|^{2}}&\frac{b_{3}}{|B|}\end{array} \right),\ \ M_{b}^{t}=\left(\begin{array}{ccc}0&\frac{b_{3}}{|B|}&-\frac{b_{2}}{|B|}\\ -\frac{b_{2}^{2}+b_{3}^{2}}{|B|^{2}}&\frac{b_{1}b_{2}}{|B|^{2}}&\frac{b_{1}b_{ 3}}{|B|^{2}}\\ \frac{b_{1}}{|B|}&\frac{b_{2}}{|B|}&\frac{b_{3}}{|B|}\end{array}\right). \tag{3.48}\]
Applying the orthogonal transformation \(y=\rho_{b}x\) to the equations (3.44) (3.45), we obtain
\[\begin{split}&(B\cdot y)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\Big{)}\{\phi_{t}-\nu\Delta\phi\}\\ =&\{(A\times B)\cdot y\}\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(B\cdot B)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-(B\cdot B)\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{ r}\phi\cdot\frac{1}{r}\partial_{r}\psi\Big{\}}\\ &+(B\cdot B)\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{ r}\phi\cdot\Delta\psi\Big{\}}\\ &-(B\cdot y)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{2}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}\\ &+(B\cdot y)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{2}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}},\\ &(B\cdot y)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \Big{)}\Delta\{\psi_{t}-\nu\Delta\psi\}\\ =&-(A\cdot A)\frac{4}{r}\partial_{r}\phi\cdot\frac{1 }{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-\{(A\cdot A)r^{2}-(A\cdot y)^{2}\}\frac{2}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\Big{\}}\\ &+\{(A\times B)\cdot y\}\frac{2}{r}\partial_{r}\psi\cdot\frac{1 }{r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\\ &-\{(A\times B)\cdot y\}\frac{2}{r}\partial_{r}\phi\cdot\frac{1 }{r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}\\ &+\{(A\times B)\cdot y\}\frac{1}{r}\partial_{r}\Big{\{}\frac{1} {r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}\\ &-\{(A\times B)\cdot y\}\frac{1}{r}\partial_{r}\Big{\{}\frac{1} {r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Delta\psi\Big{\}}\\ &+(B\cdot B)\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{ r}\Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}\\ &+(B\cdot y)^{2}\frac{2}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \Delta\psi\Big{)}\Big{\}}.\end{split} \tag{3.49}\]
Putting (3.44) (3.46) (3.49) together, we get
\[\{(A\times B)\cdot(x-\rho_{b}x)\}\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\phi\Big{)}=0. \tag{3.51}\]
Given \(r\), since \(x\in\mathbb{S}^{2}_{r}\) is arbitrary, we have
\[\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \phi\Big{)}=0. \tag{3.52}\]
This equation (3.52) implies that
\[\phi=f_{2}r^{2}+f_{0}, \tag{3.53}\]
where \(f_{0}\) and \(f_{2}\) are any functions of \(t\).
Provided \(f_{2}=0\), putting (3.53) into (3.44), the equation (3.44) is satisfied. Putting (3.53) into (3.45), we obtain that
\[\begin{split}&(B\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\Big{)}\Delta\{\psi_{t}-\nu\Delta\psi\}\\ =&(B\cdot B)\frac{2}{r}\partial_{r}\psi\cdot\frac{1 }{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}\\ &+(B\cdot x)^{2}\frac{2}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \Delta\psi\Big{)}\Big{\}}.\end{split} \tag{3.54}\]
Applying the orthogonal transformation \(y=\rho x\) defined by (3.23) to the equation (3.54), we have
\[\begin{split}&(B\cdot y)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\Big{)}\Delta\{\psi_{t}-\nu\Delta\psi\}\\ =&(B\cdot B)\frac{2}{r}\partial_{r}\psi\cdot\frac{ 1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}\\ &+(B\cdot y)^{2}\frac{2}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \Delta\psi\Big{)}\Big{\}}.\end{split} \tag{3.55}\]
Taking the difference of (3.54) and (3.55) and cancelling the factor \(B\cdot(x-\rho x)\), we derive
\[\begin{split}&\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\Big{)}\Delta\{\psi_{t}-\nu\Delta\psi\}\\ =&\{B\cdot(x+\rho x)\}\frac{2}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\Delta\psi\Big{)}\Big{\}}.\end{split} \tag{3.56}\]
Firstly, (3.56) is satisfied provided \(x\in\mathbb{R}^{3}-\{x|\rho x=x\}\). Finally, for \(x\in\{x|\rho x=x\}\), selecting \(x_{n}\in\mathbb{R}^{3}-\{x|\rho x=x\}\) such that \(x_{n}\to x\) as \(n\to\infty\), we can prove that (3.56) is also satisfied by letting \(n\to\infty\).
Let us select another orthogonal transformation \(O_{r}\) as follows
\[z=O_{r}x=x\left(\begin{array}{ccc}0&1&0\\ 0&0&1\\ 1&0&0\end{array}\right)=(x_{3},x_{1},x_{2}). \tag{3.57}\]
Applying the orthogonal transformation \(z=O_{r}x\) to the equation (3.54), by the same arguments as in the proof of (3.56), we have
\[\begin{split}&\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\Big{)}\Delta\{\psi_{t}-\nu\Delta\psi\}\\ =&\{B\cdot(x+O_{r}x)\}\frac{2}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \Delta\psi\Big{)}\Big{\}}.\end{split} \tag{3.58}\]
Taking the difference of (3.58) and (3.56), we derive
\[\{B\cdot(\rho x-O_{r}x)\}\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}\Big{\}} =0. \tag{3.59}\]
Given \(r\), since \(x\in\mathbb{S}_{r}^{2}\) is arbitrary, we have
\[\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}\Big{\}}=0. \tag{3.60}\]
Inserting (3.60) into (3.58), we obtain
\[\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Big{)}\Delta\{\psi_{t}-\nu\Delta \psi\}=0. \tag{3.61}\]
Putting (3.60) (3.61) into (3.54), we get
\[\partial_{r}\psi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}=0. \tag{3.62}\]
If \(\partial_{r}\psi=0\), then equations (3.44) and (3.45) are satisfied by the pair \((\phi,\psi)\) with \((\partial_{r}\phi,\partial_{r}\psi)=(0,0)\). In this case the corresponding velocity is \(u=0\), which is trivial.
Now, provided \(\partial_{r}\psi\neq 0\), equation (3.62) implies that \(\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\Delta\psi\Big{)}=0\) and
\[\Delta\psi=20g_{4}r^{2}+6g_{2},\] \[\psi=g_{4}r^{4}+g_{2}r^{2}+g_{0}, \tag{3.63}\]
where \(g_{0}\), \(g_{2}\) and \(g_{4}\) are any functions of \(t\).
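As an illustrative check, the quartic profile (3.63) indeed reproduces \(\Delta\psi=20g_{4}r^{2}+6g_{2}\) and satisfies (3.60) and (3.62); here \(g_{0},g_{2},g_{4}\) are SymPy symbols standing in for the arbitrary functions of \(t\).

```python
# Illustrative SymPy check of (3.63): Delta psi = 20*g4*r**2 + 6*g2, and the
# constraints (3.60), (3.62) hold identically in r.
import sympy as sp

r = sp.symbols('r', positive=True)
g0, g2, g4 = sp.symbols('g0 g2 g4')      # stand-ins for arbitrary functions of t
psi = g4*r**4 + g2*r**2 + g0

lap = sp.diff(psi, r, 2) + 2*sp.diff(psi, r)/r       # radial Laplacian of psi
print(sp.expand(lap))                                 # 20*g4*r**2 + 6*g2

inner = sp.diff(sp.diff(lap, r)/r, r)/r               # (1/r) d_r( (1/r) d_r Delta psi )
print(sp.diff((sp.diff(psi, r)/r)*inner, r))          # (3.60): 0
print(sp.diff(psi, r)*sp.diff(sp.diff(lap, r)/r, r))  # (3.62): 0
```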
Substituting (3.53) and (3.63) into (3.44) and (3.45), the equations (3.44) and (3.45) are satisfied provided \(f_{2}=0\).
Provided \(f_{2}\neq 0\), putting (3.45) (3.46) (3.50) (3.53) together, we derive
\[\{(A\times B)\cdot(x-\rho_{b}x)\}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Delta\psi\Big{\}}=0. \tag{3.64}\]
Given \(r\), since \(x\in\mathbb{S}_{r}^{2}\) is arbitrary, we have
\[\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Delta\psi\Big{\}}=0. \tag{3.65}\]
We can also derive (3.63) from the equation (3.65).
Substituting (3.53) and (3.63) into (3.44) and (3.45), the equations (3.44) and (3.45) are satisfied provided \(f_{2}g_{4}=0\).
The velocity \(u\) corresponding to \((\phi,\psi)\) defined by (3.53) and (3.63) is as follows
\[u(t,x)= (A\times\nabla)\phi+\{(B\times\nabla)\times\nabla\}\psi\] \[= 2f_{2}(A\times x)+8g_{4}(B\cdot x)x-B(16g_{4}r^{2}+4g_{2}). \tag{3.66}\]
It is obvious that
\[\int_{\mathbb{R}^{3}}|u(t,x)|^{2}dx=\infty,\ \ \forall t\geq 0.\]
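A hedged SymPy sketch of both points, under illustrative assumptions (the sample vectors \(A=(1,0,0)\), \(B=(0,1,0)\) and the coefficient values \(f_{2}=g_{4}=g_{2}=1\) are chosen only for the example): it first recovers formula (3.66) from \(\phi,\psi\) in (3.53) and (3.63), and then shows that \(|u|^{2}\) grows along the \(x_{3}\)-axis, so the kinetic energy over \(\mathbb{R}^{3}\) cannot be finite unless \(u\equiv 0\).

```python
# Hedged sketch with sample data: (i) u = (A x nabla) phi + ((B x nabla) x nabla) psi
# with phi, psi from (3.53), (3.63) reproduces formula (3.66); (ii) with
# f2 = g4 = g2 = 1 the speed grows along the x3-axis, so |u|^2 is not integrable.
import sympy as sp

x1, x2, x3, s = sp.symbols('x1 x2 x3 s', real=True)
f0, f2, g0, g2, g4 = sp.symbols('f0 f2 g0 g2 g4')
X = [x1, x2, x3]
A = sp.Matrix([1, 0, 0]); B = sp.Matrix([0, 1, 0])    # sample vectors only
x = sp.Matrix([x1, x2, x3]); r2 = x.dot(x)

grad = lambda h: sp.Matrix([sp.diff(h, v) for v in X])
lap = lambda h: sum(sp.diff(h, v, 2) for v in X)

phi = f2*r2 + f0
psi = g4*r2**2 + g2*r2 + g0

u_rep = A.cross(grad(phi)) + (grad(B.dot(grad(psi))) - B*lap(psi))
u_366 = 2*f2*A.cross(x) + 8*g4*B.dot(x)*x - B*(16*g4*r2 + 4*g2)
print((u_rep - u_366).applyfunc(sp.expand))           # zero vector: (3.66) recovered

speed2 = u_366.dot(u_366).subs({f2: 1, g4: 1, g2: 1, x1: 0, x2: 0, x3: s})
print(sp.expand(speed2))                              # 256*s**4 + ... : not integrable
```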
On the other hand, if at least one of (3.53) and (3.63) fails to hold, or if \(f_{2}g_{4}\neq 0\) even though (3.53) and (3.63) hold, then the equations (3.15) and (3.16) cannot be satisfied by any radially symmetric functions \(\phi\) and \(\psi\).
In summary, Theorem 1.7 is proved.
## 4. (1,1)-Symplectic Representation and Radial Symmetry Breaking in \(\mathbb{R}^{3}\)
In this section, we assume that the velocity vector \(u\) admits the following (1,1)-symplectic representation
\[u(t,x)= \{A\times\nabla\}\phi(t,x)+\{B\times\nabla\}\psi(t,x), \tag{4.1}\]
where vectors \(A=(a_{1},a_{2},a_{3})\in\mathbb{R}^{3}-\{0\}\) and \(B=(b_{1},b_{2},b_{3})\in\mathbb{R}^{3}-\{0\}\) are linearly independent.
Let
\[\omega(t,x)= \nabla\times u(t,x)\] \[= -\{(A\times\nabla)\times\nabla\}\phi(t,x)-\{(B\times\nabla) \times\nabla\}\psi(t,x). \tag{4.2}\]
Here vectors \((A\times\nabla)\times\nabla=(A\cdot\nabla)\nabla-A\Delta\) and \((B\times\nabla)\times\nabla=(B\cdot\nabla)\nabla-B\Delta\).
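These operator identities can be checked directly; the sketch below (illustrative only, in SymPy, with a generic smooth \(f\) and symbolic constant vector \(A\)) verifies \(\{(A\times\nabla)\times\nabla\}f=(A\cdot\nabla)\nabla f-A\Delta f\), evaluated here as \(-\nabla\times\{(A\times\nabla)f\}\), consistently with (4.2).

```python
# Illustrative check of (A x nabla) x nabla = (A . nabla) nabla - A*Delta,
# applied to a generic smooth f and computed as -curl((A x nabla) f).
import sympy as sp

x1, x2, x3, a1, a2, a3 = sp.symbols('x1 x2 x3 a1 a2 a3')
X = [x1, x2, x3]
A = sp.Matrix([a1, a2, a3])
f = sp.Function('f')(x1, x2, x3)

grad = lambda h: sp.Matrix([sp.diff(h, v) for v in X])
curl = lambda F: sp.Matrix([sp.diff(F[2], x2) - sp.diff(F[1], x3),
                            sp.diff(F[0], x3) - sp.diff(F[2], x1),
                            sp.diff(F[1], x1) - sp.diff(F[0], x2)])
lap = lambda h: sum(sp.diff(h, v, 2) for v in X)

lhs = -curl(A.cross(grad(f)))                 # ((A x nabla) x nabla) f
rhs = grad(A.dot(grad(f))) - A*lap(f)         # (A . nabla) nabla f - A Delta f
print((lhs - rhs).applyfunc(sp.simplify))     # zero vector
```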
Taking the curl of equation (1.1), we have
\[\omega_{t}-\nu\Delta\omega+(u\cdot\nabla)\omega-(\omega\cdot\nabla)u=0. \tag{4.3}\]
Thanks to the following observations
\[\begin{split}(B\times\nabla)\cdot\omega(t,x)&=\Big{(} \{(A\times\nabla)\times(B\times\nabla)\}\cdot\nabla\Big{)}\phi(t,x)\\ &=(A\times B)\cdot\nabla\Delta\phi(t,x),\end{split} \tag{4.4}\]
\[\begin{split}(A\times\nabla)\cdot\omega(t,x)&=- \Big{(}\{(A\times\nabla)\times(B\times\nabla)\}\cdot\nabla\Big{)}\psi(t,x)\\ &=-(A\times B)\cdot\nabla\Delta\psi(t,x),\end{split} \tag{4.5}\]
taking the scalar product of \(B\times\nabla\) with equation (4.3), we have
\[(A\times B)\cdot\nabla\Delta\{\phi_{t}-\nu\Delta\phi\}+(B\times\nabla)\cdot\{( u\cdot\nabla)\omega-(\omega\cdot\nabla)u\}=0. \tag{4.6}\]
Taking the scalar product of \(A\times\nabla\) with equation (4.3), we derive
\[(A\times B)\cdot\nabla\Delta\{\psi_{t}-\nu\Delta\psi\}-(A\times\nabla)\cdot \{(u\cdot\nabla)\omega-(\omega\cdot\nabla)u\}=0. \tag{4.7}\]
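The projections (4.6) and (4.7) rest on the identities (4.4) and (4.5). A hedged SymPy sketch of (4.4), with generic smooth \(\phi,\psi\), symbolic constant vectors \(A,B\), and \(\omega\) taken from (4.2) via \(\{(C\times\nabla)\times\nabla\}=(C\cdot\nabla)\nabla-C\Delta\); the analogous computation gives (4.5).

```python
# Illustrative check of identity (4.4):
# (B x nabla) . omega = (A x B) . nabla Delta phi, with omega as in (4.2).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3')
X = [x1, x2, x3]
A = sp.Matrix([a1, a2, a3]); B = sp.Matrix([b1, b2, b3])
phi = sp.Function('phi')(x1, x2, x3); psi = sp.Function('psi')(x1, x2, x3)

grad = lambda h: sp.Matrix([sp.diff(h, v) for v in X])
lap = lambda h: sum(sp.diff(h, v, 2) for v in X)

# omega = -((A x nabla) x nabla) phi - ((B x nabla) x nabla) psi:
omega = -(grad(A.dot(grad(phi))) - A*lap(phi)) - (grad(B.dot(grad(psi))) - B*lap(psi))

lhs = sum(B.cross(grad(omega[i]))[i] for i in range(3))   # (B x nabla) . omega
rhs = A.cross(B).dot(grad(lap(phi)))                       # (A x B) . nabla Delta phi
print(sp.expand(lhs - rhs))                                # 0
```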
Now we assume that \(\phi\) and \(\psi\) are radially symmetric functions of the space variable \(x\in\mathbb{R}^{3}\); that is, \(\phi(t,x)=\phi(t,r)\) and \(\psi(t,x)=\psi(t,r)\) with \(r^{2}=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\). Then we have
\[\begin{split} u(t,x)=&(A\times x)\frac{1}{r}\partial _{r}\phi+(B\times x)\frac{1}{r}\partial_{r}\psi,\\ \omega(t,x)=& A\Big{(}\Delta\phi-\frac{1}{r}\partial _{r}\phi\Big{)}-x(A\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_ {r}\phi\Big{)}\\ &+B\Big{(}\Delta\psi-\frac{1}{r}\partial_{r}\psi\Big{)}-x(B \cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}, \end{split} \tag{4.8}\]
\[\begin{split}(u\cdot\nabla)\omega=&\Big{(}\frac{1 }{r}\partial_{r}\phi(A\times x)\cdot\nabla+\frac{1}{r}\partial_{r}\psi(B\times x )\cdot\nabla\Big{)}\\ &\Big{\{}A\Big{(}\Delta\phi-\frac{1}{r}\partial_{r}\phi\Big{)}-x (A\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &+B\Big{(}\Delta\psi-\frac{1}{r}\partial_{r}\psi\Big{)}-x(B \cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)} \Big{\}}\\ =&-(A\times x)(A\cdot x)\frac{1}{r}\partial_{r}\phi \cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(A\times x)(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{ r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &+x\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\phi\cdot\frac{1 }{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-x\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\psi\cdot\frac{1 }{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(B\times x)(A\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1 }{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(B\times x)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1 }{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)},\end{split} \tag{4.9}\]
\[(\omega\cdot\nabla)u= \Big{\{}\Big{(}\Delta\phi-\frac{1}{r}\partial_{r}\phi\Big{)}A\cdot \nabla-(A\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)} x\cdot\nabla\] \[+\Big{(}\Delta\psi-\frac{1}{r}\partial_{r}\psi\Big{)}B\cdot \nabla-(B\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi \Big{)}x\cdot\nabla\Big{\}}\] \[\Big{\{}(A\times x)\frac{1}{r}\partial_{r}\phi+(B\times x)\frac{ 1}{r}\partial_{r}\psi\Big{\}}\] \[= (A\times x)(A\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\cdot\Big{(}\Delta\phi-\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+(B\times x)(A\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\cdot\Big{(}\Delta\phi-\frac{1}{r}\partial_{r}\phi \Big{)}\] \[-(A\times B)\frac{1}{r}\partial_{r}\psi\cdot\Big{(}\Delta\phi- \frac{1}{r}\partial_{r}\phi\Big{)}\] \[-(A\times x)(A\cdot x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)} \tag{4.11}\] \[-(B\times x)(A\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+(A\times x)(B\cdot x)\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\cdot\Big{(}\Delta\psi-\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-(A\times x)(B\cdot x)\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-(B\times x)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\]
\[= (A\times x)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-(A\times x)(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+(A\times x)(B\cdot x)\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+(A\times B)\partial_{r}\phi\cdot\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\] \[-(A\times B)\partial_{r}\psi\cdot\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\] \[+(B\times x)(A\cdot x)\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-(B\times x)(A\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+(B\times x)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)},\]
\[(u\cdot\nabla)\omega-(\omega\cdot\nabla)u\] \[= -2(A\times x)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-2(A\times x)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+x\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\phi\cdot\frac{1} {r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)} \tag{4.12}\] \[-(A\times B)\partial_{r}\phi\cdot\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\] \[+(A\times B)\partial_{r}\psi\cdot\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\] \[-x\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\psi\cdot\frac{1} {r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-2(B\times x)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1} {r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-2(B\times x)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1} {r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)},\]
\[(B\times\nabla)\cdot\{(u\cdot\nabla)\omega-(\omega\cdot\nabla)u\}\] \[= (B\times\nabla)\cdot\Big{\{}-2(A\times x)(A\cdot x)\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi \Big{)}\] \[-2(A\times x)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+x\{(A\times B)\cdot x)\}\frac{1}{r}\partial_{r}\phi\cdot\frac{1 }{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-(A\times B)\partial_{r}\phi\cdot\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\] \[+(A\times B)\partial_{r}\psi\cdot\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\] \[-x\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\psi\cdot\frac{1 }{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-2(B\times x)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1 }{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-2(B\times x)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{ r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[= -4(A\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+2\{(A\times B)\times A\}\cdot x\frac{1}{r}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-2\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(A\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[-4(A\cdot B)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-2\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(B\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[+\{(A\times B)\times B\}\cdot x\frac{1}{r}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-\{(A\times B)\times B\}\cdot x\frac{1}{r}\partial_{r}\Big{\{} \partial_{r}\phi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+\{(A\times B)\times B\}\cdot x\frac{1}{r}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-4|B|^{2}(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-2\{|B|^{2}r^{2}-(B\cdot x)^{2}\}(B\cdot x)\frac{1}{r}\partial_{r} \Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1} {r}\partial_{r}\psi\Big{\}}\] \[-4|B|^{2}(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-2\{|B|^{2}r^{2}-(B\cdot x)^{2}\}(A\cdot x)\frac{1}{r}\partial_{r} \Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1} {r}\partial_{r}\psi\Big{)}\]
\[(A\times\nabla)\cdot\{(u\cdot\nabla)\omega-(\omega\cdot\nabla)u\}\] \[= (A\times\nabla)\cdot\Big{\{}-2(A\times x)(A\cdot x)\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi \Big{)}\] \[-2(A\times x)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+x\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{ r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-(A\times B)\partial_{r}\phi\cdot\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\] \[+(A\times B)\partial_{r}\psi\cdot\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\] \[-x\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\psi\cdot\frac{1} {r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-2(B\times x)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1} {r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-2(B\times x)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1} {r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[= -4|A|^{2}(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)} \tag{4.14}\] \[-2\{|A|^{2}r^{2}-(A\cdot x)^{2}\}(A\cdot x)\frac{1}{r}\partial_{r }\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[-4|A|^{2}(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-2\{|A|^{2}r^{2}-(A\cdot x)^{2}\}(B\cdot x)\frac{1}{r}\partial_{r }\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[-\{(A\times B)\times A\}\cdot x\frac{1}{r}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-\{(A\times B)\times A\}\cdot x\frac{1}{r}\partial_{r}\Big{\{} \partial_{r}\phi\cdot\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+\{(A\times B)\times A\}\cdot x\frac{1}{r}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[-\{(A\times B)\times A\}\cdot x\frac{1}{r}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-4(A\cdot B)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-2\{(A\times B)\times B\}\cdot x\frac{1}{r}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-2\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(B\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[-4(A\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-2\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(A\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}},\]
where
\[\Delta f=\frac{3}{r}\partial_{r}f+r\partial_{r}(\frac{1}{r}\partial_{r}f).\]
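The radial identity quoted here is the usual Laplacian of a radial function, \(\Delta f=\partial_{r}^{2}f+\frac{2}{r}\partial_{r}f\), rewritten; a one-line SymPy check (illustrative only):

```python
# Quick check: (3/r) d_r f + r d_r((1/r) d_r f) equals f'' + (2/r) f' for f = f(r).
import sympy as sp

r = sp.symbols('r', positive=True)
f = sp.Function('f')(r)

claimed = 3*sp.diff(f, r)/r + r*sp.diff(sp.diff(f, r)/r, r)
usual = sp.diff(f, r, 2) + 2*sp.diff(f, r)/r
print(sp.simplify(claimed - usual))        # 0
```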
Employing (4.6) and (4.13), we obtain that
\[\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\Delta\{\phi_{t}-\nu \Delta\phi\}\] \[-4(A\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+2\{(A\cdot A)(B\cdot x)-(A\cdot B)(A\cdot x)\}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi \Big{)}\] \[-2\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(A\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[-4(A\cdot B)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-2\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(B\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[+\{(A\cdot B)(B\cdot x)-(B\cdot B)(A\cdot x)\}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\] \[-\{(A\cdot B)(B\cdot x)-(B\cdot B)(A\cdot x)\}\frac{1}{r} \partial_{r}\Big{\{}\partial_{r}\phi\cdot\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\Big{\}}\] \[+\{(A\cdot B)(B\cdot x)-(B\cdot B)(A\cdot x)\}\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\] \[-4|B|^{2}(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-2\{|B|^{2}r^{2}-(B\cdot x)^{2}\}(B\cdot x)\frac{1}{r}\partial_{ r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(} \frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[-4|B|^{2}(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-2\{|B|^{2}r^{2}-(B\cdot x)^{2}\}(A\cdot x)\frac{1}{r}\partial_{ r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(} \frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\]
\[= \{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\Delta\{\phi_{t}-\nu \Delta\phi\}\] \[-6(A\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-2(A\cdot B)(A\cdot x)r\partial_{r}\Big{\{}\frac{1}{r}\partial_{r }\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)} \Big{\}}\] \[-2(B\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-(B\cdot B)(A\cdot x)\partial_{r}\Big{\{}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[-2(B\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-(B\cdot B)(A\cdot x)\partial_{r}\Big{\{}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}} \tag{4.15}\] \[+2(A\cdot A)(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-(A\cdot B)(B\cdot x)\partial_{r}\Big{\{}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[-4(B\cdot B)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-2\{(B\cdot B)r^{2}-(B\cdot x)^{2}\}(B\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[+2(A\cdot x)^{2}(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{}\frac{1} {r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\] \[+2(A\cdot x)(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1} {r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\] \[+2(A\cdot x)(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1} {r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}\] \[= 0.\]
Applying (4.7) and (4.14), we get that
\[\{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\Delta\{\psi_{t}-\nu \Delta\psi\}\] \[+4|A|^{2}(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+2\{|A|^{2}r^{2}-(A\cdot x)^{2}\}(A\cdot x)\frac{1}{r}\partial_{r }\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[+4|A|^{2}(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+2\{|A|^{2}r^{2}-(A\cdot x)^{2}\}(B\cdot x)\frac{1}{r}\partial_{r }\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[+\{(A\cdot A)(B\cdot x)-(A\cdot B)(A\cdot x)\}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\] \[+\{(A\cdot A)(B\cdot x)-(A\cdot B)(A\cdot x)\}\frac{1}{r} \partial_{r}\Big{\{}\partial_{r}\phi\cdot\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\Big{\}}\] \[+4(A\cdot B)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+2\{(A\cdot B)(B\cdot x)-(B\cdot B)(A\cdot x)\}\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\] \[+2\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(B\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[+4(A\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+2\{(A\cdot B)r^{2}-(A\cdot x)(B\cdot x)\}(A\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\]
\[= \{(A\times B)\cdot x\}\frac{1}{r}\partial_{r}\Delta\{\psi_{t}-\nu \Delta\psi\}\] \[+4(A\cdot A)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[+2\{(A\cdot A)r^{2}-(A\cdot x)^{2}\}(A\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[+(A\cdot B)(A\cdot x)\partial_{r}\Big{\{}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[+(A\cdot B)(A\cdot x)\partial_{r}\Big{\{}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[-2(B\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+2(A\cdot A)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)} \tag{4.16}\] \[+(A\cdot A)(B\cdot x)\partial_{r}\Big{\{}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[+2(A\cdot A)(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+(A\cdot A)(B\cdot x)\partial_{r}\Big{\{}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[+6(A\cdot B)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[+2(A\cdot B)(B\cdot x)r\partial_{r}\Big{\{}\frac{1}{r}\partial_{ r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[-2(A\cdot x)^{2}(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{}\frac{1} {r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\] \[-2(A\cdot x)^{2}(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{}\frac{1} {r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}\] \[-2(A\cdot x)(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1} {r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}\] \[= 0.\]
Proof of Theorem 1.8: Since the vectors \(A\) and \(B\) are linearly independent, they span a plane \(\Gamma_{AB}=\mathrm{span}\{A,B\}\). We select the reflection transformation \(\rho_{ab}\) whose invariant plane is \(\Gamma_{AB}\), such that
\[\begin{split}& r^{2}=x\cdot x=\rho_{ab}x\cdot\rho_{ab}x,\\ & A\cdot\rho_{ab}x=A\cdot x,\\ & B\cdot\rho_{ab}x=B\cdot x.\end{split} \tag{4.17}\]
Here \(|A|^{2}=A\cdot A\), \(B^{\prime}=B-\frac{A\cdot B}{A\cdot A}A=(b^{\prime}_{1},b^{\prime}_{2},b^{ \prime}_{3})\),
\[\begin{split}& M_{ab}=\left(\begin{array}{ccc}\frac{a_{1}}{|A|}&\frac{b^{\prime}_{1}}{|B^{\prime}|}&\frac{m_{1}}{|A\times B|}\\ \frac{a_{2}}{|A|}&\frac{b^{\prime}_{2}}{|B^{\prime}|}&\frac{m_{2}}{|A\times B|}\\ \frac{a_{3}}{|A|}&\frac{b^{\prime}_{3}}{|B^{\prime}|}&\frac{m_{3}}{|A\times B|}\end{array}\right),\\ & m_{1}=a_{2}b_{3}-a_{3}b_{2},\ \ m_{2}=a_{3}b_{1}-a_{1}b_{3},\ \ m_{3}=a_{1}b_{2}-a_{2}b_{1},\end{split} \tag{4.18}\]
\[y=\rho_{ab}x=xM_{ab}\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&-1\end{array}\right)M_{ab}^{t}, \tag{4.19}\]
where \(M_{ab}^{t}\) is the transpose of \(M_{ab}\).
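For a concrete illustration (the particular pair \(A=(1,1,0)\), \(B=(0,1,1)\) is an assumption made only for this sketch), one can check symbolically that \(M_{ab}\), built from the unit columns \(A/|A|\), \(B^{\prime}/|B^{\prime}|\) and \((A\times B)/|A\times B|\), is orthogonal and that \(\rho_{ab}\) satisfies the three conditions in (4.17). Since \(\rho_{ab}\) is symmetric, the row-vector convention of (4.19) and the column-vector computation below agree.

```python
# Hedged sketch for a sample pair A, B: M_ab is orthogonal and
# rho_ab = M_ab diag(1,1,-1) M_ab^t satisfies (4.17).
import sympy as sp

A = sp.Matrix([1, 1, 0]); B = sp.Matrix([0, 1, 1])       # sample vectors only
Bp = B - (A.dot(B)/A.dot(A))*A                           # B' : part of B orthogonal to A
m = A.cross(B)                                           # m = A x B

M = sp.Matrix.hstack(A/A.norm(), Bp/Bp.norm(), m/m.norm())
print((M*M.T - sp.eye(3)).applyfunc(sp.simplify))        # zero matrix: M_ab orthogonal

rho = (M*sp.diag(1, 1, -1)*M.T).applyfunc(sp.simplify)   # reflection fixing span{A, B}
x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3]); y = rho*x
print(sp.simplify(y.dot(y) - x.dot(x)))                  # 0 : r^2 preserved
print(sp.simplify(A.dot(y) - A.dot(x)))                  # 0 : A . rho_ab x = A . x
print(sp.simplify(B.dot(y) - B.dot(x)))                  # 0 : B . rho_ab x = B . x
```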
Applying the orthogonal transformation \(y=\rho_{ab}x\) to equation (4.15), we obtain
\[\{(A\times B)\cdot y\}\frac{1}{r}\partial_{r}\Delta\{\phi_{t}- \nu\Delta\phi\}\] \[-6(A\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-2(A\cdot B)(A\cdot x)r\partial_{r}\Big{\{}\frac{1}{r}\partial_{r }\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)} \Big{\}}\] \[-2(B\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-(B\cdot B)(A\cdot x)\partial_{r}\Big{\{}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[-2(B\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-(B\cdot B)(A\cdot x)\partial_{r}\Big{\{}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[+2(A\cdot A)(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\] \[-(A\cdot B)(B\cdot x)\partial_{r}\Big{\{}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\] \[-4(B\cdot B)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r }\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\] \[-2\{(B\cdot B)r^{2}-(B\cdot x)^{2}\}(B\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\] \[+2(A\cdot x)^{2}(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{}\frac{1} {r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\] \[+2(A\cdot x)(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1} {r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\] \[= 0, \tag{4.20}\]
where we have used (4.17).
Equations (4.15) and (4.20) imply
\[\{(A\times B)\cdot(x-\rho_{ab}x)\}\frac{1}{r}\partial_{r}\Delta\{\phi_{t}-\nu \Delta\phi\}= 0. \tag{4.21}\]
For a given \(r\), since \(x\in\mathbb{S}_{r}^{2}\) is arbitrary, we get
\[\partial_{r}\Delta\{\phi_{t}-\nu\Delta\phi\}=0. \tag{4.22}\]
Similarly, using the same arguments as in the proof of (4.22) (for example, applying the orthogonal transformation \(y=\rho_{ab}x\) to equation (4.16)), we derive
\[\partial_{r}\Delta\{\psi_{t}-\nu\Delta\psi\}=0. \tag{4.23}\]
Putting (4.22) into (4.15), we have
\[\begin{split}&-6(A\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\phi \cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-2(A\cdot B)(A\cdot x)r\partial_{r}\Big{\{}\frac{1}{r}\partial_{ r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)} \Big{\}}\\ &-2(B\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-(B\cdot B)(A\cdot x)\partial_{r}\Big{\{}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &-2(B\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{ r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(B\cdot B)(A\cdot x)\partial_{r}\Big{\{}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+2(A\cdot A)(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{ r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(A\cdot B)(B\cdot x)\partial_{r}\Big{\{}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &-(A\cdot B)(B\cdot x)\partial_{r}\Big{\{}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &-4(B\cdot B)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{ r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-2\{(B\cdot B)r^{2}-(B\cdot x)^{2}\}(B\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &+2(A\cdot x)^{2}(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{}\frac{1 }{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\\ &+2(A\cdot x)(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1 }{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\\ &+2(A\cdot x)(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1 }{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}=0.\end{split} \tag{4.24}\]
Putting (4.23) into (4.16), we have
\[\begin{split}& 4(A\cdot A)(A\cdot x)\frac{1}{r}\partial_{r} \phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &+2\{(A\cdot A)r^{2}-(A\cdot x)^{2}\}(A\cdot x)\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+(A\cdot B)(A\cdot x)\partial_{r}\Big{\{}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &+(A\cdot B)(A\cdot x)\partial_{r}\Big{\{}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &-2(B\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{ r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &+2(A\cdot A)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1} {r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &+(A\cdot A)(B\cdot x)\partial_{r}\Big{\{}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+2(A\cdot A)(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1} {r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &+(A\cdot A)(B\cdot x)\partial_{r}\Big{\{}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &+6(A\cdot B)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1} {r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &+2(A\cdot B)(B\cdot x)r\partial_{r}\Big{\{}\frac{1}{r}\partial _{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)} \Big{\}}\\ &-2(A\cdot x)^{2}(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{}\frac{1 }{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r }\psi\Big{)}\Big{\}}\\ &-2(A\cdot x)(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1 }{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r }\psi\Big{)}\Big{\}}=0.\end{split} \tag{4.25}\]
Observe that \((A\cdot x)(B\cdot x)\) can be read either as the scalar product of the vectors \((B\cdot x)A\) and \(x\), or as the scalar product of the vectors \((A\cdot x)B\) and \(x\). Using this observation, equation (4.24) can be rewritten as follows
\[\begin{split}&-6(A\cdot B)A\frac{1}{r}\partial_{r}\phi\cdot\frac{1} {r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-2(A\cdot B)Ar\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi \cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}} \\ &-2(B\cdot B)A\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-(B\cdot B)A\partial_{r}\Big{\{}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &-2(B\cdot B)A\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(B\cdot B)A\partial_{r}\Big{\{}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+2A\xi(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{}\frac{1 }{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r }\phi\Big{)}\Big{\}}\\ &+2A\eta(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi \Big{)}\Big{\}}\\ &+2A\zeta(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}\\ &+2(A\cdot A)B\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(A\cdot B)B\partial_{r}\Big{\{}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &-(A\cdot B)B\partial_{r}\Big{\{}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &-4(B\cdot B)B\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-2\{(B\cdot B)r^{2}-(B\cdot x)^{2}\}B\frac{1}{r}\partial_{r} \Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &+2B(1-\xi)(A\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{ r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\\ &+2B(1-\eta)(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\Big{\}}\\ &+2B(1-\zeta)(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\Big{\}}=0,\end{split} \tag{4.26}\]
where parameters \(\xi,\eta,\zeta\in\mathbb{R}\).
Since the vectors \(A\) and \(B\) are linearly independent, all coefficients of \(A\) and \(B\) on the left-hand side of equation (4.26) must vanish. Thus we have
\[\begin{split}&-6(A\cdot B)\frac{1}{r}\partial_{r}\phi\cdot\frac{ 1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-2(A\cdot B)r\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi \cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}} \\ &-2(B\cdot B)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{ r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-(B\cdot B)\partial_{r}\Big{\{}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &-2(B\cdot B)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(B\cdot B)\partial_{r}\Big{\{}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+2\xi(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{}\frac{1} {r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\\ &+2\eta(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\\ &+2\zeta(B\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}=0,\end{split} \tag{4.27}\]
\[\begin{split}& 2(A\cdot A)\frac{1}{r}\partial_{r}\phi\cdot\frac{ 1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(A\cdot B)\partial_{r}\Big{\{}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &-(A\cdot B)\partial_{r}\Big{\{}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &-4(B\cdot B)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-2\{(B\cdot B)r^{2}-(B\cdot x)^{2}\}\frac{1}{r}\partial_{r} \Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1 }{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &+2(1-\xi)(A\cdot x)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\\ &+2(1-\eta)(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\Big{\}}\\ &+2(1-\zeta)(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\Big{\}}=0.\end{split} \tag{4.28}\]
Let us select an orthogonal transformation \(\rho_{a}\) whose rotation axis is the vector \(A\), and an orthogonal transformation \(\rho_{b}\) whose rotation axis is the vector \(B\).
Applying the orthogonal transformation \(y=\rho_{b}x\) to (4.27), we obtain
\[\begin{split}&-6(A\cdot B)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-2(A\cdot B)r\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi \cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}} \\ &-2(B\cdot B)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{ r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-(B\cdot B)\partial_{r}\Big{\{}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &-2(B\cdot B)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(B\cdot B)\partial_{r}\Big{\{}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+2\xi(A\cdot y)(B\cdot y)\frac{1}{r}\partial_{r}\Big{\{}\frac{1 }{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r }\phi\Big{)}\Big{\}}\\ &+2\eta(B\cdot y)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi \Big{)}\Big{\}}\\ &+2\zeta(B\cdot y)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi \Big{)}\Big{\}}=0.\end{split} \tag{4.29}\]
Calculating the difference of (4.27) and (4.29), we get
\[\xi\{A\cdot(x-\rho_{b}x)\}(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r }\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}=0, \tag{4.30}\]
\[\xi\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{ r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}=0. \tag{4.31}\]
In fact, firstly the equation (4.31) is satisfied for any \(x\in\mathbb{R}^{3}-\{x|x=\rho_{b}x\}\). Next for \(x\in\{x|x=\rho_{b}x\}\), choosing \(x_{n}\in\mathbb{R}^{3}-\{x|x=\rho_{b}x\}\) such that \(x_{n}\to x\) as \(n\to\infty\), we can prove that the equation (4.31) is also satisfied.
Putting (4.31) into (4.27), and using the orthogonal transformation \(z=\rho_{a}x\), we derive
\[\begin{split}&-6(A\cdot B)\frac{1}{r}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-2(A\cdot B)r\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi \cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}} \\ &-2(B\cdot B)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{ r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-(B\cdot B)\partial_{r}\Big{\{}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &-2(B\cdot B)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{ r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(B\cdot B)\partial_{r}\Big{\{}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+2\eta(B\cdot z)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi \Big{)}\Big{\}}\\ &+2\zeta(B\cdot z)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi \Big{)}\Big{\}}=0.\end{split} \tag{4.32}\]
Calculating the difference of (4.27) and (4.32), we get
\[\begin{split}&\eta\{B\cdot(x-\rho_{a}x)\}\{B\cdot(x+\rho_{a}x)\} \frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+\zeta\{B\cdot(x-\rho_{a}x)\}\{B\cdot(x+\rho_{a}x)\}\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}=0,\end{split} \tag{4.33}\]
\[\begin{split}&\eta\partial_{r}\Big{\{}\frac{1}{r}\partial_{r} \psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)} \Big{\}}\\ &+\zeta\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac {1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}=0.\end{split} \tag{4.34}\]
In fact, firstly the equation (4.34) is satisfied for any \(x\in\mathbb{R}^{3}-\{x|x=\pm\rho_{a}x\}\). Next for \(x\in\{x|x=\pm\rho_{a}x\}\), choosing \(x_{n}\in\mathbb{R}^{3}-\{x|x=\pm\rho_{a}x\}\) such that \(x_{n}\to x\) as \(n\to\infty\), we can prove that the equation (4.34) is also satisfied.
Putting (4.31) and (4.34) into (4.27), we have
\[\begin{split}&-6(A\cdot B)\frac{1}{r}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-2(A\cdot B)r\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi \cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}} \\ &-2(B\cdot B)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{ r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-(B\cdot B)\partial_{r}\Big{\{}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &-2(B\cdot B)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(B\cdot B)\partial_{r}\Big{\{}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}=0.\end{split} \tag{4.35}\]
Applying the orthogonal transformation \(y=\rho_{b}x\) to (4.28), we obtain
\[\begin{split}& 2(A\cdot A)\frac{1}{r}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(A\cdot B)\partial_{r}\Big{\{}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &-(A\cdot B)\partial_{r}\Big{\{}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &-4(B\cdot B)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-2\{(B\cdot B)r^{2}-(B\cdot y)^{2}\}\frac{1}{r}\partial_{r} \Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1} {r}\partial_{r}\psi\Big{)}\Big{\}}\\ &+2(1-\xi)(A\cdot y)^{2}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi \Big{)}\Big{\}}\\ &+2(1-\eta)(A\cdot y)(B\cdot y)\frac{1}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\Big{\}}\\ &+2(1-\zeta)(A\cdot y)(B\cdot y)\frac{1}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\Big{\}}=0.\end{split} \tag{4.36}\]
Calculating the difference of (4.28) and (4.36), we get
\[\begin{split}&(1-\xi)\{A\cdot(x-\rho_{b}x)\}\{A\cdot(x+\rho_{b}x) \}\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+(1-\eta)\{A\cdot(x-\rho_{b}x)\}(B\cdot x)\frac{1}{r}\partial_{ r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\phi\Big{)}\Big{\}}\\ &+(1-\zeta)\{A\cdot(x-\rho_{b}x)\}(B\cdot x)\frac{1}{r}\partial_ {r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac {1}{r}\partial_{r}\psi\Big{)}\Big{\}}=0.\end{split} \tag{4.37}\]
Using the same arguments as in the proof of (4.31), we have
\[\begin{split}&(1-\xi)\{A\cdot(x+\rho_{b}x)\}\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+(1-\eta)(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\\ &+(1-\zeta)(B\cdot x)\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}=0.\end{split} \tag{4.38}\]
Applying the orthogonal transformation \(z=\rho_{a}x\) to (4.38), we derive
\[\begin{split}&(1-\xi)\{A\cdot(z+\rho_{b}z)\}\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+(1-\eta)(B\cdot z)\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\\ &+(1-\zeta)(B\cdot z)\frac{1}{r}\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \psi\Big{)}\Big{\}}=0.\end{split} \tag{4.39}\]
Using \(\rho_{a}\rho_{b}=\rho_{b}\rho_{a}\) and taking the difference of (4.38) and (4.39), we get
\[\begin{split}&(1-\eta)\{B\cdot(x-\rho_{a}x)\}\frac{1}{r} \partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+(1-\zeta)\{B\cdot(x-\rho_{a}x)\}\frac{1}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\Big{\}}=0.\end{split} \tag{4.40}\]
For a given \(r\), since \(x\in\mathbb{S}_{r}^{2}\) is arbitrary, we obtain
\[\begin{split}&(1-\eta)\partial_{r}\Big{\{}\frac{1}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r} \phi\Big{)}\Big{\}}\\ &+(1-\zeta)\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}=0. \end{split} \tag{4.41}\]
Inserting (4.41) into (4.38), we have
\[(1-\xi)\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}=0. \tag{4.42}\]
In fact, firstly the equation (4.42) is satisfied for any \(x\in\mathbb{R}^{3}-\{x|x=-\rho_{b}x\}\). Next for \(x\in\{x|x=-\rho_{b}x\}\), choosing \(x_{n}\in\mathbb{R}^{3}-\{x|x=-\rho_{b}x\}\) such that \(x_{n}\to x\) as \(n\to\infty\), we can prove that the equation (4.42) is also satisfied.
Putting (4.42) (4.41) into (4.28), and employing the orthogonal transformation \(z=\rho_{a}x\), we derive
\[\begin{split}& 2(A\cdot A)\frac{1}{r}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(A\cdot B)\partial_{r}\Big{\{}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &-(A\cdot B)\partial_{r}\Big{\{}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &-4(B\cdot B)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &-2\{(B\cdot B)r^{2}-(B\cdot z)^{2}\}\frac{1}{r}\partial_{r} \Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{ 1}{r}\partial_{r}\psi\Big{)}\Big{\}}=0.\end{split} \tag{4.43}\]
Calculating the difference of (4.28) and (4.43), we get
\[\{B\cdot(x-\rho_{a}x)\}\{B\cdot(x+\rho_{a}x)\}\frac{1}{r}\partial_{r}\Big{\{} \frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r} \partial_{r}\psi\Big{)}\Big{\}}=0. \tag{4.44}\]
By the same arguments as in the proof of (4.34), we prove
\[\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}=0. \tag{4.45}\]
Inserting (4.42) (4.41) (4.45) into (4.28), we have
\[\begin{split}& 2(A\cdot A)\frac{1}{r}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &-(A\cdot B)\partial_{r}\Big{\{}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &-(A\cdot B)\partial_{r}\Big{\{}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &-4(B\cdot B)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}=0.\end{split} \tag{4.46}\]
Putting (4.31) (4.42) together, we derive
\[\begin{split}&\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi \cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}} =0.\end{split} \tag{4.47}\]
Putting (4.34) (4.41) together, we derive
\[\begin{split}&\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\psi \cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}} \\ &+\partial_{r}\Big{\{}\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}=0.\end{split} \tag{4.48}\]
Inserting (4.47) (4.45) (4.48) into (4.25), we obtain
\[\begin{split}& 4(A\cdot A)(A\cdot x)\frac{1}{r}\partial_{r} \phi\cdot\frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &+(A\cdot B)(A\cdot x)\partial_{r}\Big{\{}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &+(A\cdot B)(A\cdot x)\partial_{r}\Big{\{}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &-2(B\cdot B)(A\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{ r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &+2(A\cdot A)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{ r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &+(A\cdot A)(B\cdot x)\partial_{r}\Big{\{}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+2(A\cdot A)(B\cdot x)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{ r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &+(A\cdot A)(B\cdot x)\partial_{r}\Big{\{}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &+6(A\cdot B)(B\cdot x)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{ r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}=0.\end{split} \tag{4.49}\]
Since the vectors \(A\) and \(B\) are linearly independent, the coefficients of \(A\) and \(B\) in the equation (4.49) must vanish. Therefore we have
\[\begin{split}& 4(A\cdot A)\frac{1}{r}\partial_{r}\phi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &+(A\cdot B)\partial_{r}\Big{\{}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &+(A\cdot B)\partial_{r}\Big{\{}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &-2(B\cdot B)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}=0,\end{split} \tag{4.50}\]
\[\begin{split}& 2(A\cdot A)\frac{1}{r}\partial_{r}\psi\cdot \frac{1}{r}\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\\ &+(A\cdot A)\partial_{r}\Big{\{}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}\Big{\}}\\ &+2(A\cdot A)\frac{1}{r}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\\ &+(A\cdot A)\partial_{r}\Big{\{}\partial_{r}\phi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}\Big{\}}\\ &+6(A\cdot B)\frac{1}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\Big{(}\frac{1}{r}\partial_{r}\psi\Big{)}=0.\end{split} \tag{4.51}\]
The equation (4.47) implies that
\[\frac{1}{r}\partial_{r}\phi=\pm\{f_{2}r^{2}+f_{1}\}^{1/2},\,\,\,f_{1}\geq 0, \tag{4.52}\]
\[\partial_{r}\Big{(}\frac{1}{r}\partial_{r}\phi\Big{)}=\pm f_{2}r\{f_{2}r^{2}+f_ {1}\}^{-1/2},\]
\[\Delta\phi=\frac{3}{r}\partial_{r}\phi+r\partial_{r}\Big{(}\frac{1}{r }\partial_{r}\phi\Big{)}\] \[= \pm 4\{f_{2}r^{2}+f_{1}\}^{1/2}\mp f_{1}\{f_{2}r^{2}+f_{1}\}^{-1/2},\] \[\partial_{r}\Delta\phi=\pm 4f_{2}r\{f_{2}r^{2}+f_{1}\}^{-1/2}\pm f _{1}f_{2}r\{f_{2}r^{2}+f_{1}\}^{-3/2},\] \[\partial_{r}\{\partial_{r}\Delta\phi\}=\pm 2f_{1}f_{2}\{f_{2}r^{2}+f _{1}\}^{-3/2}\pm 3f_{1}^{2}f_{2}\{f_{2}r^{2}+f_{1}\}^{-5/2},\] \[\partial_{r}^{2}\{\partial_{r}\Delta\phi\}=\mp 6f_{1}f_{2}^{2}r\{f_ {2}r^{2}+f_{1}\}^{-5/2}\mp 15f_{1}^{2}f_{2}^{2}r\{f_{2}r^{2}+f_{1}\}^{-7/2},\] \[\Delta\{\partial_{r}\Delta\phi\}=\partial_{r}^{2}\{\partial_{r} \Delta\phi\}+\frac{2}{r}\partial_{r}\{\partial_{r}\Delta\phi\} \tag{4.53}\] \[= \mp\frac{2}{r}f_{1}f_{2}\{f_{2}r^{2}+f_{1}\}^{-3/2}\mp\frac{3}{r }f_{1}^{2}f_{2}\{f_{2}r^{2}+f_{1}\}^{-5/2}\] \[\pm\frac{15}{r}f_{1}^{3}f_{2}\{f_{2}r^{2}+f_{1}\}^{-7/2},\] \[\partial_{t}\{\partial_{r}\Delta\phi\}= \pm 2f_{2t}r\{f_{2}r^{2}+f_{1}\}^{-1/2}\] \[\pm\frac{3}{2}f_{1}f_{2t}r\{f_{2}r^{2}+f_{1}\}^{-3/2}\mp f_{1t}f_ {2}r\{f_{2}r^{2}+f_{1}\}^{-3/2}\] \[\pm\frac{3}{2}f_{1}^{2}f_{2t}r\{f_{2}r^{2}+f_{1}\}^{-5/2}\mp\frac {3}{2}f_{1}f_{1t}f_{2}r\{f_{2}r^{2}+f_{1}\}^{-5/2},\]
where \(f_{2}\) and \(f_{1}\) are any functions of \(t\).
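The chain of formulas above can be confirmed symbolically; the SymPy sketch below checks the \(+\) branch of (4.52)–(4.53), treating \(f_{1},f_{2}\) as positive symbols in place of the arbitrary functions of \(t\).

```python
# Illustrative check of the '+' branch: starting from (1/r) d_r phi = sqrt(f2*r**2 + f1),
# the expressions for Delta phi, d_r Delta phi and d_r{d_r Delta phi} in (4.53) follow.
import sympy as sp

r, f1, f2 = sp.symbols('r f1 f2', positive=True)
w = sp.sqrt(f2*r**2 + f1)                 # (1/r) d_r phi, as in (4.52)
dphi = r*w                                # d_r phi

lap = 3*dphi/r + r*sp.diff(dphi/r, r)                          # Delta phi (radial formula)
print(sp.simplify(lap - (4*w - f1/w)))                          # 0

d_lap = sp.diff(lap, r)                                         # d_r Delta phi
print(sp.simplify(d_lap - (4*f2*r/w + f1*f2*r/w**3)))           # 0

dd_lap = sp.diff(d_lap, r)                                      # d_r { d_r Delta phi }
print(sp.simplify(dd_lap - (2*f1*f2/w**3 + 3*f1**2*f2/w**5)))   # 0
```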
Putting (4.53) and (4.54) into equation (4.22), we derive
\[2f_{2t}r\{f_{2}r^{2}+f_{1}\}^{-1/2}\] \[\quad+\frac{3}{2}f_{1}f_{2t}r\{f_{2}r^{2}+f_{1}\}^{-3/2}-f_{1t}f_ {2}r\{f_{2}r^{2}+f_{1}\}^{-3/2}\] \[\quad+\frac{3}{2}f_{1}^{2}f_{2t}r\{f_{2}r^{2}+f_{1}\}^{-5/2}- \frac{3}{2}f_{1}f_{1t}f_{2}r\{f_{2}r^{2}+f_{1}\}^{-5/2}\] \[= -2\nu\frac{1}{r}f_{1}f_{2}\{f_{2}r^{2}+f_{1}\}^{-3/2}-3\nu\frac{1 }{r}f_{1}^{2}f_{2}\{f_{2}r^{2}+f_{1}\}^{-5/2}\] \[\quad+15\nu\frac{1}{r}f_{1}^{3}f_{2}\{f_{2}r^{2}+f_{1}\}^{-7/2}. \tag{4.55}\]
Assume that \(f_{2}\neq 0\). Letting \(r\to\infty\) in equation (4.55), we obtain \(f_{2t}=0\). Then equation (4.55) implies that
\[f_{1t}r^{3}\{f_{2}r^{2}+f_{1}\}^{-3/2}+\frac{3}{2}f_{1t}f_{1}r^{ 3}\{f_{2}r^{2}+f_{1}\}^{-5/2}\] \[= 2\nu f_{1}r\{f_{2}r^{2}+f_{1}\}^{-3/2}+3\nu f_{1}^{2}r\{f_{2}r^{ 2}+f_{1}\}^{-5/2}\] \[-15\nu f_{1}^{3}r\{f_{2}r^{2}+f_{1}\}^{-7/2}. \tag{4.56}\]
Letting \(r\to\infty\) in equation (4.56), we obtain \(f_{1t}=0\). Then equation (4.56) implies that
\[0=2\{f_{2}r^{2}+f_{1}\}^{2}+3f_{1}\{f_{2}r^{2}+f_{1}\}-15f_{1}^{2},\ \ \forall r>0. \tag{4.57}\]
Comparing the coefficients of \(r^{4}\), \(r^{2}\) and \(r^{0}\) on the right-hand side of (4.57) gives \(f_{2}^{2}=f_{1}f_{2}=f_{1}^{2}=0\). Thus \(f_{2}=f_{1}=0\), which contradicts the assumption \(f_{2}\neq 0\).
Therefore \(f_{2}=0\) and, after relabelling, \(\partial_{r}\phi=f_{1}r\), where \(f_{1}\) is an arbitrary function of \(t\).
Similarly, the equation (4.45) implies that
\[\frac{1}{r}\partial_{r}\psi=\pm\{g_{2}r^{2}+g_{1}\}^{1/2},\ \ g_{1}\geq 0. \tag{4.58}\]
Putting (4.58) into equation (4.23), we derive
\[2g_{2t}r\{g_{2}r^{2}+g_{1}\}^{-1/2}\] \[+\frac{3}{2}g_{1}g_{2t}r\{g_{2}r^{2}+g_{1}\}^{-3/2}-g_{1t}g_{2}r\{g _{2}r^{2}+g_{1}\}^{-3/2}\] \[+\frac{3}{2}g_{1}^{2}g_{2t}r\{g_{2}r^{2}+g_{1}\}^{-5/2}-\frac{3}{2 }g_{1}g_{1t}g_{2}r\{g_{2}r^{2}+g_{1}\}^{-5/2}\] \[= -2\nu\frac{1}{r}g_{1}g_{2}\{g_{2}r^{2}+g_{1}\}^{-3/2}-3\nu\frac{1 }{r}g_{1}^{2}g_{2}\{g_{2}r^{2}+g_{1}\}^{-5/2}\] \[+15\nu\frac{1}{r}g_{1}^{3}g_{2}\{g_{2}r^{2}+g_{1}\}^{-7/2}. \tag{4.59}\]
Assume that \(g_{2}\neq 0\). Letting \(r\to\infty\) in equation (4.59), we obtain \(g_{2t}=0\). Then equation (4.59) implies that
\[g_{1t}r^{3}\{g_{2}r^{2}+g_{1}\}^{-3/2}+\frac{3}{2}g_{1t}g_{1}r^ {3}\{g_{2}r^{2}+g_{1}\}^{-5/2}\] \[= 2\nu g_{1}r\{g_{2}r^{2}+g_{1}\}^{-3/2}+3\nu g_{1}^{2}r\{g_{2}r^{ 2}+g_{1}\}^{-5/2}\] \[-15\nu g_{1}^{3}r\{g_{2}r^{2}+g_{1}\}^{-7/2}. \tag{4.60}\]
Letting \(r\to\infty\) in equation (4.60), we obtain \(g_{1t}=0\). Then equation (4.60) implies that
\[0=2\{g_{2}r^{2}+g_{1}\}^{2}+3g_{1}\{g_{2}r^{2}+g_{1}\}-15g_{1}^{2},\ \ \forall r>0. \tag{4.61}\]
Comparing the coefficients of \(r^{4}\), \(r^{2}\) and \(r^{0}\) on the right-hand side of (4.61) gives \(g_{2}^{2}=g_{1}g_{2}=g_{1}^{2}=0\). Thus \(g_{2}=g_{1}=0\), which contradicts the assumption \(g_{2}\neq 0\).
Therefore \(g_{2}=0\) and, after relabelling, \(\partial_{r}\psi=g_{1}r\), where \(g_{1}\) is an arbitrary function of \(t\).
Provided \(\partial_{r}\phi=f_{1}r\) and \(\partial_{r}\psi=g_{1}r\), all of the equations (4.47), (4.45), (4.48), (4.35), (4.46), (4.50), (4.51), (4.22) and (4.23) are satisfied. Therefore the equations (4.15) and (4.16) are satisfied.
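As a final sanity check (illustrative only), with \(\partial_{r}\phi=f_{1}r\) and \(\partial_{r}\psi=g_{1}r\) the quantities \(\frac{1}{r}\partial_{r}\phi\) and \(\frac{1}{r}\partial_{r}\psi\) no longer depend on \(r\), so constraints such as (4.47), (4.45) and (4.48) hold identically:

```python
# Minimal SymPy sketch: with d_r phi = f1(t)*r and d_r psi = g1(t)*r, the
# r-derivatives appearing in (4.47), (4.45) and (4.48) vanish identically.
import sympy as sp

r, t = sp.symbols('r t', positive=True)
f1, g1 = sp.Function('f1')(t), sp.Function('g1')(t)
P = (f1*r)/r                                   # (1/r) d_r phi = f1(t)
Q = (g1*r)/r                                   # (1/r) d_r psi = g1(t)

print(sp.diff(P*sp.diff(P, r)/r, r))                        # (4.47): 0
print(sp.diff(Q*sp.diff(Q, r)/r, r))                        # (4.45): 0
print(sp.diff(Q*sp.diff(P, r)/r + P*sp.diff(Q, r)/r, r))    # (4.48): 0
```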
In summary, Theorem 1.8 is proved.
## 5. (2,2)-Symplectic Representation and Radial Symmetry Breaking in \(\mathbb{R}^{3}\)
In this section, we assume that the velocity vector \(u\) admits the following (2,2)-symplectic representation
\[u(t,x)= \{(A\times\nabla)\times\nabla\}\phi(t,x)+\{(B\times\nabla)\times \nabla\}\psi(t,x)\] \[= \big{(}\nabla^{1}_{att},\nabla^{2}_{att},\nabla^{3}_{att}\big{)} \phi(t,x)+\big{(}\nabla^{1}_{btt},\nabla^{2}_{btt},\nabla^{3}_{btt}\big{)} \psi(t,x), \tag{5.1}\]
where vectors \(A=(a_{1},a_{2},a_{3})\in\mathbb{R}^{3}-\{0\}\) and \(B=(b_{1},b_{2},b_{3})\in\mathbb{R}^{3}-\{0\}\) are linearly independent, \((A\times\nabla)\times\nabla=\big{(}\nabla^{1}_{att},\nabla^{2}_{att},\nabla^ {3}_{att}\big{)}\), \((B\times\nabla)\times\nabla=\big{(}\nabla^{1}_{btt},\nabla^{2}_{btt},\nabla^ {3}_{btt}\big{)}\),
\[\begin{split}\nabla^{1}_{att}&=(A\cdot\nabla)\partial_{1}-a_{1}\Delta,\\ \nabla^{2}_{att}&=(A\cdot\nabla)\partial_{2}-a_{2}\Delta,\\ \nabla^{3}_{att}&=(A\cdot\nabla)\partial_{3}-a_{3}\Delta,\end{split} \tag{5.2}\]
\[\begin{split}\nabla^{1}_{btt}&=(B\cdot\nabla)\partial_{1}-b_{1}\Delta,\\ \nabla^{2}_{btt}&=(B\cdot\nabla)\partial_{2}-b_{2}\Delta,\\ \nabla^{3}_{btt}&=(B\cdot\nabla)\partial_{3}-b_{3}\Delta.\end{split} \tag{5.3}\]
Thanks to the following observations
\[\begin{split}(B\times\nabla)\cdot u(t,x)&=-\Big{(}\{(A\times\nabla)\times(B\times\nabla)\}\cdot\nabla\Big{)}\phi(t,x)\\ &=-(A\times B)\cdot\nabla\Delta\phi(t,x),\end{split} \tag{5.4}\]
\[\begin{split}(A\times\nabla)\cdot u(t,x)&=\Big{(}\{(A\times\nabla)\times(B\times\nabla)\}\cdot\nabla\Big{)}\psi(t,x)\\ &=(A\times B)\cdot\nabla\Delta\psi(t,x),\end{split} \tag{5.5}\]
taking the scalar product of \(B\times\nabla\) with equation (1.1), we have
\[(A\times B)\cdot\nabla\Delta\{\phi_{t}-\nu\Delta\phi\}-(B\times\nabla)\cdot\{( u\cdot\nabla)u\}=0. \tag{5.6}\]
Taking the scalar product of \(A\times\nabla\) with equation (1.1), we derive
\[(A\times B)\cdot\nabla\Delta\{\psi_{t}-\nu\Delta\psi\}+(A\times\nabla)\cdot\{( u\cdot\nabla)u\}=0. \tag{5.7}\]
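As with (4.4) and (4.5), the identities (5.4) and (5.5) behind these projections can be checked symbolically; a hedged SymPy sketch of (5.4), with \(u\) built from (5.1) via \(\{(C\times\nabla)\times\nabla\}=(C\cdot\nabla)\nabla-C\Delta\) and generic smooth \(\phi,\psi\):

```python
# Illustrative check of (5.4) for the (2,2)-representation (5.1):
# (B x nabla) . u = -(A x B) . nabla Delta phi, with symbolic constant A, B.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3')
X = [x1, x2, x3]
A = sp.Matrix([a1, a2, a3]); B = sp.Matrix([b1, b2, b3])
phi = sp.Function('phi')(x1, x2, x3); psi = sp.Function('psi')(x1, x2, x3)

grad = lambda h: sp.Matrix([sp.diff(h, v) for v in X])
lap = lambda h: sum(sp.diff(h, v, 2) for v in X)

# u = ((A x nabla) x nabla) phi + ((B x nabla) x nabla) psi:
u = (grad(A.dot(grad(phi))) - A*lap(phi)) + (grad(B.dot(grad(psi))) - B*lap(psi))

lhs = sum(B.cross(grad(u[i]))[i] for i in range(3))     # (B x nabla) . u
rhs = -A.cross(B).dot(grad(lap(phi)))                   # -(A x B) . nabla Delta phi
print(sp.expand(lhs - rhs))                             # 0
```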
Introduce the symbols \(\partial_{at}^{j}\) and \(\partial_{bt}^{j}\) as follows
\[\partial_{at}^{1} =a_{2}\partial_{3}-a_{3}\partial_{2},\;\partial_{at}^{2}=a_{3} \partial_{1}-a_{1}\partial_{3},\;\partial_{at}^{3}=a_{1}\partial_{2}-a_{2} \partial_{1},\] \[\partial_{bt}^{1} =b_{2}\partial_{3}-b_{3}\partial_{2},\;\partial_{bt}^{2}=b_{3} \partial_{1}-b_{1}\partial_{3},\;\partial_{bt}^{3}=b_{1}\partial_{2}-b_{2} \partial_{1}. \tag{5.8}\]
Then \(A\times\nabla=\big{(}\partial_{at}^{1},\partial_{at}^{2},\partial_{at}^{3} \big{)}\) and \(B\times\nabla=\big{(}\partial_{bt}^{1},\partial_{bt}^{2},\partial_{bt}^{3} \big{)}\). Let us rewrite the nonlinear terms in the equation (5.6)
\[(B\times\nabla)\cdot\{(u\cdot\nabla)u\}=\partial_{bt}^{j}u^{k} \partial_{k}u^{j}\] \[= \partial_{bt}^{j}(\nabla_{att}^{k}\phi+\nabla_{btt}^{k}\psi) \partial_{k}\big{(}\nabla_{att}^{j}\phi+\nabla_{btt}^{j}\psi\big{)}\] \[= -\big{(}\nabla_{att}^{k}\phi+\nabla_{bt}^{k}\psi\big{)}\partial_ {k}(A\times B)\cdot\nabla\Delta\phi\] \[+\big{(}\partial_{bt}^{j}\nabla_{att}^{k}\phi+\partial_{bt}^{j} \nabla_{btt}^{k}\psi\big{)}\partial_{k}\big{(}\nabla_{att}^{j}\phi+\nabla_{btt }^{j}\psi\big{)}\] \[= -(A\times B)\cdot\nabla\big{(}\{A\cdot\nabla\partial_{k}\phi\} \partial_{k}\Delta\phi-\Delta\phi\{A\cdot\nabla\Delta\phi\}\big{)}\] \[-(A\times B)\cdot\nabla\big{(}\{B\cdot\nabla\partial_{k}\psi\} \partial_{k}\Delta\phi-\Delta\psi\{B\cdot\nabla\Delta\phi\}\big{)}\] \[+\nabla\Delta\phi\cdot(B\times\nabla)(A\cdot\nabla)^{2}\phi+ \nabla\Delta\phi\cdot(B\times\nabla)(A\cdot\nabla)(B\cdot\nabla)\psi\] \[+\nabla\Delta\psi\cdot(B\times\nabla)(B\cdot\nabla)^{2}\psi+ \nabla\Delta\psi\cdot(B\times\nabla)(A\cdot\nabla)(B\cdot\nabla)\phi, \tag{5.9}\]
where
\[\{\nabla_{att}^{k}\phi\}\partial_{k}=\big{\{}\{(A\cdot\nabla) \partial_{k}-a_{k}\Delta\}\phi\big{\}}\partial_{k}\] \[= \{(A\cdot\nabla)\partial_{k}\phi\}\partial_{k}-\Delta\phi\;\;A \cdot\nabla, \tag{5.10}\]
\[\{\nabla_{btt}^{k}\psi\}\partial_{k}=\big{\{}\{(B\cdot\nabla) \partial_{k}-b_{k}\Delta\}\psi\big{\}}\partial_{k}\] \[= \{(B\cdot\nabla)\partial_{k}\psi\}\partial_{k}-\Delta\psi\;\;B \cdot\nabla, \tag{5.11}\]
\[\big{(}\partial_{bt}^{j}\nabla_{att}^{k}\phi+\partial_{bt}^{j} \nabla_{btt}^{k}\psi\big{)}\partial_{k}\big{(}\nabla_{att}^{j}\phi+\nabla_{btt }^{j}\psi\big{)}\] \[= \big{(}\partial_{bt}^{j}\nabla_{att}^{k}\phi+\partial_{bt}^{j} \nabla_{btt}^{k}\psi\big{)}\big{(}\{(A\cdot\nabla)\partial_{j}-a_{j}\Delta\} \partial_{k}\phi+\{(B\cdot\nabla)\partial_{j}-b_{j}\Delta\}\partial_{k}\psi \big{)}, \tag{5.12}\]
\[\partial_{bt}^{j}\nabla_{attt}^{k}\psi\ \ \partial_{j}(A\cdot\nabla) \partial_{k}\phi\] \[= \{(A\cdot\nabla)\partial_{k}-a_{k}\Delta\}\partial_{bt}^{j}\phi\ \ \partial_{k}\partial_{j}(A\cdot\nabla)\phi \tag{5.13}\] \[= \partial_{bt}^{j}(A\cdot\nabla)\partial_{k}\phi\ \ \partial_{j}(A\cdot\nabla) \partial_{k}\phi-\partial_{bt}^{j}\Delta\phi\ \ \partial_{j}(A\cdot\nabla)(A\cdot\nabla)\phi\] \[= \partial_{j}\Delta\phi\ \ \partial_{bt}^{j}(A\cdot\nabla)^{2}\phi\] \[= \nabla\Delta\phi\cdot(B\times\nabla)(A\cdot\nabla)^{2}\phi,\]
\[-\partial_{bt}^{j}\nabla_{att}^{k}\phi\ \ a_{j}\Delta\partial_{k} \phi=-(A\times B)\cdot\nabla\nabla_{att}^{k}\phi\ \ \partial_{k}\Delta\phi, \tag{5.14}\]
\[\partial_{bt}^{j}\nabla_{att}^{k}\phi\ \ \partial_{j}(B\cdot\nabla) \partial_{k}\psi\] \[= \{(A\cdot\nabla)\partial_{k}-a_{k}\Delta\}\partial_{bt}^{j}\phi \ \partial_{k}\partial_{j}(B\cdot\nabla)\psi\] \[= \partial_{bt}^{j}(A\cdot\nabla)\partial_{k}\phi\ \ \partial_{j}(B\cdot\nabla) \partial_{k}\psi-\partial_{bt}^{j}\Delta\phi\ \ \partial_{j}(A\cdot\nabla)(B\cdot\nabla)\psi\] \[= -\partial_{j}(A\cdot\nabla)\partial_{k}\phi\ \ \partial_{bt}^{j}(B\cdot\nabla) \partial_{k}\psi+\partial_{j}\Delta\phi\ \ \partial_{bt}^{j}(A\cdot\nabla)(B\cdot\nabla)\psi\] \[= -\nabla(A\cdot\nabla)\partial_{k}\phi\cdot(B\times\nabla)(B\cdot \nabla)\partial_{k}\psi+\nabla\Delta\phi\cdot(B\times\nabla)(A\cdot\nabla)(B \cdot\nabla)\psi, \tag{5.15}\]
\[\partial_{bt}^{j}\nabla_{btt}^{k}\psi\ \ \partial_{j}(A\cdot\nabla) \partial_{k}\phi\] \[= \partial_{bt}^{j}\{(B\cdot\nabla)\partial_{k}-b_{k}\Delta\}\psi \ \partial_{j}(A\cdot\nabla)\partial_{k}\phi\] \[= \partial_{bt}^{j}(B\cdot\nabla)\partial_{k}\psi\ \ \partial_{j}(A\cdot\nabla) \partial_{k}\phi-\partial_{bt}^{j}\Delta\psi\ \ \partial_{j}(A\cdot\nabla)(B\cdot\nabla)\phi\] \[= \partial_{bt}^{j}(B\cdot\nabla)\partial_{k}\psi\ \ \partial_{j}(A\cdot\nabla) \partial_{k}\phi+\partial_{j}\Delta\psi\ \ \partial_{bt}^{j}(A\cdot\nabla)(B\cdot\nabla)\phi\] \[= (B\times\nabla)(B\cdot\nabla)\partial_{k}\psi\cdot\nabla(A\cdot \nabla)\partial_{k}\phi+\nabla\Delta\psi\cdot(B\times\nabla)(A\cdot\nabla)(B \cdot\nabla)\phi, \tag{5.16}\]
\[-\partial_{bt}^{j}\nabla_{btt}^{k}\psi\ \ a_{j}\Delta\partial_{k} \phi=-(A\times B)\cdot\nabla\nabla_{btt}^{k}\psi\ \ \partial_{k}\Delta\phi, \tag{5.18}\]
\[\partial_{bt}^{j}\nabla_{btt}^{k}\psi\ \ \partial_{j}(B\cdot\nabla) \partial_{k}\psi\] \[= \partial_{bt}^{j}\{(B\cdot\nabla)\partial_{k}-b_{k}\Delta\}\psi\ \ \partial_{j}(B \cdot\nabla)\partial_{k}\psi\] \[= \partial_{bt}^{j}(B\cdot\nabla)\partial_{k}\psi\ \ \partial_{j}(B \cdot\nabla)\partial_{k}\psi-\partial_{bt}^{j}\Delta\psi\ \ \partial_{j}(B\cdot\nabla)b_{k}\partial_{k}\psi\] \[= -\partial_{bt}^{j}\Delta\psi\ \ \partial_{j}(B\cdot\nabla)^{2}\psi\] \[= \partial_{j}\Delta\psi\ \ \partial_{bt}^{j}(B\cdot\nabla)^{2}\psi= \nabla\Delta\psi\cdot(B\times\nabla)(B\cdot\nabla)^{2}\psi, \tag{5.19}\]
\[-\partial_{bt}^{j}\nabla_{btt}^{k}\psi\ \ b_{j}\Delta\partial_{k}\psi=0. \tag{5.20}\]
Similarly we rewrite the nonlinear terms in the equation (5.7)
\[(A\times\nabla)\cdot\{(u\cdot\nabla)u\}=\partial_{at}^{j}u^{k} \partial_{k}u^{j}\] \[= \partial_{at}^{j}\big{(}\nabla_{att}^{k}\phi+\nabla_{btt}^{k}\psi \big{)}\partial_{k}\big{(}\nabla_{att}^{j}\phi+\nabla_{btt}^{j}\psi\big{)}\] \[= \big{(}\nabla_{att}^{k}\phi+\nabla_{btt}^{k}\psi\big{)}\partial_{k }(A\times B)\cdot\nabla\Delta\psi\] \[+\big{(}\partial_{at}^{j}\nabla_{att}^{k}\phi+\partial_{at}^{j} \nabla_{btt}^{k}\psi\big{)}\partial_{k}\big{(}\nabla_{att}^{j}\phi+\nabla_{btt} ^{j}\psi\big{)}\] \[= (A\times B)\cdot\nabla\big{\{}\big{(}\nabla_{att}^{k}\phi+\nabla_ {btt}^{k}\psi\big{)}\partial_{k}\Delta\psi\big{\}}\] \[-\big{\{}(A\times B)\cdot\nabla\nabla_{att}^{k}\phi+(A\times B) \cdot\nabla\nabla_{btt}^{k}\psi\big{\}}\partial_{k}\Delta\psi\] \[+\big{(}\partial_{at}^{j}\nabla_{att}^{k}\phi+\partial_{at}^{j} \nabla_{btt}^{k}\psi\big{)}\partial_{k}\big{(}\nabla_{att}^{j}\phi+\nabla_{btt }^{j}\psi\big{)}\] \[= (A\times B)\cdot\nabla\big{(}\{A\cdot\nabla\partial_{k}\phi\} \partial_{k}\Delta\psi-\Delta\phi\{A\cdot\nabla\Delta\psi\}\big{)}\] \[+(A\times B)\cdot\nabla\big{(}\{B\cdot\nabla\partial_{k}\psi\} \partial_{k}\Delta\psi-\Delta\psi\{B\cdot\nabla\Delta\psi\}\big{)}\] \[+\nabla\Delta\phi\cdot(A\times\nabla)(A\cdot\nabla)^{2}\phi+ \nabla\Delta\phi\cdot(A\times\nabla)(A\cdot\nabla)(B\cdot\nabla)\psi\] \[+\nabla\Delta\psi\cdot(A\times\nabla)(B\cdot\nabla)^{2}\psi+ \nabla\Delta\psi\cdot(A\times\nabla)(A\cdot\nabla)(B\cdot\nabla)\phi, \tag{5.21}\]
where
\[\big{(}\partial_{at}^{j}\nabla_{att}^{k}\phi+\partial_{at}^{j} \nabla_{btt}^{k}\psi\big{)}\partial_{k}\big{(}\nabla_{att}^{j}\phi+\nabla_{btt }^{j}\psi\big{)}\] \[= \big{(}\partial_{at}^{j}\nabla_{att}^{k}\phi+\partial_{at}^{j} \nabla_{btt}^{k}\psi\big{)}\big{(}\{(A\cdot\nabla)\partial_{j}-a_{j}\Delta\} \partial_{k}\phi+\{(B\cdot\nabla)\partial_{j}-b_{j}\Delta\}\partial_{k}\psi \big{)}, \tag{5.22}\]
\[\partial_{at}^{j}\nabla_{att}^{k}\phi\ \ \partial_{j}(A\cdot\nabla) \partial_{k}\phi\] \[= \{(A\cdot\nabla)\partial_{k}-a_{k}\Delta\}\partial_{at}^{j}\phi \ \partial_{k}\partial_{j}(A\cdot\nabla)\phi\] \[= \partial_{at}^{j}(A\cdot\nabla)\partial_{k}\phi\ \ \partial_{j}(A\cdot\nabla) \partial_{k}\phi-\partial_{at}^{j}\Delta\phi\ \ \partial_{j}(A\cdot\nabla)(A\cdot\nabla)\phi\] \[= -\partial_{at}^{j}\Delta\phi\ \ \partial_{j}(A\cdot\nabla)^{2}\phi\] \[= \partial_{j}\Delta\phi\ \ \partial_{at}^{j}(A\cdot\nabla)^{2}\phi= \nabla\Delta\phi\cdot(A\times\nabla)(A\cdot\nabla)^{2}\phi, \tag{5.23}\]
\[-\partial_{at}^{j}\nabla_{att}^{k}\phi\ \ a_{j}\Delta\partial_{k}\phi=0, \tag{5.24}\]
\[\partial_{at}^{j}\nabla_{att}^{k}\phi\ \ \partial_{j}(B\cdot\nabla) \partial_{k}\psi\] \[= \{(A\cdot\nabla)\partial_{k}-a_{k}\Delta\}\partial_{at}^{j}\phi \ \ \partial_{k}\partial_{j}(B\cdot\nabla)\psi\] \[= \partial_{at}^{j}(A\cdot\nabla)\partial_{k}\phi\ \ \partial_{j}(B\cdot\nabla) \partial_{k}\psi-\partial_{at}^{j}\Delta\phi\ \ \partial_{j}(A\cdot\nabla)(B\cdot\nabla)\psi\] \[= \partial_{at}^{j}(A\cdot\nabla)\partial_{k}\phi\ \ \partial_{j}(B\cdot\nabla) \partial_{k}\psi+\partial_{j}\Delta\phi\ \ \partial_{at}^{j}(A\cdot\nabla)(B\cdot\nabla)\psi\] \[= (A\times\nabla)(A\cdot\nabla)\partial_{k}\phi\cdot\nabla(B\cdot \nabla)\partial_{k}\psi+\nabla\Delta\phi\cdot(A\times\nabla)(A\cdot\nabla)(B \cdot\nabla)\psi, \tag{5.25}\]
\[-\partial_{at}^{j}\nabla_{att}^{k}\phi\ \ b_{j}\Delta\partial_{k}\psi=\{(A\times B) \cdot\nabla\nabla_{att}^{k}\phi\}\Delta\partial_{k}\psi, \tag{5.26}\]
\[\partial_{at}^{j}\nabla^{k}_{btt}\psi\ \ \partial_{j}(A\cdot\nabla) \partial_{k}\phi\] \[= \partial_{at}^{j}\{(B\cdot\nabla)\partial_{k}-b_{k}\Delta\}\psi\ \ \partial_{j}(A\cdot\nabla) \partial_{k}\phi \tag{5.27}\] \[= \partial_{at}^{j}(B\cdot\nabla)\partial_{k}\psi\ \ \partial_{j}(A\cdot\nabla) \partial_{k}\phi-\partial_{at}^{j}\Delta\psi\ \ \partial_{j}(A\cdot\nabla)(B\cdot\nabla)\phi\] \[= -\partial_{j}(B\cdot\nabla)\partial_{k}\psi\ \ \partial_{at}^{j}(A\cdot\nabla) \partial_{k}\phi+\partial_{j}\Delta\psi\ \ \partial_{at}^{j}(A\cdot\nabla)(B\cdot\nabla)\phi\] \[= -\nabla(B\cdot\nabla)\partial_{k}\psi\cdot(A\times\nabla)(A\cdot \nabla)\partial_{k}\phi+\nabla\Delta\psi\cdot(A\times\nabla)(A\cdot\nabla)(B \cdot\nabla)\phi,\]
\[-\partial_{at}^{j}\nabla^{k}_{btt}\psi\ \ a_{j}\Delta\partial_{k}\phi=0, \tag{5.28}\]
\[\partial_{at}^{j}\nabla^{k}_{btt}\psi\ \ \partial_{j}(B\cdot \nabla)\partial_{k}\psi\] \[= \partial_{at}^{j}\{(B\cdot\nabla)\partial_{k}-b_{k}\Delta\}\psi \ \partial_{j}(B\cdot\nabla)\partial_{k}\psi\] \[= \partial_{at}^{j}(B\cdot\nabla)\partial_{k}\psi\ \ \partial_{j}(B\cdot \nabla)\partial_{k}\psi-\partial_{at}^{j}\Delta\psi\ \ \partial_{j}(B\cdot\nabla)b_{k}\partial_{k}\psi\] \[= -\partial_{at}^{j}\Delta\psi\ \ \partial_{j}(B\cdot\nabla)^{2}\psi\] \[= \partial_{j}\Delta\psi\ \ \partial_{at}^{j}(B\cdot\nabla)^{2}\psi= \nabla\Delta\psi\cdot(A\times\nabla)(B\cdot\nabla)^{2}\psi, \tag{5.29}\]
\[-\partial_{at}^{j}\nabla^{k}_{btt}\psi\ \ b_{j}\Delta\partial_{k}\psi=\{(A \times B)\cdot\nabla\nabla^{k}_{btt}\psi\}\Delta\partial_{k}\psi. \tag{5.30}\]
Putting together (5.6) and (5.9), we derive
\[(A\times B)\cdot\nabla\Delta\{\phi_{t}-\nu\Delta\phi\}\] \[+(A\times B)\cdot\nabla\big{(}\{A\cdot\nabla\partial_{k}\phi\} \partial_{k}\Delta\phi-\Delta\phi\{A\cdot\nabla\Delta\phi\}\big{)}\] \[+(A\times B)\cdot\nabla\big{(}\{B\cdot\nabla\partial_{k}\psi\} \partial_{k}\Delta\phi-\Delta\psi\{B\cdot\nabla\Delta\phi\}\big{)}\] \[-\nabla\Delta\phi\cdot(B\times\nabla)(A\cdot\nabla)^{2}\phi- \nabla\Delta\phi\cdot(B\times\nabla)(A\cdot\nabla)(B\cdot\nabla)\psi\] \[-\nabla\Delta\psi\cdot(B\times\nabla)(B\cdot\nabla)^{2}\psi- \nabla\Delta\psi\cdot(B\times\nabla)(A\cdot\nabla)(B\cdot\nabla)\phi=0. \tag{5.31}\]
Putting together (5.7) and (5.21), we derive
\[(A\times B)\cdot\nabla\Delta\{\psi_{t}-\nu\Delta\psi\}\] \[+(A\times B)\cdot\nabla\big{(}\{A\cdot\nabla\partial_{k}\phi\} \partial_{k}\Delta\psi-\Delta\phi\{A\cdot\nabla\Delta\psi\}\big{)}\] \[+(A\times B)\cdot\nabla\big{(}\{B\cdot\nabla\partial_{k}\psi\} \partial_{k}\Delta\psi-\Delta\psi\{B\cdot\nabla\Delta\psi\}\big{)}\] \[+\nabla\Delta\phi\cdot(A\times\nabla)(A\cdot\nabla)^{2}\phi+\nabla \Delta\phi\cdot(A\times\nabla)(A\cdot\nabla)(B\cdot\nabla)\psi\] \[+\nabla\Delta\psi\cdot(A\times\nabla)(B\cdot\nabla)^{2}\psi+ \nabla\Delta\psi\cdot(A\times\nabla)(A\cdot\nabla)(B\cdot\nabla)\phi=0. \tag{5.32}\]
Now we assume that \(\phi\) and \(\psi\) are radially symmetric functions with respect to the space variable \(x\in\mathbb{R}^{3}\). That is, \(\phi(t,x)=\phi(t,r)\), \(\psi(t,x)=\psi(t,r)\) and \(r^{2}=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\).
Then we have
(5.33) \[\begin{split}&(A\times B)\cdot\nabla\big{(}\{A\cdot\nabla\partial_{k} \phi\}\partial_{k}\Delta\phi-\Delta\phi\{A\cdot\nabla\Delta\phi\}\big{)}\\ =&(A\times B)\cdot\nabla\big{(}\{A\cdot\nabla\frac{x_{ k}}{r}\partial_{r}\phi\}\frac{x_{k}}{r}\partial_{r}\Delta\phi-\{A\cdot x\} \Delta\phi\cdot\frac{1}{r}\partial_{r}\Delta\phi\big{)}\\ =&(A\times B)\cdot\nabla\big{\{}(A\cdot x)\big{(}\{ \frac{1}{r}\partial_{r}\phi+r\partial_{r}(\frac{1}{r}\partial_{r}\phi)\}\frac{ 1}{r}\partial_{r}\Delta\phi-\Delta\phi\cdot\frac{1}{r}\partial_{r}\Delta\phi \big{)}\}\\ =&(A\times B)\cdot x\{A\cdot x\}\frac{1}{r}\partial_{ r}\big{(}\{\frac{1}{r}\partial_{r}\phi+r\partial_{r}(\frac{1}{r}\partial_{r} \phi)\}\frac{1}{r}\partial_{r}\Delta\phi-\Delta\phi\cdot\frac{1}{r}\partial_ {r}\Delta\phi\big{)}\\ =&-(A\times B)\cdot x\{A\cdot x\}\frac{1}{r}\partial_ {r}\big{\{}\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\Delta\phi \big{\}}\\ =&-(A\times B)\cdot x\{A\cdot x\}\big{\{}\frac{2}{r} \partial_{r}\big{(}\frac{1}{r}\partial_{r}\phi\big{)}\cdot\frac{1}{r}\partial _{r}\Delta\phi+\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\big{(} \frac{1}{r}\partial_{r}\Delta\phi\big{)}\big{\}},\\ &(A\times B)\cdot\nabla\big{(}\{B\cdot\nabla\partial_{k}\psi\} \partial_{k}\Delta\phi-\Delta\psi\{B\cdot\nabla\Delta\phi\}\big{)}\\ =&(A\times B)\cdot\nabla\big{(}\{B\cdot\nabla x_{ k}\frac{1}{r}\partial_{r}\psi\}x_{k}\frac{1}{r}\partial_{r}\Delta\phi-\{B \cdot x\}\Delta\psi\cdot\frac{1}{r}\partial_{r}\Delta\phi\big{)}\\ =&(A\times B)\cdot\nabla\big{\{}(B\cdot x)\big{(}\{ \frac{1}{r}\partial_{r}\psi+r\partial_{r}(\frac{1}{r}\partial_{r}\psi)\}\frac{ 1}{r}\partial_{r}\Delta\phi-\Delta\psi\cdot\frac{1}{r}\partial_{r}\Delta\phi \big{)}\big{\}}\\ =&(A\times B)\cdot x\{B\cdot x\}\frac{1}{r}\partial_ {r}\big{\{}\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\Delta\phi \big{\}}\\ =&-(A\times B)\cdot x\{B\cdot x\}\big{\{}\frac{2}{r} \partial_{r}\big{(}\frac{1}{r}\partial_{r}\psi\big{)}\cdot\frac{1}{r}\partial _{r}\Delta\phi+\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\big{(} \frac{1}{r}\partial_{r}\Delta\phi\big{)}\big{\}},\\ &-\nabla\Delta\phi\cdot(B\times\nabla)(A\cdot\nabla)^{2}\phi\\ =&-x\frac{1}{r}\partial_{r}\Delta\phi\cdot(B\times \nabla)\{(A\cdot\nabla)(A\cdot x)\frac{1}{r}\partial_{r}\phi\}\\ =&-x\frac{1}{r}\partial_{r}\Delta\phi\cdot(B \times\nabla)\{A\cdot A\frac{1}{r}\partial_{r}\phi+(A\cdot x)^{2}\frac{1}{r} \partial_{r}(\frac{1}{r}\partial_{r}\phi)\}\\ =& 2(A\times B)\cdot x(A\cdot x)\frac{1}{r} \partial_{r}\Delta\phi\cdot\{\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r} \phi)\},\\ &-\nabla\Delta\phi\cdot(B\times\nabla)(A\cdot\nabla)(B\cdot \nabla)\psi\\ =&-x\frac{1}{r}\partial_{r}\Delta\phi\cdot(B\times \nabla)(A\cdot\nabla)(B\cdot x)\frac{1}{r}\partial_{r}\psi\\ =&-x\frac{1}{r}\partial_{r}\Delta\phi\cdot(B\times \nabla)\{(A\cdot B)\frac{1}{r}\partial_{r}\psi+(A\cdot x)(B\cdot x)\frac{1}{r} \partial_{r}(\frac{1}{r}\partial_{r}\psi)\}\\ =&(A\times B)\cdot x(B\cdot x)\frac{1}{r} \partial_{r}\Delta\phi\cdot\{\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r} \psi)\},\\ &-\nabla\Delta\psi\cdot(B\times\nabla)(B\cdot\nabla)^{2}\psi\\ =&-x\frac{1}{r}\partial_{r}\Delta\psi\cdot(B\times\nabla) (B\cdot\nabla)(B\cdot x)\frac{1}{r}\partial_{r}\psi\\ =&-x\frac{1}{r}\partial_{r}\Delta\psi\cdot(B\times\nabla) \{(B\cdot\nabla)(B\cdot x)\frac{1}{r}\partial_{r}\psi\] \[=&-x\frac{1}{r}\partial_{r}\Delta\psi\cdot(B\times\nabla) (B\cdot\nabla)(B\cdot x)\frac{1}{r}\partial_{r}\psi\\ 
=&-x\frac{1}{r}\partial_{r}\Delta\psi\cdot(B\times\nabla)\{(B\cdot B)\frac{1}{r}\partial_{r}\psi+(B\cdot x)^{2}\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\psi)\}\] \[=&\ 0,\]
\[\begin{split}&-\nabla\Delta\psi\cdot(B\times\nabla)(A\cdot\nabla)(B \cdot\nabla)\phi\\ =&-x\frac{1}{r}\partial_{r}\Delta\psi\cdot(B\times \nabla)(A\cdot\nabla)(B\cdot x)\frac{1}{r}\partial_{r}\phi\\ =&(A\times B)\cdot x(B\cdot x)\frac{1}{r}\partial_{ r}\Delta\psi\cdot\{\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\phi)\},\\ &(A\times B)\cdot\nabla\big{(}\{A\cdot\nabla\partial_{k}\phi\} \partial_{k}\Delta\psi-\Delta\phi\{A\cdot\nabla\Delta\psi\}\big{)}\\ =&(A\times B)\cdot\nabla\big{(}\{A\cdot\nabla x_{k} \frac{1}{r}\partial_{r}\phi\}x_{k}\frac{1}{r}\partial_{r}\Delta\psi-\{A\cdot x \}\Delta\phi\cdot\frac{1}{r}\partial_{r}\Delta\psi\big{)}\\ =&(A\times B)\cdot\nabla\big{\{}(A\cdot x)\big{(}\{ \frac{1}{r}\partial_{r}\phi+r\partial_{r}(\frac{1}{r}\partial_{r}\phi)\}\frac{ 1}{r}\partial_{r}\Delta\psi-\Delta\phi\cdot\frac{1}{r}\partial_{r}\Delta\psi \big{)}\}\\ =&(A\times B)\cdot x\{A\cdot x\}\frac{1}{r}\partial_ {r}(\{\frac{1}{r}\partial_{r}\phi+r\partial_{r}(\frac{1}{r}\partial_{r}\phi) \}\frac{1}{r}\partial_{r}\Delta\psi-\Delta\phi\cdot\frac{1}{r}\partial_{r} \Delta\psi)\\ =&-(A\times B)\cdot x\{A\cdot x\}\frac{1}{r} \partial_{r}\big{\{}\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \Delta\psi\big{\}}\\ =&-(A\times B)\cdot x\{A\cdot x\}\big{\{}\frac{2}{r} \partial_{r}\big{(}\frac{1}{r}\partial_{r}\phi\big{)}\cdot\frac{1}{r}\partial _{r}\Delta\psi+\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}(\frac{ 1}{r}\partial_{r}\Delta\psi)\big{\}},\\ &(A\times B)\cdot\nabla\big{(}\{B\cdot\nabla\partial_{k}\psi\} \partial_{k}\Delta\psi-\Delta\psi\{B\cdot\nabla\Delta\psi\}\big{)}\\ =&(A\times B)\cdot\nabla\big{(}\{B\cdot\nabla x_{k} \frac{1}{r}\partial_{r}\psi\}x_{k}\frac{1}{r}\partial_{r}\Delta\psi-\{B \cdot x\}\Delta\psi\cdot\frac{1}{r}\partial_{r}\Delta\psi\big{)}\\ =&(A\times B)\cdot\nabla\big{\{}(B\cdot x)\big{(}\{ \frac{1}{r}\partial_{r}\psi+r\partial_{r}(\frac{1}{r}\partial_{r}\psi)\}\frac{ 1}{r}\partial_{r}\Delta\psi-\Delta\psi\cdot\frac{1}{r}\partial_{r}\Delta\psi \big{)}\big{\}}\\ =&(A\times B)\cdot x\{B\cdot x\}\frac{1}{r} \partial_{r}\big{(}\{\frac{1}{r}\partial_{r}\psi+r\partial_{r}(\frac{1}{r} \partial_{r}\psi)\}\frac{1}{r}\partial_{r}\Delta\psi-\Delta\psi\cdot\frac{1}{ r}\partial_{r}\Delta\psi\big{)}\\ =&-(A\times B)\cdot x\{B\cdot x\}\frac{1}{r} \partial_{r}\big{\{}\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \Delta\psi\big{\}}\\ =&-(A\times B)\cdot x\{B\cdot x\}\big{\{}\frac{2}{r} \partial_{r}\big{(}\frac{1}{r}\partial_{r}\psi\big{)}\cdot\frac{1}{r}\partial_{ r}\Delta\psi+\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r}\partial_{r} \big{(}\frac{1}{r}\partial_{r}\Delta\psi\big{)}\big{\}},\\ &\nabla\Delta\phi\cdot(A\times\nabla)(A\cdot\nabla)^{2}\phi\\ =& x\frac{1}{r}\partial_{r}\Delta\phi\cdot(A\times\nabla)(A \cdot\nabla)(A\cdot x)\frac{1}{r}\partial_{r}\phi\\ =& x\frac{1}{r}\partial_{r}\Delta\phi\cdot(A\times\nabla) \{(A\cdot A)\frac{1}{r}\partial_{r}\phi+(A\cdot x)^{2}\frac{1}{r}\partial_{r} (\frac{1}{r}\partial_{r}\phi)\}\\ =& 0,\\ &\nabla\Delta\phi\cdot(A\times\nabla)(A\cdot\nabla)(B\cdot\nabla) \psi\\ =& x\frac{1}{r}\partial_{r}\Delta\phi\cdot(A\times\nabla)(A \cdot\nabla)(B\cdot x)\frac{1}{r}\partial_{r}\psi\\ =& x\frac{1}{r}\partial_{r}\Delta\phi\cdot(A\times\nabla) \{(A\cdot B)\frac{1}{r}\partial_{r}\psi+(A\cdot x)(B\cdot x)\frac{1}{r} \partial_{r}(\frac{1}{r}\partial_{r}\psi)\}\\ =&(A\times B)\cdot x(A\cdot x)\frac{1}{r}\partial_{r} \Delta\phi\cdot\{\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\psi)\},\\ \end{split} \tag{5.38}\]
\[\nabla\Delta\psi\cdot(A\times\nabla)(B\cdot\nabla)^{2}\psi \tag{5.43}\] \[= x\frac{1}{r}\partial_{r}\Delta\psi\cdot(A\times\nabla)(B\cdot \nabla)(B\cdot x)\frac{1}{r}\partial_{r}\psi\] \[= x\frac{1}{r}\partial_{r}\Delta\psi\cdot(A\times\nabla)\{(B\cdot B )\frac{1}{r}\partial_{r}\psi+(B\cdot x)^{2}\frac{1}{r}\partial_{r}(\frac{1}{r} \partial_{r}\psi)\}\] \[= 2(A\times B)\cdot x(B\cdot x)\frac{1}{r}\partial_{r}\Delta\psi \cdot\{\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\psi)\},\]
\[\nabla\Delta\psi\cdot(A\times\nabla)(A\cdot\nabla)(B\cdot\nabla)\phi\] \[= x\frac{1}{r}\partial_{r}\Delta\psi\cdot(A\times\nabla)(A\cdot \nabla)(B\cdot x)\frac{1}{r}\partial_{r}\phi\] \[= x\frac{1}{r}\partial_{r}\Delta\psi\cdot(A\times\nabla)\{(A\cdot B )\frac{1}{r}\partial_{r}\phi+(A\cdot x)(B\cdot x)\frac{1}{r}\partial_{r}( \frac{1}{r}\partial_{r}\phi)\}\] \[= (A\times B)\cdot x(A\cdot x)\frac{1}{r}\partial_{r}\Delta\psi \cdot\{\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\phi)\}.\]
Here
\[\Delta f=\frac{3}{r}\partial_{r}f+r\partial_{r}(\frac{1}{r}\partial_{r}f).\]
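For a radially symmetric function \(f\), this is simply the usual radial Laplacian in \(\mathbb{R}^{3}\), since
\[\frac{3}{r}\partial_{r}f+r\partial_{r}\left(\frac{1}{r}\partial_{r}f\right)=\frac{3}{r}\partial_{r}f+\partial_{rr}f-\frac{1}{r}\partial_{r}f=\partial_{rr}f+\frac{2}{r}\partial_{r}f=\frac{1}{r^{2}}\partial_{r}\left(r^{2}\partial_{r}f\right).\]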
Putting together (5.31) and (5.33)- (5.38), we derive
\[\Delta\{\phi_{t}-\nu\Delta\phi\}\] \[= \{A\cdot x\}\big{\{}\frac{2}{r}\partial_{r}\big{(}\frac{1}{r} \partial_{r}\phi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\phi+\frac{2}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \Delta\phi\big{)}\big{\}}\] \[+\{B\cdot x\}\big{\{}\frac{2}{r}\partial_{r}\big{(}\frac{1}{r} \partial_{r}\psi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\phi+\frac{2}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \Delta\phi\big{)}\big{\}}\] \[-2(A\cdot x)\frac{1}{r}\partial_{r}\Delta\phi\cdot\{\frac{1}{r} \partial_{r}(\frac{1}{r}\partial_{r}\phi)\}-(B\cdot x)\frac{1}{r}\partial_{r} \Delta\phi\cdot\{\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\psi)\}\] \[-(B\cdot x)\frac{1}{r}\partial_{r}\Delta\psi\cdot\{\frac{1}{r} \partial_{r}(\frac{1}{r}\partial_{r}\phi)\}\] \[= \{A\cdot x\}\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{ r}\big{(}\frac{1}{r}\partial_{r}\Delta\phi\big{)}-(B\cdot x)\frac{1}{r} \partial_{r}\Delta\psi\cdot\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\phi)\] \[+\{B\cdot x\}\big{\{}\frac{1}{r}\partial_{r}\big{(}\frac{1}{r} \partial_{r}\psi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\phi+\frac{2}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \Delta\phi\big{)}\big{\}}\] \[= \Big{\{}A\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \big{(}\frac{1}{r}\partial_{r}\Delta\phi\big{)}-B\frac{1}{r}\partial_{r} \Delta\psi\cdot\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\phi)\] \[+B\big{\{}\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \psi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\phi+\frac{2}{r}\partial_{r}\psi \cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r}\Delta\phi\big{)} \big{\}}\Big{\}}\cdot x. \tag{5.45}\]
Putting together (5.32) and (5.39)- (5.44), we derive
\[\Delta\{\psi_{t}-\nu\Delta\psi\}\] \[= \{A\cdot x\}\big{\{}\frac{2}{r}\partial_{r}\big{(}\frac{1}{r} \partial_{r}\phi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\psi+\frac{2}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \Delta\psi\big{)}\big{\}}\] \[+\{B\cdot x\}\big{\{}\frac{2}{r}\partial_{r}\big{(}\frac{1}{r} \partial_{r}\psi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\psi+\frac{2}{r} \partial_{r}\psi\cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \Delta\psi\big{)}\big{\}}\] \[-(A\cdot x)\frac{1}{r}\partial_{r}\Delta\phi\cdot\{\frac{1}{r} \partial_{r}(\frac{1}{r}\partial_{r}\psi)\}\] \[-2(B\cdot x)\frac{1}{r}\partial_{r}\Delta\psi\cdot\{\frac{1}{r} \partial_{r}(\frac{1}{r}\partial_{r}\psi)\}-(A\cdot x)\frac{1}{r}\partial_{r} \Delta\psi\cdot\{\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\phi)\}\] \[= \{A\cdot x\}\big{\{}\frac{1}{r}\partial_{r}\big{(}\frac{1}{r} \partial_{r}\phi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\psi+\frac{2}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \Delta\psi\big{)}\big{\}}\] \[-(A\cdot x)\frac{1}{r}\partial_{r}\Delta\phi\cdot\frac{1}{r} \partial_{r}(\frac{1}{r}\partial_{r}\psi)+\{B\cdot x\}\frac{2}{r}\partial_{r} \psi\cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r}\Delta\psi \big{)}\] \[= \Big{\{}A\big{\{}\frac{1}{r}\partial_{r}\big{(}\frac{1}{r} \partial_{r}\phi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\psi+\frac{2}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \Delta\psi\big{)}\big{\}}\] \[-A\frac{1}{r}\partial_{r}\Delta\phi\cdot\frac{1}{r}\partial_{r}( \frac{1}{r}\partial_{r}\psi)+B\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\big{(}\frac{1}{r}\partial_{r}\Delta\psi\big{)}\Big{\}}\cdot x. \tag{5.46}\]
Proof of Theorem 1.9.: Let us select the orthogonal transformation \(\rho\) as follows
\[y=\rho x=x\left(\begin{array}{ccc}0&0&1\\ 1&0&0\\ 0&1&0\end{array}\right). \tag{5.47}\]
Then \(r^{2}=y\cdot y=\rho x\cdot\rho x=x\cdot x\).
Applying the orthogonal transformation (5.47) to the equations (5.45) and (5.46), we obtain
\[\Delta\{\phi_{t}-\nu\Delta\phi\}\] \[= \Big{\{}A\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \big{(}\frac{1}{r}\partial_{r}\Delta\phi\big{)}-B\frac{1}{r}\partial_{r} \Delta\psi\cdot\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\phi)\] \[+B\big{\{}\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \psi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\phi+\frac{2}{r}\partial_{r} \psi\cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r}\Delta\phi\big{)} \big{\}}\Big{\}}\cdot y,\] \[\Delta\{\psi_{t}-\nu\Delta\psi\} \tag{5.49}\] \[= \Big{\{}A\big{\{}\frac{1}{r}\partial_{r}\big{(}\frac{1}{r} \partial_{r}\phi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\psi+\frac{2}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \Delta\psi\big{)}\big{\}}\] \[-A\frac{1}{r}\partial_{r}\Delta\phi\cdot\frac{1}{r}\partial_{r}( \frac{1}{r}\partial_{r}\psi)+B\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\big{(}\frac{1}{r}\partial_{r}\Delta\psi\big{)}\Big{\}}\cdot y. \tag{5.48}\]
Employing the equations (5.45) and (5.48), we get
\[0= \Big{\{}A\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r} \big{(}\frac{1}{r}\partial_{r}\Delta\phi\big{)}-B\frac{1}{r}\partial_{r} \Delta\psi\cdot\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\phi)\] \[+B\big{\{}\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \psi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\phi+\frac{2}{r}\partial_{r}\psi \cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r}\Delta\phi\big{)} \big{\}}\Big{\}}\cdot(\rho x-x). \tag{5.50}\]
Similarly, using the equations (5.46) and (5.49), we derive
\[0= \Big{\{}A\big{\{}\frac{1}{r}\partial_{r}\big{(}\frac{1}{r} \partial_{r}\phi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\psi+\frac{2}{r} \partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \Delta\psi\big{)}\big{\}}\] \[-A\frac{1}{r}\partial_{r}\Delta\phi\cdot\frac{1}{r}\partial_{r}( \frac{1}{r}\partial_{r}\psi)+B\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\big{(}\frac{1}{r}\partial_{r}\Delta\psi\big{)}\Big{\}}\cdot(\rho x- x). \tag{5.51}\]
For a given \(r\), since \(x\in\mathbb{S}_{r}^{2}\) is arbitrary, the equations (5.50) and (5.51) imply that
\[A\frac{2}{r}\partial_{r}\phi\cdot\frac{1}{r}\partial_{r}\big{(} \frac{1}{r}\partial_{r}\Delta\phi\big{)}-B\frac{1}{r}\partial_{r}\Delta\psi \cdot\frac{1}{r}\partial_{r}(\frac{1}{r}\partial_{r}\phi)\] \[+B\big{\{}\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \psi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\phi+\frac{2}{r}\partial_{r}\psi \cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r}\Delta\phi\big{)} \big{\}}=0, \tag{5.53}\] \[A\big{\{}\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r} \phi\big{)}\cdot\frac{1}{r}\partial_{r}\Delta\psi+\frac{2}{r}\partial_{r}\phi \cdot\frac{1}{r}\partial_{r}\big{(}\frac{1}{r}\partial_{r}\Delta\psi\big{)} \big{\}}\] \[-A\frac{1}{r}\partial_{r}\Delta\phi\cdot\frac{1}{r}\partial_{r} (\frac{1}{r}\partial_{r}\psi)+B\frac{2}{r}\partial_{r}\psi\cdot\frac{1}{r} \partial_{r}\big{(}\frac{1}{r}\partial_{r}\Delta\psi\big{)}=0. \tag{5.52}\]
Putting (5.52) into (5.45) and putting (5.53) into (5.46), we have
\[\Delta\{\phi_{t}-\nu\Delta\phi\}=0, \tag{5.54}\]
\[\Delta\{\psi_{t}-\nu\Delta\psi\}=0. \tag{5.55}\]
Since the vectors \(A\) and \(B\) are linearly independent, the equation (5.52) implies that
\[\partial_{r}\phi\cdot\partial_{r}\big{(}\frac{1}{r}\partial_{r} \Delta\phi\big{)}=0, \tag{5.56}\]
\[\partial_{r}\big{(}\frac{1}{r}\partial_{r}\psi\big{)}\cdot \partial_{r}\Delta\phi+2\partial_{r}\psi\cdot\partial_{r}\big{(}\frac{1}{r} \partial_{r}\Delta\phi\big{)}\] \[= \partial_{r}\Delta\psi\cdot\partial_{r}(\frac{1}{r}\partial_{r} \phi). \tag{5.57}\]
And the equation (5.53) implies that
\[\partial_{r}\big{(}\frac{1}{r}\partial_{r}\phi\big{)}\cdot \partial_{r}\Delta\psi+2\partial_{r}\phi\cdot\partial_{r}\big{(}\frac{1}{r} \partial_{r}\Delta\psi\big{)}\] \[= \partial_{r}\Delta\phi\cdot\partial_{r}(\frac{1}{r}\partial_{r} \psi), \tag{5.58}\]
\[\partial_{r}\psi\cdot\partial_{r}\big{(}\frac{1}{r}\partial_{r} \Delta\psi\big{)}=0. \tag{5.59}\]
If \(\partial_{r}\phi=\partial_{r}\psi=0\), all of the equations (5.54), (5.55), (5.56), (5.57), (5.58) and (5.59) are satisfied. In this case the velocity vector \(u=0\), which is the trivial solution.
Now assume that at least one of \(\partial_{r}\phi\) and \(\partial_{r}\psi\) is not zero. Then at least one of the following equations
\[\partial_{r}\big{(}\frac{1}{r}\partial_{r}\Delta\phi\big{)}=0 \tag{5.60}\]
and
\[\partial_{r}\big{(}\frac{1}{r}\partial_{r}\Delta\psi\big{)}=0 \tag{5.61}\]
is satisfied.
The equations (5.60) and (5.54) imply that
\[\phi=f_{4}r^{4}+(12f_{4}\nu t+f_{2})r^{2}+f_{0}(t), \tag{5.62}\]
where \(f_{2}\) and \(f_{4}\) are arbitrary constants and \(f_{0}(t)\) is an arbitrary function of \(t\).
Similarly the equations (5.61) and (5.55) imply that
\[\psi=g_{4}r^{4}+(12g_{4}\nu t+g_{2})r^{2}+g_{0}(t), \tag{5.63}\]
where \(g_{2}\) and \(g_{4}\) are arbitrary constants and \(g_{0}(t)\) is an arbitrary function of \(t\).
The equations (5.57) and (5.58) imply
\[f_{2}g_{4}=f_{4}g_{2}. \tag{5.64}\]
It is obvious that
\[\int_{\mathbb{R}^{3}}|(A\times\nabla)\times\nabla\phi|^{2}dx=\infty,\]
\[\int_{\mathbb{R}^{3}}|(B\times\nabla)\times\nabla\psi|^{2}dx=\infty,\]
\[\int_{\mathbb{R}^{3}}|u|^{2}dx=\int_{\mathbb{R}^{3}}|(A\times\nabla)\times \nabla\phi+(B\times\nabla)\times\nabla\psi|^{2}dx=\infty,\]
where \(\phi\) and \(\psi\) are defined by (5.62) (5.63) respectively.
In summary, the equations (5.45) and (5.46) are satisfied only by \((\phi,\psi)\) defined in (5.62)-(5.64).
On the other hand, if at least one of (5.62), (5.63) and (5.64) is not satisfied, then the equations (5.45) and (5.46) cannot be satisfied by any radially symmetric functions \(\phi\) and \(\psi\).
In summary, Theorem 1.9 is proved.
## Acknowledgments
This work is supported by the National Natural Science Foundation of China, Grants No. 11971068 and No. 11971077.
|
2306.05363 | Subject clustering by IF-PCA and several recent methods | Subject clustering (i.e., the use of measured features to cluster subjects,
such as patients or cells, into multiple groups) is a problem of great
interest. In recent years, many approaches were proposed, among which
unsupervised deep learning (UDL) has received a great deal of attention. Two
interesting questions are (a) how to combine the strengths of UDL and other
approaches, and (b) how these approaches compare to one other.
We combine Variational Auto-Encoder (VAE), a popular UDL approach, with the
recent idea of Influential Feature PCA (IF-PCA), and propose IF-VAE as a new
method for subject clustering. We study IF-VAE and compare it with several
other methods (including IF-PCA, VAE, Seurat, and SC3) on $10$ gene microarray
data sets and $8$ single-cell RNA-seq data sets. We find that IF-VAE
significantly improves over VAE, but still underperforms IF-PCA. We also find
that IF-PCA is quite competitive, which slightly outperforms Seurat and SC3
over the $8$ single-cell data sets. IF-PCA is conceptually simple and permits
delicate analysis. We demonstrate that IF-PCA is capable of achieving the phase
transition in a Rare/Weak model. Comparatively, Seurat and SC3 are more complex
and theoretically difficult to analyze (for these reasons, their optimality
remains unclear). | Dieyi Chen, Jiashun Jin, Zheng Tracy Ke | 2023-06-08T17:07:24Z | http://arxiv.org/abs/2306.05363v1 | # Subject clustering by IF-PCA and several recent methods
###### Abstract
Subject clustering (i.e., the use of measured features to cluster subjects, such as patients or cells, into multiple groups) is a problem of great interest. In recent years, many approaches have been proposed, among which unsupervised deep learning (UDL) has received a great deal of attention. Two interesting questions are (a) how to combine the strengths of UDL and other approaches, and (b) how these approaches compare to one another.
We combine Variational Auto-Encoder (VAE), a popular UDL approach, with the recent idea of Influential Feature PCA (IF-PCA), and propose IF-VAE as a new method for subject clustering. We study IF-VAE and compare it with several other methods (including IF-PCA, VAE, Seurat, and SC3) on 10 gene microarray data sets and 8 single-cell RNA-seq data sets. We find that IF-VAE significantly improves over VAE, but still underperforms IF-PCA. We also find that IF-PCA is quite competitive, slightly outperforming Seurat and SC3 over the 8 single-cell data sets. IF-PCA is conceptually simple and permits delicate analysis. We demonstrate that IF-PCA is capable of achieving the phase transition in a Rare/Weak model. Comparatively, Seurat and SC3 are more complex and theoretically difficult to analyze (for these reasons, their optimality remains unclear).
## 1 Introduction
We are interested in the problem of _high-dimensional clustering_ or _subject clustering_. Suppose we have a group of \(n\) subjects (e.g., patients or cells) measured on the same set of \(p\) features (e.g., genes). The subjects come from \(K\) different classes or groups (e.g., normal group and diseased group), but unfortunately, the class labels are unknown. In such a case, we say the data are _unlabeled_. For \(1\leq i\leq n\), denote the class label of subject \(i\) by \(Y_{i}\) and denote the \(p\)-dimensional measured feature vector of subject \(i\) by \(X_{i}\). Note that \(Y_{i}\) takes values in \(\{1,2,\ldots,K\}\). The class labels are unknown and the goal is to predict them using the measured features \(X_{1},X_{2},\ldots,X_{n}\).
High-dimensional clustering is an unsupervised learning problem. It is especially interesting in the _Big Data era_: although the volume of available scientific data grows rapidly, a significant fraction of them are unlabeled. In some cases, it is simply hard to label each individual sample (e.g., action unit recognition [47]). In some other cases, labeling each individual sample is not hard, but due to the large sample size, it takes a huge amount of time and effort to label the whole data set (e.g., ImageNet [7]). In other instances (e.g., cancer diagnosis), we may have a preliminary opinion on how to label the data, but we are unsure of the labels' accuracy, so we would like a second, preferably independent, opinion. In all these cases, we seek an effective and user-friendly clustering method.
In recent years, the area of high-dimensional clustering has witnessed exciting advancements in several directions. First, many new types of data sets (e.g., single-cell data) have emerged and become increasingly more accessible. Second, remarkable successes have been made on nonlinear modeling for high-dimensional data, and several Unsupervised Deep Learning (UDL) approaches
have been proposed [13], including but not limited to Variational Auto-Encoder (VAE) and Generative Adversarial Network (GAN). Last but not least, several clustering methods for single-cell data (e.g., Seurat [39] and SC3 [29]) have been proposed and become popular.
In this paper, we are primarily interested in Influential-Feature Principal Component Analysis (IF-PCA), a clustering algorithm proposed by [25]. As in many recent works in high-dimensional data analysis (e.g., [2], [37]), we assume
* \(p\gg n\gg 1\)
* out of all \(p\) measured features, only a small fraction of them are relevant to clustering decision.
IF-PCA is easy to use and has no tuning parameters. It is conceptually simple and, at a high level, contains two steps as follows.
* _IF-step._ A feature selection step that selects a small fraction of measured features which we believe to be influential or significant to the clustering decision.
* _Clustering step._ A clustering step in which PCA (as a spectral clustering approach) is applied to all retained features.
Instead of viewing IF-PCA as a specific clustering algorithm, we can view it as a _generic two-step clustering approach_: for each of the two steps, we can choose methods that may vary from occasion to occasion in order to best suit the nature of the data. We anticipate that IF-PCA will adapt and develop over time as new data sets and tasks emerge.
[25] compared IF-PCA to a number of clustering algorithms (including the classical kmeans [35], kmeans++ [3], SpectralGem [31], hierarchical clustering [20] and sparse PCA [52]) using 10 microarray data sets. They found that IF-PCA was competitive in clustering accuracy. Later, [24] developed a theoretical framework for clustering and showed that IF-PCA is optimal in the Rare/Weak signal model, a frequently used model in high-dimensional data analysis ([9], [10]).
These appealing properties of IF-PCA motivate a revisit of this method. Specifically, we are interested in the questions listed below.
* There are many recent clustering algorithms specifically designed for single-cell data, such as Seurat [39], SC3 [29], RaceID [16], ACTIONet [36], Monocle3 [42], and SINCERA [17]. Also, many UDL algorithms have been proposed and become well-known in recent years. An interesting question is how IF-PCA compares with these popular algorithms.
* [25] only examined IF-PCA on gene microarray data. The single-cell RNA-seq data are similar to gene microarray data in some aspects but also have some distinctive characteristics (e.g., single-cell RNA-sequencing provides an unbiased view of all transcripts and is therefore reliable for accurately measuring gene expression level changes [51]). How IF-PCA compares to other popular methods for subject clustering with single-cell data is an intriguing question.
* The PCA employed in the clustering step of IF-PCA is a linear method. Although we believe that the associations between class labels and measured features may be nonlinear, the significance of the nonlinear effects is unclear. To investigate this, we may consider a variant of IF-PCA in which PCA is replaced by some non-linear UDL methods in the clustering step. An interesting question is how this variant compares to IF-PCA and standard UDL methods (which have no IF-step). It helps us understand how significant the nonlinear effects are.
To answer these questions, first, we propose a new approach, IF-VAE, which combines the main idea of IF-PCA with the Variational Auto-Encoder (VAE) [28] (one of the most popular Unsupervised Deep Learning approaches in recent literatures).
Second, we compare IF-VAE with several methods including VAE, IF-PCA, SpectralGem [31], and classical kmeans, using the 10 microarray data sets in [25]. We find that
* Somewhat surprisingly, VAE underperforms most other methods, including the classical kmeans.
* IF-VAE, which combines VAE with the IF-step of IF-PCA, significantly outperforms VAE.
* The performance of IF-PCA and IF-VAE is comparable for approximately half of the data sets, whereas IF-VAE significantly underperforms IF-PCA for the remaining half of the data sets.
These results suggest the following:
* (a). The idea of combining the IF step in the IF-PCA with VAE is valuable.
* (b). Deep neural network methods do not appear to have a clear advantage for this type of data sets.
For (b), one possible reason is that the associations between class labels and measured features are not highly nonlinear. Another possible reason is that existing deep neural network approaches need further improvements in order to perform satisfactorily on these data sets. Since IF-PCA and IF-VAE use the same IF-step, the unsatisfactory performance of IF-VAE is largely attributable to the VAE-step and not the IF-step. To see this, we note that SpectralGem is essentially the classical PCA clustering method (see Section 2.2). VAE does not appear to show an advantage over SpectralGem, explaining why IF-VAE cannot outperform IF-PCA.
Last, we compare IF-VAE with IF-PCA, Seurat and SC3 on 8 single-cell RNA-seq data sets. We observe that
* IF-VAE continues to underperform other methods on the 8 single-cell data sets, but, similar to the above, the unsatisfactory performance is largely attributable to the VAE step and not the IF-step.
* IF-PCA outperforms SC3 slightly and outperforms Seurat more significantly.
At the same time, we note that
* Seurat has four tuning parameters and is the method that has the shortest execution time.
* The idea of SC3 is quite similar to that of IF-PCA, except that SC3 has a "consensus voting" step that aggregates the strengths of many clustering results. With consensus voting, SC3 may empirically perform more satisfactorily, but it is also more complex internally. Regarding the computational cost, it runs much slower than IF-PCA due to the consensus voting step.
Moreover, IF-PCA is conceptually simple and permits fine-grained analysis. In Section 4, we develop a theoretical framework and show that IF-PCA achieves the optimal phase transition in a Rare/Weak signal setting. In particular, we show that in the region of interest (where successful subject clustering is possible),
* if the signals are less sparse, signals may be individually weak. In this case, PCA is optimal (and IF-PCA reduces to PCA if we choose the IF-step properly).
* if the signals are more sparse, the signals need to be relatively strong (so successful clustering is possible). In this case, feature selection is necessary, and IF-PCA is optimal. However, PCA may be non-optimal for it does not use a feature selection step.
In comparison, other popular methods are difficult to analyze theoretically, hence, their optimality is unclear. We note that hard-to-analyze methods will also be hard to improve in the future.
In conclusion, IF-PCA is quite competitive compared to the recently popular subject clustering methods, both for gene microarray data and single-cell data. It is worthwhile to study IF-PCA both theoretically and in (a variety of) applications. IF-VAE is a significant improvement over VAE, but it is still inferior to other prevalent methods in this area (the underperformance is largely due to the VAE step, not the IF-step). It is desirable to further improve IF-VAE (especially the VAE step) to make it more competitive.
## 2 Models and methods
As before, suppose we have measurements on the same set of \(p\) features for \(n\) samples. Denote the data matrix by \(X\in\mathbb{R}^{n,p}\), and write
\[X=[X_{1},X_{2},\ldots,X_{n}]^{\prime}=[x_{1},x_{2},\ldots,x_{p}], \tag{2.1}\]
where \(X_{i}\in\mathbb{R}^{p}\) denotes the measured feature vector for sample \(i\), \(1\leq i\leq n\). From time to time, we may want to normalize the data matrix before we implement any approaches. For \(1\leq j\leq p\), let \(\hat{X}(j)\) and \(\hat{\sigma}(j)\) be the empirical mean and standard deviation associated with feature \(j\) (column \(j\) of \(X\)), respectively. We normalize each column of \(X\) and denote the resultant matrix by \(W\), where
\[W=[w_{1},w_{2},\ldots,w_{p}]=[W_{1},W_{2},\ldots W_{n}]^{\prime}\in\mathbb{R} ^{n,p},\ \ \text{and}\ \ \ W_{i}(j)=[X_{i}(j)-\hat{X}(j)]/\hat{\sigma}(j). \tag{2.2}\]
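As a small illustration, the column-wise normalization in (2.2) can be implemented in a few lines; the sketch below (in Python/NumPy; the function name and the treatment of constant columns are our own choices) is all that is needed.

```python
import numpy as np

def normalize_columns(X):
    """Column-wise normalization as in (2.2): subtract each feature's
    empirical mean and divide by its empirical standard deviation."""
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)        # \hat{X}(j), one value per feature
    sd = X.std(axis=0)           # \hat{\sigma}(j); the choice of ddof is a modeling detail
    sd[sd == 0] = 1.0            # guard against constant features
    return (X - mean) / sd

# toy usage: n = 5 subjects, p = 3 features
rng = np.random.default_rng(0)
W = normalize_columns(rng.normal(size=(5, 3)))
print(W.mean(axis=0).round(6), W.std(axis=0).round(6))
```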
Below in Section 2.1, we introduce two models for \(X\); then in Sections 2.2-2.6, we describe the clustering methods considered in this paper, some of which (e.g., IF-VAE, IF-VAE(X), IF-PCA(X)) are new.
### Two models
A reasonable model is as follows. We encode the class label \(Y_{i}\) as a \(K\)-dimensional vector \(\pi_{i}\), where \(\pi_{i}=e_{k}\) if and only if sample \(i\) belongs to class \(k\), and \(e_{k}\) is the \(k\)-th standard Euclidean basis vector of \(\mathbb{R}^{K}\), \(1\leq k\leq K\). Let \(M=[\mu_{1},\mu_{2},\ldots,\mu_{K}]\) where \(\mu_{k}\in\mathbb{R}^{p}\) is the mean vector for class \(k\). We assume
\[\mathbb{E}[X_{i}]=\mu_{k}\text{ if and only if subject $i$ belongs to class $k$},\qquad\text{or equivalently }\mathbb{E}[X_{i}]=M\pi_{i}. \tag{2.3}\]
Let \(\Pi=[\pi_{1},\pi_{2},\ldots,\pi_{n}]^{\prime}\) be the matrix of encoded class labels. We can rewrite (2.3) as
\[X=\mathbb{E}[X]+(X-\mathbb{E}[X])=\text{``signal matrix''}+\text{`` noise matrix''},\qquad\mathbb{E}[X]=\Pi M^{\prime}. \tag{2.4}\]
Also, it is reasonable to assume that out of many measured features, only a small fraction of them are useful in the clustering decision. Therefore, letting \(\bar{\mu}=(1/K)\sum_{k=1}^{K}\mu_{k}\), we assume
\[\mu_{1},\mu_{2},\ldots,\mu_{K}\text{ are linearly independent and }\mu_{k}-\bar{\mu}\text{ is sparse for each }1\leq k\leq K. \tag{2.5}\]
It follows that the \(n\times p\) signal matrix \(\mathbb{E}[X]\) has rank \(K\).
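To make Model (2.3)-(2.5) concrete, the following sketch simulates data from it; the specific sparsity level, signal strength, and noise distribution below are illustrative assumptions only and are not taken from any of the cited papers.

```python
import numpy as np

def simulate_model(n=200, p=1000, K=2, s=20, signal=1.0, seed=0):
    """Draw X = Pi M' + noise as in (2.3)-(2.4), with sparse mean vectors so that (2.5) holds."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, K, size=n)            # true (unobserved) class labels Y_i
    M = np.zeros((p, K))                           # columns are the class means mu_1, ..., mu_K
    useful = rng.choice(p, size=s, replace=False)  # the few features relevant to clustering
    M[useful, :] = signal * rng.normal(size=(s, K))
    X = M[:, labels].T + rng.normal(size=(n, p))   # E[X_i] = mu_{Y_i}, plus N(0,1) noise
    return X, labels, useful

X, labels, useful = simulate_model()
print(X.shape, np.bincount(labels), useful[:5])
```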
Recall that \(W\) is the normalized data matrix. Similar to (2.4), we may decompose \(W\) as the sum of a signal matrix and a noise matrix. But due to the normalization, the rank of the signal matrix is reduced to \((K-1)\).
In Model (2.3)-(2.5), \(\mathbb{E}[X_{i}]=M\pi_{i}\), which is a linear function of the encoded class label vectors \(\pi_{i}\). For this reason, we may view Model (2.3)-(2.5) as a linear model. In many modern applications, linear models may be inadequate, and we may prefer to use a nonlinear model.
The recent idea of neural network modeling provides a wide class of nonlinear models, which may be useful for our setting. As an alternative to Model (2.3)-(2.5), we may consider a neural network model as follows. In this model, we assume
\[Y_{i}=f(X_{i},\theta),\qquad i=1,2,\ldots,n, \tag{2.6}\]
where \(f(x,\theta)\) belongs to a class of nonlinear functions. For example, we may assume \(f(x,\theta)\) belongs to the class of functions (without loss of generality, \(x\) always includes a constant feature):
\[\big{\{}f(x,\theta):f(x,\theta)=A_{L}(s_{L}(A_{L-1}\ldots s_{2}(A_{2}s_{1}(A_ {1}x)))|\theta=\{A_{1},A_{2},\ldots,A_{L}\}\big{\}},\]
where \(A_{1},A_{2},\ldots,A_{L}\) are matrices of certain sizes and \(s_{1},s_{2},\ldots,s_{L}\) are some non-linear functions. Similar to Model (2.3)-(2.5), we can impose some sparsity conditions on Model (2.6). See [13] for example.
### The PCA clustering approach and the SpectralGem
Principal Component Analysis (PCA) is a classical spectral clustering approach, which is especially appropriate for linear models like that in (2.3)-(2.5) when the relevant features are non-sparse (see below for discussions on the case when the relevant features are sparse). The PCA clustering approach contains two simple steps as follows (a short code sketch is given after the two steps). Input: data matrix \(X\) and number of clusters \(K\). Output: predicted class label vector \(\hat{Y}=(\hat{Y}_{1},\hat{Y}_{2},\ldots,\hat{Y}_{n})^{\prime}\).
* Obtain the \(n\times K\) matrix \(\widehat{H}=[\hat{\eta}_{1},\ldots,\hat{\eta}_{K}]\), where \(\hat{\eta}_{k}\) is the \(k\)-th left singular vector of \(X\) (associated with the \(k\)-th largest singular value of \(X\)).
* Cluster the \(n\) rows of \(\widehat{H}\) to \(K\) groups by applying the classical kmeans assuming there are \(\leq K\) classes. Let \(\hat{Y}_{i}\) be the estimated class label of subject \(i\). Output \(\hat{Y}_{1},\ldots,\hat{Y}_{n}\).
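A minimal sketch of these two steps is given below (ours, using NumPy and scikit-learn; it is not tied to any particular published implementation). Applying it to the normalized matrix \(W\) with the first \(K-1\) singular vectors, as described next, requires only an obvious modification.

```python
import numpy as np
from sklearn.cluster import KMeans

def pca_cluster(X, K, seed=0):
    """PCA clustering: k-means on the rows of the first K left singular vectors of X."""
    U, _, _ = np.linalg.svd(np.asarray(X, dtype=float), full_matrices=False)
    H = U[:, :K]                                   # n-by-K matrix of left singular vectors
    return KMeans(n_clusters=K, n_init=10, random_state=seed).fit_predict(H)

# toy usage, reusing simulate_model from the sketch in Section 2.1
# X, labels, _ = simulate_model(); print(pca_cluster(X, K=2)[:10])
```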
From time to time, we may choose to apply the PCA clustering approach to the normalized data matrix \(W\). As explained before, we can similarly write \(W\) as the sum of a "signal" matrix and a "noise" matrix as in (2.5), but due the normalization, the rank of the "signal" matrix under Model (2.3) is reduced from \(K\) to \((K-1)\). In such a case, we replace the \(n\times K\) matrix \(\widehat{H}\) by the \(n\times(K-1)\) matrix
\[\widehat{\Xi}=[\hat{\xi}_{1},\hat{\xi}_{2},\ldots,\hat{\xi}_{K-1}],\]
where similarly \(\hat{\xi}_{k}\) is the \(k\)-th left singular vector of \(W\).
The PCA clustering approach has many modern variants, including but not limited to the SpectralGem [31] and SCORE [22, 26]. In this paper, we consider SpectralGem but skip the discussion on SCORE (SCORE was motivated by unsupervised learning in network and text data and shown to be effective on those types of data; it is unclear if SCORE is also effective for genetic and genomic data). Instead of applying PCA clustering to the data matrix \(X\) (or \(W\)) directly, SpectralGem constructs an \(n\times n\) symmetric matrix \(M\), where \(M(i,j)\) can be viewed as a similarity metric between subject \(i\) and subject \(j\). The remaining part of the algorithm has many small steps, but the essence is to apply the PCA clustering approach to the Laplacian normalized graph induced by \(M\).
The PCA spectral clustering approach is based on two important assumptions.
* The signal matrix \(\mathbb{E}[X]\) is a linear function of class labels.
* It is hard to exploit sparsity in the data: either the data are non-sparse (such as the classical setting of \(p\ll n\)) or how to exploit sparsity is unclear.
In many modern settings, these assumptions are not satisfied: the relationship between the signal matrix \(\mathbb{E}[X]\) and the class labels may be nonlinear, and it is highly desirable to exploit sparsity by adding a feature selection step before conducting PCA clustering. In such cases, we need an alternative approach. Below, we address the non-linearity by VAE and the feature selection by IF-PCA, respectively.
### The Variational AutoEncoder (VAE) and VAE(X) clustering approaches
Given an \(n\times p\) data matrix \(X\) and an integer \(d\leq\text{rank}(X)\), the essence of the PCA spectral clustering approach is to obtain a rank-\(d\) approximation of \(X\) using the Singular Value Decomposition (SVD),
\[\widehat{X}=\sum_{k=1}^{d}\sigma_{k}u_{k}v_{k}^{\prime}.\]
Here \(\sigma_{k}\) is the \(k\)-th largest singular value of \(X\), and \(u_{k}\) and \(v_{k}\) are the corresponding left and right singular vectors of \(X\), respectively. Variational AutoEncoder (VAE) can be viewed as an extension of SVD, which obtains a rank-\(d\) approximation of \(X\) by training a neural network. The classical SVD is a linear method, but the neural network approach can be highly nonlinear.
VAE was first introduced by [28] and has been successfully applied to many application areas (e.g., image processing [38], computer vision [14], and text mining [40]). VAE consists of an encoder, a decoder, and a loss function. Given a data matrix \(X\in\mathbb{R}^{n,p}\), the encoder embeds \(X\) into a matrix \(\widehat{Z}\in\mathbb{R}^{n,d}\) (usually \(d\ll p\)), and the decoder maps \(\widehat{Z}\) back to the original data space and outputs a matrix \(\widehat{X}\in\mathbb{R}^{n,p}\), which can be viewed as a rank-\(d\) approximation of \(X\). Different from classical SVD, \(\widehat{X}\) is obtained in a nonlinear fashion by minimizing an objective that measures the information loss between \(X\) and \(\widehat{X}\).
A popular way to use VAE for subject clustering is as follows [46] (a code sketch follows the two steps below). Input: normalized data matrix \(W=[w_{1},w_{2},\ldots,w_{p}]=[W_{1},W_{2},\ldots,W_{n}]^{\prime}\), number of classes \(K\), dimension of the latent space \(d\) (typically much smaller than \(\min\{n,p\}\)). Output: predicted class label vector \(\hat{Y}=(\hat{Y}_{1},\hat{Y}_{2},\ldots,\hat{Y}_{n})\).
* (_Dimension reduction by VAE_). Train VAE and use the trained encoder to get an \(n\times d\) matrix \(\widehat{Z}\).
* (_Clustering_). Cluster all \(n\) subjects into \(K\) classes by applying k-means to the rows of \(\widehat{Z}\). Let \(\hat{Y}\) be the predicted label vector.
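The sketch below illustrates these two steps in Python with PyTorch and scikit-learn. It is our own minimal illustration, not the implementation used in [46] or in our experiments; the network widths, learning rate, number of epochs, and Gaussian reconstruction loss are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class VAE(nn.Module):
    """A small fully-connected VAE: encoder -> (mu, logvar), reparameterize, decode."""
    def __init__(self, p, d, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(p, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, d)
        self.logvar = nn.Linear(hidden, d)
        self.dec = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, p))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(z), mu, logvar

def vae_cluster(W, K, d=10, epochs=200, lr=1e-3, seed=0):
    """Step (a): train the VAE and take the encoder means as the embedding Z-hat.
       Step (b): run k-means on the rows of Z-hat."""
    torch.manual_seed(seed)
    Wt = torch.tensor(np.asarray(W), dtype=torch.float32)
    model = VAE(Wt.shape[1], d)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        recon, mu, logvar = model(Wt)
        recon_loss = ((recon - Wt) ** 2).sum(dim=1).mean()         # Gaussian reconstruction loss
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        (recon_loss + kl).backward()
        opt.step()
    with torch.no_grad():
        Z = model(Wt)[1].numpy()                                   # n-by-d matrix Z-hat
    return KMeans(n_clusters=K, n_init=10, random_state=seed).fit_predict(Z)

# toy usage on random data
print(vae_cluster(np.random.default_rng(0).normal(size=(60, 50)), K=2, epochs=50)[:10])
```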
Except for using a nonlinear approach to dimension reduction, VAE is similar to the PCA approach in clustering. We can apply VAE either to the normalized data matrix \(W\) or to the unnormalized data matrix \(X\); we call the two versions VAE(W) and VAE(X), respectively. Since there is no need to keep both suffixes, we write VAE(W) simply as VAE for short (and, to avoid confusion, we keep the notation VAE(X)).
### The orthodox IF-PCA and its variant IF-PCA(X)
For many genomic and genetic data, Model (2.3)-(2.5) is already a reasonable model. We recall that under this model the normalized data matrix can be approximately written as
\[W=Q+(W-Q)=\text{``signal matrix'' + ``noise matrix'',}\]
where approximately,
\[Q=\Pi[\mu_{1}-\bar{\mu},\mu_{2}-\bar{\mu},\ldots,\mu_{K}-\bar{\mu}]^{\prime}\in \mathbb{R}^{n,p},\]
and is sparse (in the sense that only a small fraction of the columns of \(Q\) have a large \(\ell^{2}\)-norm; the \(\ell^{2}\)-norms of the other columns are small or \(0\)). In such a setting, it is appropriate to conduct feature selection, which removes a large amount of noise while keeping most nonzero columns of \(Q\).
Such observations motivate the (orthodox) IF-PCA. The IF-PCA was first proposed in [25] and shown to have appealing clustering results on 10 gene microarray data sets. In [24], it was shown that IF-PCA is optimal in high-dimensional clustering. IF-PCA contains an IF step and a PCA step, and the IF-step contains two important components which we now introduce.
The first component of the IF-step is the use of the Kolmogorov-Smirnov (KS) test for feature selection. Suppose we have \(n\) (univariate) samples \(z_{1},z_{2},\ldots,z_{n}\) from a cumulative distribution function (CDF) denoted by \(F\). Introduce the empirical CDF by
\[F_{n}(t)=(1/n)\sum_{i=1}^{n}1\{z_{i}\leq t\}. \tag{2.7}\]
Let \(z=(z_{1},z_{2},\ldots,z_{n})\). The KS testing score is then
\[\phi_{n}(z)=\sqrt{n}\sup_{t}\{\|F_{n}(t)-F(t)\|\}. \tag{2.8}\]
In the IF-PCA below, we take \(F\) to be the theoretical CDF of \((z_{i}-\bar{z})/\hat{\sigma}\), where \(z_{i}\stackrel{iid}{\sim}N(0,1)\), \(1\leq i\leq n\), and \(\bar{z}\) and \(\hat{\sigma}\) are the empirical mean and standard deviation of \(z_{1},z_{2},\ldots,z_{n}\), respectively.
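A direct implementation of the KS score in (2.7)-(2.8), with this choice of \(F\) approximated by the standard normal CDF, could look as follows; this is our own sketch, and the actual IF-PCA software may differ in details (e.g., how ties and the finite-sample null CDF are handled).

```python
import numpy as np
from scipy.stats import norm

def ks_score(z):
    """KS statistic (2.8) of the standardized sample against the N(0,1) CDF."""
    z = np.asarray(z, dtype=float)
    n = len(z)
    u = np.sort((z - z.mean()) / z.std())     # standardize as in the text
    F = norm.cdf(u)                           # theoretical CDF at the sorted points
    grid = np.arange(1, n + 1) / n            # empirical CDF F_n just after each point
    # sup_t |F_n(t) - F(t)| is attained just before or at a data point
    return np.sqrt(n) * np.max(np.maximum(grid - F, F - (grid - 1.0 / n)))

print(ks_score(np.random.default_rng(0).normal(size=100)))
```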
The second component of the IF-step is the _Higher Criticism Threshold (HCT)_. Higher Criticism was initially introduced by [9] (see also [10, 18, 21, 44]) as a method for global testing. It has recently been applied to genetic data (e.g., [4]). HCT adapts Higher Criticism to a data-driven threshold choice [25]. It takes as input \(p\) marginal \(p\)-values, one for each feature, and outputs a threshold for feature selection. Suppose we have \(p\)-values \(\pi_{1},\pi_{2},\ldots,\pi_{p}\).
\[\pi_{(1)}<\pi_{(2)}<\ldots<\pi_{(p)}.\]
Define the feature-wise HC score by \(HC_{p,j}=\sqrt{p}(j/p-\pi_{(j)})/\sqrt{\max\{\sqrt{n}(j/p-\pi_{(j)}),0\}+j/p}\). The HCT is then
\[\hat{t}_{HC}=\pi_{(\hat{j})},\qquad\mbox{where }\hat{j}=\mbox{argmax}_{\{j:\pi_{(j)} >\log p/p,\,j<p/2\}}\{HC_{p,j}\}. \tag{2.9}\]
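The threshold in (2.9) can be computed directly from the sorted \(p\)-values, as in the following sketch (ours; note that the HC score also involves the sample size \(n\) through the \(\sqrt{n}\) term).

```python
import numpy as np

def hc_threshold(pvals, n):
    """Higher Criticism threshold (2.9) from p marginal p-values (n = sample size)."""
    p = len(pvals)
    pi = np.sort(np.asarray(pvals, dtype=float))          # pi_(1) <= ... <= pi_(p)
    j = np.arange(1, p + 1)
    hc = np.sqrt(p) * (j / p - pi) / np.sqrt(np.maximum(np.sqrt(n) * (j / p - pi), 0.0) + j / p)
    ok = (pi > np.log(p) / p) & (j < p / 2)                # restricted range in (2.9)
    jhat = j[ok][np.argmax(hc[ok])]
    return pi[jhat - 1]                                    # \hat{t}_{HC} = pi_(jhat)

# toy usage: p-values for p = 500 features, n = 100 subjects
rng = np.random.default_rng(0)
print(hc_threshold(rng.uniform(size=500), n=100))
```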
IF-PCA runs as follows (a code sketch combining the two steps is given after the algorithm).
Input: normalized feature vectors \(W=[w_{1},w_{2},\ldots,w_{p}]=[W_{1},W_{2},\ldots,W_{n}]^{\prime}\), number of classes \(K\). Output: predicted class label vector \(\hat{Y}=(\hat{Y}_{1},\hat{Y}_{2},\ldots,\hat{Y}_{n})^{\prime}\).
* (IF-step). For each \(1\leq j\leq p\), compute a KS-score for feature \(j\) by applying (2.7)-(2.8) with \(z=w_{j}\). Denote the KS scores by \(\phi_{n}(w_{1}),\ldots,\phi_{n}(w_{p})\) and let \(\mu^{*}\) and \(\sigma^{*}\) be their empirical mean and standard deviation, respectively. Let \(\psi_{j}^{*}=[\phi_{n}(w_{j})-\mu^{*}]/\sigma^{*}\). Compute the \(p\)-values by \(\pi_{j}=1-F(\psi_{j}^{*})\), where \(F\) is the same CDF used in (2.8). Obtain the HCT by applying (2.9) to \(\pi_{1},\pi_{2},\ldots,\pi_{p}\). Retain feature \(j\) if \(\pi_{j}\leq\hat{t}_{HC}\), and remove it otherwise.
* (Clustering-step). Let \(W^{IF}\) be the \(n\times m\) sub-matrix of \(W\) consisting of columns of \(W\) corresponding to the retained features only (\(m\) is the number of retained features in (a)). For any \(1\leq k\leq\min\{m,n\}\), let \(\hat{\xi}_{k}^{IF}\) be the left singular vector of \(W^{IF}\) corresponding to the \(k\)-th largest singular value of \(W^{IF}\). Let \(\widehat{\Xi}^{IF}=[\hat{\xi}_{1}^{IF},\ldots,\hat{\xi}_{K-1}^{IF}]\in\mathbb{R }^{n,K-1}\). Cluster all \(n\) subjects by applying the \(k\)-means to the \(n\) rows of \(\widehat{\Xi}^{IF}\), assuming there are \(K\) clusters. Let \(\hat{Y}=(\hat{Y}_{1},\hat{Y}_{2},\ldots,\hat{Y}_{n})^{\prime}\) be the predicted class labels.
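Putting the pieces together, a compact sketch of IF-PCA reads as follows; it reuses the `ks_score` and `hct` helpers sketched above and, as an approximation, uses the \(N(0,1)\) CDF to convert the corrected scores into \(p\)-values.

```python
import numpy as np
from scipy.stats import norm
from sklearn.cluster import KMeans

def if_pca(W, K):
    """IF-PCA: KS-based feature selection with HCT, then PCA + k-means on retained columns."""
    n, p = W.shape
    ks = np.array([ks_score(W[:, j]) for j in range(p)])
    psi = (ks - ks.mean()) / ks.std(ddof=1)        # Efron's null correction
    pvals = 1.0 - norm.cdf(psi)                    # approximate p-values
    keep = pvals <= hct(pvals, n)                  # retain features below the HC threshold
    U, _, _ = np.linalg.svd(W[:, keep], full_matrices=False)
    Xi = U[:, :K - 1]                              # first K-1 left singular vectors
    return KMeans(n_clusters=K, n_init=10).fit_predict(Xi)
```

Replacing `W[:, keep]` in the SVD step by the corresponding columns of the un-normalized matrix \(X\) yields the IF-PCA(X) variant introduced below.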
In the IF-step, the normalization \(\psi_{j}^{*}=[\phi_{n}(w_{j})-\mu^{*}]/\sigma^{*}\) is called Efron's null correction [11], a simple idea that has been shown to be both necessary and effective for analyzing genomic and genetic data [23]. We remark that although IF-PCA is motivated by the linear model in (2.5), it is not tied to (2.5) and is broadly applicable. In fact, the algorithm does not require any knowledge of Model (2.3)-(2.5).
In the (orthodox) IF-PCA, we apply both the IF-step and the clustering-step to the normalized data matrix \(W\). For the IF-step, applying it to \(W\) rather than to the un-normalized data matrix \(X\) seems preferable. However, for the clustering-step, whether we should work with \(W\) or \(X\) is less clear. We therefore propose a small variant of IF-PCA in which the IF-step is applied to \(W\) and the clustering-step is applied to \(X\).
* (IF-step). Apply exactly the same IF-step to \(W\) as in the (orthodox) IF-PCA above.
* (Clustering-step). Let \(X^{IF}\) be the \(n\times m\) sub-matrix of \(X\) consisting of columns of \(X\) corresponding to the retained features in the IF-step only. For any \(1\leq k\leq\min\{m,n\}\), let \(\hat{\eta}_{k}^{IF}\) be the left singular vector of \(X^{IF}\) corresponding to the \(k\)-th largest singular value of \(X^{IF}\). Let \(\widehat{H}^{IF}=[\hat{\eta}_{1}^{IF},\ldots,\hat{\eta}_{K-1}^{IF}]\in\mathbb{ R}^{n,K-1}\). Cluster all \(n\) subjects by applying the \(k\)-means to the \(n\) rows of \(\widehat{H}^{IF}\), assuming there are \(K\) clusters. Let \(\hat{Y}=(\hat{Y}_{1},\hat{Y}_{2},\ldots,\hat{Y}_{n})^{\prime}\) be the predicted class labels.
To differentiate from the (orthodox) IF-PCA (which we call IF-PCA below), we call the above variant IF-PCA(X). See Table 1 in Section 2.7. The new variant was never proposed or studied before. It outperforms the (orthodox) IF-PCA in several data sets (e.g., see Section 3).
### IF-VAE and IF-VAE(X)
Near the end of Section 2.2, we mentioned that classical PCA has two disadvantages: it does not exploit sparsity in the feature vectors, and it does not account for possible nonlinear relationships between the signal matrix and the class labels. In Sections 2.3-2.4, we have seen that VAE aims to exploit nonlinear relationships, and IF-PCA aims to exploit sparsity. We may combine VAE with the IF-step of IF-PCA to exploit sparsity and nonlinearity simultaneously. To this end, we propose a new algorithm called IF-VAE.
IF-VAE contains an IF-step and a clustering step, and runs as follows. Input: normalized data matrix \(W=[w_{1},w_{2},\ldots,w_{p}]=[W_{1},W_{2},\ldots,W_{n}]^{\prime}\), number of classes \(K\), dimension of the latent space in VAE (denoted by \(d\)). Output: predicted class label vector \(\hat{Y}=(\hat{Y}_{1},\hat{Y}_{2},\ldots,\hat{Y}_{n})\).
* (_IF-step_). Run the same IF-step as in Section 2.4, and let \(W^{IF}=[W_{1}^{IF},\ldots,W_{n}^{IF}]^{\prime}\in\mathbb{R}^{n\times m}\) be the matrix consisting of the retained features only (same as in the IF-step in IF-PCA, \(m\) is the number of retained features).
* (_Clustering-step_). Apply VAE to \(W^{IF}\in\mathbb{R}^{n\times m}\) and obtain an \(n\times d\) matrix \(\widehat{Z}^{IF}\), which can be viewed as an estimate of the low-dimensional representation of \(W^{IF}\). Cluster the \(n\) samples by applying the classical k-means to the rows of \(\widehat{Z}^{IF}\), assuming there are \(K\) classes. Let \(\hat{Y}\) be the predicted label vector.
In the clustering-step, VAE is applied to (a sub-matrix of) the normalized data matrix \(W\). Similarly to Section 2.4, if we instead apply VAE to the corresponding sub-matrix of the un-normalized data matrix \(X\), we obtain a variant of IF-VAE, which we denote by IF-VAE(X). See Table 1 in Section 2.7.
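A corresponding sketch of IF-VAE, reusing the `ks_score`, `hct`, and `vae_cluster` helpers sketched earlier, is given below (again an illustration rather than the implementation used in Section 3).

```python
import numpy as np
from scipy.stats import norm

def if_vae(W, K, d=25):
    """IF-VAE: the IF-step of IF-PCA followed by VAE dimension reduction and k-means."""
    n, p = W.shape
    ks = np.array([ks_score(W[:, j]) for j in range(p)])
    psi = (ks - ks.mean()) / ks.std(ddof=1)        # Efron's null correction
    pvals = 1.0 - norm.cdf(psi)
    keep = pvals <= hct(pvals, n)                  # same IF-step as in IF-PCA
    return vae_cluster(W[:, keep], K, d=d)         # VAE + k-means on the retained features
```

Passing the corresponding columns of \(X\) instead of \(W\) to `vae_cluster` gives IF-VAE(X).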
### Seurat and SC3
We now introduce Seurat and SC3, two recent algorithms that are especially popular for subject clustering with Single-cell RNA-seq data. We discuss them separately.
Seurat was proposed in [39]. At a high level, Seurat is quite similar to IF-PCA, and we can view it as having only two main steps: a feature selection step and a clustering step. Unlike IF-PCA, however, Seurat uses a different feature selection rule and a much more complicated clustering step (which combines several methods, including PCA, the k-nearest-neighbor algorithm, and modularity optimization). Seurat needs 4 tuning parameters: \(m,N,k_{0},\delta\), where \(m\) is the number of selected features in the feature selection step, and \(N,k_{0},\delta\) are for the clustering step, corresponding to the PCA part, the k-nearest-neighbor part, and the modularity optimization part, respectively.
Below is a high-level sketch of Seurat (see [39] for a more detailed description). Input: un-normalized \(n\times p\) data matrix \(X\), number of clusters \(K\), and tuning parameters \(m,N,k_{0},\delta\). Output: predicted class label vector \(\hat{Y}=(\hat{Y}_{1},\hat{Y}_{2},\ldots,\hat{Y}_{n})^{\prime}\).
* (IF-step). Select the \(m\) features that are most variable. Obtain the \(n\times m\) post-selection data matrix.
* (Clustering-step). Normalize the post-selection data matrix and obtain its first \(N\) left singular vectors. For each pair of subjects, compute how many neighbors they share with each other (for each subject, only the \(k_{0}\) nearest neighbors are counted), and use the results to construct a shared nearest neighbor (SNN) graph. Obtain the class labels by applying a modularity optimization algorithm to the SNN graph, which requires a resolution parameter \(\delta\).
An apparent limitation of Seurat is that it needs 4 tuning parameters. Following the recommendations by [19], we may take \((N,k_{0})=(50,20)\), but it remains unclear how to select \((m,\delta)\).
SC3 was first presented by [29]. To be consistent with many other methods we discuss in this paper, we may view SC3 as containing two main steps, a gene filtering step and a clustering step. Similar to Seurat, the clustering step of SC3 is much more complicated than that of IF-PCA, where the main idea is to apply PCA many times (each for a different number of leading singular vectors) and use the results to construct a matrix of consensus. We then cluster all subjects into \(K\) groups by applying the classical hierarchical clustering method to the consensus matrix. SC3 uses one tuning parameter \(x_{0}\) in the gene filtering step, and two tuning parameters \(d_{0}\) and \(k_{0}\) in the clustering-step, corresponding to the PCA part and the hierarchical clustering part, respectively.
Below is a high-level sketch of SC3 (see [29] for a more detailed description). Input: un-normalized \(n\times p\) data matrix \(X\), true number of clusters \(K\), and tuning parameters \(x_{0},d_{0},k_{0}\). Output: predicted class label vector \(\hat{Y}=(\hat{Y}_{1},\hat{Y}_{2},\ldots,\hat{Y}_{n})^{\prime}\).
* (Gene filtering-step). Remove genes/transcripts that are either expressed (expression value more than 2) in less than \(x_{0}\%\) of cells or expressed (expression value more than 0) in at least \((100-x_{0})\%\) of cells. This step may remove a significant fraction of features, and we consider it to be more like a feature selection step than a preprocessing step.
* (Clustering-step). First, take a log-transformation of the post-filtering data matrix and construct an \(n\times n\) matrix \(M\), where \(M(i,j)\) is some kind of distance (e.g., Euclidean, Pearson, Spearman) between subjects \(i\) and \(j\). Second, let \(\widehat{H}=[\hat{\eta}_{1},\ldots,\hat{\eta}_{d_{0}}]\), where \(\hat{\eta}_{k}\) is the \(k\)-th singular vector of \(M\) (or alternatively, of the normalized graph Laplacian of \(M\)). Third, for \(d=1,2,\ldots,d_{0}\), cluster all \(n\) subjects into \(K\) classes by applying k-means to the rows of the \(n\times d\) sub-matrix of \(\widehat{H}\) consisting of its first \(d\) columns, and use the results to build a consensus matrix using the Cluster-based Similarity Partitioning Algorithm (CSPA) [41]. Finally, cluster the subjects by applying classical hierarchical clustering to the consensus matrix with \(k_{0}\) levels of hierarchy.
Following the recommendation by [29], we set \((x_{0},d_{0})=(6,15)\) and take \(k_{0}\) to be the true number of clusters \(K\). Such a tuning parameter choice may work effectively in some cases, but for more general cases, we may (as partially mentioned in [29]) need more complicated tuning.
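To illustrate the consensus idea, below is a rough sketch of SC3's clustering step in Python (the actual SC3 is an R package; the distance choice, the eigenvector ordering, and the use of complete linkage here are assumptions, and the gene filtering step is omitted).

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

def sc3_like(X, K, d0=15):
    """Consensus clustering in the spirit of SC3's clustering step."""
    M = squareform(pdist(np.log1p(X), metric="euclidean"))   # n-by-n distance matrix
    vals, vecs = np.linalg.eigh(M)
    H = vecs[:, np.argsort(-np.abs(vals))]                   # eigenvectors ordered by |eigenvalue|
    n = X.shape[0]
    consensus = np.zeros((n, n))
    for d in range(1, d0 + 1):                               # one k-means run per choice of d
        labels = KMeans(n_clusters=K, n_init=10).fit_predict(H[:, :d])
        consensus += (labels[:, None] == labels[None, :])    # co-membership matrix
    consensus /= d0
    Z = linkage(squareform(1.0 - consensus, checks=False), method="complete")
    return fcluster(Z, t=K, criterion="maxclust")            # cut the dendrogram into K clusters
```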
In summary, at a high level, we can view both Seurat and SC3 as two-stage algorithms, consisting of a feature selection step and a clustering step, just as in IF-PCA. However, these methods use more complicated clustering steps, where the key is combining _many different clustering results to reach a consensus_; note that the shared nearest neighbor (SNN) graph in Seurat can be viewed as a type of consensus matrix. The extra miles taken in Seurat and SC3 may help reduce the clustering error rates, but they also make the algorithms conceptually more complex, computationally more expensive, and theoretically more difficult to analyze.
### A brief summary of all the methods
We have introduced about 10 different methods, some of which (e.g., IF-PCA(X), IF-VAE, IF-VAE(X)) were never proposed before. Among these methods, VAE is a popular unsupervised deep learning approach, Seurat and SC3 are especially popular in clustering with single-cell data, and IF-PCA is a conceptually simple method which was shown to be effective in clustering with gene microarray data before. Note that some of the methods are conceptually similar to each other with some small differences (though it is unclear how different their empirical performances are). For example, many of these methods are two-stage methods, containing an IF-step and a clustering-step. In the IF-step, we usually use the normalized data matrix \(W\). In the clustering-step, we may use either \(W\) or the un-normalized data matrix \(X\). To summarize all these methods and especially to clarify the small differences between similar methods, we have prepared a table below; see Table 1 for details.
## 3 Results
Our study consists of two parts. In Section 3.1, we compare IF-VAE with several other methods using 10 microarray data sets. In Section 3.2, we compare IF-VAE with several other methods, including the popular approaches of Seurat and SC3, using 8 single-cell data sets. In all these data sets, the class labels are given. However, we do not use the class labels in any of the clustering approaches; we only use them when we evaluate the error rates. The code for numerical results in this section can be found at [https://github.com/ZhengTracyKe/IFPCA](https://github.com/ZhengTracyKe/IFPCA). The 10 microarray data sets can be downloaded at [https://data.mendeley.com/datasets/cdsz2ddv3t](https://data.mendeley.com/datasets/cdsz2ddv3t), and the 8 single-cell RNA-seq data sets can be downloaded at [https://data.mendeley.com/drafts/nv2x6kf5rd](https://data.mendeley.com/drafts/nv2x6kf5rd).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline & PCA & SpecGem & VAE & VAE(X) & IF-PCA & IF-PCA(X) & IF-VAE & IF-VAE(X) & Seurat & SC3 \\ \hline IF-step & NA & NA & NA & NA & \(W\) & \(W\) & \(W\) & \(W\) & \(X\) & \(X\) \\ \hline Clustering-step & \(X\) or \(W\) & NA & \(W\) & \(X\) & \(W\) & \(X\) & \(W\) & \(X\) & \(X\) & \(X\) \\ \hline \end{tabular}
\end{table}
Table 1: A summary of all methods discussed in this section. This table clarifies the small differences between similar methods. Take the column IF-PCA(X) for example: “\(W\)” on row 2 means that the IF-step of this method is applied to the normalized data matrix \(W\) defined in (2.2), and “\(X\)” on row 3 means the clustering-step is applied to the un-normalized data matrix \(X\) (NA: not applicable).
### Comparison of clustering approaches with \(10\) microarray data sets
Table 2 lists the \(10\) gene microarray data sets (in alphabetical order) studied in [25]. Data sets \(1\), \(3\), \(4\), \(7\), \(8\), and \(9\) were analyzed and cleaned in [8]; Data sets \(2\), \(6\), and \(10\) were analyzed and grouped into two classes in [49], among which Data set \(10\) was cleaned by [25] in the same way as in [8]. Data set \(5\) is from [15].
First, we compare the IF-VAE approach introduced in Section 2.5 with four existing clustering methods: (1) the classical kmeans; (2) Spectral-GEM (SpecGem) [30], which is essentially classical PCA combined with a Laplacian normalization; (3) the orthodox IF-PCA [25], which adds a feature selection step prior to spectral clustering (see Section 2.4 for details); (4) the VAE approach, which uses VAE for dimension reduction and then runs kmeans clustering (see Section 2.3 for details). Among these methods, SpecGem and VAE involve dimension reduction, and IF-PCA and IF-VAE use both dimension reduction and feature selection. For IF-PCA, VAE and IF-VAE, we can apply the PCA step and the VAE step to either the original data matrix \(X\) or the normalized data matrix \(W\). The version of IF-PCA associated with \(X\) is called IF-PCA(X), and the version associated with \(W\) is still called IF-PCA; similar rules apply to VAE and IF-VAE. Counting these variants, we have a total of \(8\) different algorithms.
Table 3 shows the numbers of clustering errors (i.e., number of incorrectly clustered samples, subject to a permutation of \(K\) clusters) of these methods. The results of SpecGem and IF-PCA are copied from [25]. We implemented kmeans using the Python library sklearn, wrote Matlab code for IF-PCA(X), and wrote Python code for the remaining four methods. The IF-step of IF-VAE needs no tuning. In the VAE-step of IF-VAE, we fix the latent dimension as \(d=25\) and use a traditional architecture in which both the encoder and decoder have one hidden layer; the encoder uses the ReLU activation and the decoder uses the sigmoid activation; when training the encoder and decoder, we use a mini-batch stochastic gradient descent with \(50\) batches, \(100\) epochs, and a learning rate of \(0.0005\). The same neural network architecture and tuning parameters are applied to VAE. We note that the outputs of these methods may have randomness due to the initialization in the kmeans step or in the VAE step. For VAE, IF-VAE, and IF-VAE(X) we repeat the algorithm \(10\) times and report the average clustering error. For kmeans, we repeat it \(5\) times (because the results are more stable); for IF-PCA(X), we repeat it \(20\) times. We use the clustering errors to rank all \(8\) methods for each data set; in the presence of ties, we assign ranks in a way such that the total rank sum is \(36\) (e.g., if two methods have the smallest error rate, we rank both of them as \(1.5\) and rank the second best method as \(3\); other cases are similar). The average rank of a method is a metric of its overall performance across multiple data sets.
\begin{table}
\begin{tabular}{|l|l|l||l|l|l|} \hline \# & Data Name & Source & \(K\) & \(n\) & \(p\) \\ \hline
1 & Brain & Pomeroy (02) & 5 & 42 & 5597 \\
2 & Breast Cancer & Wang et al. (05) & 2 & 276 & 22,215 \\
3 & Colon Cancer & Alon et al. (99) & 2 & 62 & 2000 \\
4 & Leukemia & Golub et al. (99) & 2 & 72 & 3571 \\
5 & Lung Cancer(1) & Gordon et al. (02) & 2 & 181 & 12,533 \\
6 & Lung Cancer(2) & Bhattacharjee et al. (01) & 2 & 203 & 12,600 \\
7 & Lymphoma & Alizadeh et al. (00) & 3 & 62 & 4026 \\
8 & Prostate Cancer & Singh et al. (02) & 2 & 102 & 6033 \\
9 & SRBCT & Khan (01) & 4 & 63 & 2308 \\
10 & SuCancer & Su et al (01) & 2 & 174 & 7909 \\ \hline \end{tabular}
\end{table}
Table 2: The \(10\) gene microarray data sets analyzed in Section 3.1 (\(n\): number of subjects; \(p\): number of genes; \(K\): number of clusters).
Besides ranks, we also compute _regret_s: For each data set, the _regret_ of a method is defined to be \(r=(e-e_{min})/(e_{max}-e_{min})\), where \(e\) is the clustering error of this method, and \(e_{max}\) and \(e_{min}\) are the respective maximum and minimum clustering error among all the methods. The average regret also measures the overall performance of a method (the smaller, the better).
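For reference, the ranks and regrets used in Tables 3 and 6 can be computed as follows (a small sketch; `errors` maps each method to its clustering error on one data set, and ties receive averaged ranks so that the total rank sum is preserved).

```python
import numpy as np
from scipy.stats import rankdata

def ranks_and_regrets(errors):
    """Per-data-set ranks (ties averaged) and regrets r = (e - e_min) / (e_max - e_min)."""
    names = list(errors)
    e = np.array([errors[m] for m in names], dtype=float)
    ranks = rankdata(e, method="average")
    regrets = (e - e.min()) / (e.max() - e.min())
    return dict(zip(names, ranks)), dict(zip(names, regrets))
```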
There are several notable observations. First, somewhat surprisingly, the simple and tuning-free method, IF-PCA, has the best overall performance. It has the lowest average rank among all 8 methods and achieves the smallest number of clustering errors in 4 out of 10 data sets. We recall that the key idea of IF-PCA is to add a tuning-free feature selection step prior to dimension reduction. The results in Table 3 confirm that this idea is highly effective on microarray data and hard for other methods to surpass. Second, VAE (either on \(W\) or on \(X\)), which combines k-means with nonlinear dimension reduction, significantly improves kmeans on some "difficult" datasets, such as BreastCancer, ColonCancer and SuCancer. However, for those "easy" data sets such as Leukemia and Lymphoma, VAE significantly underperforms kmeans. This suggests that the nonlinear dimension reduction is useful mainly on "difficult" data sets. Third, IF-VAE (either on \(W\) or on \(X\)) improves VAE in the majority of data sets. In some data sets such as LungCancer(1), the error rate of IF-VAE is much lower than that of VAE. This observation confirms that the IF step plays a key role in reducing the clustering errors. [25] made a similar observation by combining the IF step with linear dimension reduction by PCA. Our results suggest that the IF step continues to be effective when it is combined with nonlinear dimension reduction by VAE. Last, IF-VAE(X) achieves the lowest error rate in 3 out of 10 data sets, and it has the second lowest average rank among all 8 methods. Compared with IF-PCA (the method with the lowest average rank), IF-VAE(X) has an advantage in 3 data sets (BreastCancer, SRBCT and SuCancer) but has a similar or worse performance in the other data sets. These two methods share the same IF step, hence, the results imply that the nonlinear dimension reduction by VAE has an advantage over the linear dimension reduction by PCA only on "difficult" data sets.
Next, we study IF-VAE(X) more carefully on the LungCancer(1) data set. Recall that the IF step ranks all the features using KS statistics and selects the number of features by a tuning-free procedure. We use the same feature ranking but manually change the number of retained features. For each \(m\), we select the \(m\) top-ranked features, perform VAE on the unnormalized
\begin{table}
\begin{tabular}{|l|c c c c c c c c|} \hline Dataset & kmeans & SpecGem & IF-PCA & IF-PCA(X) & VAE & VAE(X) & IF-VAE & IF-VAE(X) \\ \hline Brain & 14 & 6 & 11 & 7 & 14 & 17 & 21 & 21 \\ Breast Cancer & 121 & 121 & 112 & 91 & 105 & 130 & 120 & 118 \\ Colon Cancer & 28 & 30 & 25 & 26 & 29 & 23 & 25 & 25 \\ Leukemia & 2 & 21 & 5 & 3 & 28 & 17 & 20 & 12 \\ Lung Cancer(1) & 18 & 22 & 5 & 24 & 21 & 64 & 6 & 7 \\ Lung Cancer(2) & 44 & 88 & 44 & 45 & 66 & 80 & 44 & 44 \\ Lymphoma & 1 & 14 & 1 & 18 & 23 & 22 & 16 & 10 \\ Prostate Cancer & 43 & 43 & 39 & 44 & 41 & 45 & 42 & 41 \\ SRBCT & 28 & 32 & 28 & 24 & 33 & 26 & 30 & 23 \\ SuCancer & 83 & 85 & 58 & 57 & 62 & 60 & 57 & 57 \\ \hline Rank(mean) & 4.3 & 6.1 & **2.65** & 3.9 & 5.7 & 5.8 & 4.3 & 3.25 \\ Rank(SD) & 2.07 & 2.20 & 1.18 & 2.33 & 2.20 & 2.35 & 1.90 & 1.74 \\ Regret(mean) & 0.43 & 0.69 & **0.18** & 0.26 & 0.60 & 0.65 & 0.46 & 0.31 \\ Regret(SD) & 0.35 & 0.33 & 0.22 & 0.32 & 0.33 & 0.39 & 0.36 & 0.33 \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of clustering errors of different methods on the 10 microarray data sets in Table 2. IF-PCA has the smallest average rank and average regret (boldface) and is regarded as the best on average.
data matrix \(X\) restricted to these \(m\) features, and report the average number of clustering errors over \(5\) repetitions of VAE. Figure 1 displays the number of clustering errors as a function of \(m\). An interesting observation is that as \(m\) increases, the clustering error first decreases and then increases (for a good visualization, Figure 1 only shows the results for \(m\) between \(1\) and \(0.1p\); we also tried larger values of \(m\) and found that the number of clustering errors continued to increase; in particular, the number of errors increased quickly when \(m>4000\)). A possible explanation is as follows: when \(m\) is too small, some influential features are missed, resulting in weak signals in the VAE step; when \(m\) is too large, too many non-influential features are selected, resulting in large noise in the VAE step. There is a sweet spot between \(200\) and \(400\), and the tuning-free procedure in the IF step selects \(m=251\). Figure 1 explains why the IF step benefits the subsequent VAE step. A similar phenomenon was discovered in [25], but for PCA instead of VAE.
**Remark 1**_(Comparison with other clustering methods for microarray)_: [25] reported the clustering errors of several classical methods on these \(10\) microarray data sets. We only include kmeans and SpecGem in Table 3, because kmeans is the most widely-used generic clustering method and SpecGem is specially designed for microarray data. The table below shows the clustering errors of other methods reported in [25], including kmeans++ (a variant of kmeans with a particular initialization) and hierarchical clustering. It suggests that these methods significantly underperform IF-PCA.
### Comparison of clustering approaches on \(8\) single-cell RNA-seq data sets
Table 5 tabulates \(8\) single-cell RNA-seq data sets. The data were downloaded from the Hemberg Group at the Sanger Institute ([https://hemberg-lab.github.io/scRNA.seq.datasets](https://hemberg-lab.github.io/scRNA.seq.datasets)). It contains scRNA-seq data sets from Human and Mouse. Among them, we selected \(8\) data sets that have a sample size between \(100\) and \(2{,}000\) and can be successfully downloaded and pre-processed
\begin{table}
\begin{tabular}{|l|c c c c c c c c c c|} \hline & Brain & Breast & Colon & Leuk & Lung1 & Lung2 & Lymph & Prostate & SRBCT & Su \\ \hline kmeans++ & 18 & 119 & 29 & 19 & 35 & 89 & 20 & 44 & 33 & 80 \\ Hier & 22 & 138 & 24 & 20 & 32 & 61 & 29 & 49 & 34 & 78 \\ \hline IF-PCA & 11 & 112 & 25 & 5 & 5 & 44 & 1 & 39 & 28 & 58 \\ \hline \end{tabular}
\end{table}
Table 4: The clustering errors of kmeans++ and hierarchical clustering on the \(10\) microarray data sets (the clustering errors of IF-PCA are listed for reference).
Figure 1: Clustering errors of IF-VAE(X) as a function of the number of selected features in the IF step (data set: LungCancer(1); y-axis: number of clustering errors; x-axis: number of selected features).
using the code provided by Hemberg Group under the column 'Scripts'. The data sets Camp1, Camp2, Darmanis, Li and Patel come from Human, and the data sets Deng, Goolam and Grun come from Mouse. Each data matrix contains the log-counts of the RNA-seq reads of different genes (features) in different cells (samples). The cell types are used as the true cluster labels to evaluate the performances of clustering methods. We first pre-processed all the data using the code provided by the Hemberg Group, and then filtered out features (genes) whose fraction of non-zero entries is \(<5\%\). The resulting dimensions of all data sets are shown in Table 5.
We compare IF-VAE with three other existing methods: (1) the orthodox IF-PCA [25], (2) Seurat [39] and (3) SC3 [29]. The orthodox IF-PCA was proposed for subject clustering on microarray data; this is the first time the method has been applied to single-cell data. Seurat and SC3 are two popular methods for clustering single-cell RNA-seq data (see Section 2.6 for details). As discussed in Section 2.6, Seurat and SC3 implicitly use some feature selection ideas and some dimension reduction ideas, but they are much more complicated than IF-PCA and have several tuning parameters. Seurat has 4 tuning parameters, where \(m\) is the number of selected features, \(N\) is the number of principal components in use, \(k_{0}\) is the number of clusters in k-nearest neighbors, and \(\delta\) is a 'resolution' parameter. We fix \((m,N,k_{0})=(1000,50,20)\) for all data sets (the values of \((N,k_{0})\) are the default ones; the default value of \(m\) is 2000, but we found that \(m=1000\) gives the same results on the 8 data sets and is faster to compute). We choose a separate value of \(\delta\) for each data set in a way such that the resulting number of clusters from the modularity optimization is exactly \(K\) (details can be found in [45]). Seurat is implemented using the R package Seurat [19]. SC3 has 3 tuning parameters, where \(x_{0}\%\) is a threshold of cell fraction used in the gene filtering step, \(d_{0}\) is the number of eigenvectors in use, and \(k_{0}\) is the level of hierarchy in the hierarchical clustering step. We fix \((x_{0},d_{0})=(10,15)\) and set \(k_{0}\) as the number of true clusters in each data set. SC3 is implemented using the R package SC3 [29]. We observed that SC3 outputs an NA value on the Patel data set, because the gene filtering step removed all of the genes. To resolve this issue, we introduced a variant of SC3 that skips the gene filtering step. This variant is called SC3(NGF), where NGF stands for 'no gene filtering.' Seurat, SC3 and SC3(NGF) can only be applied to the unnormalized data matrix \(X\). These methods also have randomness in the output, but the standard deviation of the clustering error is quite small; hence, we only run 1 repetition for each of them. The implementations of IF-PCA, IF-PCA(X), IF-VAE and IF-VAE(X) are the same as in Section 3.1.
Table 6 contains the clustering accuracies (number of correctly clustered cells divided by the total number of cells) of different methods. For each data set, we rank all 6 methods (excluding SC3) by their clustering accuracies (the higher the accuracy, the lower the rank). SC3 is excluded from the rank calculation, because it outputs NA on the Patel data set. Instead, we include SC3(NGF),
\begin{table}
\begin{tabular}{|l|l||l|l|l|} \hline \# & Dataset & \(K\) & \(n\) & \(p\) \\ \hline
1 & Camp1 & 7 & 777 & 13,111 \\
2 & Camp2 & 6 & 734 & 11,233 \\
3 & Darmanis & 9 & 466 & 13,400 \\
4 & Deng & 6 & 268 & 16,347 \\
5 & Goolam & 5 & 124 & 21,199 \\
6 & Grun & 2 & 1502 & 5,547 \\
7 & Li & 9 & 561 & 25,369 \\
8 & Patel & 5 & 430 & 5,948 \\ \hline \end{tabular}
\end{table}
Table 5: Single-cell RNA-seq data sets investigated in this paper. (\(n\): number of cells; \(p\): number of genes; \(K\): number of cell types)
a version of SC3 that resolves this issue on Patel and has better performance in most other data sets; this choice favors SC3 in the comparison. For each data set, we also compute the regret of each method (the same as in Section 3.1). Similarly, we exclude SC3 but include SC3(NGF) in the regret calculation. Each method has a rank and a regret on each data set. The last 4 rows of Table 6 show the mean and standard deviation of the 8 ranks of each method, as well as the mean and standard deviation of the 8 regrets of each method.
We make a few comments. First, if we measure the overall performance on the 8 data sets using the average rank, then IF-PCA(X) and SC3(NGF) are the best. If we use the average regret as the performance metric, then IF-PCA(X) is the best method. Second, a closer look at SC3(NGF) and IF-PCA(X) suggests that their performances have different patterns. SC3(NGF) is ranked 1 in some data sets (e.g., Camp2, Darmanis, etc.) but ranks poorly on some other data sets (e.g., Goolam, Grun, etc.). In contrast, IF-PCA(X) is ranked 2 in almost all data sets. Consequently, IF-PCA(X) has a smaller rank standard deviation, even though the two methods have the same average rank. One possible explanation is that SC3 is a complicated method with several tuning parameters. For some data sets, the current tuning parameters are appropriate, and so SC3 can achieve an extremely good accuracy; for some other data sets, the current tuning parameters are probably inappropriate, resulting in an unsatisfactory performance. In comparison, IF-PCA is a simple and tuning-free method and has more stable performances across multiple data sets. Third, IF-VAE(X) is uniformly better than IF-VAE, hence we recommend applying IF-VAE to the unnormalized data matrix instead of the normalized one. Last, IF-VAE(X) significantly improves IF-PCA(X) on Deng and Grun. This suggests that the nonlinear dimension reduction by VAE is potentially useful on these two data sets. In the other data sets, IF-VAE(X) either under-performs IF-PCA(X) or performs similarly.
In terms of computational costs, Seurat is the fastest, and IF-PCA is the second fastest. VAE and SC3 are more time-consuming, where the main cost of VAE arises from training the neural network and the main cost of SC3 arises from computing the \(n\times n\) similarity matrix among subjects. For a direct comparison, we report the running time of different methods on the Camp1
\begin{table}
\begin{tabular}{|l||c c c c c c c|} \hline Dataset & Seurat & SC3 & SC3(NGF) & IF-PCA & IF-PCA(X) & IF-VAE & IF-VAE(X) \\ \hline Camp1 & 0.637 & 0.750 & 0.627 & 0.738 & 0.736 & 0.660 & 0.700 \\ Camp2 & 0.661 & 0.713 & 0.759 & 0.601 & 0.656 & 0.393 & 0.491 \\ Darmanis & 0.682 & 0.826 & 0.867 & 0.635 & 0.747 & 0.406 & 0.617 \\ Deng & 0.530 & 0.590 & 0.754 & 0.791 & 0.588 & 0.607 & 0.687 \\ Goolam & 0.621 & 0.758 & 0.629 & 0.637 & 0.700 & 0.612 & 0.703 \\ Grun & 0.994 & 0.509 & 0.511 & 0.740 & 0.657 & 0.595 & 0.753 \\ Li & 0.934 & 0.938 & 0.980 & 0.889 & 0.968 & 0.848 & 0.853 \\ Patel & 0.898 & NA & 0.995 & 0.795 & 0.934 & 0.325 & 0.465 \\ \hline Rank (mean) & 3.5 & NA & **2.75** & 3.0 & **2.75** & 5.38 & 3.63 \\ Rank (SD) & 1.7 & NA & 2.3 & 1.3 & 1.2 & 0.9 & 1.6 \\ Regret (mean) & 0.50 & NA & 0.37 & 0.40 & **0.28** & 0.90 & 0.53 \\ Regret (SD) & 0.4 & NA & 0.5 & 0.3 & 0.3 & 0.1 & 0.3 \\ \hline \end{tabular}
\end{table}
Table 6: Comparison of the clustering accuracies with the 8 single-cell RNA-seq data sets in Table 5. The result for SC3 on Patel is NA, because all genes are removed in the gene filtering step; for this reason, we exclude SC3 when calculating the rank and the regret. To resolve this issue, we also introduce a variant of SC3 by skipping the gene filtering step. This variant is called SC3(NGF), where ‘NGF’ stands for no gene filtering. It has a better performance than the original SC3. Note that IF-PCA(X) is regarded as the best on average: it has the smallest average regret (boldface) and average rank (boldface). Note also that the standard deviation (SD) of its rank is only about 50% of that of SC3(NGF).
dataset (\(n=777\) and \(p=13111\)). IF-PCA is implemented in Matlab and takes about \(1.7\) minutes. VAE and IF-VAE are implemented in Python, where the VAE steps are conducted using the Python library keras. The running time of VAE is \(2.7\) minutes, and the running time of IF-VAE is \(1.4\) minutes. SC3 is implemented via the package SC3 of Bioconductor in R, and it takes \(3\) minutes. Seurat is implemented using the R package Seurat and takes only \(6\) seconds.
**Remark 2**_(Using ARI as the performance metric)_: The adjusted rand index (ARI) is another commonly-used metric for clustering performance. In Table 7, we report the ARI of different methods and recalculate the ranks and regrets. The results are quite similar to those in Table 6.
**Remark 3**_(Comparison with RaceID)_: Besides Seurat and SC3, there are many other clustering methods for single-cell data (e.g., see [50] for a survey). RaceID [16] is a recent method. It runs an initial clustering followed by an outlier identification step, where the outlier identification is based on a background model of combined technical and biological variability in single-cell RNA-seq measurements. We now compare IF-PCA(X) and IF-VAE(X) with RaceID (we used the R package RaceID and set all tuning parameters to be the default values in this package). We observe that IF-PCA(X) and IF-VAE(X) outperform RaceID on most datasets. One possible reason is that the outlier identification step in RaceID is probably more suitable for applications with a large number of cells (e.g., tens of thousands of cells).
**Remark 4**_(Combining the IF-step with Seurat and SC3)_: We investigate whether the IF-step of IF-PCA can be used to conduct feature selection for other clustering methods. To this end, we introduce IF-Seurat and IF-SC3(NGF), in which Seurat and SC3(NGF) are applied respectively to the post-selection unnormalized data matrix from the IF-step of IF-PCA. Table 9 compares these two methods with their original versions. For Seurat, the IF-step improves the clustering accuracies on Camp1, Darmanis, and Patel, yields similar performances on Deng, Goolam, Grun, and Li, and significantly deteriorates the performance on Camp2. For SC3, the IF-step sometimes yields a significant improvement (e.g., Camp1) and sometimes a significant deterioration (e.g.,
\begin{table}
\begin{tabular}{|l||c c c c c c c|} \hline Dataset & Seurat & SC3 & SC3(NGF) & IF-PCA & IF-PCA(X) & IF-VAE & IF-VAE(X) \\ \hline Camp1 & 0.534 & 0.768 & 0.526 & 0.628 & 0.627 & 0.606 & 0.615 \\ Camp2 & 0.443 & 0.577 & 0.502 & 0.410 & 0.493 & 0.162 & 0.304 \\ Darmanis & 0.480 & 0.682 & 0.784 & 0.489 & 0.650 & 0.219 & 0.525 \\ Deng & 0.442 & 0.646 & 0.669 & 0.771 & 0.477 & 0.487 & 0.555 \\ Goolam & 0.543 & 0.687 & 0.544 & 0.356 & 0.562 & 0.410 & 0.534 \\ Grun & 0.969 & -0.066 & -0.060 & 0.135 & 0.102 & 0.023 & 0.137 \\ Li & 0.904 & 0.951 & 0.968 & 0.797 & 0.940 & 0.798 & 0.792 \\ Patel & 0.790 & NA & 0.989 & 0.598 & 0.850 & 0.173 & 0.235 \\ \hline Rank (mean) & 3.62 & NA & **2.50** & 3.50 & **2.50** & 5.00 & 3.88 \\ Rank (SD) & 1.60 & NA & 2.20 & 1.77 & 1.31 & 0.93 & 1.36 \\ Regret (mean) & 0.42 & NA & 0.30 & 0.51 & **0.29** & 0.84 & 0.59 \\ Regret (SD) & 0.37 & NA & 0.44 & 0.40 & 0.37 & 0.27 & 0.33 \\ \hline \end{tabular}
\end{table}
Table 7: The values of the adjusted rand index (ARI) for the same data sets and methods as in Table 6. Similarly, the average rank and regret of SC3 are denoted as NA, because it generated an NA on the Patel data set.
\begin{table}
\begin{tabular}{|l|c c c c c c c|} \hline & Camp1 & Camp2 & Darmanis & Deng & Goolam & Grun & Li & Patel \\ \hline IF-PCA(X) & 0.736 & 0.656 & 0.747 & 0.588 & 0.700 & 0.657 & 0.968 & 0.934 \\ IF-VAE(X) & 0.700 & 0.491 & 0.617 & 0.687 & 0.703 & 0.753 & 0.853 & 0.465 \\ RaceID & 0.645 & 0.425 & 0.290 & 0.630 & 0.443 & 0.583 & 0.624 & 0.542 \\ \hline \end{tabular}
\end{table}
Table 8: Comparison of the clustering accuracies of IF-PCA(X), IF-VAE(X) and RaceID.
Deng). It is an interesting theoretical question when the current IF-step can be suitably combined with clustering methods other than PCA.
## 4 Phase transition for PCA and IF-PCA
Compared with VAE, Seurat, and SC3, an advantage of IF-PCA is that it is conceptually much simpler and thus considerably easier to analyze. In this section, we present some theoretical results and show that IF-PCA is optimal in a Rare/Weak signal setting.
We are interested in several intertwined questions.
* When is the IF-step of IF-PCA really necessary? Since IF-PCA reduces to classical PCA when we omit the IF-step, an equivalent question is when IF-PCA really has an advantage over PCA.
* When is IF-PCA optimal in a minimax decision framework?
To facilitate the analysis, we consider a high-dimensional clustering setting where \(K=2\) so we only have two classes. We assume the two classes are equally likely so the class labels satisfy
\[Y_{i}\stackrel{{ iid}}{{\sim}}2\mbox{Bernoulli}(1/2)-1,\qquad 1 \leq i\leq n; \tag{4.1}\]
extension to the case where we replace the Bernoulli parameter \(1/2\) by a \(\delta\in(0,1)\) is straightforward. We also assume that the \(p\)-dimensional data vectors \(X_{i}\)'s are standardized, so that for a contrast mean vector \(\mu\in\mathbb{R}^{p}\) (\(I_{p}\) stands for the \(p\times p\) identity matrix),
\[X_{i}=Y_{i}\mu+Z_{i},\qquad Z_{i}\stackrel{{ iid}}{{\sim}}N(0,I_{p}), \qquad 1\leq i\leq n. \tag{4.2}\]
As before, write \(Y=(Y_{1},Y_{2},\ldots,Y_{n})^{\prime}\), \(X=[X_{1},X_{2},\ldots,X_{n}]^{\prime}=[x_{1},x_{2},\ldots,x_{p}]\). It follows
\[X=Y\mu^{\prime}+Z,\qquad\mbox{where similarly }Z=[Z_{1},Z_{2},\ldots,Z_{n}]^{ \prime}=[z_{1},z_{2},\ldots,z_{p}].\]
For any \(1\leq j\leq p\), we call feature \(j\) an "influential feature" or "useful feature" if \(\mu(j)\neq 0\), and a "noise" or "useless feature" otherwise. We adopt a Rare/Weak model setting where (\(\nu_{a}\) stands for point mass at \(a\))
\[\mu(j)\stackrel{{ iid}}{{\sim}}(1-\epsilon_{p})\nu_{0}+( \epsilon_{p}/2)\nu_{\tau_{p}}+(\epsilon_{p}/2)\nu_{-\tau_{p}}. \tag{4.3}\]
For fixed parameters \(0<\theta,\beta,\alpha<1\),
\[n=n_{p}=p^{\theta},\qquad\epsilon_{p}=p^{-\beta},\qquad\tau_{p}=p^{-\alpha}. \tag{4.4}\]
From time to time, we drop the subscript of \(n_{p}\) and write \(n=n_{p}\). For later use, let
\[s_{p}=p\epsilon_{p}\qquad\mbox{and}\qquad S_{p}(\mu)=\{1\leq j\leq p:\mu(j) \neq 0\}\mbox{ be the support of }\mu. \tag{4.5}\]
It is seen that \(|S_{p}(\mu)|\sim\mbox{Binomial}(p,\epsilon_{p})\) and \(|S_{p}(\mu)|/s_{p}\approx 1\). Model (4.1)-(4.4) models a scenario where \(1\ll n\ll p\) and
\begin{table}
\begin{tabular}{|l|c c c c c c c c|} \hline & Camp1 & Camp2 & Darmanis & Deng & Goolam & Grun & Li & Patel \\ \hline Seurat & 0.637 & 0.661 & 0.682 & 0.530 & 0.621 & 0.994 & 0.934 & 0.898 \\ IF-Seurat & 0.647 & 0.485 & 0.779 & 0.526 & 0.597 & 0.986 & 0.879 & 0.937 \\ \hline SC3(NGF) & 0.627 & 0.759 & 0.867 & 0.754 & 0.629 & 0.511 & 0.980 & 0.995 \\ IF-SC3(NGF) & 0.724 & 0.702 & 0.796 & 0.489 & 0.637 & 0.550 & 0.998 & 0.981 \\ \hline \end{tabular}
\end{table}
Table 9: Comparison of Seurat with IF-Seurat and of SC3(NGF) with IF-SC3(NGF) (clustering accuracies).
* (Signals are Sparse/Rare). The fraction of influential features is \(p^{-\beta}\), which \(\to 0\) rapidly as \(p\to\infty\),
* (Signals are individually Weak). The signal strength of each influential feature may be much smaller than \(n^{-1/4}\) and the signals are individually weak; it is non-trivial to separate the useful features from the useless ones.
* (No free lunch). Summing over \(X\) either across rows (samples) or across columns (features) would not provide any useful information for clustering decisions.
The model is frequently used when one wants to study the fundamental limits and phase transitions associated with a high-dimensional statistical decision problem (e.g., classification, clustering, global testing). Despite its seeming simplicity, the Rare/Weak (RW) model is actually very delicate to study, for it models a setting where the signals (i.e., useful features) are both rare and weak. See [9, 10, 18, 21, 44, 48] for example.
Compared with the model in [25] (which only considers one-sided signals, where all nonzero \(\mu(j)\) are positive), our model allows two-sided signals and is therefore different. In particular, in our model, summing over \(X\) either across rows or columns would not provide any useful information for clustering decisions. As a result, the phase transition we derive below is different from those in [25].
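For readers who wish to probe the phase transition numerically, below is a sketch of a data generator for Model (4.1)-(4.4) (an illustration only; the function name and interface are ours).

```python
import numpy as np

def rare_weak_data(p, theta, beta, alpha, seed=None):
    """Generate (X, Y, mu) from the two-sided Rare/Weak model (4.1)-(4.4)."""
    rng = np.random.default_rng(seed)
    n = int(round(p ** theta))
    eps, tau = p ** (-beta), p ** (-alpha)
    Y = rng.choice([-1, 1], size=n)                              # class labels, eq. (4.1)
    mu = np.zeros(p)
    support = rng.random(p) < eps                                # rare support, eq. (4.3)
    mu[support] = tau * rng.choice([-1, 1], size=support.sum())  # two-sided weak signals
    X = np.outer(Y, mu) + rng.standard_normal((n, p))            # eq. (4.2)
    return X, Y, mu
```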
Consider a clustering procedure and let \(\hat{Y}\in\mathbb{R}^{n}\) be the predicted class label vector. Note that for any \(1\leq i\leq n\), both \(Y_{i}\) (true class label) and \(\hat{Y}_{i}\) take values from \(\{-1,1\}\). Let \(\Pi\) be the set of all possible permutations on \(\{-1,1\}\). We measure the performance of \(\hat{Y}\) by the Hamming error rate:
\[\mathrm{Hamm}_{p}(\hat{Y},Y)=\mathrm{Hamm}_{p}(\hat{Y},Y;\beta,\theta)=n^{-1} \inf_{\pi_{0}\in\Pi}\biggl{\{}\sum_{i=1}^{n}P(\hat{Y}_{i}\neq\pi_{0}Y_{i}) \biggr{\}}, \tag{4.6}\]
where the probability measure is with respect to the randomness of \((\mu,Y,Z)\).
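Operationally, the error "up to a permutation of the class labels" (used in (4.6) and throughout Section 3) can be computed by matching predicted and true labels with the Hungarian algorithm; a small sketch, assuming the labels are coded as \(0,\ldots,K-1\), is given below.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_error(y_hat, y, K):
    """Fraction of misclustered subjects, minimized over permutations of the K labels."""
    C = np.zeros((K, K), dtype=int)                 # C[a, b] = #{i : y_hat_i = a, y_i = b}
    for a, b in zip(y_hat, y):
        C[a, b] += 1
    rows, cols = linear_sum_assignment(-C)          # permutation maximizing matched pairs
    return 1.0 - C[rows, cols].sum() / len(y)
```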
### A slightly simplified version of PCA and IF-PCA
To facilitate the analysis of Model (4.1)-(4.4), we consider a slightly more idealized version of PCA and IF-PCA, with the following main changes: (a) we skip the normalization step (as we assume the model is for data that are already normalized); (b) we replace the Kolmogorov-Smirnov feature selection in IF-PCA with feature selection by the \(\chi^{2}\) statistic; (c) we remove Efron's correction in IF-PCA (Efron's correction is especially useful for analyzing gene microarray data, but is not necessary for the current model); and (d) we skip the Higher Criticism Threshold (HCT) choice (the study of HCT is quite relevant for our model, but it is technically very long, so we skip it). Note also that the rank of the signal matrix \(Y\mu^{\prime}\) is \(1\) in Model (4.1)-(4.4), so in both PCA and the clustering step of IF-PCA, we only work with the first singular vector of the data matrix. Despite these simplifications, the essence of the original PCA and IF-PCA is retained. See below for a more detailed description of the (simplified) PCA and IF-PCA.
In detail, to use PCA for Model (4.1)-(4.4), we run the following.
* Obtain the first singular vector of \(X\) and denote it by \(\xi\) (for simplicity we write \(\xi\) instead of \(\hat{\xi}\), slightly abusing notation).
* Cluster by letting \(\hat{Y}_{i}=\mathrm{sgn}(\xi_{i})\), \(1\leq i\leq n\).
To differentiate it from the PCA in Section 2.2, we may call this approach _the slightly simplified PCA_.
Also, to use IF-PCA for Model (4.1)-(4.4), we introduce the normalized \(\chi^{2}\)-testing scores for feature \(j\) by
\[\psi_{j}=(\|x_{j}\|^{2}-n)/\sqrt{2n}. \tag{4.7}\]
By elementary statistics,
\[\psi_{j}\sim\left\{\begin{array}{ll}N(\sqrt{(n/2)}\tau_{p}^{2},1),&\qquad\mbox{ if feature $j$ is useful,}\\ N(0,1),&\qquad\mbox{otherwise.}\end{array}\right.\]
Fix a threshold
\[t_{p}^{*}=\sqrt{2\log(p)}.\]
The IF-PCA runs as follows.
* (IF-step). Select feature \(j\) if and only if \(\psi_{j}\geq t_{p}^{*}\).
* (Clustering-step). Let \[\hat{S}=\{1\leq j\leq p:\psi_{j}\geq t_{p}^{*}\},\] and let \(X_{\hat{S}}\) be the post-selection data matrix (which is a sub-matrix of \(X\) consisting of columns in \(\hat{S}\)). Let \(\xi^{*}\in\mathbb{R}^{n}\) be the first left singular vector of \(X_{\hat{S}}\). We cluster by letting \[\hat{Y}_{i}=\mbox{sgn}(\xi_{i}^{*}),\qquad 1\leq i\leq n.\]
Similarly, to differentiate from the IF-PCA in Section 2.4, we call this _the slightly simplified IF-PCA_.
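The two simplified procedures can be sketched in a few lines (illustration only, under the standardized model (4.1)-(4.4)); together with the data generator above, this allows a quick numerical check of Theorems 4.2-4.3.

```python
import numpy as np

def simple_pca_cluster(X):
    """Slightly simplified PCA: signs of the first left singular vector."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return np.sign(U[:, 0])

def simple_if_pca_cluster(X):
    """Slightly simplified IF-PCA: chi-square screening at sqrt(2 log p), then simplified PCA."""
    n, p = X.shape
    psi = (np.sum(X ** 2, axis=0) - n) / np.sqrt(2 * n)   # chi-square scores, eq. (4.7)
    S_hat = psi >= np.sqrt(2 * np.log(p))                 # threshold t_p^* = sqrt(2 log p)
    if not S_hat.any():                                   # degenerate case: keep all features
        S_hat[:] = True
    return simple_pca_cluster(X[:, S_hat])
```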
### The computational lower bound (CLB)
We first discuss the computational lower bound (CLB). The notion of CLB is an extension of the classical information lower bound (LB) (e.g., the Cramer-Rao lower bound), and in comparison,
* Classical information lower bound usually claims a certain goal is not achievable for any methods (which includes methods that are computationally NP hard).
* Computational lower bound usually claims a certain goal is not achievable for any methods with _a polynomial computational time_.
From a computational perspective, we highly prefer to have algorithms with a polynomial computation time. Therefore, compared with classical information lower bound, CLB is practically more relevant.
Let \(s_{p}=p\epsilon_{p}\). Note that in our model, the number of signals is \(\mbox{Binomial}(p,\epsilon_{p})\), which concentrates at \(s_{p}\). Recall that in our calibrations, \(n=p^{\theta}\) and \(s_{p}=p^{1-\beta}\), and the strength of individual signals is \(\tau_{p}\). Introduce the critical signal strength by
\[\tau_{p}^{*}=\left\{\begin{array}{ll}[p/(ns_{p}^{2})]^{1/4},&\qquad\mbox{if $ \beta<1/2$ (so $s_{p}\gg\sqrt{p}$),}\\ n^{-1/4},&\qquad\mbox{if $1/2<\beta<(1-\theta/2)$ (so $\sqrt{n}\ll s_{p}\ll\sqrt{p}$).}\\ s_{p}^{-1/2},&\qquad\mbox{if $(1-\theta/2)<\beta<1$ (so $1\ll s_{p}\ll\sqrt{n}$).}\end{array}\right.\]
We have the following theorem.
**Theorem 4.1**.: _(Computational Lower Bound). Fix \((\theta,\beta)\in(0,1)^{2}\) and consider the clustering problem for Models (4.1)-(4.4). As \(p\to\infty\), if \(\tau_{p}/\tau_{p}^{*}\to 0\), then for any clustering procedure \(\tilde{Y}\) with a polynomial computational time, \(\mbox{\rm Hamm}_{p}(\tilde{Y},Y)\geq(1/2+o(1))\)._
In other words, any "computable" clustering procedure (meaning one with a polynomial computational time) fails in this case, where the error rate is approximately the same as that of random guessing. The proof of Theorem 4.1 is long but is similar to that of [24, Theorem 1.1], so we omit it.
Next, we study the performance of classical PCA and IF-PCA. But before we do that, we present a lemma on classical PCA in Section 4.3. We state the lemma in a setting that is more general than Model (4.1)-(4.4), but we will come back to Model (4.1)-(4.4) in Section 4.4.
### A useful lemma on classical PCA
Suppose we have a data matrix \(X\in\mathbb{R}^{N,m}\) in the form of
\[X=Y\mu^{\prime}+Z,\qquad Y\in\mathbb{R}^{N},\mu\in\mathbb{R}^{m}. \tag{4.8}\]
In such a setting, we investigate when the PCA approach in Section 4.1 is successful. Recall that \(\xi\) is the first singular vector of \(X\). By basic algebra, it is the first eigenvector of the \(N\times N\) matrix \(XX^{\prime}\), or equivalently, the first eigenvector of \(XX^{\prime}-mI_{N}\). Write
\[XX^{\prime}-mI_{N} =\|\mu\|^{2}YY^{\prime}+(ZZ^{\prime}-mI_{N})+(Y\mu^{\prime}Z^{ \prime}+Z\mu Y^{\prime})\] \[=\|\mu\|^{2}\cdot YY^{\prime}+(ZZ^{\prime}-mI_{N})+\text{secondary term}.\]
In order for the PCA approach to be successful, we need that the spectral norm of \(\|\mu\|^{2}YY^{\prime}\) is much larger than that of \((ZZ^{\prime}-mI_{N})\). Note that \(\|\mu\|^{2}YY^{\prime}\) is a rank-1 matrix where the spectral norm is \(N\|\mu\|^{2}\). Also, by Random Matrix Theory [43], the spectral norm of \((ZZ^{\prime}-mI_{N})\) concentrates at \((\sqrt{N}+\sqrt{m})^{2}-m=N+2\sqrt{Nm}\). Therefore, the main condition we need for the PCA approach to be successful is
\[N\|\mu\|^{2}/(N+2\sqrt{Nm})\to\infty. \tag{4.9}\]
We have the following lemma.
**Lemma 4.1**.: _Consider Model (4.8) where condition (4.9) holds and that \(\|\mu\|^{2}\gg\log(N+m)\). Let \(\xi\) be the first left singular vector of \(X\). When \(\min\{N,m\}\to\infty\), with probability \(1-o(m^{-3})\),_
\[\min\{\|\sqrt{N}\xi+Y\|_{\infty},\|\sqrt{N}\xi-Y\|_{\infty}\}=o(1).\]
Lemma 4.1 is proved in the supplementary material. This result connects to the recent interest in entry-wise large-deviation bounds for eigenvectors [1, 12]. Our proof is based on a form of Taylor expansion of eigenvectors. Please see the supplementary material for details.
By Lemma 4.1, there is an error vector \(r\) with \(\|r\|_{\infty}=o(1)\) such that
\[\sqrt{N}\xi=\pm Y+r;\qquad\text{recall that }Y_{i}\in\{-1,1\}.\]
Therefore, if we let \(\hat{Y}_{i}=\text{sgn}(\xi_{i})\) as in PCA approach in Section 4.1, then except for a small probability,
\[\hat{Y}=\pm Y.\]
This says that the PCA approach is able to fully recover the true class labels.
### Achievability of classical PCA and IF-PCA
We now come back to Model (4.1)-(4.4) and study the behavior of classical PCA and IF-PCA in our setting. The computational limits of clustering have received extensive interest (e.g., [33]). By the computational lower bound [24], successful clustering by a computable algorithm is impossible when \(\frac{\tau_{p}}{\tau_{p}^{*}}\to 0\), so the interesting parameter range for PCA and IF-PCA is when
\[\tau_{p}/\tau_{p}^{*}\to\infty.\]
We first discuss when feature selection by \(\chi^{2}\)-test is feasible. As before, let
\[\psi_{j}=(2n)^{-1/2}(\|x_{j}\|^{2}-n)\]
be the feature-wise \(\chi^{2}\)-testing scores, and recall that approximately,
\[\psi_{j}\sim\left\{\begin{array}{ll}N(\sqrt{(n/2)}\tau_{p}^{2},1),&\qquad\mbox {if feature $j$ is useful,}\\ N(0,1),&\qquad\mbox{otherwise.}\end{array}\right.\]
We can view \(\sqrt{(n/2)}\tau_{p}^{2}\) as the Signal-to-Noise ratio (SNR) for the \(\chi^{2}\)-test for a useful feature. We have two cases.
* (Less sparse case of \(\beta<1/2\)). In this case, the number of useful features \(s_{p}\) is much larger than \(\sqrt{p}\) and \(\tau_{p}^{*}\ll n^{-1/4}\), and the SNR of \(\psi_{j}\) for a useful feature \(j\) may be much smaller than \(1\) even though \(\tau_{p}/\tau_{p}^{*}\to\infty\). In such a case, feature selection by the \(\chi^{2}\)-test is not useful; the best we can do in the IF-step is to retain all features, so IF-PCA reduces to PCA.
* (More sparse case of \(\beta>1/2\)). In this case, the number of useful features \(s_{p}\) is much smaller than \(\sqrt{p}\) and \(\tau_{p}^{*}\geq n^{-1/4}\). If \(\tau_{p}/\tau_{p}^{*}\to\infty\), then the SNR of \(\psi_{j}\to\infty\) if \(j\) is a useful feature. In such a case, feature selection may be successful and IF-PCA is significantly different from PCA.
Consider the first case and suppose we apply the PCA approach in Section 4.1 directly to matrix \(X\). Applying Lemma 4.1 with \((N,m)=(n,p)\) and noting that in this setting,
\[n\|\mu\|^{2}\sim ns_{p}\tau_{p}^{2},\qquad N+2\sqrt{Nm}=p+2\sqrt{np}\sim 2\sqrt{ np}\;\;\mbox{(since $n\ll p$)},\]
the PCA approach is successful if
\[ns_{p}\tau_{p}^{2}/\sqrt{np}\to\infty.\]
Comparing this with the definition of \(\tau_{p}^{*}\), this is equivalent to
\[\tau_{p}/\tau_{p}^{*}\to\infty,\qquad\mbox{as $0<\beta<1/2$ in the current case.}\]
We have the following theorem.
**Theorem 4.2**.: _(Possibility Region for PCA). Fix \((\theta,\beta)\in(0,1)^{2}\) and consider the clustering problem for Models (4.1)-(4.4). Let \(\hat{Y}^{pca}\) be the predicted class label vector by the PCA algorithm in Section 4.1. As \(p\to\infty\), if_
\[0<\beta<1/2\;(so\;s_{p}/\sqrt{p}\to\infty)\qquad\mbox{and}\qquad\frac{\tau_{p }}{\tau_{p}^{*}}\to\infty, \tag{4.10}\]
_then \(\operatorname{Hamm}_{p}(\hat{Y}^{pca},Y)\to 0\)._
Consider the second case, where we may have successful feature selection so it is desirable to use IF-PCA. We assume
\[\tau_{p}/\tau_{p}^{*}\geq(4\log(p))^{1/4}, \tag{4.11}\]
which is slightly stronger than \(\tau_{p}/\tau_{p}^{*}\to\infty\). By the definition of \(\tau_{p}^{*}\), we have that in the current case (where \(1/2<\beta<1\))
\[\tau_{p}^{*}\geq n^{-1/4}. \tag{4.12}\]
Recall that \(S(\mu)\) is the true support of \(\mu\) and
\[\hat{S}=\{1\leq j\leq p:\psi_{j}\geq\sqrt{2\log(p)}\}\]
is the set of selected features in the IF-step of IF-PCA. Recall that
\[\psi_{j}\sim\left\{\begin{array}{ll}N(\sqrt{(n/2)}\tau_{p}^{2},1),&\qquad \mbox{if feature $j$ is useful,}\\ N(0,1),&\qquad\mbox{otherwise.}\end{array}\right.\]
By (4.11)-(4.12), for any useful feature \(j\), the SNR is
\[\sim\sqrt{(n/2)}\tau_{p}^{2}\geq\sqrt{(n/2)}\sqrt{4\log(p)}n^{-1/2}=\sqrt{2\log(p )}.\]
By elementary statistics, we have that approximately,
\[P(\hat{S}\neq S)=o(1),\qquad\mbox{where for short $S=S(\mu)$; same below.}\]
Therefore, except for a negligible probability,
\[X_{\hat{S}}=X_{S}=Y\mu_{S}^{\prime}+Z_{S},\]
where similar as before, \(\mu_{S}\) is the sub-vector of \(\mu\) with all entries restricted to \(S\), and \(X_{S}\) and \(Z_{S}\) the sub-matrix of \(X\) and \(Z\) respectively, with columns restricted to \(S\). Therefore, in the clustering-step of IF-PCA, we are in effect applying the PCA approach of Section 4.1 to \(X_{S}\), where we recall \(|S|/s_{p}\approx 1\). Applying Lemma 4.1 with \((N,m)=(n,|S|)\) and noting that
\[n\|\mu_{S}\|^{2}\sim ns_{p}\tau_{p}^{2},\qquad N+2\sqrt{Nm}=n+2\sqrt{n|S|} \sim n+2\sqrt{ns_{p}},\]
it follows that in order for the clustering-step of IF-PCA to be successful, we need
\[ns_{p}\tau_{p}^{2}/(n+2\sqrt{ns_{p}})\to\infty,\qquad\mbox{(note that when $s_{p}\ll n$, this is equivalent to $s_{p}\tau_{p}^{2}\to\infty$)}. \tag{4.13}\]
Combining (4.13) with the feature-selection requirement in (4.11)-(4.12) and recalling that in the current case \(s_{p}\ll\sqrt{p}\), IF-PCA is successful when
\[\left\{\begin{array}{ll}\tau_{p}^{2}\geq 2\sqrt{\log(p)/n},&\qquad\mbox{if $ \sqrt{n}\ll s_{p}\ll\sqrt{p}$,}\\ s_{p}\tau_{p}^{2}\to\infty,&\qquad\mbox{if $s_{p}\ll\sqrt{n}$.}\end{array}\right. \tag{4.14}\]
Comparing this with the definition of \(\tau_{p}^{*}\), (4.14) holds if we assume
\[\tau_{p}/(\sqrt{\log(p)}\tau_{p}^{*})\to\infty,\]
which is slightly stronger than that of \(\tau_{p}/\tau_{p}^{*}\to\infty\). We have the following theorem.
**Theorem 4.3**.: _(Possibility Region for IF-PCA). Fix \((\theta,\beta)\in(0,1)^{2}\) and consider the clustering problem for Models (4.1)-(4.4). Let \(\hat{Y}^{ifpca}\) be the predicted class label vector by the IF-PCA algorithm in Section 4.1. As \(p\to\infty\), if_
\[1/2<\beta<1\;(\mbox{so $s_{p}/\sqrt{p}\to 0$})\qquad\mbox{and}\qquad\frac{ \tau_{p}}{\sqrt{\log(p)}\tau_{p}^{*}}\to\infty, \tag{4.15}\]
_then in the IF-step of IF-PCA,_
\[P(\hat{S}\neq S(\mu))=o(1).\]
_Moreover, \(\operatorname{Hamm}_{p}(\hat{Y}^{ifpca},Y)\to 0\)._
### Phase transition
Recall that \(s_{p}=p\epsilon_{p}\) and that in Model (4.1)-(4.4),
\[n=n_{p}=p^{\theta},\qquad\epsilon_{p}=p^{-\beta},\qquad\tau_{p}=p^{-\alpha}.\]
It follows
\[\tau_{p}^{*}=p^{-\alpha^{*}(\beta,\theta)},\qquad\mbox{where}\qquad\alpha^{*} (\beta,\theta)=\left\{\begin{array}{ll}(1+\theta-2\beta)/4,&\qquad\mbox{if $0< \beta<1/2$,}\\ \theta/4,&\qquad\mbox{if $1/2<\beta<1-\theta/2$,}\\ (1-\beta)/2,&\qquad\mbox{if $(1-\theta/2)<\beta<1$.}\end{array}\right.\]
Fix \(0<\theta<1\) and consider the two-dimensional space where the two axes are \(\beta\) and \(\alpha\), respectively. Combining Theorems 4.2-4.3, the curve \(\alpha=\alpha^{*}(\beta,\theta)\) partitions the region \(\{(\alpha,\beta):0<\beta<1,\alpha>0\}\) into two regions.
* _Region of Impossibility_\(\{(\alpha,\beta):\alpha>\alpha^{*}(\beta,\theta),0<\beta<1\}\). In this region, the Hamming clustering error rate of any method with a polynomial computational time is bounded away from \(0\).
* _Region of Possibility_\(\{(\alpha,\beta):\alpha<\alpha^{*}(\beta,\theta),0<\beta<1\}\). The region further partitions into two parts: \(\beta<1/2\) (left) and \(\beta>1/2\) (right).
* The left is the _less sparse case_ where the number of useful features \(s_{p}\gg\sqrt{p}\). For any fixed \((\alpha,\beta)\) in this region, the Hamming error rates of PCA are \(o(1)\), so PCA achieves the optimal phase transition. Also, in this case, the signals are too weak individually and feature selection is infeasible. Therefore, in the IF-step, the best we can do is to select all features, so IF-PCA reduces to PCA.
* The right is the _more sparse case_, where the number of useful features \(s_{p}\ll\sqrt{p}\). For any fixed \((\alpha,\beta)\) in this region, the Hamming error rate of IF-PCA is \(o(1)\), so IF-PCA achieves the optimal phase transition. Also, in this case, the signals are strong enough individually and feature selection is desirable. Therefore, IF-PCA and PCA are significantly different.
* In particular, for any fixed parameters in the region \(\{(\alpha,\beta):1/2<\beta<1,\;(1+\theta-2\beta)/4<\alpha<\alpha^{*}(\beta,\theta)\}\) (shaded green region of Figure 2), the Hamming clustering error rate of IF-PCA is \(o(1)\) but that of PCA is bounded away from \(0\). Therefore, PCA is non-optimal in this particular region.
See Figure 2 for details.
## 5 Discussions
IF-PCA is a simple and tuning-free approach to unsupervised clustering of high-dimensional data. The main idea of IF-PCA is a proper combination of the feature selection and the dimension reduction by PCA. In this paper, we make several contributions. First, we extend IF-PCA to
Figure 2: Phase transition for PCA and IF-PCA (\(\theta=0.6\)). The (three-segment) solid green line is \(\alpha=\alpha^{*}(\beta,\theta)\), which separates the whole region into the Region of Impossibility (top) and the Region of Possibility (bottom). In the left part of the Region of Possibility (\(\beta<1/2\)), feature selection is infeasible, PCA is optimal, and IF-PCA reduces to PCA with an appropriate threshold. In the right part (\(\beta>1/2\)), it is desirable to conduct feature selection, and IF-PCA is optimal. However, PCA is non-optimal for parameters in the shaded green region.
IF-VAE, by replacing PCA with the variational auto-encoder (VAE), a popular unsupervised deep learning algorithm. Second, we study the theoretical properties of IF-PCA in a simple clustering model and derive the phase transitions. Our results reveal how the feature sparsity and the feature strength affect the performance of IF-PCA, and explain why IF-PCA can significantly improve the classical PCA. Third, we investigate the performances of IF-PCA and IF-VAE on two applications, the subject clustering with gene microarray data and the cell clustering with single-cell RNA-seq data, and compare them with some other popular methods.
We discover that IF-PCA performs quite well in the aforementioned applications. Its success on microarray data was reported in [25], but it has never been applied to single-cell data. To use IF-PCA on single-cell data, we recommend a mild modification of the original procedure called IF-PCA(X), which performs the PCA step on the unnormalized data matrix \(X\) instead of the normalized data matrix \(W\). On the 8 single-cell RNA-seq data sets considered in this paper, IF-PCA(X) has the second best accuracy in almost all the data sets, showing a stable performance across multiple data sets. We think IF-PCA has a great potential for single-cell clustering, for the method is simple, transparent, and tuning-free. Although the current IF-PCA(X) still underperforms the state-of-the-art methods (e.g., SC3) in some data sets, it is hopeful that a variant of IF-PCA (say, by borrowing the consensus voting in SC3 or replacing PCA with some other embedding methods [5, 34]) can outperform them.
We also find that unsupervised deep learning algorithms do not immediately yield improvements over classical methods on the microarray data and the single-cell data. IF-VAE underperforms IF-PCA in most data sets; there are only a few data sets in which IF-VAE slightly improves IF-PCA. The reason can be either that nonlinear dimension reduction has no significant advantage over linear dimension reduction in these data sets or that IF-VAE is not optimally tuned. How to tune deep learning algorithms in unsupervised settings is an interesting future research direction. Moreover, the theory on VAE remains largely unknown [13]. A theoretical investigation of VAE requires an understanding of both the deep neural network structures and the variational inference procedure. We also leave this to future work.
The framework of IF-PCA only assumes feature sparsity but no other particular structures on the features. It is possible that the features are grouped [6] or have some tree structures [32]. How to adapt IF-PCA to this setting is an interesting yet open research direction.
In the real data analysis, we assume that the number of clusters, \(K\), is given. When \(K\) is unknown, how to estimate \(K\) is a problem of independent interest. One approach is to use the scree plot. For example, [27] proposed a method that first computes a threshold from the bulk eigenvalues in the scree plot and then applies this threshold on the top eigenvalues to estimate \(K\). Another approach is based on global testing. Given a candidate \(K\), we may first apply a clustering method with this given \(K\) and then apply the global testing methods in [24] to test if each estimated cluster has no sub-clusters; \(\hat{K}\) is set as the smallest \(K\) such that the global null hypothesis is accepted in all estimated clusters. In general, estimating \(K\) is a problem independent of clustering. It is interesting to investigate which estimators of \(K\) work best for gene microarray data and single-cell RNA-seq data, which we leave to future work.
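As a toy illustration of the first (scree-plot) approach, the sketch below sets a threshold from the bulk eigenvalues of a same-sized pure-noise data set and counts how many leading eigenvalues of the observed Gram matrix exceed it. The simulated sizes, the 5% margin, and the use of the uncentered Gram matrix are hypothetical choices for illustration only; this is not the estimator of [27].

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, K_true = 150, 300, 3

# toy data: three well-separated cluster means plus unit-variance noise (illustrative only)
means = rng.normal(0, 3, size=(K_true, p))
labels = rng.integers(0, K_true, size=n)
X = means[labels] + rng.normal(size=(n, p))

# top eigenvalues of the (uncentered) n x n Gram matrix; each cluster mean adds one spike
evals = np.sort(np.linalg.eigvalsh(X @ X.T / p))[::-1]

# threshold from the "bulk": largest eigenvalue of a pure-noise data set of the same shape
null = rng.normal(size=(n, p))
threshold = 1.05 * np.linalg.eigvalsh(null @ null.T / p).max()

K_hat = int(np.sum(evals > threshold))
print("estimated K:", K_hat, "  true K:", K_true)
```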
|
2304.13378 | Quantum liquids of the S=3/2 Kitaev honeycomb and related Kugel-Khomskii
models | The $S=3/2$ Kitaev honeycomb model (KHM) is unique among the spin-$S$ Kitaev
models due to a massive ground state quasi-degeneracy that hampered previous
numerical and analytical studies. In a recent work~\cite{jin2022unveiling}, we
showed how an SO(6) Majorana parton mean-field theory of the $S=3/2$ isotropic
KHM explains the anomalous features of this Kitaev spin liquid (KSL) in terms
of an emergent low-energy Majorana flat band. Away from the isotropic limit,
the $S=3/2$ KSL generally displays a quadrupolar order with gapped or gapless
Majorana excitations, features that were quantitatively confirmed by DMRG
simulations. In this paper, we explore the connection between the $S = 3/2$ KHM
with Kugel-Khomskii models and discover new exactly soluble examples for the
latter. We perform a symmetry analysis for the variational parton mean-field
\emph{Ans{\"a}tze} in the spin and orbital basis for different quantum liquid
phases of the $S=3/2$ KHM. Finally, we investigate a proposed time-reversal
symmetry breaking spin liquid induced by a {[}111{]} single ion anisotropy and
elucidate its topological properties as well as experimental signatures, e.g.
an unquantized thermal Hall response. | Willian M. H. Natori, Hui-Ke Jin, Johannes Knolle | 2023-04-26T08:37:50Z | http://arxiv.org/abs/2304.13378v3 | # Quantum liquids of the S=3/2 Kitaev honeycomb and related Kugel-Khomskii models
###### Abstract
The \(S=3/2\) Kitaev honeycomb model (KHM) is unique among the spin-\(S\) Kitaev models due to a massive ground state quasi-degeneracy that hampered previous numerical and analytical studies. In a recent work [1], we showed how an SO(6) Majorana parton mean-field theory of the \(S=3/2\) isotropic KHM explains the anomalous features of this Kitaev spin liquid (KSL) in terms of an emergent low-energy Majorana flat band. Away from the isotropic limit, the \(S=3/2\) KSL generally displays a quadrupolar order with gapped or gapless Majorana excitations, features that were quantitatively confirmed by DMRG simulations. In this paper, we explore the connection between the \(S=3/2\) KHM and Kugel-Khomskii models and discover new exactly soluble examples for the latter. We perform a symmetry analysis for the variational parton mean-field _Ansatze_ in the spin and orbital basis for different quantum liquid phases of the \(S=3/2\) KHM. Finally, we investigate a proposed time-reversal symmetry breaking spin liquid induced by a [111] single ion anisotropy and elucidate its topological properties as well as experimental signatures, e.g. an unquantized thermal Hall response.
## I Introduction
The celebrated \(S=1/2\) Kitaev honeycomb model (KHM) [2] bridges different research fields, i.e., the theory of integrable models, topological quantum computation, and Mott insulators under strong spin-orbit coupling [3; 4; 5; 6]. The KHM's eigenstates display exact spin fractionalization into static \(Z_{2}\)_fluxes_ and Majorana _matter_ fermions, resulting in short-range spin correlations characteristic of quantum spin liquids (QSLs) [7]. Kitaev's original interest was to instantiate a simple strongly correlated Hamiltonian hosting non-abelian anyon excitations, therefore providing a toy model for fault-tolerant quantum computation [2]. This initial motivation explains both the surprise and the excitement about the first proposals of KHM implementations in heavy-ion Mott insulators [8] that later coined the term _Kitaev materials_[3; 4; 5; 6].
Kitaev materials generally display long-range ordered ground states stabilized by other symmetry-allowed exchanges [9; 10; 11; 12; 13; 14; 15; 16; 17; 18] and intense research has focused on the search for compounds approaching the Kitaev spin liquid (KSL) [3; 5; 19]. One noteworthy example is \(\alpha\)-RuCl\({}_{3}\)[20], which transitions from a zigzag ordered state [21] to a magnetically disordered phase under the application of a moderate in-plane magnetic field [22; 23; 24; 25; 26; 27]. The disordered phase is reminiscent of the chiral spin liquid (CSL) predicted by Kitaev [2], a point supported by experiments reporting half-quantization of the thermal Hall coefficient [28; 29], but which is currently under debate [30; 31; 32].
A recent alternative route to a KSL in \(\alpha\)-RuCl\({}_{3}\) was proposed for heterostructures involving monolayers in contact with graphene [33; 34]. The proximity effect strains the insulator [33] and can enhance the relative importance of Kitaev interactions [5; 12]. Another promising direction involves Kitaev materials with \(3d\) magnetic ions [35; 36; 37]. As an example, the cobalt-based Kitaev material Na\({}_{3}\)Co\({}_{2}\)SbO\({}_{6}\)[38] was proposed to reach the KSL state by reducing its trigonal crystal field through pressure or strain [37]. The \(3d\) materials were also essential for conceiving higher-spin Kitaev materials with \(S>1/2\)[39; 40; 41; 42]. They provide experimental motivation to revisit what were once purely theoretical questions. The spin-\(S\) KHMs retain two characteristics of the famous \(S=1/2\) case [43]: i) there is one conserved operator per plaquette defining a static \(Z_{2}\) flux, and ii) one can define a Jordan-Wigner transformation and obtain emergent Majorana fermion excitations for half-integer spin \(S\). These two characteristics are sufficient to ensure ultra-short ranged spin correlations entailing a QSL ground state [43]. Nevertheless, these results did not yield an exact solution or a quantitative theory for the Kitaev spin liquids with \(S>1/2\).
An alternative approach is to start from the semiclassical large-\(S\) limit [44], where the KHM can be mapped onto a toric-code model [45] over dimers forming a fixed kekule pattern, which provides an adequate understanding of the model for \(S>3/2\). The breakdown of this approximation for \(S=1/2\) and \(S=1\) is interpreted as the formation of QSLs with mobile fractionalized excitations, as evinced by independent numerical studies [46; 47]. The specific case \(S=3/2\) marks the borderline of the stability of the large-\(S\) KSL [44] and has proven to be a challenging numerical problem due to a pile-up of low-energy excitations [1].
The proposal that \(S=3/2\) Kitaev exchanges are relevant for 2D van der Waals magnets [48; 49; 39; 41; 42] provides a strong experimental motivation to readdress the nature of this exotic QSL. Recently, we tackled this problem by studying the \(S=3/2\) KHM in terms of SO(6) Majorana partons [50; 51; 52; 53]. It allows an _exact_ mapping of the \(Z_{2}\) fluxes [43] into static \(Z_{2}\) gauge operators in analogy to the \(S=1/2\) KHM [1]. However, despite the presence of a static gauge field, the ensuing Majorana problem is fully interacting, which prevents a full exact solution.
A parton mean-field theory (PMFT) of this model perturbed by a flux-conserving [001] single-ion anisotropy (SIA) unveiled a rich phase diagram with four types of QSLs (see Fig. 1): (i) a quantum spin-_orbital_ liquid at the isotropic point (\(J_{\gamma}=1\)), (ii) a gapless QSL dubbed \(A_{0}\) phase adiabatically connected with the \(S=1/2\) KSL, (iii) the same as (ii) for the gapped \(S=1/2\) KSL, and (iv) a gapped QSL dubbed \(B\) phase with vanishingly small flux excitations. The predictions of PMFT are in remarkable and even _quantitative_ agreement with state-of-the-art DMRG simulations on \(3\times 4\) tori and \(4\times 8\) cylinders. The abundance of low-energy excitations, which hampered previous DMRG simulations of the isotropic KHM, can be attributed to an almost zero-energy flat band of Majorana fermion excitations within the framework of PMFT.
Our previous work [1] also included a perturbative study of the isotropic \(S=3/2\) KHM under the [111] SIA that naturally arises in minimal models of van der Waals magnets [41; 39; 42]. Within the zero-flux sector, this perturbation induces a three-site interaction that in turn leads to a _spontaneously_ time-reversal symmetry (TRS) breaking QSL. This \(S=3/2\) KSL thus shares similarities with the celebrated \(S=1/2\) chiral KSL induced by a magnetic field [2] but is distinguished from it by its coexistence with an octupolar order parameter and a zero total Chern number [1].
In this paper, we explore the connection of the \(S=3/2\) KHM with Kugel-Khomskii (KK) models by studying the \(S=3/2\) operators in terms of pseudo-dipole \(\sigma_{i}^{\gamma}\) and pseudo-orbital operators \(T_{i}^{\gamma}\)[52; 53; 54; 55; 56; 57; 58]. This facilitates the identification of similarities with integrable KK models [59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70] and in doing so we discover new soluble examples. Moreover, the connection to KK models allows for a reinterpretation of the quantum liquid phases. We also provide a symmetry classification of the PMFT and discuss properties of the quantum liquid phases, in particular the one breaking TRS in the presence of the experimentally relevant [111] SIA.
The paper is structured as follows. Section II reviews essential results on the theory of integrable KK models and the spin-\(S\) KHM. It then translates these results to the \(S=3/2\) case using the pseudo-dipole and pseudo-orbital operators. Section III presents details for the parton representation of an exactly solvable model directly related to the \(S=3/2\) KHM. This section also discusses the effects of symmetry constraints on the allowed order parameters and their relations to the properties of the previously uncovered QSLs phases. Section IV discusses the origins of the first-order phase transition to the TRS breaking \(S=3/2\) KSL, as well as its observed topological properties. We conclude in section V with open questions for future research.
## II Review of some exact results
### Soluble vector models and spin-\(S\) Kthms
We start by recalling a class of exactly solvable spin-\(S\) models directly related to the KHM. Consider a set of operators \(\Gamma^{a}\) (\(a=1,2,...,2q+3,q\in\mathbb{N}_{0}\)) defined over a \(2^{q+1}\) dimensional Hilbert space which forms a basis for the Clifford algebra
\[\left\{\Gamma^{a}_{i},\Gamma^{b}_{i}\right\} =2\delta_{ab},\] \[\left[\Gamma^{a}_{i},\Gamma^{b}_{j}\right] =0,\text{ if }i\neq j, \tag{1}\]
with \(i\) and \(j\) labeling points on a graph. Several algorithms have been developed to generate models whose Hilbert space is restricted to a sub-algebra whose dimension scales polynomially with the number of lattice bonds [60; 71]. In particular, these works proposed the class of _vector models_ [60]
\[H_{\text{vec}}=\sum_{\langle ij\rangle_{a}}J_{a}\Gamma^{a}_{i}\Gamma^{a}_{j}, \tag{2}\]
in which the label \(a\) is assigned at most once for each type of bond in the lattice. All vector models commute with an extensive number of local operators given by an ordered product \(\Gamma\) on the elementary plaquettes [60].
An even larger number of integrable models can be defined with the operators \(\Gamma^{ab}=\frac{1}{2i}\left[\Gamma^{a},\Gamma^{b}\right]\) (\(a<b\) and \(q\geq 1\)) [65; 68]. For concreteness, we express these generalizations only on the honeycomb lattice, where they
Figure 1: The mean-field ground-state phase diagram of the S=3/2 KHM with a [001] SIA in the zero-flux sector. The \(A_{0}\) phase is a Dirac QSL with spin quadrupolar order \(\langle T^{z}\rangle=Q^{z}<0\). In the \(A_{z}\) (\(B\)) phase, the spinon excitations are gapped with \(Q^{z}<0\) (\(Q^{z}>0\)). At the isotropic point (blue star), the ground state is a Dirac QSL with \(Q^{z}=0\). The bold blue line at \(D_{z}=\infty\) with \(J_{z}<8\) (\(J_{z}>8\)) represents the effective gapless (gapped) S=1/2 KSL. The gapless phases in S=3/2 and S=1/2 KHMs can continuously connect to each other through the \(A_{0}\) phase.
read
\[H =\sum_{\langle ij\rangle_{\gamma}}K_{\gamma}\Gamma_{i}^{\gamma} \Gamma_{j}^{\gamma}\] \[+\sum_{\langle ij\rangle_{\gamma}}\sum_{\alpha=4}^{2q+1}\left(K_{ \gamma}^{\alpha}\Gamma_{i}^{\gamma}\Gamma_{j}^{\gamma\alpha}+K_{\gamma}^{ \prime\alpha}\Gamma_{i}^{\gamma\alpha}\Gamma_{j}^{\gamma}\right)\] \[+\sum_{\langle ij\rangle_{\gamma}}\sum_{\alpha,\beta=4}^{2q+1}J_{ \gamma}^{\alpha\beta}\Gamma_{i}^{\gamma\alpha}\Gamma_{j}^{\gamma\beta}, \tag{3}\]
with the three bond directions \(\gamma\) expressed by different colors in Fig. 2.
Next, we can discuss the connection with the spin-\(S\) KHM on the honeycomb lattice given by the Hamiltonian
\[H_{\text{Kit}}=\sum_{\langle ij\rangle_{\gamma}}J_{\gamma}S_{i}^{\gamma}S_{j}^ {\gamma}, \tag{4}\]
in which \(\gamma\) labels both the inequivalent bonds on the honeycomb lattice and the corresponding spin quantization axis in the cubic frame [14; 15; 18].
The operators \(\sigma_{i}^{\gamma}=2S_{i}^{\gamma}\) satisfy the Clifford algebra in Eq. (1) only for \(S=1/2\), which thus corresponds to the \(q=0\) vector model. The conserved operators for \(S=1/2\) are \(W_{p}^{1/2}=\sigma_{1}^{z}\sigma_{2}^{x}\sigma_{3}^{y}\sigma_{4}^{z}\sigma_{5}^{x}\sigma_{6}^{y}\)[2] with the label convention set in Fig. 2(a). Kitaev then provided an exact solution of the \(S=1/2\) model using a Majorana fermion representation
\[\sigma_{i}^{\gamma}=-i\eta_{i}^{\gamma}\theta_{i}^{0}, \tag{5}\]
in which the four Majorana flavors satisfy \(\left\{\Upsilon_{i}^{\alpha},\Upsilon_{j}^{\beta}\right\}=2\delta_{ij}\delta^{ \alpha\beta}\), where \(\Upsilon\) is an \(\eta\) or \(\theta^{0}\) flavor. The Hamiltonian in terms of Majoranas is
\[H_{\text{Kit}}^{S=1/2}=\sum_{\langle ij\rangle_{\gamma}}\frac{J_{\gamma}}{4} \hat{u}_{\langle ij\rangle_{\gamma}}i\theta_{i}^{0}\theta_{j}^{0}, \tag{6}\]
in which \(\hat{u}_{\langle ij\rangle_{\gamma}}=-i\eta_{i}^{\gamma}\eta_{j}^{\gamma}\) are conserved \(Z_{2}\) bond operators akin to a static gauge field. The product of eigenvalues of \(\hat{u}_{\langle ij\rangle_{\gamma}}\) around a plaquette fixes the \(\left\{W_{p}^{1/2}\right\}\) flux sector [2]. The ground state in the thermodynamic limit is characterized by \(W_{p}^{1/2}=+1,\forall p\)[72] with a dispersion of the matter sector given by
\[\epsilon(\mathbf{k})=\frac{1}{2}\left|J_{z}+J_{x}e^{i\mathbf{k}\cdot\mathbf{a }_{x}}+J_{y}e^{i\mathbf{k}\cdot\mathbf{a}_{y}}\right|, \tag{7}\]
in which \(\mathbf{a}_{x,y}=\pm\frac{1}{2}\hat{\mathbf{x}}+\frac{\sqrt{3}}{2}\hat{\mathbf{ y}}\) as shown in Fig. 2.
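The dispersion of Eq. (7) is simple enough to scan numerically. The short Python sketch below evaluates it on a crude momentum grid and illustrates the gapless (isotropic) versus gapped (\(J_{z}\)-dominated) regimes; the grid size and the two sample couplings are arbitrary illustrative choices.

```python
import numpy as np

# nearest-neighbour vectors a_x and a_y entering Eq. (7)
a_x = np.array([+0.5, np.sqrt(3) / 2])
a_y = np.array([-0.5, np.sqrt(3) / 2])

def epsilon(kx, ky, jx=1.0, jy=1.0, jz=1.0):
    """Matter-fermion dispersion of Eq. (7) in the zero-flux sector."""
    k = np.stack([kx, ky], axis=-1)
    f = jz + jx * np.exp(1j * (k @ a_x)) + jy * np.exp(1j * (k @ a_y))
    return 0.5 * np.abs(f)

# crude Brillouin-zone scan: the spectrum is gapless when |Jz| <= |Jx| + |Jy|
ks = np.linspace(-2 * np.pi, 2 * np.pi, 401)
KX, KY = np.meshgrid(ks, ks)
for jz in (1.0, 2.5):
    print(f"Jz = {jz}: minimal matter-fermion energy on the grid = "
          f"{epsilon(KX, KY, jz=jz).min():.3f}")
```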
The KHM for \(S>1/2\) is not within the class of vector models since the anticommutator \(\left\{S_{i}^{a},S_{i}^{b}\right\}\) corresponds to a quadrupolar operator. Nevertheless, using identities
\[\left\{e^{i\pi S_{i}^{a}},S_{i}^{\beta}\right\} =0\text{ if }\alpha\neq\beta,\] \[\left[e^{i\pi S_{i}^{a}},S_{i}^{a}\right] =0, \tag{8}\]
it is still possible to find one conserved operator \(W_{p}^{S}\) per plaquette given by [43]
\[W_{p}^{S}=-\exp\left[i\pi\left(S_{1}^{z}+S_{2}^{x}+S_{3}^{y}+S_{4}^{z}+S_{5}^ {x}+S_{6}^{y}\right)\right], \tag{9}\]
in which the minus sign was inserted to include \(W_{p}^{1/2}\) as a specific case. Since spin operators do not commute with \(W_{p}^{S}\) for any \(S\), one can prove that spin-spin correlations vanish beyond nearest neighbors and there is no long-range magnetic order in any flux eigenstates of spin-\(S\) KHMs [43].
The exponential operators in Eq. (8) can also be used for defining a Jordan-Wigner-like transformation (JWT) leading to an analytical representation of the \(Z_{2}\) flux sector of the spin-\(S\) KHM [43]. The JWT starts with the definition of a string operator
\[\mu_{n}=\prod_{m<n}e^{i\pi(S_{m}^{z}+S)}, \tag{10}\]
in which \(m\), \(n\) label the sites following an order defined by strings running over the \(xy\) bonds [74; 75; 76; 73] [see Fig. 2(b)]. At the \(n\)th site, the exchange interactions along the strings are given by \(J_{t_{1}}S_{n-1}^{t_{1}}S_{n}^{t_{1}}\) and \(J_{t_{2}}S_{n}^{t_{2}}S_{n+1}^{t_{2}}\), where \(t_{1},t_{2}=x,y\). We can then define
\[\xi_{n} \equiv e^{i\pi\left(S_{n}^{t_{1}}+S\right)}\mu_{n},\] \[\chi_{n} \equiv e^{i\pi\left(S_{n}^{t_{2}}+S\right)}\mu_{n}, \tag{11}\]
which satisfy Majorana fermion (hard-core boson) statistics for half-integer (integer) values of \(S\). For any pair of sites \(ij\) forming a \(z\)-bond, \(u_{ij}=e^{i\pi S}\chi_{i}\chi_{j}\) is a Hermitian operator commuting with the Hamiltonian [43], and is directly related to the bond operators \(\hat{u}_{\langle ij\rangle}\) discussed above, i.e., they can also be used to fix the KHM flux sectors. On the other hand, \(\xi_{n}\) represents Majorana fermions for the matter sector only when \(S=1/2\), and we need to get into the specifics for understanding the KHM with \(S>1/2\).
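As a minimal numerical illustration of this construction, the Python sketch below builds the string operators of Eqs. (10)-(11) for \(S=3/2\) on a short open chain and verifies that they obey the Majorana algebra. For simplicity the sketch fixes \(t_{1}=x\) at every site, whereas in the honeycomb JWT the labels \(x\) and \(y\) alternate along the strings; the chain length is an arbitrary choice.

```python
import numpy as np
from functools import reduce

# spin-3/2 matrices in the S^z eigenbasis (m = 3/2, 1/2, -1/2, -3/2)
m = np.array([1.5, 0.5, -0.5, -1.5])
Sz = np.diag(m)
Sp = np.diag(np.sqrt(15 / 4 - m[1:] * (m[1:] + 1)), k=1)   # S^+
Sx, Sy = (Sp + Sp.conj().T) / 2, (Sp - Sp.conj().T) / (2j)
I4 = np.eye(4)

def exp_ipi(S):
    """exp(i*pi*S) via the eigendecomposition of the Hermitian matrix S."""
    w, v = np.linalg.eigh(S)
    return (v * np.exp(1j * np.pi * w)) @ v.conj().T

# e^{i pi (S^alpha + S)} = -i e^{i pi S^alpha} for S = 3/2
sig = {a: -1j * exp_ipi(S) for a, S in zip("xyz", (Sx, Sy, Sz))}

L = 4  # sites on the toy chain

def xi(n):
    """xi_n = (prod_{m<n} e^{i pi (S_m^z + S)}) e^{i pi (S_n^x + S)}, cf. Eqs. (10)-(11)."""
    ops = [sig["z"]] * n + [sig["x"]] + [I4] * (L - n - 1)
    return reduce(np.kron, ops)

# check the Majorana algebra {xi_m, xi_n} = 2 delta_{mn}
for a in range(L):
    for b in range(L):
        anti = xi(a) @ xi(b) + xi(b) @ xi(a)
        assert np.allclose(anti, 2 * np.eye(4**L) if a == b else 0)
print("the string operators xi_n obey the Majorana algebra on the toy chain")
```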
Figure 2: Conventions for the honeycomb lattice that are used throughout the text. (a) Detail of the honeycomb plaquette. The colors green, blue, and red correspond to \(\gamma=z,x,y\), respectively. At each bond \(\langle ij\rangle_{\gamma}\), the interaction between the spins is given by \(J_{\gamma}S_{i}^{\gamma}S_{j}^{\gamma}\), in which \(\gamma\) is defined by the bond. (b) Site counting convention used in the Jordan-Wigner transformation discussed in the text together with the labelling convention of the nearest-neighbor vectors \(\mathbf{a}_{x,y}\) and next-nearest-neighbor vectors \(\mathbf{d}_{1,3,5}\).
### Spin-orbital representation of the Spin-3/2 KHM
For the remainder, we focus on the \(S=3/2\) case and derive an alternative representation in terms of a KK model. We start by defining the spin-3/2 pseudo-dipoles \(\mathbf{\sigma}\) and pseudo-orbitals \(\mathbf{T}\) as follows
\[\sigma_{i}^{\alpha} =-i\exp\left(i\pi S_{i}^{\alpha}\right),\] \[T_{i}^{z} =\left(S_{i}^{z}\right)^{2}-\frac{5}{4},\] \[T_{i}^{x} =\frac{1}{\sqrt{3}}\left[\left(S_{i}^{x}\right)^{2}-\left(S_{i}^ {y}\right)^{2}\right],\] \[T_{i}^{y} =\frac{2\sqrt{3}}{9}\overline{S_{i}^{x}S_{i}^{y}S_{i}^{z}}, \tag{12}\]
in which the bar indicates a sum over all permutations of the operators under it [77]. The definition of \(\mathbf{\sigma}\) is motivated by the exponential operators in Eqs.(8), (9), and (10), and an imaginary factor \(-i\) ensures that the pseudo-dipoles satisfy the SU(2) algebra for \(S=1/2\) operators. The \(T^{z}\) and \(T^{x}\) operators are \(S=3/2\) quadrupoles that commute with \(\mathbf{\sigma}\) and transform as \(e_{g}\) orbital operators by transformations in real space. Including the octupolar operator \(T^{y}\) which forms a unidimensional representation of the \(O_{h}\) group [77], \(\mathbf{T}\) also satisfy the SU(2) algebra. The algebra of \((\mathbf{\sigma},\mathbf{T})\) can be summarized as follows
\[\left[\sigma_{i}^{\alpha},\sigma_{j}^{\beta}\right] =2i\delta_{ij}\epsilon^{\alpha\beta\gamma}\sigma_{i}^{\gamma},\] \[\left[T_{i}^{\alpha},T_{j}^{\beta}\right] =2i\delta_{ij}\epsilon^{\alpha\beta\gamma}T_{i}^{\gamma},\] \[\left\{\sigma_{i}^{\alpha},\sigma_{j}^{\beta}\right\} =\left\{T_{i}^{\alpha},T_{j}^{\beta}\right\}=2\delta_{ij}\delta^{ \alpha\beta},\] \[\left[\sigma_{i}^{\alpha},T_{j}^{\beta}\right] =0, \tag{13}\]
in which \(\epsilon^{\alpha\beta\gamma}\) is the anti-symmetric Levi-Civita symbol. The \((\mathbf{\sigma},\mathbf{T})\) operators were extensively used in the description of \(j=3/2\) Mott insulators as they allow an alternative representation of multipolar interactions and a transparent representation of global symmetries [52; 53; 54; 55; 56; 57; 58].
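Since the algebra of Eq. (13) is easy to get wrong by a sign, a short numerical verification can be reassuring. The sketch below constructs the operators of Eq. (12) from the spin-3/2 matrices and checks Eq. (13) directly; the variable names are ours and the snippet is purely a consistency check.

```python
import numpy as np
from itertools import permutations

# spin-3/2 matrices in the S^z eigenbasis (m = 3/2, 1/2, -1/2, -3/2)
m = np.array([1.5, 0.5, -0.5, -1.5])
Sz = np.diag(m)
Sp = np.diag(np.sqrt(15 / 4 - m[1:] * (m[1:] + 1)), k=1)
Sx, Sy = (Sp + Sp.conj().T) / 2, (Sp - Sp.conj().T) / (2j)
S = {"x": Sx, "y": Sy, "z": Sz}

def exp_ipi(A):
    w, v = np.linalg.eigh(A)
    return (v * np.exp(1j * np.pi * w)) @ v.conj().T

# pseudo-dipoles and pseudo-orbitals of Eq. (12)
sig = {a: -1j * exp_ipi(S[a]) for a in "xyz"}
T = {
    "z": Sz @ Sz - 5 / 4 * np.eye(4),
    "x": (Sx @ Sx - Sy @ Sy) / np.sqrt(3),
    "y": (2 * np.sqrt(3) / 9)
         * sum(S[a] @ S[b] @ S[c] for a, b, c in permutations("xyz")),
}

cyclic = {("x", "y"): "z", ("y", "z"): "x", ("z", "x"): "y"}
for O in (sig, T):
    for (a, b), c in cyclic.items():
        assert np.allclose(O[a] @ O[b] - O[b] @ O[a], 2j * O[c])   # SU(2) algebra
        assert np.allclose(O[a] @ O[b] + O[b] @ O[a], 0)           # anticommutation
    for a in "xyz":
        assert np.allclose(O[a] @ O[a], np.eye(4))                 # (O^a)^2 = 1
# pseudo-dipoles commute with pseudo-orbitals
assert all(np.allclose(sig[a] @ T[b], T[b] @ sig[a]) for a in "xyz" for b in "xyz")
print("Eqs. (12)-(13) verified numerically for S = 3/2")
```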
value \(\langle T^{z}\rangle=+1\) (\(\langle T^{z}\rangle=-1\)) for infinitesimal values of \(D_{z}<0\) (\(D_{z}>0\)) as indicated in Fig. 3(b). More generally, the \(|D_{z}|\to\infty\) limit of the spin operators reads
\[\lim_{|D_{z}|\to\infty}\mathbf{S}_{i}\to\begin{cases}\left(-\sigma_{i}^{x},- \sigma_{i}^{y},\,\frac{\sigma_{i}^{z}}{2}\right),&D_{z}>0,\\ \left(0,0,-\frac{3\sigma_{i}^{z}}{2}\right),&D_{z}<0.\end{cases} \tag{20}\]
Thus, large positive values of \(D_{z}\) map the \(S=3/2\) KHM into its \(S=1/2\) version with renormalized coupling constant \(J_{z}\to J_{z}/4\), while large negative \(D_{z}\) rapidly maps it into the \(S=1/2\) gapped KHM. In other words, the \([001]\) SIA provides a natural mapping between the \(S=3/2\) and \(S=1/2\) KHMs while also elucidating the relevance of the \(\langle T^{z}\rangle\) quadrupolar field.
Second, we study the KK model \(H_{\text{Kit}}^{\sigma T}\) which turns out to be an exactly solvable model within the class given by Eq. (3). This becomes transparent when using the following equivalence between \(\Gamma\) matrices and the spin-orbital operators \((\mathbf{\sigma},\mathbf{T})\)
\[\Gamma^{1}= \frac{\sqrt{3}}{3}\left\{S^{y},S^{z}\right\}=-\sigma^{x}T^{y},\] \[\Gamma^{2}= \frac{\sqrt{3}}{3}\left\{S^{z},S^{x}\right\}=-\sigma^{y}T^{y},\] \[\Gamma^{3}= \frac{\sqrt{3}}{3}\left\{S^{x},S^{y}\right\}=-\sigma^{z}T^{y},\] \[\Gamma^{4}= T^{x},\] \[\Gamma^{5}= T^{z}, \tag{21}\]
by which one can re-express Eq. (3) as
\[H=\sum_{\langle ij\rangle_{\gamma}}\sum_{a,b=x,y,z}J_{\gamma}^{ab}\sigma_{i} ^{\gamma}\sigma_{j}^{\gamma}T_{i}^{a}T_{j}^{b}. \tag{22}\]
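A quick numerical check of the identifications in Eq. (21) (with \(\Gamma^{1}\) built from the anticommutator \(\{S^{y},S^{z}\}\)) is given below: the five matrices indeed form a Clifford algebra, and their product equals \(-\mathbb{I}\), which is the constraint invoked after Eq. (25). The snippet reuses the spin-3/2 matrices of the previous sketch and is only a consistency check.

```python
import numpy as np

# spin-3/2 matrices (same construction as in the previous snippet)
m = np.array([1.5, 0.5, -0.5, -1.5])
Sz = np.diag(m)
Sp = np.diag(np.sqrt(15 / 4 - m[1:] * (m[1:] + 1)), k=1)
Sx, Sy = (Sp + Sp.conj().T) / 2, (Sp - Sp.conj().T) / (2j)

anti = lambda A, B: A @ B + B @ A
G = [
    anti(Sy, Sz) / np.sqrt(3),           # Gamma^1
    anti(Sz, Sx) / np.sqrt(3),           # Gamma^2
    anti(Sx, Sy) / np.sqrt(3),           # Gamma^3
    (Sx @ Sx - Sy @ Sy) / np.sqrt(3),    # Gamma^4 = T^x
    Sz @ Sz - 5 / 4 * np.eye(4),         # Gamma^5 = T^z
]

# Clifford algebra {Gamma^a, Gamma^b} = 2 delta_ab ...
for a in range(5):
    for b in range(5):
        assert np.allclose(anti(G[a], G[b]), 2 * np.eye(4) * (a == b))
# ... and the product identity invoked below Eq. (25)
assert np.allclose(G[0] @ G[1] @ G[2] @ G[3] @ G[4], -np.eye(4))
print("Eq. (21) defines a Clifford algebra with Gamma^1 ... Gamma^5 = -1")
```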
We note that a related but different soluble KK model has been introduced and studied in Ref. [68]. We will discuss the properties of the exact solution of this new model in the next section in terms of an SO(6) Majorana parton representation of the \(S=3/2\) operators [1].
Third, the last model \(H_{\text{Kit}}^{\sigma,\sigma T}\) shares the gauge structure of the other two models but within a given flux sector the remaining Majorana problem is still quartic, and thus, not exactly soluble.
### Relation between spin-orbital operators and SO(6) Majorana partons
In Ref. [1], we used an SO(6) Majorana parton representation of the \(S=3/2\) operators which allowed us to uncover the static Z\({}_{2}\) gauge field description of the flux operators. Here, we will clarify the connection with the pseudo-dipole and pseudo-orbital operators, which can be written in terms of SO(6) Majorana partons as follows [50; 56; 59; 60; 62; 63; 65]
\[\mathbf{\sigma}_{i}=-\frac{i}{2}\mathbf{\eta}_{i}\times\mathbf{\eta}_{i}, \mathbf{T}_{i}=-\frac{i}{2}\mathbf{\theta}_{i}\times\mathbf{\theta}_{i},\] \[\sigma_{i}^{\alpha}T_{i}^{\beta}=-i\eta_{i}^{\alpha}\theta_{i}^{ \beta}, \tag{23}\]
in which \(\alpha,\beta=x,y,z\) and \(\mathbf{\eta}_{i},\mathbf{\theta}_{i}\) satisfy
\[\left\{\eta_{i}^{\alpha},\eta_{j}^{\beta}\right\} =\left\{\theta_{i}^{\alpha},\theta_{j}^{\beta}\right\}=2\delta_{ ij}\delta^{\alpha\beta},\] \[\left\{\eta_{i}^{\alpha},\theta_{j}^{\beta}\right\} =0. \tag{24}\]
The constraint to the physical Hilbert space is identified by noticing that Eq. (21) requires that \(\Gamma_{i}^{1}\Gamma_{i}^{2}\Gamma_{i}^{3}\Gamma_{i}^{4}\Gamma_{i}^{5}=-\mathbb{I}\) at all sites. In terms of Eq. (23), the left-hand side of the equation defines the operator \(D_{i}\) given by
\[D_{i}=i\eta_{i}^{\alpha}\eta_{i}^{\beta}\eta_{i}^{\gamma}\theta_{i}^{\alpha} \theta_{i}^{\beta}\theta_{i}^{\gamma}. \tag{25}\]
We then demand that a physical state satisfies \(D_{i}=1,\forall i\). Equivalently, we can formally write a projector operator \(P\)[82]
\[P=\prod_{i}\frac{1+D_{i}}{2}\equiv P^{\prime}\left(\frac{1+D}{2}\right), \tag{26}\]
in which \(D=\prod_{i=1}^{2N}D_{i}\) and \(P^{\prime}\) is the sum over all inequivalent gauge transformations. A state \(\ket{\psi}\) is physical if, and only if, \(\ket{\psi}=P\ket{\psi}\). An explicit formula for \(D\) can be derived following Refs. [83; 84] and is given in Appendix C for SO(6) Majorana fermions.
We can now use the partons for an exact solution of the model in Eq. (18b) as it reads in the new form
\[H_{\text{Kit}}^{\sigma T}=\sum_{\langle ij\rangle_{\gamma}}J_{\gamma}\hat{u}_{ \langle ij\rangle_{\gamma}}i\theta_{i}^{\alpha\beta}\theta_{j}^{\alpha\beta}, \tag{27}\]
where \(\hat{u}_{\langle ij\rangle_{\gamma}}\) is the same \(Z_{2}\) bond operator defined for the \(S=1/2\) KHM and \(\theta^{xy}=\theta^{z}\), \(\theta^{yz(zx)}=\left(-\theta^{z}\pm\sqrt{3}\theta^{x}\right)/2\). Notice that \(\theta^{y}\) fermions are absent and lead to zero-energy flat bands at any flux sector or choice of exchange couplings, in analogy to \(H_{\text{Kit}}^{\sigma}\). The ground state is again in the zero-flux sector [72], for which the dispersive bands can be gapped or gapless according to the values of \(J_{\gamma}\) (see Fig. 4). The isotropic case in Fig. 4(b) displays a band whose dispersion is exactly \(\epsilon(\mathbf{k})\) in Eq. (7), i.e., it is formally the same as the original Kitaev model. This band is sandwiched between two flat bands with energy
Figure 3: (a) Dispersion of \(H_{\text{Kit}}^{\sigma}\) with flat bands, (b) Strong first-order quantum phase transition induced by the onset of \([001]\) single-ion anisotropy quantified by \(D_{z}\).
given exactly by \(E=0\) and \(E=3J\). Away from the isotropic limit, the high-energy flat band acquires a dispersion and the intermediate bands deviate from \(\epsilon(\mathbf{k})\), see Figs. 4(c) and (d).
Finally, we would like to point out a few interesting aspects in relation to the exact solution of the first Hamiltonian \(H_{\text{Kit}}^{\sigma}\) in terms of SO(6) Majorana fermions. Although a solution for this model can be obtained directly through the SO(3) representation of \(\mathbf{\sigma}\) given by Eq. (23) [85], it is instructive to obtain an alternative representation of the pseudo-dipoles through the \(D_{i}\) operators [86]. By evaluating \(D_{i}\mathbf{\sigma}_{i}\) and then setting \(D_{i}=1\), we obtain \(\sigma_{i}^{\gamma}=-i\eta_{i}^{\gamma}\theta_{i}^{0}\), in which
\[\theta_{i}^{0}\equiv-i\theta_{i}^{x}\theta_{i}^{y}\theta_{i}^{z}=-i\theta_{i}^{\alpha}\theta_{i}^{\beta}\theta_{i}^{\gamma}. \tag{28a}\]
The expression for the pseudo-dipoles is the same as the one obtained for the \(S=1/2\) KHM in Eq. (5), but the fact that \(\theta_{i}^{0}\) is now a product of three Majorana flavors demands a more careful analysis. \(\theta_{i}^{0}\) satisfies the Majorana fermion algebra \(\left\{\theta_{i}^{0},\theta_{j}^{0}\right\}=2\delta_{ij}\) and \(\left\{\theta_{i}^{0},\eta_{j}^{\gamma}\right\}=0\), so that the spectrum of matter excitations of \(H_{\text{Kit}}^{\sigma}\) can still be obtained exactly by mapping the Hamiltonian to a free-fermion-like problem. However, the dimension of the \(\theta_{i}^{0}\) Hilbert space is twice that of a conventional Majorana fermion, which reflects the independence of \(H_{\text{Kit}}^{\sigma}\) from the orbital states. \(\theta_{i}^{0}\) is also very sensitive to local orbital operators such as the SIA in Eq. (19), which is represented as
\[H_{\text{SIA}}^{z}=-\sum_{j}D_{z}i\theta_{j}^{x}\theta_{j}^{y}. \tag{29}\]
Combining Eq. (28a) and Eq. (29), we observe that the SIA along the \(z\)-direction "freezes" the Majorana flavors \(\theta^{x}\) and \(\theta^{y}\) and allows the replacement \(\theta_{i}^{0}\rightarrow-\text{sign}\left(D_{z}\right)\theta_{i}^{z}\), in accordance with our previous discussion.
Remarkably, although \(H_{\text{Kit}}^{\sigma}\) and \(H_{\text{Kit}}^{\sigma T}\) are individually exactly soluble, their sum is not, due to the same-site commutation relation
\[\left[\theta_{i}^{0},\theta_{i}^{\gamma}\right]=0. \tag{30}\]
Thus, the set of four operators \(\theta_{i}^{0},\theta_{i}^{x,y,z}\) do not behave as mutual Majorana fermions when all present, but instead are operators akin to what is known in the literature as _Greenberg parafermions_[87; 88; 89]. Returning to the JWT expressed in Eq. (11), it is possible to demonstrate an equivalence between \(\theta_{i}^{0}\) and \(\xi_{i}\), as well as between \(\theta_{i}^{\gamma}\) and \(\xi_{i}T_{i}^{\gamma}\) (the interested reader can follow Appendix B). Besides giving an interpretation to the SO(6) Majorana partons in terms of strings of operators this observation could possibly be useful for more general classes of parafermions in \(S=3/2\) models [90; 91; 92; 93].
## III Parton mean-field theory of the \(S=3/2\) Khm
After having discussed the different representations of \(S=3/2\) operators in terms of spin-orbital operators and SO(6) partons, we would like to study the full \(S=3/2\) KHM that is explicitly given by
\[H_{\text{Kit}} =\sum_{\langle ij\rangle_{\gamma}}J_{\gamma}\hat{u}_{\langle ij \rangle_{\gamma}}i\theta_{i}^{\alpha\beta}\theta_{j}^{\alpha\beta}\] \[+\sum_{\langle ij\rangle_{\gamma}}\frac{J_{\gamma}}{4}\hat{u}_{ \langle ij\rangle_{\gamma}}i\theta_{i}^{0}\theta_{j}^{0}\] \[+\sum_{\langle ij\rangle_{\gamma}}\frac{J_{\gamma}}{2}\hat{u}_{ \langle ij\rangle_{\gamma}}\left(i\theta_{i}^{0}\theta_{j}^{\alpha\beta}+i \theta_{i}^{\alpha\beta}\theta_{j}^{0}\right). \tag{31}\]
We emphasize that the first line of Eq. 31 is quadratic in terms of SO(6) Majorana fermions, whereas the second line is sextic, and the third is quartic. In order to proceed with analytical calculations, we need to perform a mean-field decoupling in terms of the following parameters [1]
\[Q_{i}^{\gamma} =\left\langle T_{i}^{\gamma}\right\rangle=-\left\langle i\theta_{i }^{\alpha}\theta_{i}^{\beta}\right\rangle,\] \[\Delta_{ij}^{\lambda\mu} =-\left\langle i\theta_{i}^{\lambda}\theta_{j}^{\mu}\right\rangle, \tag{32}\]
in which \(i\) and \(j\) are nearest-neighbor sites and the averages are obtained self-consistently. More explicitly, we write
\[i\theta_{i}^{0}\theta_{j}^{\alpha\beta}+i\theta_{i}^{\alpha\beta}\theta_{j}^{0} \approx\sum_{p=x,y,z}\left(Q_{i}^{p}i\theta_{i}^{p}\theta_{j}^{\alpha\beta}+\Delta_{\langle ij\rangle_{\gamma}}^{p,\alpha\beta}i\theta_{i}^{q}\theta_{i}^{r}\right)\] \[+\sum_{p=x,y,z}\left(Q_{j}^{p}i\theta_{i}^{\alpha\beta}\theta_{j}^{p}+\Delta_{\langle ij\rangle_{\gamma}}^{\alpha\beta,p}i\theta_{j}^{q}\theta_{j}^{r}\right), \tag{33}\]
\[i\theta_{i}^{0}\theta_{j}^{0}\approx -\sum_{a=x,y,z}\left\langle\theta_{i}^{c}\theta_{j}^{x}\theta_{j}^{y}\theta_{j}^{z}\right\rangle i\theta_{i}^{a}\theta_{i}^{b}\] \[-\sum_{a=x,y,z}\left\langle\theta_{i}^{x}\theta_{i}^{y}\theta_{i}^{z}\theta_{j}^{c}\right\rangle i\theta_{j}^{a}\theta_{j}^{b}\] \[-\sum_{a,a^{\prime}=x,y,z}\left\langle\theta_{i}^{b}\theta_{i}^{c}\theta_{j}^{b^{\prime}}\theta_{j}^{c^{\prime}}\right\rangle i\theta_{i}^{a}\theta_{j}^{a^{\prime}}. \tag{34}\]
in which \((p,q,r)\), \((a,b,c)\), and \((a^{\prime},b^{\prime},c^{\prime})\) denote cyclic permutations of \((x,y,z)\). The quartic averages in Eq. (34) are expressed in terms of the parameters of Eq. (32) using Wick's theorem, which states
\[-\left\langle\theta_{i}^{c}\theta_{j}^{x}\theta_{j}^{y}\theta_{j}^{z}\right\rangle =\Delta_{\langle ij\rangle}^{cx}Q_{j}^{x}+\Delta_{\langle ij\rangle}^{cy}Q_{j}^{y}+\Delta_{\langle ij\rangle}^{cz}Q_{j}^{z},\] \[-\left\langle\theta_{i}^{x}\theta_{i}^{y}\theta_{i}^{z}\theta_{j}^{c}\right\rangle =Q_{i}^{x}\Delta_{\langle ij\rangle}^{xc}+Q_{i}^{y}\Delta_{\langle ij\rangle}^{yc}+Q_{i}^{z}\Delta_{\langle ij\rangle}^{zc}, \tag{35}\] \[-\left\langle\theta_{i}^{b}\theta_{i}^{c}\theta_{j}^{b^{\prime}}\theta_{j}^{c^{\prime}}\right\rangle =Q_{i}^{a}Q_{j}^{a^{\prime}}-\Delta_{\langle ij\rangle}^{bb^{\prime}}\Delta_{\langle ij\rangle}^{cc^{\prime}}+\Delta_{\langle ij\rangle}^{bc^{\prime}}\Delta_{\langle ij\rangle}^{cb^{\prime}}.\]
Although a large number of mean-field parameters are introduced in Eq. (32), a closer analysis of \(Q_{i}^{\gamma}\) and \(\Delta_{ij}^{\lambda\mu}\) shows that many of them vanish or are related to each other up to a sign, evincing symmetry constraints. In the following, we revisit the \(S=3/2\) KSL under the assumption that it preserves TRS and spatial symmetries, which greatly reduces the number of independent PMFT parameters. We restrict our analysis to exchange parameters along the line \(J_{x}=J_{y}=1\) and \(D_{z}=0\), for which
it displays one mirror symmetry \(M^{b}\), whose mirror operator lies along the \(a\) axis, and a \(\pi\) rotation around the \(b\) axis (see Fig. 5). Whenever \(Z_{2}\) gauge operators were fixed, we assume that the operation was performed in the zero-flux sector. We also analyze the \(C_{3}\) rotation symmetry around the \(c\) axis on the isotropic model, which is crucial to understand the strong first-order quantum phase transition that separates the distinct KSL phases.
### Symmetries of the \(S=3/2\) Khm
#### iv.1.1 Time-reversal symmetry
Due to the oddness of spin under time-reversal \(\mathcal{T}\), Eq. (12) implies that the pseudo-dipoles and pseudo-orbitals transform like
\[\mathcal{T}\mathbf{\sigma}_{i}\mathcal{T}^{-1} =-\mathbf{\sigma}_{i},\] \[\mathcal{T}\mathbf{T}_{i}\mathcal{T}^{-1} =\left(T_{i}^{x},-T_{i}^{y},T_{i}^{z}\right). \tag{36}\]
By including the effect of complex conjugation \(\mathcal{T}i\mathcal{T}^{-1}=-i\), the corresponding action of \(\mathcal{T}\) on the SO(6) Majorana partons is [52]
\[\mathcal{T}\mathbf{\eta}_{i}\mathcal{T}^{-1} =\left(\eta_{i}^{x},\eta_{i}^{y},\eta_{i}^{z}\right),\] \[\mathcal{T}\mathbf{\theta}_{i}\mathcal{T}^{-1} =\left(\theta_{i}^{x},-\theta_{i}^{y},\theta_{i}^{z}\right), \tag{37}\]
upon which we define the indices \(\mathfrak{t}_{x}=\mathfrak{t}_{z}=1\), \(\mathfrak{t}_{y}=-1\) for the matter fermions. The transformation of products of order parameters and \(Z_{2}\) gauge variables is then given by
\[\mathcal{T}\left(i\tilde{u}_{ij}^{\gamma}\theta_{i}^{\delta}\theta_{j}^{ \mu}\right)\mathcal{T}^{-1}=\mathfrak{t}_{\lambda}\mathfrak{t}_{\mu}i\tilde{u }_{ij}^{\gamma}\theta_{i}^{\lambda}\theta_{j}^{\mu}, \tag{38}\]
where we used \(\mathcal{T}\tilde{u}_{ij}^{\gamma}\mathcal{T}^{-1}=-\tilde{u}_{ij}^{\gamma}\). Let us then fix the gauge operators. If the ground state \(\ket{\psi_{0}}\) does not break a symmetry \(\mathcal{S}\), then \(\langle\psi_{0}|\mathcal{O}|\psi_{0}\rangle=\langle\psi_{0}|\mathcal{S}\mathcal{O}\mathcal{S}^{-1}|\psi_{0}\rangle\). Eq. (38) implies that when \(\mathcal{O}\) is a product of two matter fermions, the parameters must fulfill
\[\Delta^{\lambda y}_{\left\langle ij\right\rangle_{\gamma}}=\Delta^{y\lambda}_ {\left\langle ij\right\rangle_{\gamma}}=0\text{ if }\lambda\neq y. \tag{39}\]
An important consequence of this relation is that, in a time-reversal symmetric QSL, \(\theta^{y}\) hybridizes with other Majorana flavors only through the onsite order parameters \(Q_{i}^{z}=-\langle i\theta_{i}^{x}\theta_{i}^{y}\rangle\) or \(Q_{i}^{x}=-\langle i\theta_{i}^{y}\theta_{i}^{z}\rangle\).
#### iv.1.2 Mirror and \(C_{2}\) rotation
The effect of spatial symmetries on the Kitaev model is more readily understood in terms of \(\left(S^{a},S^{b},S^{c}\right)\) spins in the crystallographic frame, whose relation to the spins on the cubic axes is [14; 15; 18]
\[S^{x} =\frac{S^{a}}{\sqrt{6}}-\frac{S^{b}}{\sqrt{2}}+\frac{S^{c}}{\sqrt {3}},\] \[S^{y} =\frac{S^{a}}{\sqrt{6}}+\frac{S^{b}}{\sqrt{2}}+\frac{S^{c}}{\sqrt {3}},\] \[S^{z} =-\sqrt{\frac{2}{3}}S^{a}+\frac{S^{c}}{\sqrt{3}}. \tag{40}\]
The action of \(R=M^{b},C_{2}\) on an isolated spin is \(R\left(S^{a},S^{b},S^{c}\right)R^{-1}\equiv R\left(S^{a},S^{b},S^{c}\right)= \left(-S^{a},S^{b},-S^{c}\right)\), and leads to
\[R\mathbf{S}_{i}=\left(-S^{y}_{R(i)},-S^{x}_{R(i)},-S^{z}_{R(i)}\right). \tag{41}\]
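Eq. (41) is a purely linear-algebraic consequence of Eq. (40); the three-line numerical check below makes this explicit. The matrix names are ours, and the snippet only verifies the change of frame.

```python
import numpy as np

# cubic spin components in terms of the crystallographic (a, b, c) frame, Eq. (40)
M = np.array([
    [ 1 / np.sqrt(6), -1 / np.sqrt(2), 1 / np.sqrt(3)],   # S^x
    [ 1 / np.sqrt(6),  1 / np.sqrt(2), 1 / np.sqrt(3)],   # S^y
    [-np.sqrt(2 / 3),  0.0,            1 / np.sqrt(3)],   # S^z
])

# M^b and C_2 act as (S^a, S^b, S^c) -> (-S^a, S^b, -S^c) in the crystal frame
D = np.diag([-1.0, 1.0, -1.0])

R_cubic = M @ D @ M.T                      # induced action on (S^x, S^y, S^z)
expected = np.array([[ 0, -1,  0],
                     [-1,  0,  0],
                     [ 0,  0, -1]], dtype=float)

assert np.allclose(M @ M.T, np.eye(3))     # Eq. (40) is an orthogonal change of frame
assert np.allclose(R_cubic, expected)      # reproduces Eq. (41)
print("Eq. (41) follows from Eq. (40) and the crystal-frame action of M^b / C_2")
```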
The most relevant difference between \(M^{b}\) and \(C_{2}\) is that \(i\) and \(M^{b}(i)\) are on opposite sublattices, whereas \(i\) and \(C_{2}(i)\) are on the same. The application of Eq. (41) in Eq. (12) implies that
\[RT_{i}^{z} =T_{R(i)}^{z},\] \[RT_{i}^{x} =-T_{R(i)}^{x},\] \[RT_{i}^{y} =-T_{R(i)}^{y}. \tag{42}\]
Therefore, if \(C_{2}\) and translation symmetries are preserved,
\[Q_{X}^{x/y}=-Q_{X}^{x/y}\implies Q_{X}^{x/y}=0. \tag{43}\]
Hence, the only onsite order parameter allowed by spatial symmetries is \(Q^{z}\).
To evaluate the effect of symmetry operators over \(\Delta^{\lambda\mu}_{ij}\), we first observe that Eq. (41) implies that
\[R\mathbf{\eta}_{i} =\left(-\eta^{y}_{R(i)},-\eta^{x}_{R(i)},-\eta^{z}_{R(i)}\right),\] \[R\mathbf{\theta}_{i} =\left(-\theta^{x}_{R(i)},-\theta^{y}_{R(i)},\theta^{z}_{R(i)} \right). \tag{44}\]
Figure 4: (a) Phase diagram of the spin-orbital model \(H^{\sigma T}_{\text{Kh}}\) on the plane \(J_{x}+J_{y}+J_{z}=1\) with positive coupling constants, in which the dark blue area corresponds to gapless phases. The graphics (b-d) correspond to the spectrum of excitations along high-symmetry lines with the following coupling constants (b) \(J_{x}=J_{y}=J_{z}=1\), (c) \(J_{x}=J_{y}=1.0\), \(J_{z}=0.7\), and (d) \(J_{y}=0.1\), \(J_{x}=J_{z}=0.45\).
After defining
\[s_{\lambda}=\begin{cases}-1,&\text{if }\lambda=x,y\\ 1&\text{if }\lambda=z,\end{cases} \tag{45}\]
Eq. (44) yields
\[M^{b}\left(i\hat{u}^{\gamma}_{ij}\theta^{\lambda}_{i}\theta^{\mu}_{j}\right) =s_{\lambda}s_{\mu}\,i\hat{u}^{m(\gamma)}_{M^{b}(j)M^{b}(i)}\theta^{\mu}_{M^{b}(j)}\theta^{\lambda}_{M^{b}(i)},\] \[C_{2}\left(i\hat{u}^{\gamma}_{ij}\theta^{\lambda}_{i}\theta^{\mu}_{j}\right) =s_{\lambda}s_{\mu}\,i\hat{u}^{m(\gamma)}_{C_{2}(i)C_{2}(j)}\theta^{\lambda}_{C_{2}(i)}\theta^{\mu}_{C_{2}(j)}, \tag{46}\]
in which \(m(x)=y\), \(m(y)=x\), and \(m(z)=z\) are indices related to the bond transformation under \(R\). The \(C_{2}\) symmetry of the Hamiltonian then implies \(\Delta^{\lambda\mu}_{\left\langle ij\right\rangle_{\gamma}}=s_{\lambda}s_{\mu}\Delta^{\lambda\mu}_{\left\langle ij\right\rangle_{m(\gamma)}}\), which leads to
\[\Delta^{zx}_{\left\langle ij\right\rangle_{z}} =\Delta^{xz}_{\left\langle ij\right\rangle_{z}}=\Delta^{zy}_{ \left\langle ij\right\rangle_{z}}=\Delta^{yz}_{\left\langle ij\right\rangle_{ z}}=0,\] \[\Delta^{\lambda\mu}_{\left\langle ij\right\rangle_{y}} =s_{\lambda}s_{\mu}\Delta^{\lambda\mu}_{\left\langle ij\right\rangle _{x}}. \tag{47}\]
Applying a similar reasoning to \(M^{b}\), we find
\[\Delta^{\lambda\mu}_{\left\langle ij\right\rangle_{y}}=s_{\lambda}s_{\mu} \Delta^{\mu\lambda}_{\left\langle ij\right\rangle_{x}},\]
which in combination with Eq. (47) give
\[\Delta^{\lambda\mu}_{\left\langle ij\right\rangle_{x}}=\Delta^{\mu\lambda}_{ \left\langle ij\right\rangle_{x}},\,\Delta^{\lambda\mu}_{\left\langle ij \right\rangle_{y}}=\Delta^{\mu\lambda}_{\left\langle ij\right\rangle_{y}}. \tag{48}\]
The results gathered in this section imply that \(\Delta^{zx}_{\left\langle ij\right\rangle_{x}}\) is the only non-zero mixed-flavor order parameter \(\Delta\), and all others either vanish or are related to it by symmetry. We also confirmed this constraint numerically along the line \(J_{x}=J_{y}\).
#### iv.1.3 \(C_{3}\) symmetry
The isotropic point is a critical point of strong first-order phase transitions [1] which motivates a closer look. The key symmetry distinction of the KHM in this point to others discussed above is its invariance under \(C_{3}\) rotations, whose effect on spins is given by
\[C_{3}\left(\begin{array}{c}S^{x}_{\mathbf{r}X}\\ S^{y}_{\mathbf{r}X}\\ S^{z}_{\mathbf{r}X}\end{array}\right)=\left(\begin{array}{c}S^{y}_{\left(C_{3}\mathbf{r}\right)X}\\ S^{z}_{\left(C_{3}\mathbf{r}\right)X}\\ S^{x}_{\left(C_{3}\mathbf{r}\right)X}\end{array}\right), \tag{49}\]
in which we see that the sublattices remain invariant under rotation. The corresponding parton transformations are
\[C_{3}\left(\begin{array}{c}\eta^{x}_{\mathbf{r}X}\\ \eta^{y}_{\mathbf{r}X}\\ \eta^{z}_{\mathbf{r}X}\end{array}\right) =\left(\begin{array}{c}\eta^{y}_{\left(C_{3}\mathbf{r}\right)X}\\ \eta^{z}_{\left(C_{3}\mathbf{r}\right)X}\\ \eta^{x}_{\left(C_{3}\mathbf{r}\right)X}\end{array}\right),\] \[C_{3}\left(\begin{array}{c}\theta^{x}_{\mathbf{r}X}\\ \theta^{y}_{\mathbf{r}X}\\ \theta^{z}_{\mathbf{r}X}\end{array}\right) =\left(\begin{array}{ccc}-\frac{1}{2}&0&-\frac{\sqrt{3}}{2}\\ 0&1&0\\ \frac{\sqrt{3}}{2}&0&-\frac{1}{2}\end{array}\right)\left(\begin{array}{c}\theta^{x}_{\left(C_{3}\mathbf{r}\right)X}\\ \theta^{y}_{\left(C_{3}\mathbf{r}\right)X}\\ \theta^{z}_{\left(C_{3}\mathbf{r}\right)X}\end{array}\right). \tag{50}\]
These equations are enough to enforce several constraints between the order parameters that are tabled explicitly in Appendix E. In particular, the quadrupolar order parameters satisfy
\[Q^{z}_{X} =-\frac{1}{2}Q^{z}_{X}+\frac{\sqrt{3}}{2}Q^{x}_{X},\] \[Q^{x}_{X} =-\frac{1}{2}Q^{x}_{X}-\frac{\sqrt{3}}{2}Q^{z}_{X},\]
and therefore
\[Q^{z}_{X}=Q^{x}_{X}=0. \tag{51}\]
In other words, if the isotropic KSL does not break symmetries, then we do not expect any pseudo-orbital order at the isotropic point. This result is in sharp contrast to the semiclassical QSL proposed in Ref. [44], since the kekule pattern of the dimers impose an order of \(Q^{z}\) and \(Q^{x}\).
### Constrained Mean-field Hamiltonian
The symmetry constrained PMFT parameters for the zero-flux sector can be summarized as follows
\[Q^{x}=Q^{y} =0,\] \[Q^{z}_{A} =Q^{z}_{B},\] \[\Delta^{ab}_{\left\langle ij\right\rangle_{\gamma}} =\Delta^{ba}_{\left\langle ij\right\rangle_{\gamma}},\] \[\Delta^{ab}_{\left\langle ij\right\rangle_{y}} =s_{a}s_{b}\Delta^{ab}_{\left\langle ij\right\rangle_{x}},\] \[\Delta^{xz}_{\left\langle ij\right\rangle_{z}} =\Delta^{zx}_{\left\langle ij\right\rangle_{z}} =0,\] \[\Delta^{\lambda y}_{\left\langle ij\right\rangle_{\gamma}} =\Delta^{y\lambda}_{\left\langle ij\right\rangle_{\gamma}} =0,\text{ if }\lambda\neq y, \tag{52}\]
i.e., there are only eight independent, non-vanishing parameters to be computed self-consistently
\[Q^{z},\Delta^{aa}_{\left\langle ij\right\rangle_{z}},\Delta^{aa}_{\left\langle ij\right\rangle_{x}},\Delta^{zx}_{\left\langle ij\right\rangle_{x}}. \tag{53}\]
Figure 5: Two plaquettes of the honeycomb lattice. The figure displays (i) the crystallographic axes \((a,b,c)\), (ii) the projection of the \((x,y,z)\) axes onto the \(ab\) plane, (iii) the mirror elements \(M^{b}\) and the \(C_{2}\) rotation axis, and (iv) the distinction between even and odd sublattices.
For SIA preserving mirror, \(C_{2}\), and TRS the results above are valid for \(D_{z}\neq 0\). At the isotropic point, we find only three non-vanishing and independent parameters given by \(\Delta^{aa}_{\left<ij\right>_{z}}\). The order parameters obtained through unconstrained PMFT in Ref. [1] are consistent with these results, thus demonstrating that the \(S=3/2\) KSLs are the most general \(S=3/2\) Majorana QSL preserving all the model's symmetries while minimizing the energy.
We are now ready to give an in-depth description of the different KSL phases, starting with the isotropic case [1], as shown in Fig. 1. \(C_{3}\)-symmetry constraints enforce that \(H^{\sigma,\sigma T}_{\text{Kit,MFT}}=0\), such that the KHM at this point is described by \(H^{\sigma T}_{\text{Kit}}\) perturbed by terms proportional to \(J\left(\Delta^{aa}_{\left<ij\right>_{z}}\right)^{2}\) arising from the six-fermion interaction of \(H^{\sigma}_{\text{Kit}}\), see Eq. (36). The qualitative properties of the isotropic KSL can thus be understood from the "parent Hamiltonian" \(H^{\sigma T}_{\text{Kit}}\), but with an interaction-induced small dispersion of the isotropic QSL flat bands and a renormalization of the dispersive bands, as can be seen by comparing Fig. 6(a) and Fig. 4(b).
The symmetry constraint preventing the hybridization of \(\theta^{y}\) with mobile \(\theta^{x,z}\) fermions only applies at the isotropic point and for \(D_{z}=0\). For all other \(\left(J_{z},D_{z}\right)\) points, a nonzero \(Q^{z}=\left<T^{z}\right>\) expectation value appears, reducing the energy by strongly affecting the low-energy flat band. The presence of the flat band, therefore, explains the strong first-order phase transitions in the neighborhood of the isotropic point. Figs. 6(b) and (c) show that the Majorana fermion dispersion of both the gapped \(\left(J_{z}>1\right)\) and the gapless \(\left(J_{z}<1\right)\) phases is very different from the isotropic one even for small deviations from \(J_{z}=1\). Once the transition occurs, Fig. 6(d) indicates that \(Q^{z}\) varies slowly as a function of \(J_{z}\).
Let us now consider the gapped KSL exemplified by those on the line \(J_{z}>1\), \(D_{z}=0\). A qualitative picture of this KSL is understood by starting from \(J_{x}=J_{y}=0\) (or \(J_{z}\rightarrow\infty\)), which displays a \(2^{N}\)-fold degenerate ground state composed by all direct products of antiferromagnetic dimers with \(S^{z}=\pm 3/2\). All states in this manifold are characterized by the same quadrupolar order \(Q^{z}=+1\) at all sites. Introducing small values of \(J_{x}\) and \(J_{y}\) allows us to derive a toric code model [2] at the 12th order in perturbation theory for \(S=3/2\). The toric-code exchange coupling thus scales as \(\left(J_{z}\right)^{-11}\), which implies a rapid decay of the flux gap. This feature is manifest in the DMRG simulations, for which the plaquette operators \(W^{\sigma}_{p}\) are disordered in the gapped phase [1]. Indeed, PMFT estimates a flux gap \(\Delta_{\text{flux}}\leq 10^{-6}\) for \(J_{z}\gtrsim 1.2\), an energy difference that is smaller than the truncation error of DMRG simulations with 4000 kept states.
The \(S=3/2\) KSL for \(0<J_{z}<1\) and \(D_{z}=0\) is gapless, characterized by a negative \(Q^{z}\), and can be directly related to the \(S=1/2\) gapless KSL. Recall the discussion in Sec. II, where we showed how the \(S=3/2\) KHM is projected onto the \(S=1/2\) KHM with renormalized \(J_{z}\) when \(D_{z}\rightarrow+\infty\). The gapless \(S=1/2\) KSL is then adiabatically connected, i.e., without opening a gap, to the gapless \(S=3/2\) KSL phase along a path in the \(\left(J_{z},D_{z}\right)\) plane.
In the \(D_{z}\rightarrow\infty\) limit, the point \(J_{z}=8\) marks the phase transition between the gapless and the gapped \(S=1/2\) KHM phases, as shown in Fig. 1. This \(S=1/2\) gapped phase is not adiabatically connected to the gapped \(S=3/2\) phase discussed above, since the two are characterized by \(Q^{z}\) parameters with different signs, and any path connecting them in the \(\left(J_{z},D_{z}\right)\) parameter space passes through a first-order quantum phase transition.
## IV Effect of out-of-plane single-ion anisotropy
In this section, we study the \(S=3/2\) KHM perturbed by an experimentally relevant out-of-plane SIA. We find that the resulting QSL breaks TRS and displays topologically nontrivial bands which are reminiscent of the chiral QSL of the KHM where it is induced by an out-of-plane magnetic field applied to the gapless \(S=1/2\) KSL [2]. In the present case, TRS-breaking occurs spontaneously without an external magnetic field similar to cases of SU(\(N\)) Heisenberg models in the large-\(N\) limit [94; 95; 96] or Kitaev models on graphs containing plaquettes with an odd number of vertices [52; 53; 63; 82; 97]. However, we will show that in the case of the \(S=3/2\) KHM, the sum of the Chern numbers is equal to zero, resulting in a non-chiral ground state.
### Three-spin interaction induced by single-ion anisotropy
We now consider an out-of-plane SIA given by
\[H_{\text{SIA}}=-D_{c}\sum_{j}\left(S^{c}_{j}\right)^{2}, \tag{54}\]
in which the \(c\) axis is indicated in Fig. 5. Such a SIA is predicted to be relevant for the recently proposed \(S=3/2\) Kitaev materials on the honeycomb lattice [41; 42; 39]. Moreover, Ref. [41] proposes that strain can tune the van der Waals magnets into a model dominated by Kitaev interactions and out-of-plane SIA. Therefore, Eq. (54) is the simplest perturbation to the KHM, which has direct experimental implications. This term can be rewritten in terms of pseudo-dipoles and pseudo-orbitals using Eq. (21) as follows
\[H_{\text{SIA}}=\frac{D_{c}}{3}\sum_{j}\left(\sigma^{x}_{j}+\sigma^{y}_{j}+ \sigma^{z}_{j}\right)T^{y}_{j}, \tag{55}\]
in which we dropped an unimportant constant. The presence of pseudo-dipoles in this expression shows that \(H_{\text{SIA}}\) does not commute with \(W^{\sigma}_{p}\) and creates flux excitations. Recent studies of the \(S=1/2\) KHM have developed machinery to study non-flux-conserving
perturbations using variational methods [98; 99] or extensions of PMFT [100; 101; 102]. For simplicity, we will focus on the zero-flux sector within the third-order perturbation theory.
The SIA induces a three spin-orbital interaction preserving the flux sector in analogy to the effect of a magnetic field on the \(S=1/2\) KHM [2]. A straightforward way to show this is to rewrite Eq. (55) as
\[H_{\text{SIA}}=-\frac{D_{c}}{3}\sum_{j}i\left(\eta_{j}^{x}+\eta_{j}^{y}+\eta_ {j}^{z}\right)\theta_{j}^{y}, \tag{56}\]
which is analogous to the representation of an applied magnetic field on \(S=1/2\) systems [2]. Notice that the only matter flavor involved in \(H_{\text{SIA}}\) is \(\theta^{y}\), indicating a direct influence on the flat bands. The third-order perturbation theory of \(H_{\text{SIA}}\) displays a flux-conserving three-body interaction
\[H^{(3)}=\kappa\sum_{\left\langle ij\right\rangle_{\alpha}\left\langle jk \right\rangle_{\beta}}(\sigma_{i}^{\alpha}T_{i}^{y})\left(\sigma_{j}^{\gamma} T_{j}^{y}\right)\left(\sigma_{k}^{\beta}T_{k}^{y}\right), \tag{57}\]
in which \(i\) and \(k\) are second-nearest neighbors, \(j\) is the site bridging them, and \(\kappa\sim(D_{c}/3)^{3}\). The SO(6) Majorana representation also provides an adequate representation of \(H^{(3)}\), as it is clear by rewriting
\[\sigma_{i}^{\alpha}T_{i}^{y} =-i\eta_{i}^{\alpha}\theta_{i}^{y},\] \[\sigma_{k}^{\beta}T_{k}^{y} =-i\eta_{k}^{\beta}\theta_{k}^{y},\] \[\sigma_{j}^{\gamma}T_{j}^{y} =\left(-i\eta_{j}^{\alpha}\eta_{j}^{\beta}\right)\left(-i\theta_{j}^{z}\theta_{j}^{x}\right), \tag{58}\]
which leads to
\[H^{(3)}=-\kappa\sum_{\left\langle ij\right\rangle_{\alpha}\left\langle jk \right\rangle_{\beta}}\hat{U}_{\left\langle ik\right\rangle}\left(i\theta_{i}^ {y}\theta_{j}^{z}\right)\left(i\theta_{j}^{x}\theta_{k}^{y}\right), \tag{59}\]
where \(\hat{U}_{\left\langle ik\right\rangle}=\hat{u}_{\left\langle ij\right\rangle_{\alpha}}\hat{u}_{\left\langle jk\right\rangle_{\beta}}\).
The general zero-flux mean-field decoupling of \(H^{(3)}\) is given by
\[H_{\text{MFT}}^{(3)} =\kappa\sum_{\left\langle ij\right\rangle_{\alpha}\left\langle jk\right\rangle_{\beta}}\hat{U}_{\left\langle ik\right\rangle}\left[\Delta_{ij}^{yz}\left(i\theta_{j}^{x}\theta_{k}^{y}\right)+\Delta_{jk}^{xy}\left(i\theta_{i}^{y}\theta_{j}^{z}\right)\right]\] \[-\kappa\sum_{\left\langle ij\right\rangle_{\alpha}\left\langle jk\right\rangle_{\beta}}\hat{U}_{\left\langle ik\right\rangle}\left[\Delta_{ij}^{yx}\left(i\theta_{j}^{z}\theta_{k}^{y}\right)+\Delta_{jk}^{zy}\left(i\theta_{i}^{y}\theta_{j}^{x}\right)\right]\] \[+\kappa\sum_{\left\langle ij\right\rangle_{\alpha}\left\langle jk\right\rangle_{\beta}}\hat{U}_{\left\langle ik\right\rangle}\left[\xi_{ik}^{yy}\left(i\theta_{j}^{z}\theta_{j}^{x}\right)+Q_{j}^{y}\left(i\theta_{i}^{y}\theta_{k}^{y}\right)\right], \tag{60}\]
in which we introduced second-nearest neighbor order parameters
\[\xi_{ik}^{yy}=-\left\langle i\theta_{i}^{y}\theta_{k}^{y}\right\rangle. \tag{61}\]
A nonzero \(\kappa\) in Eq. (60) provides a positive feedback loop involving the formation of an octupolar order parameter \(Q^{y}\) and the onset of second-nearest neighbor hoppings between \(\theta_{i}^{y}\) particles. This implies that the isotropic \(S=3/2\) KSL is unstable to breaking time-reversal symmetry under the influence of \(H^{(3)}\). Since \(Q^{y}\neq 0\) implies time-reversal symmetry breaking, parameters such as \(\Delta_{ij}^{yx}\) and \(\Delta_{ij}^{yz}\) can now acquire nonzero values and enhance the hybridization between \(\theta^{y}\) flat band states and itinerant Majorana fermions. The complete hybridization of the low-energy flat bands leads to the first-order phase transition indicated in Fig. 7. For \(\kappa=0.001\), we find that \(Q_{A}^{y}=Q_{B}^{y}\approx 0.28\) and second nearest-neighbor hopping parameters \(\xi_{\mathbf{r},\mathbf{r}+\mathbf{d}_{\alpha},A}^{yy}=-\xi_{\mathbf{r},\mathbf{ r}+\mathbf{d}_{\alpha},B}^{yy}\approx-0.115\), in which \(\mathbf{d}_{\alpha=1,3,5}\) is indicated in Fig. 2. A small value of \(\kappa\) also leads to a large difference between the dispersion of the isotropic model in Fig. 6(a) and the CSL dispersion in Fig. 8(a).
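The order parameters in Eq. (60) are determined self-consistently within PMFT. As a purely schematic illustration of that control flow (not the actual PMFT equations of this work), the sketch below iterates a fixed-point map with linear mixing; the function names are ours, and the Curie-Weiss update merely stands in for the band-averaged expectation values.

```python
import numpy as np

def self_consistent(update, x0, mix=0.5, tol=1e-10, max_iter=10_000):
    """Generic fixed-point loop with linear mixing: iterate x -> update(x)
    until the order parameters stop changing."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (1.0 - mix) * x + mix * np.asarray(update(x), dtype=float)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("self-consistency loop did not converge")

# Toy stand-in: Curie-Weiss magnetization m = tanh(beta*J*m)
beta_j = 1.5
m = self_consistent(lambda m: np.tanh(beta_j * m), x0=[0.1])
print(m)  # converges to the nonzero solution, approximately 0.86
```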
### Topological Properties of the Time-Reversal Symmetry Breaking Spin Liquid
Next, we discuss the topological properties of the TRS breaking \(S=3/2\) KSL. After the sudden jump of the octupolar order parameter at infinitesimal \(\kappa\), it grows slowly; for concreteness, we fix \(\kappa=0.001\). In this case, the CSL is characterized by three narrow bands, of which the one closest to zero is particularly flat, see Fig. 8(a).
Figure 6: Panels (a-c) show the Majorana dispersion of the \(S=3/2\) KSL in the (a) isotropic, (b) gapped \(J_{z}=1.01\), and (c) gapless \(J_{z}=0.99\) cases. Panel (d) shows the evolution of the quadrupolar expectation value \(Q^{z}\) as a function of \(J_{z}\).
Their topological properties can be quantified by the Berry curvature
\[\mathbf{\Omega}_{n}\left(\mathbf{k}\right)=\nabla_{\mathbf{k}}\times\mathbf{A}_{n }\left(\mathbf{k}\right), \tag{62}\]
in which \(\mathbf{A}_{n}\left(\mathbf{k}\right)=i\left\langle u_{n}\left(\mathbf{k}\right)\right|\nabla_{\mathbf{k}}\left|u_{n}\left(\mathbf{k}\right)\right\rangle\) is the Berry connection of the \(n\)-th eigenstate \(\left|u_{n}\left(\mathbf{k}\right)\right\rangle\) labeled by the wavevector \(\mathbf{k}\). We computed the Berry curvature [103] and Figs. 8(b)-(d) display the density plot of the \(z\) component of \(\mathbf{\Omega}_{n}\left(\mathbf{k}\right)\) for the negative energy bands. We compute the Chern number of the three negative energy bands
\[C_{n}=\frac{1}{2\pi}\int_{\mathrm{BZ}}d^{2}\mathbf{k}\Omega_{n}^{z}\left( \mathbf{k}\right), \tag{63}\]
and checked that bands with opposite energy dispersion display opposite Chern numbers. The lowest, intermediate, and highest energy bands have Chern numbers \(C=1\), \(C=0\), and \(C=-1\), respectively. Hence, two of the bands are topologically nontrivial but the whole system has a total Chern number equal to zero. Therefore, no chiral edge mode crosses the gap around zero energy and the system is not a CSL.
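As a concrete illustration of how Eqs. (62)-(63) can be evaluated numerically, the sketch below computes lattice Chern numbers with the Fukui-Hatsugai-Suzuki link method on a discretized Brillouin zone. The two-band Qi-Wu-Zhang Hamiltonian is only a stand-in for the mean-field Bloch Hamiltonian of the text, and all function names are ours.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_qwz(kx, ky, m=1.0):
    """Qi-Wu-Zhang two-band model, used only as a stand-in Bloch Hamiltonian."""
    return (np.sin(kx) * sx + np.sin(ky) * sy
            + (m + np.cos(kx) + np.cos(ky)) * sz)

def chern_numbers(hk, nk=60):
    """Chern number of every band via the Fukui-Hatsugai-Suzuki link method:
    sum the phases of Wilson loops around all plaquettes of the k-grid."""
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    nb = hk(0.0, 0.0).shape[0]
    evecs = np.empty((nk, nk, nb, nb), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, evecs[i, j] = np.linalg.eigh(hk(kx, ky))
    c = np.zeros(nb)
    for i in range(nk):
        for j in range(nk):
            u00 = evecs[i, j]
            u10 = evecs[(i + 1) % nk, j]
            u11 = evecs[(i + 1) % nk, (j + 1) % nk]
            u01 = evecs[i, (j + 1) % nk]
            for n in range(nb):
                loop = (np.vdot(u00[:, n], u10[:, n]) * np.vdot(u10[:, n], u11[:, n])
                        * np.vdot(u11[:, n], u01[:, n]) * np.vdot(u01[:, n], u00[:, n]))
                c[n] += np.angle(loop)  # Berry flux through this plaquette
    return c / (2.0 * np.pi)

print(chern_numbers(h_qwz))  # the two bands carry opposite unit Chern numbers
```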
Another function that also illustrates the non-trivial properties of the band topology is the Hall conductivity \(\sigma(\epsilon)\)[104]
\[\sigma(\epsilon)=\frac{1}{V}\sum_{\mathbf{k},\epsilon_{\mathbf{k}}<\epsilon} \Omega^{z}(\mathbf{k}), \tag{64}\]
which is indicated in Fig. 9(a). Since the Majorana bands are gapped, \(\sigma(\epsilon)=0\) for low energies. Then it jumps to \(\sigma(\epsilon)=1\) due to the integration of \(\Omega^{z}(\mathbf{k})\) over the lowest positive-energy band, as expected from its Chern number indicated in Fig. 8. The Hall conductivity remains constant in the gap between the lowest and second-lowest positive-energy bands. After reaching the second band, \(\sigma(\epsilon)\) oscillates in accordance with the nonzero values of \(\Omega^{z}(\mathbf{k})\), before returning to \(\sigma(\epsilon)=1\). Finally, \(\sigma(\epsilon)\) drops sharply to zero once the integration reaches the highest-energy band. The non-trivial topological features in periodic boundary conditions are reflected by the existence of edge states in open boundary conditions, as indicated in Fig. 9(b). In this case, high-energy modes connect the two topologically nontrivial bands. Low-energy edge modes are also observed in Fig. 9(c), but they do not connect the bands and are topologically trivial.
The standard signature for edge states in CSLs is the thermal Hall conductivity, which displays half-quantization due to the presence of zero-energy chiral Majorana edge states [28; 2; 29]. For a flux-fixed background, we can estimate the thermal Hall conductivity through [104]
\[\kappa_{H}(T)=-\frac{1}{T}\int_{0}^{\infty}d\epsilon\,\epsilon^{2}\sigma( \epsilon)\frac{\partial f}{\partial\epsilon}(\epsilon,T), \tag{65}\]
in which \(f\left(\epsilon,T\right)\) is the Fermi-Dirac distribution. Fig. 9(d) shows the numerically computed \(\kappa_{H}(T)/T\). In contrast to CSLs, it vanishes at low temperatures and then rapidly grows to a peak at the temperature scale at which the chiral edge modes between the higher-energy bands are thermally populated, which is similar to the behavior of topological magnon insulators. The value of the peak can still be quantified in terms of the thermal Hall conductivity of the chiral \(S=1/2\) KSL, which reads [28; 29]
\[\frac{\kappa_{\mathrm{KSL}}^{1/2}}{T}=\frac{1}{2}\left(\frac{\pi^{2}k_{B}^{2} }{3\hbar}\right)C_{h}, \tag{66}\]
in which \(C_{h}=\pm 1\) according to the direction of the applied magnetic field. In contrast to the chiral \(S=1/2\) KSL, the TRS breaking QSL discussed here does not reach the plateau, as indicated in Fig. 9(d).
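Once the spectrum and the Berry fluxes are tabulated on a \(\mathbf{k}\)-grid, Eqs. (64) and (65) reduce to a cumulative sum and a one-dimensional integral. A minimal sketch (our own function names, units \(k_{B}=\hbar=1\), and a toy two-flat-band spectrum that merely mimics the staircase behavior described above):

```python
import numpy as np

def hall_sigma(energies, berry_flux, e_grid):
    """Discrete version of Eq. (64): sigma(eps) accumulates the Berry flux
    (e.g. Fukui-Hatsugai-Suzuki plaquette phases) of all states below eps,
    normalized so that a filled band of Chern number C contributes C."""
    order = np.argsort(energies)
    cumulative = np.concatenate(([0.0], np.cumsum(berry_flux[order]))) / (2.0 * np.pi)
    idx = np.searchsorted(energies[order], e_grid, side="right")
    return cumulative[idx]

def kappa_h_over_t(temps, e_grid, sigma):
    """Eq. (65) divided by T: kappa_H/T = (1/T^2) int de e^2 sigma(e) (-df/de)."""
    result = []
    for t in temps:
        minus_df_de = 1.0 / (4.0 * t * np.cosh(e_grid / (2.0 * t)) ** 2)
        result.append(np.trapz(e_grid ** 2 * sigma * minus_df_de, e_grid) / t ** 2)
    return np.array(result)

# Toy spectrum: two narrow bands carrying Chern numbers +1 and -1
n_states = 1600
energies = np.concatenate([np.full(n_states, 0.5), np.full(n_states, 1.5)])
berry_flux = np.concatenate([np.full(n_states, 2.0 * np.pi / n_states),
                             np.full(n_states, -2.0 * np.pi / n_states)])
e_grid = np.linspace(0.0, 2.5, 500)
sigma = hall_sigma(energies, berry_flux, e_grid)     # 0 -> 1 -> 0 staircase
temps = np.linspace(0.02, 1.0, 50)
kh = kappa_h_over_t(temps, e_grid, sigma)            # vanishes as T -> 0, peaks at finite T
```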
## V Conclusions and Outlook
In this work, we have provided a detailed study of the \(S=3/2\) KHM emphasizing its similarities with and relations to exactly solvable KK models [62; 63; 64; 56; 59; 60; 65; 66; 67; 68; 69; 70; 105]. Our analysis mapped out the local symmetries of the model and analyzed the nature of the \(S=3/2\) KSL phases. We showed that the model still contains an exact static Z\({}_{2}\) gauge field and that, in a given flux sector, it is a sum of bilinear Majorana operators and quartic and sextic interactions. The presence of an exactly soluble part of the \(S=3/2\) KHM, e.g. a kinetic term before the parton mean-field decoupling, also rationalizes the remarkable quantitative agreement between PMFT and DMRG simulations found previously [1].
Figure 8: (a) Band structure of the spontaneously TRS breaking QSL at \(\kappa=0.001\). Notice that all bands display narrow dispersions. Panels (b-d) display the \(z\)-component of the Berry curvature \(\mathbf{\Omega}\), in which (b) corresponds to the lowest-energy band, (c) to the intermediate-energy band, and (d) to the highest-energy band.
Figure 7: Evolution of the octupolar parameter \(Q^{y}\) as a function of the three-site interaction quantified by \(\kappa\) for \(\kappa\in(0,0.2]\). The \(\kappa=0\) point marks a strong first-order phase transition that is followed by a smooth increase of \(Q^{y}\).
The symmetry analysis was crucial for understanding the first-order phase transition occurring when introducing anisotropies in the couplings. Namely, it provides tight constraints for the order parameters and shows the emergence of a low-energy Majorana flat band. The pseudo-dipole and pseudo-orbital operators in a KK-like representation of the model were useful for uncovering similarities between the \(S=3/2\) [111] SIA and the \(S=1/2\) out-of-plane magnetic field. The latter motivated us to study the \(S=3/2\) KHM with this experimentally relevant SIA, and we argue that the system displays spontaneous TRS breaking. Some of the Majorana bands of the resulting QSL acquire nonzero Chern numbers, but the TRS-breaking phase is different from the standard chiral QSL because the sum of the Chern numbers of all bands is zero. Hence, no quantization of the thermal Hall conductivity is expected at very low temperatures but only a broad maximum at finite temperatures.
Our work opens a number of avenues for future research. It would be interesting to verify if the techniques that we apply for the \(S=3/2\) KHM in this paper can be generalized for higher-spin systems with \(S=(2^{n}-1)/2\) (\(n\in\mathbb{N}\)), as suggested by the exactly solvable models discussed in Section II. We foresee that such a study can provide a complementary approach to the large-\(S\) limit of this model [44] but within a natural extension of Kitaev's original formalism [2]. It would also be consistent with a recent study showing that half-integer KHMs always display deconfined \(Z_{2}\) fermionic gauge charges [106]. Another open problem concerns the systematic study of the \(S=3/2\) KHM in different flux sectors, in the presence of disorder or vacancies. The introduction of flux excitations would also allow the computation of different dynamical response functions for experimental detection following Ref. [64].
Finally, it would be very worthwhile to systematically study implementations of the \(S=3/2\) KHM in van der Waals magnets. Studies using ab initio [39; 41] and quantum chemistry [42] methods suggest that the Kitaev exchange is present in van der Waals ferromagnets such as CrI\({}_{3}\) and CrXTe\({}_{3}\) (X=Si,Ge) due to their ligands' strong spin-orbit coupling. The theoretical studies indicate that the Kitaev interaction should be substantially smaller than the Heisenberg one, a result that is consistent with the data from a recent neutron scattering experiment on CrI\({}_{3}\) [107]. However, the same theories also suggest that strain can dramatically change the exchange constants, and even induce a model dominated by Kitaev interactions and [111] SIA [41]. This strain is experimentally feasible, as it can be applied mechanically or by proximity effects in metal-insulator heterostructures [33; 34]. When combined with better strategies for quantifying exchange constants [108], microscopic studies can help to discover new QSL candidates in higher-spin and spin-orbital systems.
###### Acknowledgements.
We thank F. Pollmann for important discussions and collaboration on previous related work. W.N. would like to thank F. Alcaraz for suggesting a connection to parafermions, and R. Pereira, E. Andrade, and E. Miranda for works on related projects. W.N. also thanks T. Ziman and M. Zhitomirsky for discussions about van der Waals magnets. W.N. and J.K. acknowledge the support of the Royal Society via a Newton International Fellowship through project NIF-R1-181696, during which many of the results in the manuscript were derived. H.-K. J. is funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 771537). J.K. is part of the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus. We also acknowledge the support of the Imperial-TUM flagship partnership.
Figure 9: Topological characterization of the TRS breaking QSL. Panel (a) displays its Hall conductivity \(\sigma(\epsilon)\) evaluated according to Eq. (64). Panel (b) shows the QSL dispersion in open boundary conditions highlighting emergent low-energy edge states in red that are detailed in Panel (c). Panel (d) displays the thermal Hall conductivity \(\kappa_{H}(T)/T\) and indicates a peak tending to the characteristic half-quantization value before decreasing monotonically with increasing temperature. |
2307.12536 | Algebraic closures and their variations | We study possibilities for algebraic closures, differences between definable
and algebraic closures in first-order structures, and variations of these
closures with respect to the bounds of cardinalities of definable sets and
given sets of formulae. Characteristics for these possibilities and differences
are introduced and described. These characteristics are studied for some
natural classes of theories. Besides algebraic closure operators with respect
to sets of formulae are introduced and studied. Semilattices and lattices for
families of these operators are introduced and characteristics of these
structures are described. | Sergey V. Sudoplatov | 2023-07-24T05:51:27Z | http://arxiv.org/abs/2307.12536v1 | # Algebraic closures and their variations+
###### Abstract
We study possibilities for algebraic closures, differences between definable and algebraic closures in first-order structures, and variations of these closures with respect to the bounds of cardinalities of definable sets and given sets of formulae. Characteristics for these possibilities and differences are introduced and described. These characteristics are studied for some natural classes of theories. Besides, algebraic closure operators with respect to sets of formulae are introduced and studied. Semilattices and lattices for families of these operators are introduced, and characteristics of these structures are described.
**Key words:** algebraic closure, definable closure, degree of algebraization, algebraic set, \(\Delta\)-algebraic element, semilattice.
## 1 Introduction
The notions of definable and algebraic closures are broadly used in Model Theory and applications [1, 2, 3, 4, 5].
In the paper, we study possibilities for algebraic closures, differences between definable and algebraic closures in first-order structures, and variations of these closures with respect to the bounds of cardinalities of definable sets and given sets of formulae. Characteristics for these possibilities and differences are introduced and described. These characteristics are studied for some natural classes of theories.
The paper is organized as follows. In Section 2, preliminary notions, notations and properties for algebraic and definable closures are considered. In Section 3, we study variations of algebraic closure, properties and possibilities of these variations, of degree of algebraization, and of difference between definable and algebraic closures, illustrating these possibilities by a series of examples. Algebraic sets and their degrees are studied in Section 4. In Section 5, we consider algebraic closures relative sets of formulae, study a hierarchy of operators of algebraic closures relative these sets. We introduce semilattices and lattices for families of these operators and describe some characteristics of these structures.
Throughout we use both the standard model-theoretic terminology and notions of Lattice Theory [6, 7, 8].
## 2 Pregeometries and closures
**Definition**[2, 3, 4, 5], cf. [9]. For a set \(S\), its Boolean \(P(S)\), and an operator cl: \(P(S)\to P(S)\), a pair \(\langle S,\mbox{cl}\rangle\) is called a _pregeometry_ or a _matroid_, if it satisfies the following conditions for any \(X\subseteq S\) and \(a,b\in S\):
(i) \(X\subseteq\mbox{cl}(X)\) (Reflexivity);
(ii) \(\mbox{cl}(\mbox{cl}(X))=\mbox{cl}(X)\) (Transitivity);
(iii) if \(a\in\mbox{cl}(X)\) then \(a\in\mbox{cl}(Y)\) for some finite \(Y\subseteq X\) (Finite character);
(iv) if \(a\in\mbox{cl}(X\cup\{b\})\setminus\mbox{cl}(X)\) then \(b\in\mbox{cl}(X\cup\{a\})\) (Exchange property).
A pregeometry \(\langle S,\mbox{cl}\rangle\) is called a _geometry_ if:
(v) \(\mbox{cl}(\emptyset)=\emptyset\), and for every \(a\in S\), \(\mbox{cl}(\{a\})=\{a\}\) (ES-property).
The operator cl for the pregeometry is called the _closure operator_. By the definition the closure operator for a pregeometry produces a geometry if the closure of the empty set is again empty and the closure of any singleton is again a singleton, which, by reflexivity, equals that singleton.
**Remark 2.1**: [3]. Any pregeometry \(\langle S,\mbox{cl}\rangle\) produces a _canonical_ geometry \(\langle S^{\prime},\mbox{cl}^{\prime}\rangle\) putting \(S^{\prime}=\{\mbox{cl}(\{a\})\mid a\in S\setminus\mbox{cl}(\emptyset)\}\) and for \(X\subseteq S\), \(\mbox{cl}^{\prime}(\{\mbox{cl}(\{a\})\mid a\in X\})=\{\mbox{cl}(b)\mid b\in \mbox{cl}(X)\}\).
In fact the ES-property for \(\mbox{cl}^{\prime}\) is implied by the reflexivity and transitivity of the operator cl.
We fix a big saturated structure \(\mathcal{M}\) and its theory \(T=\mbox{Th}(\mathcal{M})\). Following [10] by a set \(A\) of \(T\) we mean a subset of the universe \(M\) in the structure \(\mathcal{M}\) satisfying some type \(\mbox{tp}(A,\mathcal{M})\). Similarly, a tuple \(\overline{a}\) in \(T\) is a tuple of elements in \(\mathcal{M}\) satisfying a given type \(\mbox{tp}(\overline{a})\).
In [1], two types of closures in structures are considered, algebraic and definable, as well as the following concepts related to them:
**Definition.**[1, 2, 5] 1. The tuple \(\overline{b}\) is _defined_ by the formula \(\varphi(\overline{x},\overline{a})\) of \(T\) with parameters \(\overline{a}\), if \(\varphi(\overline{x},\overline{a})\) has unique solution \(\overline{b}\).
The tuple \(\overline{b}\) is _defined_ by the type \(p\) if \(\overline{b}\) is the unique tuple which realizes \(p\). It is _definable_ over a set \(A\) if \(\mbox{tp}(\overline{b}/A)\) defines it.
2. For a set \(A\) of a theory \(T\) the union of sets of solutions of formulae \(\varphi(x,\overline{a})\), \(\overline{a}\in A\), such that \(\models\exists^{=n}x\;\varphi(x,\overline{a})\) for some \(n\in\omega\) (respectively \(\models\exists^{=1}x\;\varphi(x,\overline{a})\)) is said to be an _algebraic_ (_definable_ or _definitional_) _closure_ of \(A\). An algebraic closure of \(A\) is denoted by \(\mbox{acl}(A)\) and its definable (definitional) closure, by \(\mbox{dcl}(A)\).
In such a case we say that the formulae \(\varphi(x,\overline{a})\)_witness_ that algebraic / definable (definitional) closure, and these formulae are called _algebraic / defining_.
Any element \(b\in\mbox{acl}(A)\) (respectively, \(b\in\mbox{dcl}(A)\)) is called _algebraic_ (_definable_ or _definitional_) over \(A\). If the set \(A\) is fixed or empty, we just say that \(b\) is _algebraic_ (_definable_, or _definitional_).
3. If \(\mbox{dcl}(A)=\mbox{acl}(A)\), \(\mbox{cl}_{1}(A)\) denotes their common value.
4. If \(A=\mbox{acl}(A)\) (respectively, \(A=\mbox{dcl}(A)\) ) then \(A\) is called _algebraically_ (_definably_) closed.
5. The type \(p\) is _algebraic_ (_defining_) if it is realized by finitely many tuples (a unique one) only, i.e., it contains an algebraic (defining) formula \(\varphi\). This formula \(\varphi\) can be chosen with the minimal number of solutions, and in such a case \(\varphi\) isolates \(p\). The number of these solutions is called the _degree_ \(\mbox{deg}(p)\) of \(p\).
6. The complete algebraic types \(p(x)\in S(A)\) are exactly ones of the form \(\mbox{tp}(a/A)\), where \(a\) is algebraic over \(A\). The _degree_ of \(a\) over \(A\), \(\mbox{deg}(a/A)\) is the degree of \(\mbox{tp}(a/A)\).
**Remark 2.2**: [1]. The pairs \(\langle M,{\rm acl}\rangle\) and \(\langle M,{\rm dcl}\rangle\) satisfy the following properties:
(i) the reflexivity: it is witnessed by the formula \(x\approx y\);
(ii) the transitivity: if the formulae \(\varphi_{1}(x_{1},\overline{a}),\ldots,\varphi_{n}(x_{n},\overline{a})\) witness that \(b_{1},\ldots,b_{n}\in{\rm acl}(A)\) (respectively, \(b_{1},\ldots,b_{n}\in{\rm dcl}(A)\)) and the formula \(\psi(x,b_{1},\ldots,b_{n})\) witnesses that \(c\in{\rm acl}(\{b_{1},\ldots,b_{n}\})\) (respectively, \(c\in{\rm dcl}(\{b_{1},\ldots,b_{n}\})\)) then the formula
\[\exists x_{1},\ldots,x_{n}\Biggl{(}\psi(x,x_{1},\ldots,x_{n})\wedge\bigwedge \limits_{i=1}^{n}\varphi_{i}(x_{i},\overline{a})\Biggr{)} \tag{1}\]
witnesses that \(c\in{\rm acl}({\rm acl}(A))\) (respectively, \(c\in{\rm dcl}({\rm dcl}(A))\));
(iii) the finite character: if a formula \(\varphi(x,\overline{a})\) witnesses that \(a\in{\rm acl}(A)\) (respectively, \(a\in{\rm dcl}(A)\)) then \(a\in{\rm acl}(A_{0})\) for the finite \(A_{0}\subseteq A\) consisting of the coordinates of \(\overline{a}\).
## 3 \(n\)-Closures and degrees of algebraization
Now we consider modifications of the algebraic closure related to the sets of solutions of formulas bounded in cardinality by some natural number.
**Definition.** 1. For \(n\in\omega\setminus\{0\}\) and a set \(A\) an element \(b\) is called \(n\)_-algebraic_ over \(A\), if \(b\in{\rm acl}(A)\) and it is witnessed by a formula \(\varphi(x,\overline{a})\), for \(\overline{a}\in A\), with at most \(n\) solutions.
2. The set of all \(n\)-algebraic elements over \(A\) is denoted by \({\rm acl}_{n}(A)\).
3. If \(A={\rm acl}_{n}(A)\) then \(A\) is called \(n\)_-algebraically_ closed.
4. The type \(p\) is \(n\)_-algebraic_ if it is realized by at most \(n\) tuples only, i.e., \({\rm deg}(p)\leq n\).
5. The complete \(n\)-algebraic types \(p(x)\in S(A)\) are exactly ones of the form \({\rm tp}(a/A)\), where \(a\) is \(n\)-algebraic over \(A\), i.e., with \({\rm deg}(a/A)\leq n\). Here \({\rm deg}(a/A)=k\leq n\) defines the \(n\)_-degree_\({\rm deg}_{n}(a/A)\) of \({\rm tp}(a/A)\) and of \(a\) over \(A\).
6. If \({\rm acl}(A)={\rm acl}_{n}(A)\) then minimal such \(n\) is called the _degree of algebraization_ over the set \(A\) and it is denoted by \({\rm deg}_{\rm acl}(A)\). If that \(n\) does not exist then we put \({\rm deg}_{\rm acl}(A)=\infty\). The supremum of values \({\rm deg}_{\rm acl}(A)\) with respect to all sets \(A\) of given theory \(T\) is denoted by \({\rm deg}_{\rm acl}(T)\) and called the _degree of algebraization_ of the theory \(T\).
7. Following [11] theories \(T\) with \({\rm deg}_{\rm acl}(T)=1\), i.e., with defined \({\rm cl}_{1}(A)\) for any set \(A\) of \(T\), are called _quasi-Urbanik_, and the models \({\cal M}\) of \(T\) are _quasi-Urbanik_, too.
**Remark 3.1**: By the definition any algebraic type is \(n\)-algebraic for some \(n\); it is isolated and has a fixed finite set of solutions in any elementary extension of a given model. In particular, definable types are \(1\)-algebraic and have unique realizations in any given model. We have \({\rm dcl}(A)={\rm acl}_{1}(A)\) for any \(A\).
Notice that any set \(B\) of realizations of an algebraic type over \(A\) is the union of the subsets \(B_{n}\) of realizations for its \(n\)-algebraic subtypes for all \(n\): \(B=\bigcup\limits_{n\in\omega}B_{n}\). Moreover, the sets \(B_{n}\) form an increasing chain which stabilizes: \(B_{m}=B_{n}\) for all \(m>n\) starting from some \(n\).
Notice also that it is essential in the definition of \(n\)-algebraic types that models for the sets of realizations are arbitrary, since, for instance, a theory \(T\) of infinitely many disjoint nonempty unary predicates \(U_{i}\), \(i\in I\), has models with any finite number of realizations of the nonisolated type \(p(x)=\{\neg U_{i}(x)\mid i\in I\}\). This type has infinitely many realizations in an appropriate \(T\)-model, and therefore it is not algebraic.
The following proposition allows to transform algebraic types into their finite subtypes.
**Proposition 3.2** (cf. [1, Lemma 6.1]).: 1_. A type \(p\) defines a tuple if and only if for some finite \(q\subseteq p\), \(\bigwedge q\) defines that tuple._
2_. A type \(p\) is \(n\)-algebraic if and only if for some finite \(q\subseteq p\), \(\bigwedge q\) is \(n\)-algebraic; moreover, \(q\) can be chosen with the same set of realizations as for \(p\)._
Recall [2] that for a set \(A\) and an element \(a\) the \(A\)_-orbit_\(\mbox{\rm Orb}_{A}(a)\) of \(a\) is the set of all elements \(b\) in the given structure which are connected with \(a\) by an \(A\)-automorphism.
The following proposition gives an algebraic characterization for \(n\)-algebraic types.
**Proposition 3.3**: _A type \(p\) is \(n\)-algebraic over \(A\) if and only if any / some \((|A|+|T|)\)-saturated model \(\cal M\) containing \(A\) has finitely many \(A\)-orbits \(O\) consisting of realizations of \(p\), all these orbits are finite, and moreover the union \(\bigcup O\) has at most \(n\) elements. If \(p\) is complete then that \(A\)-orbit is unique in \(\cal M\)._
Proof. Taking a \((|A|+|T|)\)-saturated model \(\cal M\) with \(A\subseteq M\) and an \(n\)-algebraic type \(p\) over \(A\) we have finitely many possibilities for links, by \(A\)-automorphisms, between realizations of \(p\). Moreover, if \(b_{1},b_{2},\ldots,b_{n}\) are all the realizations of \(p\) then for any \(b_{i}\), \(b_{j}\) either \(\mbox{\rm tp}(b_{i}/A)\neq\mbox{\rm tp}(b_{j}/A)\) and \(b_{i}\), \(b_{j}\) belong to distinct \(A\)-orbits, or \(\mbox{\rm tp}(b_{i}/A)=\mbox{\rm tp}(b_{j}/A)\) and \(b_{i}\), \(b_{j}\) belong to a common \(A\)-orbit, being connected by an \(A\)-automorphism \(f\) with \(f(b_{i})=b_{j}\). Additionally, if \(c\) is not a realization of \(p\) then \(c\) does not belong to the orbits of the elements \(b_{1},b_{2},\ldots,b_{n}\).
Conversely, if any / some \((|A|+|T|)\)-saturated model \(\cal M\) containing \(A\) has finitely many \(A\)-orbits \(O\) consisting of realizations of \(p\) and the union \(\bigcup O\) has at most \(n\) elements, then by the saturation of \(\cal M\) the type \(p\) cannot have more than \(n\) realizations, implying that \(p\) is \(n\)-algebraic over \(A\).
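The orbit characterization above can be checked directly on small finite structures, where the \(A\)-orbits under automorphisms fixing \(A\) pointwise coincide with the sets of realizations of complete types over \(A\). A brute-force sketch for finite graphs (the function names and the example are ours):

```python
from itertools import permutations

def automorphisms(universe, edges, fixed=()):
    """All permutations of `universe` preserving the (symmetric) edge relation
    and fixing every element of `fixed` pointwise."""
    edge_set = {frozenset(e) for e in edges}
    autos = []
    for perm in permutations(universe):
        f = dict(zip(universe, perm))
        if any(f[a] != a for a in fixed):
            continue
        if {frozenset((f[u], f[v])) for u, v in edges} == edge_set:
            autos.append(f)
    return autos

def orbits(universe, edges, fixed=()):
    """Partition of the universe into orbits under the automorphisms fixing `fixed`."""
    autos = automorphisms(universe, edges, fixed)
    remaining, result = set(universe), []
    while remaining:
        a = remaining.pop()
        orbit = {f[a] for f in autos}
        remaining -= orbit
        result.append(orbit)
    return result

# Path graph 1 - 2 - 3: over the empty set the endpoints form one 2-element orbit,
# so each endpoint is 2-algebraic but not definable, while the middle vertex is definable.
print(orbits([1, 2, 3], [(1, 2), (2, 3)]))              # [{1, 3}, {2}] in some order
print(orbits([1, 2, 3], [(1, 2), (2, 3)], fixed=(1,)))  # all orbits become singletons
```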
**Remark 3.4**: Taking cycles of arbitrarily large diameters we can find an algebraic type with arbitrarily large and arbitrarily many finite orbits. At the same time, it is essential in Proposition 3.3 that the model is saturated, since following Remark 3.1 there are non-algebraic types with arbitrarily many realizations, and taking copies of the example in Remark 3.1 we can obtain arbitrarily many finite orbits.
Notice also that the properties of \(n\)-algebraicity over a set \(A\) are both syntactic and semantic, since on the one hand it is written by a family of formulae \(\varphi(x,\overline{a})\wedge\exists^{\leq n}x\varphi(x,\overline{a})\), for \(\overline{a}\in A\), and on the other hand it is checked in any model \(\cal M\) of given theory \(T\) with \(A\subseteq M\) and satisfying the given type \(\mbox{\rm tp}(A)\).
For instance, if we take a structure \(\cal M\) with \(A\subseteq M\) such that each element of \(M\) is the unique solution of an appropriate formula \(\varphi(x,\overline{a})\), for \(\overline{a}\in A\), then \(M\) is definably closed, i.e., 1-algebraically closed, with a unique automorphism \(f\) fixing \(A\) pointwise, and this automorphism is the identity. Following [2] a structure \(\cal M\) with a unique automorphism is called _rigid_. Thus the 1-algebraicity of each element in \(M\) over the empty set, i.e., the condition \(M=\mbox{\rm dcl}(\emptyset)\), produces a rigid structure \(\cal M\).
We separate the forms of rigidity of a structure as follows: the _semantic_ one is defined in terms of trivial automorphism group, and the _syntactic_ one is in terms of 1-algebraicity of the universe over the empty set.
Thus, any syntactically rigid structure is semantically rigid and it is defined uniquely. But not vice versa, in general. Indeed, taking a structure \(\cal M\) consisting of infinitely many distinct constants \(c_{i}\), \(i\in I\), we have both the semantic and syntactic rigidity. Extending \(\cal M\) by a single realization of the nonisolated type \(p(x)=\{\neg x\approx c_{i}\mid i\in I\}\) we obtain a
structure \({\cal N}\succ{\cal M}\) preserving the semantic rigidity but losing the syntactic one. Taking disjoint unions [12] of copies of \({\cal M}\) one can obtain semantically rigid elementary extensions with arbitrarily many new elements. Hence, in appropriate cases, the semantic rigidity can be preserved under elementary extensions.
These appropriate cases are those in which one-element finite orbits are preserved under elementary extensions. We observe this preservation for structures whose universes consist of constants only. One-element finite orbits are not preserved if, for instance, a structure consists of infinitely many two-element equivalence classes expanded by constants for all elements of these classes.
Similarly, following Proposition 3.3 homogeneous structures \({\cal M}\) with at most \(n\)-element finite orbits, under appropriate additional conditions, produce the \(n\)-algebraicity. And some opposite conditions imply the negation of \(n\)-algebraicity for \({\cal N}\equiv{\cal M}\).
Thus, in the general case, poor automorphism groups, in particular trivial automorphism groups \({\rm Aut}({\cal M})\), cannot adequately reflect links for algebraic closures in structures \({\cal N}\equiv{\cal M}\).
Recall that a theory \(T\) is \(n\)_-transitive_, for \(n\in\omega\), if the type \({\rm tp}(\overline{a})\) of each tuple \(\overline{a}\) with \(l(\overline{a})=n\) is forced by formulae describing which coordinates of \(\overline{a}\) coincide and which do not.
**Remark 3.5**: Clearly, any \(n\)-transitive theory is \(m\)-transitive for every \(m\leq n\). Besides, algebraic characteristics for \(n\)-transitive theories such as degrees of algebraicity are really defined by sets \(A\) of cardinalities \(\geq n\), since \({\rm acl}(B)=B\) for all smaller sets \(B\).
Thus, in the general case, structures for \({\rm acl}(B)\) with small finite cardinalities \(|B|\) do not reflect the general behavior of the algebraic closure operator. The same holds for the operators \({\rm acl}_{n}(\cdot)\), too.
**Remark 3.6**: In view of Remarks 3.4 and 3.5, neither poor automorphism groups, for structures similar to rigid ones, nor rich automorphism groups, for structures similar to transitive ones, reflect the frame of given structures and the frame of their theories at the level of algebraic closure.
**Proposition 3.7**: (cf. [1, Lemma 6.2]).__1.__\(A\subseteq{\rm acl}_{m}(A)\subseteq{\rm acl}_{n}(A)\subseteq{\rm acl}(A)\)_, for any_ \(0<m\leq n\)_. In particular,_ \(\langle M,{\rm acl}_{n}\rangle\) _is reflexive for any_ \(n\)_._
2. _If_ \(A\subseteq B\) _and_ \(n\geq 1\) _then_ \({\rm acl}_{n}(A)\subseteq{\rm acl}_{n}(B)\)_._
3. _If_ \(A\) _is definably_ (_algebraically_) _closed then_ \(A={\rm dcl}(A)\) _(_\(A={\rm acl}(A)\)_)._
4. _If_ \(A\) _is_ \(n\)_-algebraically closed for_ \(n\geq 2\) _then_ \(A={\rm acl}(A)\) _iff any finite orbit over_ \(A\) _has at most_ \(n\) _elements._
5. _A tuple_ \(\overline{b}\) _is definable_ (_algebraic_) _over_ \(A\) _if and only if_ \(\overline{b}\in{\rm dcl}(A)\) _(_\(\overline{b}\in{\rm acl}(A)\)_)._
Proof. Items 1, 2, 3, 5 immediately follow by the definition and [1, Lemma 6.2]. Item 4 is implied by Proposition 3.3.
Proposition 3.7 immediately implies:
**Corollary 3.8**: _If \({\rm cl}_{1}(A)\) exists then it equals \({\rm acl}_{n}(A)\) and it is \(n\)-algebraically closed for each \(n\)._
**Corollary 3.9**: _For any structure \({\cal M}\), the pairs \({\cal S}_{n}=\langle M,{\rm acl}_{n}\rangle\), \(n\in\omega\), define ascending chains \(({\rm acl}_{n}(A))_{n\in\omega}\) for each \(A\subseteq M\), where \({\rm acl}_{0}(A)=A\). These pairs \({\cal S}_{n}\) coincide starting with some \(n_{0}\) iff all chains \(({\rm acl}_{n}(A))_{n\geq n_{0}}\) are constant, and iff any finite orbit over arbitrary \(A\subseteq M\) has at most \(n_{0}\) elements in a homogeneous elementary extension of \({\cal M}\)._
**Remark 3.10**: Since for \(k\leq m\) there are \(A_{m}^{k}=\frac{m!}{(m-k)!}\) \(k\)-tuples consisting of pairwise distinct elements in a given \(m\)-element set, we cannot assert Item 5 of Proposition 3.7 for \(n\)-algebraic tuples. Indeed, orbits of tuples can be much greater than orbits of elements. In particular, this holds for structures in the empty language.
**Remark 3.11**: Notice that along with the reflexivity for \(\mbox{acl}_{n}\), by the definition any operator \(\mbox{acl}_{n}\) has the finite character. At the same time \(\mbox{acl}_{n}\) can be non-transitive. Indeed, taking an element \(a\) and \(\mbox{acl}_{n}(\mbox{acl}_{n}(\{a\}))\) one can find a structure \({\cal M}\), say a graph, with \(n\) new elements \(b_{1},\ldots,b_{n}\) as solutions of \(n\)-algebraic formula \(\varphi(x,a)\) isolating an \(n\)-algebraic type \(p(x)\) such that the \(n\)-algebraic formulae \(\varphi(x,b_{1}),\ldots,\varphi(x,b_{n})\) produce \(n^{2}\) new elements \(c_{1},\ldots,c_{n^{2}}\) connected by \(\{a\}\)-orbits. Thus, \(c_{1},\ldots,c_{n^{2}}\in\mbox{acl}_{n}(\mbox{acl}_{n}(\{a\}))\) whereas these elements do not belong to \(\mbox{acl}_{n}(\{a\})\).
**Remark 3.12**: Clearly, Exchange property for \(\mbox{acl}_{n}(\cdot)\) can fail even if \(\mbox{acl}(\cdot)\) satisfies it. Indeed, let \(\Gamma\) be a bipartite graph with parts \(U\) and \(V\) such that \(|U|=n\), \(|V|=m>n\) and each vertex in \(U\) is connected by an edge with each vertex in \(V\). We have \(a\in\mbox{acl}_{n}(\{b\})\) for any \(a\in U\) and \(b\in V\), whereas \(b\in\mbox{acl}(\{a\})\setminus\mbox{acl}_{n}(\{a\})\).
At the same time replacing \(\mbox{acl}_{n}(\cdot)\) by \(\mbox{acl}_{m}(\cdot)\) we obtain that Exchange property. In fact, Exchange property for \(\mbox{acl}_{m}(\cdot)\) is implied by Exchange property for the case \(\mbox{acl}(\cdot)=\mbox{acl}_{m}(\cdot)\).
**Remark 3.13**: Let \(T\) be a theory of an equivalence relation \(E\). By the definition any element \(a\) of a finite \(E\)-class \(X\) is \(|X|\)-algebraic over any singleton \(\{b\}\subseteq X\setminus\{a\}\). Moreover, \(a\) is \(n\)-algebraic over \(\emptyset\) for some \(n\) if there are finitely many \(E\)-classes of the cardinality \(|X|\). Otherwise, if there are infinitely many \(E\)-classes of the cardinality \(|X|\), these \(E\)-classes do not contain algebraic elements over \(\emptyset\).
If the \(E\)-class \(X\) is infinite, it does not contain algebraic elements, and for each \(A\subseteq X\) we have \(\mbox{acl}(A)=\mbox{dcl}(A)=A\); the corresponding theory is quasi-Urbanik.
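For finite models of an equivalence relation, the orbits described in Remark 3.13 can be read off directly from the partition into \(E\)-classes, which yields the sets \(\mbox{acl}_{n}(A)\) explicitly. A small illustrative sketch (the function names and the toy example are ours):

```python
def orbit(classes, a_set, x):
    """Orbit of x under automorphisms of a finite equivalence structure
    (given as a list of disjoint classes) fixing the set a_set pointwise."""
    if x in a_set:
        return {x}
    cls = next(c for c in classes if x in c)
    if cls & a_set:
        return cls - a_set
    # classes disjoint from a_set and of equal size can be permuted among themselves
    return set().union(*(c for c in classes if len(c) == len(cls) and not (c & a_set)))

def acl_n(classes, a_set, n):
    """Elements whose orbit over a_set has at most n elements (the n-algebraic elements)."""
    universe = set().union(*classes)
    return {x for x in universe if len(orbit(classes, a_set, x)) <= n}

# One E-class of each size 1, 2, 3
classes = [{0}, {1, 2}, {3, 4, 5}]
print(acl_n(classes, set(), 1))   # {0}: the element of the one-element class is definable
print(acl_n(classes, set(), 2))   # {0, 1, 2}
print(acl_n(classes, {3}, 2))     # all elements: 4 and 5 become 2-algebraic over {3}
```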
By Remark 3.13 we immediately obtain the following dichotomy for values \(\mbox{deg}_{\mbox{acl}}(T)\) of theories \(T\) in the language \(\{E\}\) of an equivalence relation:
**Proposition 3.14**: _For any theory \(T\) of an equivalence relation \(E\) and a set \(A\) either \(\mbox{deg}_{\mbox{acl}}(A)\in\omega\), if \(A\) consists of elements with bounded orbits inside finite \(E\)-classes and between these \(E\)-classes, or \(\mbox{deg}_{\mbox{acl}}(A)=\infty\), otherwise._
**Example 3.15**: Let \({\cal M}\) be a structure in the language \(\{E\}\) of an equivalence relation consisting of \(E\)-classes with unbounded distinct finite cardinalities. Then the sets \(\mbox{acl}_{n}(\emptyset)\), \(n\geq 1\), form an unboundedly increasing sequence of \(n\)-algebraically closed sets. We observe the same effect for \(\mbox{acl}_{n}(A)\), \(n\geq 1\), where \(A\) consists of boundedly many elements in each \(E\)-class, producing \(\mbox{deg}_{\mbox{acl}}(A)=\infty\) and \(\mbox{deg}_{\mbox{acl}}(\mbox{Th}({\cal M}))=\infty\) as well.
The following example illustrates the possibility \(\mbox{deg}_{\mbox{acl}}(T)=\omega\).
**Example 3.16**: Let \({\cal M}\) be a structure in the language \(\{E\}\) consisting of an equivalence relation \(E\) with infinitely many \(n\)-element \(E\)-classes for each \(n\in\omega\setminus\{0\}\). By the definition all finite orbits over a set \(A\subseteq M\) are exhausted by automorphisms transforming elements in \(E\)-classes \(X\subseteq M\) with \(X\cap A\neq\emptyset\). By the arguments in Example 3.15 we have \(\mbox{deg}_{\mbox{acl}}(T)=\infty\) for \(T=\mbox{Th}({\cal M})\).
Now we expand the structure \({\cal M}\) till a structure \({\cal M}^{\prime}\) by countably many ternary predicates \(R\) in the following way:
1) if \(\models R(a,b,c)\) then \(a\) and \(b\) belong to disjoint \(E\)-classes \(E(a)\) and \(E(b)\) and \(c\in E(a)\cup E(b)\) is the unique solution of \(R(a,b,z)\), moreover, we require that each \(c^{\prime}\in E(a)\cup E(b)\) has a ternary symbol \(R^{\prime}\) with \(\models R^{\prime}(a,b,c^{\prime})\wedge\exists^{=1}zR^{\prime}(a,b,z)\);
2) \({\cal M}^{\prime}\) has the same orbits as \({\cal M}\) over subsets of \(E\)-classes, and only one-element orbits in \(E(a)\cup E(b)\), for \(E(a)\cap E(b)=\emptyset\), over sets \(A\subseteq E(a)\cup E(b)\) which have elements both in \(E(a)\) and in \(E(b)\).
The structure \({\cal M}^{\prime}\) can be formed step-by-step using an appropriate generic construction [13, 14].
By the definition, for each \(A\subseteq M^{\prime}\), \(\mbox{deg}_{\rm acl}(A)\) is finite; moreover, \(\mbox{deg}_{\rm acl}(A)=1\) for \(A\) lying in several finite \(E\)-classes. At the same time the values \(\mbox{deg}_{\rm acl}(A)\) are unbounded in \(\omega\) using subsets of single finite \(E\)-classes. Therefore, \(\mbox{deg}_{\rm acl}(\mbox{Th}({\cal M}^{\prime}))=\omega\).
**Theorem 3.17**: 1. _For any consistent theory \(T\), \(\mbox{deg}_{\rm acl}(T)\in((\omega+1)\setminus\{0\})\cup\{\infty\}\)._
2. _For any \(\lambda\in((\omega+1)\setminus\{0\})\cup\{\infty\}\) there is a theory \(T_{\lambda}\) such that \(\mbox{deg}_{\rm acl}(T_{\lambda})=\lambda\)._
Proof. Item 1 holds since \(\mbox{deg}_{\rm acl}(T)\) is a supremum of values in \((\omega\setminus\{0\})\cup\{\infty\}\).
2. At first we consider theories \(T\) of equivalence relations \(E\). The values \(\mbox{deg}_{\rm acl}(T_{n})=n\in(\omega\setminus\{0\})\) are realized by \(E\) on \(n\)-element sets \(M\) with a unique equivalence class. Here each set \(A\subseteq M\) has \(\mbox{acl}(A)=\mbox{acl}_{n}(A)=M\). We obtain a similar effect adding infinite \(E\)-classes, or taking finitely many finite \(E\)-classes having a total of \(n\) elements, or adding both these \(E\)-classes.
The value \(\mbox{deg}_{\rm acl}(T_{\infty})=\infty\) is confirmed by an equivalence relation \(E\) with infinitely many finite \(E\)-classes with unbounded finite cardinalities. Here we take a set \(A\) containing elements in all these classes producing \(\mbox{acl}(A)\neq\mbox{acl}_{n}(A)\) for any \(n\in\omega\).
The value \(\mbox{deg}_{\rm acl}(T_{\omega})=\omega\) is witnessed by Example 3.16.
**Example 3.18**: By the definition any definable element is \(n\)-algebraic, for any \(n\geq 1\), but not vice versa. For instance, taking a graph \(\Gamma=\langle\{a_{1},a_{2}\};\{e\}\rangle\), where \(e\) is the edge \([a_{1},a_{2}]\), we obtain \(\mbox{acl}(\emptyset)=\{a_{1},a_{2}\}\) witnessed by the formula \(x\approx x\), i.e., \(\mbox{deg}_{\rm acl}(\emptyset)=\mbox{deg}_{\rm acl}(\mbox{Th}(\Gamma))=2\), whereas \(\mbox{dcl}(\emptyset)=\emptyset\), since \(a_{1}\) and \(a_{2}\) are connected by an automorphism. It is a minimal example, by inclusion, in the graph language.
We observe the same effect for a structure with at least two elements in the empty language, or in the graph language with the empty relation, or in the language of one unary function consisting of loops or of a cycle.
**Example 3.19**: For any linearly ordered structure \({\cal M}\) and a set \(A\subseteq M\),
\[\mbox{acl}(A)=\mbox{dcl}(A), \tag{2}\]
since for any formula \(\varphi(x,\overline{a})\) with finitely many solutions the finite set \(B\) of these solutions is linearly ordered and each element of \(B\) is defined by its position in that finite linear order. Thus, for the linearly ordered structure \({\cal M}\), \(\mbox{deg}_{\rm al}({\cal M})=1\), and it has a quasi-Urbanik theory.
Example 2.12 in [15] illustrates that for the circularly ordered structure \(c(\omega+\omega^{*}+{\bf Q}+\omega+\omega^{*}+{\bf Q})\), \(\mbox{acl}(\emptyset)\) is a proper superset of \(\mbox{dcl}(\emptyset)\) contrasting with the linearly ordered case, since the first elements of the copies of \(\omega\) are connected by an automorphism, and the last elements of the copies of \(\omega^{*}\) are connected by an automorphism, too.
At the same time, for any circularly ordered structure \({\cal M}\) and a nonempty set \(A\subseteq M\), \({\rm acl}(A)={\rm dcl}(A)\), since any element \(a\in A\) defines a linear order on \(M\setminus\{a\}\) satisfying the equality (2). Thus, for the circularly ordered structure \({\cal M}\), \(\deg_{\rm acl}({\rm Th}({\cal M}))=\deg_{\rm acl}(\emptyset)\). Adding copies of \(\omega+\omega^{*}+{\bf Q}\) to the circularly ordered structure \(c(\omega+\omega^{*}+{\bf Q}+\omega+\omega^{*}+{\bf Q})\) we can obtain an arbitrary natural value \(\deg_{\rm acl}(\emptyset)\) for a circularly ordered theory.
Continuing the process with infinitely and densely many copies of \(\omega+\omega^{*}+{\bf Q}\) and marking some of them circularly by unary predicates we can obtain a circularly ordered theory \(T\) with
\[\deg_{\rm acl}(T)=\deg_{\rm acl}(\emptyset)=\infty. \tag{3}\]
We can, for instance, choose a prime number \(p\) and mark \(p\) copies \(C_{0},\ldots,C_{p-1}\) by a unary predicate \(P_{0}\), then choose \(p\) copies \(C_{i,0},\ldots,C_{i,p-1}\) between \(C_{i}\) and \(C_{(i+1)({\rm mod}p)}\), for each \(i\leq p-1\), and mark these copies by a unary predicate \(P_{1}\). Continuing the process with countably many disjoint unary predicates \(P_{n}\), \(n\in\omega\), we obtain increasing finite cardinalities for orbits of the first elements of the marked copies of \(\omega\) witnessing the equalities (3).
We observe a similar effect for any \(n\)-spherically ordered structure \({\cal M}\). In particular, \({\rm acl}(A)={\rm dcl}(A)\) for any set \(A\subseteq M\) of cardinality \(\geq n-2\), since in such a case an \(n\)-spherical order is reduced to a linear one [16].
**Definition.** If for a theory \(T\), \({\rm dcl}(A)={\rm acl}(A)\) for any set \(A\) with \(|A|\geq n\), then the minimal such \(n\) is called the \({\rm acl}\)-\({\rm dcl}\)_-difference_ and denoted by \({\rm acl}\)-\({\rm dcl}_{\rm dif}(T)\). If such a natural \(n\) does not exist, i.e., for any \(n\in\omega\) there exists a set \(A\) with \(|A|\geq n\) and \({\rm acl}(A)\supset{\rm dcl}(A)\), then we put \({\rm acl}\)-\({\rm dcl}_{\rm dif}(T)=\infty\).
**Remark 3.20**: By the definition and Example 3.19 we observe that \({\rm acl}\)-\({\rm dcl}_{\rm dif}(T_{0})=0\) for any linearly ordered theory \(T_{0}\), \({\rm acl}\)-\({\rm dcl}_{\rm dif}(T)\leq 1\) for any circularly ordered theory \(T\), with \({\rm acl}\)-\({\rm dcl}_{\rm dif}(T_{1})=1\) if \(T_{1}\) is circularly ordered with unique 1-type and a model with at least two elements. More generally, \({\rm acl}\)-\({\rm dcl}_{\rm dif}(T)\leq n-2\) for any \(n\)-spherically ordered theory \(T\), where \(n\geq 3\), with \({\rm acl}\)-\({\rm dcl}_{\rm dif}(T_{n-2})=n-2\) if, for instance, \(T_{n-2}\) is dense \(n\)-spherically ordered [17].
Taking a disjoint union [12, 13] of models \({\cal M}_{n-2}\) of \(n\)-spherically ordered theories \(T_{n-2}\), with unboundedly many \(n\), we obtain a structure \({\cal M}\) with \({\rm acl}\)-\({\rm dcl}_{\rm dif}({\rm Th}({\cal M}))=\infty\).
In view of Remark 3.20 we have the following:
**Theorem 3.21**: _For any \(\lambda\in\omega\cup\{\infty\}\) there is a theory \(T_{\lambda}\) with \({\rm acl}\)-\({\rm dcl}_{\rm dif}(T_{\lambda})=\lambda\)._
## 4 Algebraic sets and their degrees
**Definition.** Let \({\cal M}\) be an \(L\)-structure, \(A,B\subseteq M\). The set \(B\) is called _algebraic_ over \(A\), or \(A\)_-algebraic_, if it is the finite set of solutions in \({\cal M}\) of an \(L(A)\)-formula \(\varphi(x,\overline{a})\), where \(\varphi(x,\overline{y})\) is an \(L\)-formula and \(\overline{a}\in A\), i.e., \(B\) consists of algebraic elements over \(A\) witnessed by a fixed formula \(\varphi(x,\overline{a})\). If \(A\) is empty then an \(A\)-algebraic set is called _algebraic_.
The set \(B\) is called _\((A,u)\)-algebraic_ if it is a union of \(A\)-algebraic sets.
If \(B\) is a union of \(A\)-algebraic subsets of cardinalities at most \(n\), for \(n\in\omega\), then \(B\) is called _\((A,u,n)\)-algebraic_. If \(B\) is both \(A\)-algebraic and \((A,u,n)\)-algebraic then \(B\) is called _\((A,n)\)-algebraic_.
The least \(n\) for the \((A,u,n)\)-algebraicity of \(B\) is called the _degree_ of the \((A,u,n)\)-algebraicity of \(B\) and it is denoted by \(\deg_{{\rm alg},u}(B/A)\). If \(B\) is \((A,n)\)-algebraic then the value \(\deg_{{\rm alg},u}(B/A)\) is defined, too, with \(\deg_{{\rm alg},u}(B/A)\leq n\).
At the same time an \((A,u)\)-algebraic set \(B\) admits the value \(\deg_{\mathrm{alg},u}(B/A)=\infty\), if \(B\) is not represented as a union of \(A\)-algebraic sets in bounded finite cardinalities.
The value \(\sup\{\deg_{\mathrm{alg},u}(B/A)\mid B\) is \((A,u)\)-algebraic\(\}\) is called the _degree of the \((A,u)\)-algebraicity_ and denoted by \(\mathrm{DEG}_{\mathrm{alg},u}(A)\).
We denote by \({\cal A}_{A,{\cal M}}\) the set of all \(A\)-algebraic sets in \({\cal M}\), by \({\cal A}^{u}_{A,{\cal M}}\) the set of all \((A,u)\)-algebraic sets in \({\cal M}\), by \({\cal A}^{n}_{A,{\cal M}}\) the set of all \((A,n)\)-algebraic sets in \({\cal M}\), and by \({\cal A}^{u,n}_{A,{\cal M}}\) the set of all \((A,u,n)\)-algebraic sets in \({\cal M}\).
We omit \(A\) above if it is empty.
Notice that the sets \({\cal A}_{A,{\cal M}}\), \({\cal A}^{u}_{A,{\cal M}}\), \({\cal A}^{n}_{A,{\cal M}}\), \({\cal A}^{u,n}_{A,{\cal M}}\) are preserved under elementary extensions of \({\cal M}\). Therefore we omit the indexes \({\cal M}\) above, denoting the sets above by \({\cal A}_{A}\), \({\cal A}^{u}_{A}\), \({\cal A}^{n}_{A}\), \({\cal A}^{u,n}_{A}\).
The following assertion collects some properties of algebraic sets.
**Proposition 4.1**: 1. _For any set \(A\) and \(m\leq n<\omega\), \({\cal A}^{m}_{A}\subseteq{\cal A}^{n}_{A}\cap{\cal A}^{u,m}_{A}\subseteq{\cal A }_{A}\subseteq{\cal A}^{u}_{A}\), \({\cal A}^{u,m}_{A}\subseteq{\cal A}^{n}_{A}\cap{\cal A}^{u,n}_{A}\subseteq{\cal A }^{u}_{A}\)._
2. _If \(\deg_{\mathrm{alg},u}(B/A)\) is defined, i.e. \(B\in{\cal A}^{u}_{A}\), then \(B\subseteq\mathrm{acl}(A)\). The converse holds if \(B\) consists of some finite orbits_ (_in a saturated extension of \({\cal M}\)_) _of elements under \(A\)-automorphisms._
3. (Monotony) _For any \(B_{1},B_{2}\in{\cal A}^{u}_{A}\) with \(B_{1}\subseteq B_{2}\), \(\deg_{\mathrm{alg},u}(B_{1}/A)\leq\deg_{\mathrm{alg},u}(B_{2}/A)\)._
4. (Monotony) _If \(B\in{\cal A}^{u}_{A_{1}}\) and \(A_{1}\subseteq A_{2}\) then \(\deg_{\mathrm{alg},u}(B/A_{1})\geq\deg_{\mathrm{alg},u}(B/A_{2})\)._
5. _For any set \(A\), \(\mathrm{DEG}_{\mathrm{alg},u}(A)=\deg_{\mathrm{alg},u}(\mathrm{acl}(A)/A)\)._
6. _For any sets \(A\) and \(B\), \(\deg_{\mathrm{alg}}(B/A)=0\) iff \(B=\emptyset\)._
7. _For any sets \(A\) and \(B\), \(\deg_{\mathrm{alg}}(B/A)=1\) iff \(\emptyset\neq B\subseteq\mathrm{dcl}(A)\)._
8. _If \(A\) is algebraically closed then \(\mathrm{DEG}_{\mathrm{alg},u}(A)=0\) if \(A\) is empty, and \(\mathrm{DEG}_{\mathrm{alg},u}(A)=1\) if \(A\) is nonempty._
Proof. Item 1 holds since any \(m\)-algebraic element over \(A\) is algebraic over \(A\), and, moreover, \(n\)-algebraic over \(A\), for any \(m\leq n\).
Item 2 is satisfied as elements of \({\cal A}^{u}_{A}\) consist of some finite orbits of elements under \(A\)-automorphisms.
The monotonies are true by the definition.
Item 5 is implied by Item 2 and Monotony (Item 3).
Item 6 is obvious.
Item 7 holds by Item 6 and the property both for \(\deg_{\mathrm{alg}}(B/A)=1\) and for \(B\subseteq\mathrm{dcl}(A)\) that \(B\) is composed as a union of \(A\)-definable singletons.
Item 8 is implied by \(A=\mathrm{dcl}(A)\) and Items 6 and 7.
**Corollary 4.2**: _If \({\cal M}\) is an algebra, then for any \(A\subset M\) the universe \(M(A)\) of the subalgebra \({\cal M}(A)\) of \({\cal M}\) generated by \(A\) is \((A,u)\)-algebraic with \(\deg_{\mathrm{alg},u}(M(A)/A)=1\)._
Proof. Any element \(a\) of \(M(A)\) is represented by a term \(t(a_{1},\ldots,a_{n})\), where \(a_{1},\ldots,a_{n}\in A\). Therefore \(M(A)\subseteq\mathrm{dcl}(A)\). Thus \(M(A)\) is represented as a union of singletons \(\{a\}\in{\cal A}^{1}_{A}\subseteq{\cal A}^{u}_{A}\) whence \(M(A)\in{\cal A}^{u}_{A}\) and \(\deg_{\mathrm{alg},u}(M(A)/A)=1\) in view of Item 7 in Proposition 4.1.
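The proof above rests on the fact that every element of \(M(A)\) is the value of a term with parameters in \(A\); computationally this corresponds to closing the generating set under the operations of the algebra. A minimal sketch for finite algebras (the function names and the toy example in the cyclic group of order 12 are ours):

```python
from itertools import product

def generated_subuniverse(generators, operations):
    """Universe of the subalgebra generated by `generators`: close the set under
    all operations (given with their arities) until a fixed point is reached."""
    closed = set(generators)
    changed = True
    while changed:
        changed = False
        for op, arity in operations:
            for args in product(closed, repeat=arity):
                value = op(*args)
                if value not in closed:
                    closed.add(value)
                    changed = True
    return closed

# Subgroup of Z/12 generated by {8}: closing under addition and negation mod 12
ops = [(lambda x, y: (x + y) % 12, 2), (lambda x: (-x) % 12, 1)]
print(sorted(generated_subuniverse({8}, ops)))  # [0, 4, 8]
```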
**Remark 4.3**: For any set \(A\), \(\mathrm{DEG}_{\mathrm{alg},u}(A)\in\omega\cup\{\infty\}\), since any \((A,u)\)-algebraic set is either represented as a union of \((A,u,n)\)-algebraic sets for some \(n\in\omega\), or such an \(n\) does not exist for
some \((A,u)\)-algebraic set \(B\). In the first case we have \({\rm DEG}_{{\rm alg},u}(A)\in\omega\), and in the second one \({\rm DEG}_{{\rm alg},u}(A)=\infty\).
For an illustration we show that all values \({\rm DEG}_{{\rm alg},u}(A)\in\omega\cup\{\infty\}\) are realized for appropriate sets \(A\).
Indeed, taking an equivalence relation \(E\) with infinitely many equivalence classes of cardinality \(n+1\), we obtain \({\rm DEG}_{{\rm alg},u}(\{a\})=n\) for any singleton \(A=\{a\}\), where there are exactly four \(A\)-algebraic sets: \(\emptyset\), \(\{a\}\), \(E(a)\setminus\{a\}\), \(E(a)\).
Now let \(E_{n}\), \(n\in\omega\), be an ascending chain of equivalence relations such that \(E_{0}\)-classes are singletons and each \(E_{n+1}\)-class consists of exactly two \(E_{n}\)-classes. Taking an arbitrary singleton \(A=\{a\}\) we obtain \({\rm DEG}_{{\rm alg},u}(\{a\})=\infty\), witnessed by the set \(B=\bigcup\limits_{n\in\omega}E_{n}(a)\), since all \(E_{n}\)-classes are finite and have unboundedly increasing cardinalities.
We obtain a similar effect taking disjoint unary predicates \(P_{n}\) with \(|P_{n}|=n\), \(n\in\omega\). Here \({\rm DEG}_{{\rm alg},u}(\emptyset)=\infty\), and \({\rm DEG}_{{\rm alg},u}(\emptyset)\in\omega\), if the language is restricted to finitely many predicate symbols.
## 5 Further variations of algebraic and definable closures and their connections
**Definition.** 1. Let \(\Delta\) be a set of formulae of a theory \(T\). For a model \({\cal M}\models T\) and a set \(A\subseteq M\) the union of sets of solutions of formulae \(\varphi(x,\overline{a})\), where \(\varphi(x,\overline{y})\in\Delta\) and \(\overline{a}\in A\), such that \(\models\exists^{=n}x\,\varphi(x,\overline{a})\) for some \(n\in\omega\) (respectively, \(\models\exists^{=1}x\,\varphi(x,\overline{a})\)) is said to be a \(\Delta\)_-algebraic_ (\(\Delta\)_-definable_ or \(\Delta\)_-definitional_) _closure_ of \(A\). The \(\Delta\)-algebraic closure of \(A\) is denoted by \({\rm acl}^{\Delta}(A)\) and its \(\Delta\)-definable (\(\Delta\)-definitional) closure, by \({\rm dcl}^{\Delta}(A)\).
In such a case we say that the formulae \(\varphi(x,\overline{a})\)_witness_ that \(\Delta\)-algebraic / \(\Delta\)-definable (\(\Delta\)-definitional) closure, and these formulae are called \(\Delta\)_-algebraic / \(\Delta\)-defining_.
Any element \(b\in{\rm acl}^{\Delta}(A)\) (respectively, \(b\in{\rm dcl}^{\Delta}(A)\)) is called \(\Delta\)_-algebraic_ (\(\Delta\)_-definable_ or \(\Delta\)_-definitional_) over \(A\). If the set \(A\) is fixed or empty, we just say that \(b\) is \(\Delta\)_-algebraic_, \(\Delta\)_-definable_, or \(\Delta\)-definitional.
2. If \({\rm dcl}^{\Delta}(A)={\rm acl}^{\Delta}(A)\), \({\rm cl}^{\Delta}_{1}(A)\) denotes their common value. In such a case we say that \(A\) is \(\Delta\)_-quasi-Urbanik_.
3. If \(A={\rm acl}^{\Delta}(A)\) (respectively, \(A={\rm dcl}^{\Delta}(A)\)) then \(A\) is called \(\Delta\)_-algebraically_ (\(\Delta\)_-definably_) closed.
We combine \(\Delta\)-algebraic closures and \(n\)-algebraic closure as follows.
**Definition.** 1. For \(n\in\omega\setminus\{0\}\), a set \(A\subseteq M\) and a set \(\Delta\) of formulae an element \(b\) is called \((\Delta,n)\)_-algebraic_ over \(A\), if \(b\in{\rm acl}^{\Delta}(A)\) and it is witnessed by a formula \(\varphi(x,\overline{a})\), for some \(\overline{a}\in A\) and \(\varphi(x,\overline{y})\in\Delta\), with at most \(n\) solutions.
2. The set of all \((\Delta,n)\)-algebraic elements over \(A\) is denoted by \({\rm acl}^{\Delta}_{n}(A)\).
3. If \(A={\rm acl}^{\Delta}_{n}(A)\) then \(A\) is called \((\Delta,n)\)_-algebraically_ closed.
4. If \({\rm acl}^{\Delta}(A)={\rm acl}^{\Delta}_{n}(A)\) then minimal such \(n\) is called the \(\Delta\)_-degree of algebraization_ over the set \(A\) and it is denoted by \({\rm deg}^{\Delta}_{\rm acl}(A)\). If that \(n\) does not exist then we put \({\rm deg}^{\Delta}_{\rm acl}(A)=\infty\). The supremum of values \({\rm deg}^{\Delta}_{\rm acl}(A)\) with respect to all sets \(A\) of given theory \(T\) is denoted by \({\rm deg}^{\Delta}_{\rm acl}(T)\) and called the \(\Delta\)_-degree of algebraization_ of the theory \(T\).
5. A theory \(T\) with \({\rm deg}^{\Delta}_{\rm acl}(T)=1\), i.e., with \(\Delta\)-quasi-Urbanik sets \(A\) of \(T\) only, is called \(\Delta\)_-quasi-Urbanik_, and the models \({\cal M}\) of \(T\) are \(\Delta\)_-quasi-Urbanik_, too.
The following remark collects some obvious properties related to the operators \(\mbox{acl}^{\Delta}\) and \(\mbox{acl}^{\Delta}_{n}\).
**Remark 5.1**: 1. (Monotony) If \(m\leq n\), \(\Delta_{1}\subseteq\Delta_{2}\), and \(A_{1}\subseteq A_{2}\) then \(\mbox{acl}^{\Delta_{1}}_{m}(A_{1})\subseteq\mbox{acl}^{\Delta_{2}}_{n}(A_{2})\).
2. (\(\Delta\)-reduction) For any set \(A\), \(\mbox{acl}(A)=\mbox{acl}^{\Delta}(A)\) (respectively, \(\mbox{acl}_{n}(A)=\mbox{acl}^{\Delta}_{n}(A)\), where \(n\in\omega\setminus\{0\}\)) iff for any \(a\in\mbox{acl}(A)\) (\(a\in\mbox{acl}_{n}(A)\)) it is witnessed by a formula in \(\Delta\).
**Definition** [21]. A theory \(T\) is said to be \(\Delta\)_-based_, where \(\Delta\) is some set of formulas without parameters, if any formula of \(T\) is equivalent in \(T\) to a Boolean combination of formulae of \(\Delta\).
For \(\Delta\)-based theories \(T\), it is also said that \(T\) has _quantifier elimination_ or _quantifier reduction_ up to \(\Delta\).
Let \(\Delta\) be a set of formulae of a theory \(T\).
**Remark 5.2**: 1. For the theory \(T\), the operators \(\mbox{acl}^{\Delta}\) and \(\mbox{acl}^{\Delta}_{n}\) coincide iff finite definable sets \(A\) for formulae \(\varphi(x,\overline{y})\in\Delta\) with \(A=\varphi({\cal M},\overline{a})\), \({\cal M}\models T\) are covered by definable sets \(A_{i}\) for formulae \(\varphi_{i}(x,\overline{y})\in\Delta\) with \(A_{i}=\varphi_{i}({\cal M},\overline{a})\) having at most \(n\) elements.
2. If a theory \(T\) is \(\Delta\)-based then both \(\mbox{acl}^{\Delta^{\prime}}=\mbox{acl}\) and \(\mbox{acl}^{\Delta^{\prime}}_{n}=\mbox{acl}_{n}\), where
\[\Delta^{\prime}=\Biggl{\{}\bigwedge_{i}\varphi_{i}^{\delta_{i}}\ |\ \varphi_{i}\in \Delta,\delta_{i}\in\{0,1\}\Biggr{\}}.\]
**Proposition 5.3**: 1. _The inclusion \(A\subseteq\mbox{acl}^{\Delta}(A)\)_(_respectively, \(A\subseteq\mbox{acl}^{\Delta}_{n}(A)\)_) _holds for any set \(A\) if and only if \(\Delta\) is \(\mbox{acl}\)-reflexive_(\((\mbox{acl},n)\)-reflexive)_, i.e., for any element \(a\) there is \(\varphi(x,y)\in\Delta\) with \(\models\varphi(a,a)\wedge\exists^{\leq m}x\,\varphi(x,a)\) for some \(m\in\omega\)_(_for some \(m\leq n\)_, where \(n\in\omega\setminus\{0\}\))_._
2. _The inclusion \(\mbox{acl}^{\Delta}(\mbox{acl}^{\Delta}(A))\subseteq\mbox{acl}^{\Delta}(A)\)_ (_respectively, \(\mbox{acl}^{\Delta}_{n}(\mbox{acl}^{\Delta}_{n}(A))\subseteq\mbox{acl}^{\Delta}_{n}(A)\)_) _holds for any set \(A\) if and only if \(\Delta\) is \(\mbox{acl}\)-transitive_ (\((\mbox{acl},n)\)-transitive)_, i.e., for any tuples \(\overline{a}\in A\), \(\overline{b}\in\mbox{acl}^{\Delta}(A)\)_ (\(\overline{b}\in\mbox{acl}^{\Delta}_{n}(A)\)) _and a formula \(\varphi(x,\overline{z})\in\Delta\) with \(\models\varphi(c,\overline{b})\wedge\exists^{<\omega}x\,\varphi(x,\overline{b})\)_ (\(\models\varphi(c,\overline{b})\wedge\exists^{\leq n}x\,\varphi(x,\overline{b})\)) _there is a formula \(\psi(x,\overline{y})\in\Delta\) with \(\models\psi(c,\overline{a})\wedge\exists^{<\omega}x\,\psi(x,\overline{a})\)_ (\(\models\psi(c,\overline{a})\wedge\exists^{\leq n}x\,\psi(x,\overline{a})\), where \(n\in\omega\setminus\{0\}\))_.
3. _The operators \(\mbox{acl}^{\Delta}\) and \(\mbox{acl}^{\Delta}_{n}\) satisfy Finite character._
4. _The operator \(\mbox{acl}\) \((\mbox{acl}_{n})\) has Exchange property iff any operator \(\mbox{acl}^{\Delta}\)_ (\(\mbox{acl}^{\Delta}_{n}\)) _is extensible till some \(\mbox{acl}^{\Delta^{\prime}}\)_ (\(\mbox{acl}^{\Delta^{\prime}}_{n}\)) _with \(\Delta\subseteq\Delta^{\prime}\) and Exchange property. Here either \(\mbox{acl}^{\Delta^{\prime}}\)_ (\(\mbox{acl}^{\Delta^{\prime}}_{n}\) _with given \(n\)_) _has a proper extension or does not have proper extensions depending on the given theory._
Proof is obvious using the definition of operators \(\mbox{acl}^{\Delta}\) and \(\mbox{acl}^{\Delta}_{n}\).
Proposition 5.3 characterizes basic properties of closure operators. If the operator \(\mbox{acl}^{\Delta}\) (respectively, \(\mbox{acl}^{\Delta}_{n}\)) satisfies the conditions described in Items 1 and 2 of Proposition 5.3 then it is called _regular_. If, additionally, the condition in Item 4 holds, then that operator is called _pregeometric_.
Proposition 5.3 immediately implies:
**Corollary 5.4**: _The operator \(\mbox{acl}^{\Delta}\)_(_respectively, \(\mbox{acl}^{\Delta}_{n}\)_) _defines a pregeometry on the universe of given structure iff it is pregeometric, i.e., it is regular and satisfies Exchange property._
**Remark 5.5**: Any operator \({\rm acl}^{\Delta}\) is extensible till a transitive one \({\rm acl}^{\Delta^{\prime}}\) by the transitive closure, forming \(\Delta^{\prime}\) by adding to \(\Delta\) all formulae
\[\exists x_{1},\ldots,x_{n}\bigg{(}\psi(x,x_{1},\ldots,x_{n})\wedge\bigwedge_{i=1 }^{n}\varphi_{i}(x_{i},\overline{y})\bigg{)} \tag{4}\]
as in (1), where \(\varphi_{i}\) and \(\psi\) either belong to \(\Delta\) or were added before, i.e., \(\Delta^{\prime}\) is the set of formulae obtained from \(\Delta\) in the calculus with the rules:
\[\frac{\varphi_{1},\ldots,\varphi_{n},\psi}{\exists x_{1},\ldots,x_{n}\bigg{(} \psi(x,x_{1},\ldots,x_{n})\wedge\bigwedge_{i=1}^{n}\varphi_{i}(x_{i},\overline {y})\bigg{)}},\ \ {\rm for}\ n\in\omega.\]
Using Monotony in Remark 5.1 we have \({\rm acl}^{\Delta}(A)\subseteq{\rm acl}^{\Delta^{\prime}}(A)\) for any set \(A\) in the given structure. Taking \(\Delta^{\prime\prime}=\Delta^{\prime}\cup\{x\approx y\}\) we obtain a regular operator \({\rm acl}^{\Delta^{\prime\prime}}\).
We have a similar regular operator \({\rm acl}_{1}^{\Delta^{\prime\prime}}\) starting with the operator \({\rm acl}_{1}^{\Delta}\).
In contrast with \({\rm acl}^{\Delta}\) and \({\rm acl}_{1}^{\Delta}\) the transitivity of \({\rm acl}_{n}^{\Delta^{\prime\prime}}\), for \(n\geq 2\), can fail under the transitive closure above, since compositions of \(n\)-algebraic formulae may not be \(n\)-algebraic.
In view of the properties above, a hierarchy of regular and pregeometric algebraic closure operators \({\rm acl}^{\Delta}\), \({\rm acl}_{n}^{\Delta}\) arises, depending on the chosen sets \(\Delta\) of formulae and the bounds \(n\) on the cardinalities of solution sets of algebraic formulae in \(\Delta\). This hierarchy includes various degrees \({\rm acl}\)-\({\rm dcl}_{\rm dif}(T)\) of algebraization.
**Definition.** Two sets \(\Delta_{1}\) and \(\Delta_{2}\) of formulae are called \({\rm acl}\)_-equivalent_ (respectively, \({\rm acl}_{n}\)-equivalent), denoted by \(\Delta_{1}\sim\Delta_{2}\) (\(\Delta_{1}\sim_{n}\Delta_{2}\)), if \({\rm acl}^{\Delta_{1}}={\rm acl}^{\Delta_{2}}\) (\({\rm acl}_{n}^{\Delta_{1}}={\rm acl}_{n}^{\Delta_{2}}\)).
**Remark 5.6**: Equivalent sets preserve regularity and pregeometricity. Moreover, the equivalence classes \(\sim(\Delta_{1})\) and \(\sim(\Delta_{2})\) (respectively, \(\sim_{n}(\Delta_{1})\) and \(\sim_{n}(\Delta_{2})\)) allow one to choose _coordinated_ representatives \(\Delta_{1}^{\prime}\in\sim(\Delta_{1})\) and \(\Delta_{2}^{\prime}\in\sim(\Delta_{2})\) (respectively, \(\Delta_{1}^{\prime}\in\sim_{n}(\Delta_{1})\) and \(\Delta_{2}^{\prime}\in\sim_{n}(\Delta_{2})\)) such that for regular \({\rm acl}^{\Delta_{1}^{\prime}}\) and \({\rm acl}^{\Delta_{2}^{\prime}}\), the intersection \({\rm acl}^{\Delta_{1}^{\prime}}\cap{\rm acl}^{\Delta_{2}^{\prime}}\) (\({\rm acl}_{n}^{\Delta_{1}^{\prime}}\cap{\rm acl}_{n}^{\Delta_{2}^{\prime}}\)) is regular, too, and \({\rm acl}^{\Delta_{1}^{\prime}}\cap{\rm acl}^{\Delta_{2}^{\prime}}={\rm acl}^{\Delta_{1}^{\prime}\cap\Delta_{2}^{\prime}}\) (\({\rm acl}_{n}^{\Delta_{1}^{\prime}}\cap{\rm acl}_{n}^{\Delta_{2}^{\prime}}={\rm acl}_{n}^{\Delta_{1}^{\prime}\cap\Delta_{2}^{\prime}}\)). Indeed, it suffices to add to \(\Delta_{1}^{\prime}\) and to \(\Delta_{2}^{\prime}\) the same formulae witnessing common finite definable sets with respect to \({\rm acl}^{\Delta_{1}}\) and \({\rm acl}^{\Delta_{2}}\) (\({\rm acl}_{n}^{\Delta_{1}}\) and \({\rm acl}_{n}^{\Delta_{2}}\)).
Thus, the binary operations \(\wedge\) and \(\wedge_{n}\) of intersection arise mapping the pairs \(({\rm acl}^{\Delta_{1}},{\rm acl}^{\Delta_{2}})\) (respectively \(({\rm acl}_{n}^{\Delta_{1}},{\rm acl}_{n}^{\Delta_{2}})\)) to \({\rm acl}^{\Delta_{1}^{\prime}\cap\Delta_{2}^{\prime}}\) (\({\rm acl}_{n}^{\Delta_{1}^{\prime}\cap\Delta_{2}^{\prime}}\)).
For a structure \({\cal M}\) we define derived structures \({\cal SL}_{\rm acl}({\cal M})\) and \({\cal SL}_{{\rm acl}_{n}}({\cal M})\), \(n\in\omega\), in the following way. The universe \({\rm SL}_{\rm acl}({\cal M})\) (respectively, \({\rm SL}_{{\rm acl}_{n}}({\cal M})\)) of \({\cal SL}_{\rm acl}({\cal M})\) (\({\cal SL}_{{\rm acl}_{n}}({\cal M})\)) consists of all regular operators \({\rm acl}^{\Delta}\) and \({\rm acl}_{n}^{\Delta}\) for \({\cal M}\), and the language consists of the symbol \(\wedge\) (\(\wedge_{n}\)) for the operation of intersection.
If \({\cal M}\) is sufficiently saturated then the operators \({\rm acl}^{\Delta}\) and \({\rm acl}_{n}^{\Delta}\) admit syntactical descriptions by families of formulae, and it induces structures \({\cal SL}_{\rm acl}(T)\) and \({\cal SL}_{{\rm acl}_{n}}(T)\), for the theory \(T={\rm Th}({\cal M})\), which are isomorphic to \({\cal SL}_{\rm acl}({\cal M})\) and \({\cal SL}_{{\rm acl}_{n}}({\cal M})\), respectively.
**Proposition 5.7**: \(1.\) _Any structure \({\cal SL}_{\rm acl}(T)\) is a lower semilattice with the least and the greatest elements._
\(2.\) _Any structure \({\cal SL}_{{\rm acl}_{n}}(T)\) is a lower semilattice with the least element. It has the greatest element if \({\rm acl}_{n}\) is regular._
Proof. By definition, both intersections \(\mathrm{acl}^{\Delta^{\prime}_{1}}\cap\mathrm{acl}^{\Delta^{\prime}_{2}}=\mathrm{acl}^{\Delta^{\prime}_{1}\cap\Delta^{\prime}_{2}}\) and \(\mathrm{acl}^{\Delta^{\prime}_{1}}_{n}\cap\mathrm{acl}^{\Delta^{\prime}_{2}}_{n}=\mathrm{acl}^{\Delta^{\prime}_{1}\cap\Delta^{\prime}_{2}}_{n}\) in Remark 5.6 are the infima of \((\mathrm{acl}^{\Delta_{1}},\mathrm{acl}^{\Delta_{2}})\) and \((\mathrm{acl}^{\Delta_{1}}_{n},\mathrm{acl}^{\Delta_{2}}_{n})\), respectively. The operator \(\mathrm{acl}^{\{x\approx y\}}\) is the least element for both \(\mathcal{SL}_{\mathrm{acl}}(T)\) and \(\mathcal{SL}_{\mathrm{acl}_{n}}(T)\). The operator \(\mathrm{acl}\) is the greatest element for \(\mathcal{SL}_{\mathrm{acl}}(T)\), and \(\mathrm{acl}_{n}\) is the greatest element for \(\mathcal{SL}_{\mathrm{acl}_{n}}(T)\) if \(\mathrm{acl}_{n}\) is regular.
Since \(\mathrm{acl}_{1}\) is always regular, Proposition 5.7 implies the following:
**Corollary 5.8**: _Any structure \(\mathcal{SL}_{\mathrm{acl}_{1}}(T)\) is a lower semilattice with the least and the greatest elements._
The following example illustrates that \(\mathcal{SL}_{\mathrm{acl}_{n}}(T)\) may not have the greatest element.
**Example 5.9**: Let \(\mathcal{H}=\langle M,Z,E_{1},E_{2}\rangle\), \(Z\subseteq\mathcal{P}(M)\), be a hypergraph [9] with colored hyperedges, being equivalence classes of equivalence relations \(E_{1}\) and \(E_{2}\), and satisfying the following conditions:
i) each hyperedge consists of three elements;
ii) any element belongs to exactly two hyperedges, and one of these hyperedges is marked by \(E_{1}\) and another one by \(E_{2}\);
iii) the hypergraph does not have cycles.
For the structure \(\mathcal{M}=\langle M,E_{1},E_{2}\rangle\) we have regular \(\subseteq\)-incomparable operators \(\mathrm{acl}^{\{x\approx y,E_{i}(x,y)\}}_{2}\), \(i=1,2\), which do not have proper regular extensions, since the operator \(\mathrm{acl}_{2}\) on \(M\) is not transitive. Indeed, taking an arbitrary element \(a\in M\) we have the \(2\)-algebraic formulae \(E_{1}(x,a)\wedge\neg(x\approx a)\) and \(E_{2}(x,a)\wedge\neg(x\approx a)\). But their composition is not \(2\)-algebraic, having \(4\) solutions.
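For concreteness, the composition in question can be written out; the display below is our sketch (the name \(\chi\) does not occur elsewhere in the text), and the solution count relies on the acyclicity condition iii):
\[\chi(x,a)\;=\;\exists y\,\bigl(E_{2}(y,a)\wedge\neg(y\approx a)\wedge E_{1}(x,y)\wedge\neg(x\approx y)\bigr).\]
Each of the two elements \(y\neq a\) in the \(E_{2}\)-class of \(a\) contributes the two elements \(x\neq y\) of its \(E_{1}\)-class, and acyclicity makes these four elements pairwise distinct; hence \(\chi(x,a)\) has exactly \(4\) solutions, so it is \(4\)-algebraic but not \(2\)-algebraic.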
Thus the semilattice \(\mathcal{SL}_{\mathrm{acl}_{2}}(\mathcal{M})\) consists of three operators: the least one \(\mathrm{acl}^{\{x\approx y\}}_{2}\) and its two \(\subseteq\)-incomparable extensions \(\mathrm{acl}^{\{x\approx y,E_{i}(x,y)\}}_{2}\), \(i=1,2\). Moreover, since the cardinalities of isolating algebraic formulae \(\varphi(x,a)\) are unbounded for iterated compositions of \(E_{1}(x,y)\) and \(E_{2}(x,y)\), the operators \(\mathrm{acl}_{n}\), for \(n\geq 3\), are not regular either, implying that the semilattices \(\mathcal{SL}_{\mathrm{acl}_{n}}(\mathcal{M})\) also consist of three operators and do not have greatest elements. Adding to \(\mathcal{SL}_{\mathrm{acl}_{n}}(\mathcal{M})\) the operator \(\mathrm{acl}\) we obtain the \(4\)-element semilattice \(\mathcal{SL}_{\mathrm{acl}}(\mathcal{M})\), which forms a Boolean algebra. Notice also that \(\mathcal{SL}_{\mathrm{acl}_{1}}(\mathcal{M})\) is a singleton consisting of the operator \(\mathrm{acl}^{\{x\approx y\}}_{1}\).
We obtain a similar effect by increasing the cardinality of the \(E_{i}\)-classes to a natural number \(m\geq 4\). At the same time, if \(m=2\), then \(\mathcal{SL}_{\mathrm{acl}_{1}}(\mathcal{M})\) forms a \(4\)-element lattice with the least and the greatest elements.
Thus for the theories \(T_{m}\) of structures \(\mathcal{M}_{m}=\langle M_{m},E_{1},E_{2}\rangle\) consisting of \(m\)-element hyperedges marked by \(E_{1}\) and \(E_{2}\) as above, either \(\mathrm{deg}_{\mathrm{acl}}(T_{m})=1\), if \(m=1\) or \(m=2\), or \(\mathrm{deg}_{\mathrm{acl}}(T_{m})=\infty\), if \(m\geq 3\).
Now we define expansions of the structures \(\mathcal{SL}_{\mathrm{acl}}(\mathcal{M})\) and \(\mathcal{SL}_{\mathrm{acl}_{n}}(\mathcal{M})\), \(n\in\omega\), by the operation of union \(\vee\). For the regular operators \(\mathrm{acl}^{\Delta_{1}}\) and \(\mathrm{acl}^{\Delta_{2}}\) (respectively, \(\mathrm{acl}^{\Delta_{1}}_{n}\) and \(\mathrm{acl}^{\Delta_{2}}_{n}\)) we put \(\mathrm{acl}^{\Delta_{1}}\vee\mathrm{acl}^{\Delta_{2}}=\mathrm{acl}^{\Delta}\) (\(\mathrm{acl}^{\Delta_{1}}_{n}\vee\mathrm{acl}^{\Delta_{2}}_{n}=\mathrm{acl}^{\Delta }_{n}\)), where \(\Delta\) is the transitive closure of \(\Delta_{1}\cup\Delta_{2}\). We denote these expansions of \(\mathcal{SL}_{\mathrm{acl}}(\mathcal{M})\) and \(\mathcal{SL}_{\mathrm{acl}_{n}}(\mathcal{M})\) by \(\mathcal{L}_{\mathrm{acl}}(\mathcal{M})\) and \(\mathcal{L}_{\mathrm{acl}_{n}}(\mathcal{M})\), respectively. Again choosing \(\mathcal{M}\) sufficiently saturated we obtain structures \(\mathcal{L}_{\mathrm{acl}}(T)\) and \(\mathcal{L}_{\mathrm{acl}_{n}}(T)\) for \(T=\mathrm{Th}(\mathcal{M})\).
Since the union \(\vee\) and the intersection \(\wedge\) correspond to appropriate union and intersection of \(\Delta_{i}\), in view of Proposition 5.7 we have the following:
**Theorem 5.10**: _The structure \({\cal L}_{\rm acl}(T)\) is a distributive lattice with the least and the greatest elements._
Notice that the structures \({\cal L}_{\rm acl_{n}}({\cal M})\) can be distributive lattices, too, taking, for instance, finite \({\cal M}\) and \(n\geq|M|\). At the same time, as Example 5.9 shows, in the general case the structures \({\cal L}_{\rm acl_{n}}({\cal M})\) may fail to be lattices.
Recall that for a lattice \({\cal L}\), the lattice _height_ (_width_) is the supremum of cardinalities of (anti)chains. We denote these characteristics by \(h({\cal L})\) and \(w({\cal L})\), respectively.
The following theorem shows that height and width of \({\cal L}_{\rm acl}(T)\) can be unbounded.
**Theorem 5.11**: _For any cardinality \(\lambda>0\) there is a lattice \({\cal L}_{\rm acl}(T)\) with \(h({\cal L}_{\rm acl}(T))=\lambda\) and \(w({\cal L}_{\rm acl}(T))=\lambda\)._
Proof. The case \(\lambda=1\) is realized by an arbitrary theory \(T\) of a singleton. The case \(\lambda=2\) is realized in Example 5.9. Now for \(\lambda\geq 3\) we extend a structure \({\cal N}\equiv{\cal M}=\langle M,E_{1},E_{2}\rangle\) from Example 5.9 to a structure \({\cal N}^{\prime}\) with \(\lambda\) equivalence relations \(E_{i}\), \(i<\lambda\), satisfying the following conditions:
i) each \(E_{i}\)-class contains 3 elements;
ii) for every \(E_{i}\)-class \(X\) and \(E_{j}\)-class \(Y\), where \(i\neq j\), \(|X\cap Y|\leq 1\);
iii) the hypergraph \(\langle M,Z\rangle\), where \(Z\) is the set of all \(E_{i}\)-classes, \(i<\lambda\), does not have cycles.
For the theory \(T={\rm Th}({\cal N}^{\prime})\) the lattice \({\cal L}_{\rm acl}(T)\) has \(\lambda\) atoms \({\rm acl}^{\{x\approx y,E_{i}(x,y)\}}\), \(i<\lambda\), witnessing \(w({\cal L}_{\rm acl}(T))=\lambda\). Collecting formulae \(E_{i}(x,y)\) we obtain a chain of \(\lambda\) operators \({\rm acl}^{\Delta_{\mu}}\), where \(\Delta_{\mu}=\{x\approx y\}\cup\{E_{i}\mid i<\mu\}\), \(\mu\leq\lambda\), witnessing that \(h({\cal L}_{\rm acl}(T))=\lambda\).
## 6 Conclusion
We studied variations of algebraic closure, properties and possibilities of these variations, of the degree of algebraization, and of the difference between definable and algebraic closures. These possibilities are illustrated by a series of examples. Algebraic sets and their degrees are studied. A hierarchy of operators of algebraic closure relative to sets of formulae is considered, semilattices and lattices for families of these operators are introduced, and some characteristics of these structures are described.
It would be natural to describe degrees of algebraization and the difference between definable and algebraic closures for natural classes of structures and their theories. It would also be interesting to describe the possible Hasse diagrams for semilattices and lattices of families of operators of algebraic closure.
|
2304.04604 | Probing Gluons at the Spin Physics Detector | The Spin Physics Detector (SPD) at the Nuclotron based Ion Collider fAcility
(NICA) is a multi-purpose experiment designed to study nucleon spin structure
in the three dimensions. With capabilities to collide polarized protons and
deuterons with center of mass energy up to 27 GeV and luminosity up to $10^{32}
\rm cm^{-2} \ s^{-1}$ for protons (an order of magnitude less for deuterons),
the experiment will allow measurements of cross-sections and spin asymmetries
of hadronic processes sensitive to the unpolarized and various polarized
(helicity, Sivers, Boer-Mulders) gluon distributions inside the nucleons.
Results from the SPD will be complementary to the present high energy spin
experiments at the RHIC facility or future experiments like the EIC (at BNL)
and AFTER (at LHC). SPD will provide data in moderate and large Bjorken-x for
much improved global analyses of spin structures of the basic building blocks
of Nature. With polarized deuteron collisions, SPD will be the unique
laboratory for probing tensor polarized gluon distributions. In addition, there
are also possibilities of colliding other light nuclei like Carbon at reduced
collision energy and luminosity at the first stage of the experiment. | Alexey Guscov, Amaresh Datta, Anton Karpishkov, Igor Denisenko, Vladimir Saleev | 2023-04-10T14:20:11Z | http://arxiv.org/abs/2304.04604v3 | # Probing Gluons at the Spin Physics Detector
###### Abstract
The Spin Physics Detector (SPD) at the Nuclotron based Ion Collider fAcility (NICA) is a multi-purpose experiment designed to study nucleon spin structure in three dimensions. With capabilities to collide polarized protons and deuterons with center of mass energy up to 27 GeV and luminosity up to \(10^{32}\)cm\({}^{-2}\) s\({}^{-1}\) for protons (an order of magnitude less for deuterons), the experiment will allow measurements of cross-sections and spin asymmetries of hadronic processes sensitive to the unpolarized and various polarized (helicity, Sivers, Boer-Mulders) gluon distributions inside the nucleons. Results from the SPD will be complementary to the present high energy spin experiments at the RHIC facility or future experiments like the EIC (at BNL) and AFTER (at LHC) in understanding the spin structure of the basic building blocks of visible matter. Monte Carlo simulation based results presented here demonstrate the impact of the SPD asymmetry measurements on the gluon helicity PDF and gluon Sivers functions. With polarized deuteron collisions, the SPD will be the unique laboratory for probing tensor polarized gluon distributions. In addition, there are also possibilities of colliding other light nuclei like Carbon at reduced collision energy and luminosity at the first stage of the experiment.
\({}^{1}\)_Joint Institute for Nuclear Research, Joliot-Curie 6, Dubna-141980, Moscow Region, Russia._
\({}^{2}\)_Samara National Research University, Moskovskoye Hwy 34, Samara-443086, Samara Region, Russia_
\({}^{\rm a)}\)[email protected]
\({}^{\rm b)}\)[email protected]
\({}^{\rm c)}\)[email protected]
\({}^{\rm d)}\)[email protected]
\({}^{\rm e)}\)[email protected]
**Keywords:** particles, detectors, high energy physics, parton spin, gluon PDF, gluon TMD, Sivers
## 1 Introduction
Over the last few decades, experimental results have often surprised the physics community and opened up new windows to the intricate details of the structure of the fundamental building blocks of Nature. European Muon Collaboration (EMC) results [1] shed light on the importance of the possible gluonic contributions to the nucleon spin. E704 and other results [2, 3] of large single spin asymmetries inspired the community to think of the motions of the quarks and gluons inside the nucleons.
Visible matter made of quarks and gluons is mostly described with the help of Quantum Chromodynamics (QCD), the theory of strong force. Our present understanding of the quarks and gluons
comes from the high energy limit of perturbative QCD (pQCD) [4, 5]. Decades of experimental measurements of inclusive and semi-inclusive Deep Inelastic Scattering (DIS) (at COMPASS, HERMES), electron positron scattering (at HERA), hadron scattering (at RHIC) have so far given us a fairly precise description of quarks [6, 7] inside the nucleons using pQCD as the preferred tool for interpretations. However, the gluonic component (which accounts for \(\sim 99\%\) of all nucleon mass and therefore, all visible matter) is still poorly understood. It is imperative for the physics community to experimentally access the gluons inside the nucleons to be able to consistently describe the baryonic matter and their interactions.
Gluon distributions inside nucleons are harder to access than those of the quarks in semi-inclusive DIS scattering of leptons off hadrons, as gluons do not interact with leptons directly via the strong force. Hadronic scattering at high energies has been, in recent years, the best tool for probing gluon spin distributions inside protons [8]. Understanding of the gluon helicity distributions has changed over the first couple of decades of the twenty-first century as the analyses included more and more experimental data from various sources [9, 10, 11, 12].
A more complete picture of the three dimensional partonic structure has been emerging [13] in the last decade or so with more and more data to access the transverse momentum dependent (TMD) parton distribution functions (PDF). Large transverse asymmetries in hadron production necessitated a closer look at the partonic structure including transverse momentum dependent distribution functions [14] and fragmentation [15].
The SPD [16] is a proposed experiment at the Nuclotron based Collider fAcility (NICA) at the Joint Institute for Nuclear Research (JINR) in Dubna. It is particularly focused at probing the gluons inside protons and deuterons. The SPD will make cross-section and asymmetry measurements of several hadronic processes sensitive to various (unpolarized and polarized) gluon distributions.
The SPD will operate at medium energy ranges (up to 10 GeV in the initial stage and up to 27 GeV in the later stage) that are complementary to the present and future experiments (Figure 1a) with higher center-of-mass energies (i.e. PHENIX, STAR, AFTER [17], LHCspin [18]). As a consequence, measurements at the SPD will probe high momentum fraction \(x\) and low to medium energy scale \(Q^{2}\), providing access to gluon distributions in a kinematic regime (illustrated by Figure 1b) that is complementary to those accessed in other upcoming major spin physics experiments like the Electron Ion Collider (EIC) [19, 20].
Figure 1: (a) Luminosity and centre of mass energy of collision for the SPD and other relevant spin experiments. (b) Kinematic coverage of the SPD and other future spin experiments.
## 2 Materials and Methods
### Physics of Stage I
In the initial stage NICA will provide proton beams up to 5 GeV with collision luminosity up to \(10^{31}cm^{-2}s^{-1}\) for the \(pp\) collisions and up to 4.5 GeV/n (per nucleon) deuteron beams with collision luminosity up to \(10^{30}cm^{-2}s^{-1}\) for the first few years. There are also possibilities of asymmetric collisions like \(pd\) as well as light nuclei (i.e. \(C,Ca\)) collisions.
The SPD will take advantage of the low energies at the initial stage to look for compelling and interesting physics effects in \(pp\), \(dd\) and possibly in the light nuclei collisions. Various physics goals and programs for this initial stage are discussed in detail in the published work [21].
#### 2.1.1 Spin effects in elastic collisions
Measurements of the \(pp\) elastic scattering cross-sections at small angles (\(\theta\sim 3-10^{\circ}\)) will access a kinematic region of momentum transfer \(|t|\sim 0.1-0.8\) GeV\({}^{2}\). Small oscillations in the \(t\)-dependence probe the proton structure involving mesons in the periphery (pion cloud model). The SPD will provide high precision data in this region to test models of the two-pion exchange process in elastic scattering.
Glauber models with Gribov inelastic corrections have been successful in describing elastic \(dd\) scattering data at a few tens of GeV. At the first stage energies of up to \(\sqrt{s}=9\) GeV/n, unpolarized \(dd\) cross-section measurements and subsequent comparisons with calculations will test if the inelastic corrections are relevant for this kinematic regime.
At large angles \(\theta\sim 90^{\circ}\), \(dd\to dd\) processes are sensitive to the six-quark structure of the deuterons. The SPD will make cross-section measurements from \(dd\) elastic collisions at large \(\theta_{CM}\) to search for non-nucleonic degrees of freedom.
#### 2.1.2 Charmonium production
The SPD will measure light and charm meson productions near the production threshold. Of particular interest is the charmonium (\(J/\Psi\)) formation near threshold for \(pp\) and \(dd\) collisions as it will test the isotopic dependence (involvement of protons or neutrons) on the production due to different spin structure of the corresponding matrix elements.
Threshold production of charmonia in ion-ion collisions is also considered as a promising probe of the quark-gluon plasma (QGP).
#### 2.1.3 Strange hypernuclei production
Although there has been no evidence of stable hypernuclei of baryon number \(A=2\), there are measurements [28] of candidates (\({}^{3}_{\Lambda}He,^{3}_{\Lambda}H\)) with baryon number \(A=3\). There have been proposals to look for the neutral hypernucleus \({}^{4}_{\Lambda\Lambda}n\) in \(dd\) collisions at the SPD. Calculations predict a peak in the production at \(\sqrt{s}=5.2\) GeV. A measurement of this hypernucleus with strangeness \(S=-2\) would be the first of its kind.
#### 2.1.4 Other interesting physics at stage I
Measurements during the stage I of the SPD will also test various effects that can be broadly categorized as multi-quark correlations. These include nuclear PDFs involving fluctons or multi-quark degrees of freedom, higher twist contributions of two or three quark correlations in PDFs, multi-parton scattering in hadronic and nuclear collisions and formation of exotic multi-quark resonance states (i.e. tetraquark and pentaquark).
### Physics of Stage II
For stage II, when NICA will reach its full luminosity (\(10^{32}\,cm^{-2}s^{-1}\) for \(pp\) collisions), energy, and polarization capacities, the SPD will focus primarily on measurements of observables from polarized \(pp\) and \(dd\) collisions that are sensitive to the gluon distributions inside nucleons. Detailed discussions of the access to gluon contents from the measurements at the SPD can be found in the article [23]. At peak luminosities, one year of data at the SPD will correspond to integrated luminosities of 1.0 and 0.1 fb\({}^{-1}\) respectively for p-p collisions at \(\sqrt{s}=27\) and 13.5 GeV.
Measurements of asymmetries and correlations from the polarized proton-proton collisions at the SPD will, in particular, be sensitive to gluon helicity, Sivers and Boer-Mulders distributions. Measurements from the polarized deuteron collisions will access gluon transversity and tensor polarized gluon distribution inside deuterons. NICA will be the first facility to provide polarized deuteron beams in such energy range and the SPD will have the unique ability to access quantities that have not been measured before.
Unpolarized cross-section measurements at the SPD will provide data sensitive to the unpolarized gluon distributions (\(g(x)\)). Double-helicity asymmetry measurements (\(A_{LL}\)) at the SPD will probe gluon helicity distribution function(\(\Delta g(x)\)), single transverse spin asymmetries (\(A_{N}\)) will provide access to the gluon Sivers function (\(\Delta_{N}^{g}(x,k_{T})\)) and measurements of the azimuthal correlations of hadron pair production from unpolarized \(pp\) collisions will probe the Boer-Mulders distributions (\(h_{1}^{\perp}(x,k_{T})\)). Double and single vector/tensor asymmetries from polarized \(dd\) collisions will respectively probe the gluon transversity (\(\Delta g_{T}(x)\)) and tensor polarized gluon PDF (\(C_{G}^{T}(x)\)).
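As an illustration of how such asymmetries are formed from data, the following sketch combines spin-sorted event yields into a double helicity asymmetry. The variable names, the assumption of equal luminosities for the four helicity combinations, and the omission of background dilution and relative-luminosity corrections are simplifications for illustration, not the SPD analysis chain.

```python
def double_helicity_asymmetry(n_pp, n_mm, n_pm, n_mp, pol_1, pol_2):
    """Raw double helicity asymmetry A_LL from spin-sorted yields.

    n_pp, n_mm : yields for same-helicity beam combinations (++, --)
    n_pm, n_mp : yields for opposite-helicity combinations (+-, -+)
    pol_1, pol_2 : average beam polarizations, 0 < P <= 1

    Assumes equal integrated luminosity in each helicity combination;
    a real analysis would also apply relative-luminosity corrections.
    """
    same = n_pp + n_mm
    opposite = n_pm + n_mp
    raw = (same - opposite) / (same + opposite)
    return raw / (pol_1 * pol_2)


# Toy usage with made-up yields (not SPD data):
print(double_helicity_asymmetry(10500, 10480, 9900, 9920, pol_1=0.7, pol_2=0.7))
```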
Figure 2: (a) PDFs in red (color online) will be accessed in measurements at the SPD. (b) Expected luminosity, energy and bunch intensity for proton beams at NICA.
### Detectors for Stage I
The SPD detector system [29] will have complete \(4\pi\) coverage in solid angle. The design includes a barrel part and two end-caps. In the barrel part, the SPD will feature a solenoid magnet providing a field up to 1.2 T at the interaction point. The magnetic field provides charge separation of the particle tracks and also helps in determination of the charged particle momentum.
Going outward from the aluminium beam-pipe, the detectors in the barrel part of the SPD at this stage will include :
1. Micromegas tracker that will help charged particle momentum reconstruction.
2. Multi-layer tracker system with PET (metal coated polyethylene terephthalate) straws arranged along Z,U,V (U,V are stereo layers at \(5^{\circ}\) with Z straws along the beam direction) with a spatial resolution \(\sim 150\ \mu m\). Tracker will provide charged particle momentum as well as limited particle identification using energy depositions \(-\frac{dE}{dx}\) in the straw layers with an energy resolution \(\frac{\delta E}{E}=8.5\%\).
3. A range system (RS) just outside the magnet consisting of layers of mini Drift Tubes (MDT) and absorbing material (\(Fe\)). RS will provide muon-to-hadron separation of the charged tracks and hadronic calorimetry.
End-caps of the SPD detector system at stage I will consist of : micromegas, straw tracker, beam-beam counter (BBC) that will provide local polarimetry, luminosity control and collision timing information, range system and zero-degree calorimeter (ZDC) in far forward and backward positions that will provide local polarimetry, luminosity control and event selection criteria for elastic collisions.
### Detectors for Stage II
For the second stage of the operations, due to different requirements of the physics in focus at this stage, some parts will be replaced and new detectors will be included [29].
Figure 3: Schematic of the SPD detector at stage I.
For stage II, the barrel part of the SPD will consist of :
1. An improved silicon vertex detector to replace the Micromegas from stage I. Two options being considered are (1) monolithic active pixel sensor (MAPS) and (2) double silicon strip detector (DSSD). The new component will contribute to tracking, momentum determination and specifically in reconstructing secondary vertices for the decays of short lived particles. MAPS silicon tracker will provide a secondary vertex position resolution of \(40-60\ \mu m\).
2. Straw tracker. Tracking system will provide a momentum resolution \(\frac{\delta p_{T}}{p_{T}}=2\%\) for 1 GeV/c momentum tracks (same resolution in stage I).
3. Time-of-flight (TOF) detector for particle identification with a timing resolution of 50 ps and \(\pi/K\) separation for charged tracks up to 1.5 GeV/c momentum.
4. Electromagnetic calorimeter for the determination of photon energies with an energy resolution \(\frac{\delta E}{E}=\frac{5\%}{\sqrt{E}}\oplus 1\%\) and electron/positron identification.
5. Range system.
The endcaps will also have some new components : silicon vertex detector, straw tracker, BBC, TOF detector, Aerogel detector for extending the \(\pi/K\) separation up to 2.5 GeV/c momentum, electromagnetic calorimeter and ZDC.
### Detector Performance
Figure 4: Schematic of the SPD detector at stage II.
Figure 5 shows the Monte Carlo simulation performance of some of the detectors to be used in key measurements. From the left, Figure 5a shows the two-photon invariant mass spectra using the electromagnetic calorimeter and the pion mass resolution (\(\delta_{m}=9.8\) MeV) that can be achieved. Figure 5b illustrates the particle identification using the time-of-flight detector; pion-kaon separation can be achieved for particle momenta up to 1.5 GeV/c. Figure 5c illustrates the secondary vertex resolution along the beam direction for three possibilities of central tracking detectors, namely micromegas for the first stage and DSSD or MAPS for the second stage. The MAPS-based detector is clearly the best performing option, providing a secondary vertex position resolution of \(\delta_{z}\sim 50\)\(\mu\)m.
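The time-of-flight particle identification illustrated in Figure 5b rests on a simple kinematic relation between momentum, flight path and flight time; the snippet below is a generic sketch of it, with placeholder values for the flight path and track momentum rather than SPD specifications.

```python
import math

C_M_PER_NS = 0.299792458  # speed of light in m/ns


def mass_squared(p_gev, path_m, tof_ns):
    """Mass squared (GeV^2/c^4) from momentum, flight path and time of flight."""
    beta = path_m / (tof_ns * C_M_PER_NS)
    return p_gev ** 2 * (1.0 / beta ** 2 - 1.0)


# Toy example: 1 GeV/c tracks over a 1.5 m flight path (placeholder numbers)
for name, mass in [("pion", 0.1396), ("kaon", 0.4937)]:
    p = 1.0
    beta = p / math.hypot(p, mass)           # true velocity for this mass hypothesis
    t = 1.5 / (beta * C_M_PER_NS)            # corresponding flight time in ns
    print(name, round(mass_squared(p, 1.5, t), 4))  # recovers m^2 for each species
```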
## 3 Results
In order to access various gluon distributions, the SPD will focus on processes involving gluonic interactions. Three major channels of interest at the SPD are :
* **Quark-gluon scattering to prompt photons**. This is a particularly clean channel for theoretical interpretations as it does not involve hadronization.
* **Gluon fusion to charmonia production\((J/\Psi,\Psi(2S),\chi_{c1/c2})\)**. Measurements at the SPD will be primarily via di-muon decay channels of the charmonia.
* **Gluon fusion to open-charm mesons**. \(D\) mesons at the SPD will be detected via hadronic decay channels. This is the highest statistics channel but a challenging measurement due to large amount of combinatorial background.
Figure 5: (a) Mass resolution of pion mass reconstruction from two photons. (b) Mass-squared vs. momentum at the time-of-flight detector. (c) \(D^{0}\) secondary vertex resolution along beam direction.
As illustrated above (Figure 7), at the peak SPD energy of \(\sqrt{s}=27\) GeV, open charm processes are the most abundant among these three. However, the hadronic channels of charmed meson decays are notoriously difficult to measure because of the orders-of-magnitude larger combinatorial background arising from random combinations of hadrons from other hard processes. Charmonia are comparatively easier to measure via di-muon decay channels with good muon-hadron separation (using the Range System at the SPD). Prompt photons, while the rarest among these processes, have the advantage of being the cleanest probe for theoretical interpretations. This channel also requires careful estimation of the background arising from decays of light neutral mesons (\(\pi^{0},\eta\)).
### Prompt Photon Measurements
Prompt photon production at leading order may occur via gluon Compton scattering and quark-antiquark annihilation. However, at SPD energies, the \(q\bar{q}\to g\gamma\) contribution is small. The fragmentation contribution (from scattered (anti-)quarks) to the prompt photon production is also estimated to be small (15-30 %) [23], making prompt photons an excellent tool to probe gluon distributions inside nucleons. Measurements will be made using the electromagnetic calorimeter, and the photons from neutral light meson (i.e. \(\pi^{0},\eta\)) decays are the largest source of background. Untagged photons from \(\pi^{0}\) can be estimated using MC simulations.
Figure 6: Schematics of partonic sub-processes of interest : (a) quark-gluon scattering to prompt photon production (b) gluon fusion to charmonia production (c) gluon fusion to open charm production.
Figure 7: Cross-sections of three channels of interest at SPD kinematics [22].
Figure 8: Prompt photon double helicity asymmetry as function of transverse momentum calculated using NNPDF3.0 unpolarized and DSSV2014 polarized PDF with projected uncertainties from measurements with one year of data at the SPD.
Figure 9: Estimated impact of \(A_{LL}^{\gamma}\) measured at the SPD. Black and red solid lines are respectively the mean of the thousand replicas of the DSSV2014 polarized gluon PDF before and after the re-weighting using the projected SPD measurements. Light blue and grey bands respectively are the spread of replicas indicating the statistical uncertainties.
Double helicity asymmetry of the prompt photons at the SPD is sensitive to the gluon distribution in the high momentum fraction range. Recent works [24] have tested inclusion of new measurements with Monte Carlo re-weighting instead of full extraction of PDFs, creating an efficient technique to estimate the impact of new data points on a global analysis of PDF extraction. A similar study [27] using the NNPDF3.0 unpolarized and DSSV2014 as the polarized PDF set estimates the impact of the \(A_{LL}^{\gamma}\) measurement with the projected statistical uncertainties (Figure 8) for one year of recorded data at the SPD. Figure 9 illustrates that in the high-x region (\(0.2\leq x\leq 0.8\)) SPD measurements can be used to reduce the uncertainties of the gluon helicity distribution (\(\Delta g(x)\)) by a factor of \(\sim 2\).
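The re-weighting technique referred to above can be sketched in a few lines. The weight prescription used here is the simplest exponential form \(w_{k}\propto e^{-\chi^{2}_{k}/2}\); the cited studies may use a different prescription (e.g., the NNPDF one), and all inputs below are placeholders rather than the actual pseudo-data.

```python
import numpy as np


def reweight(replica_predictions, data, sigma):
    """Weights for PDF replicas given new (pseudo-)data points.

    replica_predictions : array (n_replicas, n_points) of A_LL predicted by each replica
    data, sigma         : measured values and their uncertainties, length n_points
    """
    chi2 = np.sum(((replica_predictions - data) / sigma) ** 2, axis=1)
    w = np.exp(-0.5 * (chi2 - chi2.min()))   # subtract the minimum for numerical stability
    return w / w.mean()                      # normalized so that the mean weight is 1


def weighted_mean_and_std(replica_values, weights):
    """Re-weighted central value and spread of any replica-level quantity."""
    mean = np.average(replica_values, weights=weights, axis=0)
    var = np.average((replica_values - mean) ** 2, weights=weights, axis=0)
    return mean, np.sqrt(var)
```

The re-weighted mean and spread obtained this way are what plots such as Figure 9 compare with the original replica ensemble.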
Theoretical estimates of the single transverse spin asymmetries show (Figure 10b) that the asymmetries in the forward \(x_{F}\) region are dominated by the quark-antiquark annihilation process, whereas the gluon dominated process generates asymmetries in the backward \(x_{F}\) region.
### Charmonia Measurements
Charmonia production at the SPD energies (\(10-27\) GeV) is dominated by the gluon-gluon fusion process [23]. Charmonia measurement via di-muon invariant mass spectra using the Range System as muon identifier is a powerful tool at the SPD. Mass resolution of \(\sim 40\) MeV or better is expected for \(J/\Psi\) from di-muon invariant mass spectra.
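Reconstructing charmonia from di-muon pairs amounts to computing the invariant mass of the two candidate tracks; a minimal sketch with placeholder four-momenta (not SPD data) is given below.

```python
import math

MUON_MASS = 0.1057  # GeV/c^2


def invariant_mass(p1, p2, m1=MUON_MASS, m2=MUON_MASS):
    """Invariant mass of two particles given their momentum 3-vectors (GeV/c)."""
    e1 = math.sqrt(m1 ** 2 + sum(c * c for c in p1))
    e2 = math.sqrt(m2 ** 2 + sum(c * c for c in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max((e1 + e2) ** 2 - (px ** 2 + py ** 2 + pz ** 2), 0.0))


# Toy back-to-back muon pair (placeholder values); yields roughly 3.1 GeV,
# i.e. a mass near that of the J/psi.
print(invariant_mass((1.55, 0.0, 0.3), (-1.55, 0.0, 0.3)))
```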
Figure 10: (a) Estimated uncertainties for \(A_{N}^{\gamma}\) as function of \(x_{F}\). (b) Theoretical estimates of \(A_{N}^{\gamma}\) at the SPD. Calculations performed using SIDIS1 [26] parameterization of the Gluon Sivers Function.
About \(\sim 12\) M events with \(J/\Psi\) are expected from one year of data at peak luminosity at the SPD [23]. It will also be possible to perform measurements of rarer charmonia. \(\Psi(2S)\) can be detected via \(\mu^{+}\mu^{-}\) and \(\mu^{+}\mu^{-}\pi^{+}\pi^{-}\) decay channels. About seven hundred thousand events producing \(\Psi(2S)\) are expected in one year of data at the SPD. Moreover, \(\chi_{c1}\) and \(\chi_{c2}\) can also be measured via \(\gamma\mu^{+}\mu^{-}\) channel and about 2.5 M events including both types are expected in one year of data at the peak luminosity.
Alongside the unpolarized cross-section, which can be compared with theoretical estimations to shed light on the poorly understood hadronization models of charmonia, the double helicity asymmetry (\(A_{LL}\)) and single transverse spin asymmetry (\(A_{N}\)) will also be measured. It will be possible to perform high precision \(A_{LL}^{J/\Psi}\) measurements (Figure 12(b)) at the SPD that will improve on the current knowledge of the gluon helicity PDF.
Figure 11: From Monte Carlo simulation studies at the SPD : (a) Di-muon invariant mass spectra for \(J/\Psi\) measurements. (b) Invariant mass spectra for \(\Psi(2S)\) measurements. (c) Invariant mass spectra showing \(\chi_{c1/c2}\) peaks.
Figure 12: (a) Estimated \(A_{LL}^{J/\Psi}\) as function of \(p_{T}\) calculated using unpolarized NNPDF NLO and polarized NNPDFpol1.1 sets. The green band indicates uncertainties due to hadronization parameters (LDME) and the brown band indicates scale uncertainties. (b) Projected statistical uncertainties of \(A_{LL}^{J/\Psi}\) for three different cases of selection cuts on the muon polar angles are shown.
In a recent simulation study, \(A_{LL}^{J/\Psi}\) was calculated using the NNPDF NLO unpolarized and NNPDFpol1.1 polarized sets. Using a technique similar to [24], the resulting calculations of \(A_{LL}^{J/\Psi}\) were used with the projected uncertainties at the SPD for one year of data to re-weight the polarized PDF replicas to estimate the impact of the measurement at the SPD. Figure 13 illustrates the impact of \(J/\Psi\) double helicity asymmetry measurements at the SPD. The expected measurements will significantly reduce uncertainties in the Bjorken-x range of \(0.2\leq x\leq 0.5\).
Figure 13: Impact of \(A_{LL}^{J/\Psi}\) measurements at the SPD. Blue and red lines respectively show the mean of the NNPDFpol1.1 replica sets before and after the re-weighting using the projected uncertainties of SPD measurements. Light blue and light orange bands respectively show the spread of hundred replicas of the PDF before and after the re-weighting.
Transverse single spin asymmetries of \(J/\Psi\) are sensitive to the gluon Sivers distributions. Theoretical estimations [30] of \(A_{N}^{J/\Psi}\) depend very strongly on the choice of parton and hadronization models, and the estimates can differ by an order of magnitude depending on the phenomenological parameterization used, as shown in Figure 14. High precision measurements of \(A_{N}^{J/\Psi}\) can be extremely useful in reducing such model dependence in this kinematic regime.
### Open Charm Measurements
Open charm meson spin asymmetries have been measured in DIS experiments like COMPASS [31] to estimate gluon polarization, but they have not yet been measured in \(pp\) collider experiments. At the SPD, open charm D mesons will be detected through their hadronic decay channels, i.e. \(D^{0}\to\pi^{+}K^{-}\), \(D^{+}\to\pi^{+}\pi^{+}K^{-}\) and their antiparticle counterparts. Figure 7 shows that the open charm production cross-sections are almost two orders of magnitude larger than the charmonium production cross-sections, making them quite abundant at SPD kinematics. However, the abundance of charged pions and kaons from other hard scattering processes makes this a particularly challenging measurement. The combinatorial background from pions and kaons from other processes is more than four orders of magnitude larger than the signal.
Figure 14: Estimated single transverse spin asymmetries of \(J/\Psi\) as function of \(x_{F}\) using generalized parton model (GPM) with two different parameterizations ((a) D’Alesio [25] and (b) SIDIS1 [26]) of the Gluon Sivers Function. Projected statistical uncertainties (for three different selection cuts on muon polar angles) for one year of data at the SPD are displayed.
Theoretical calculations of the transverse single spin asymmetry for inclusive D mesons at the SPD kinematics (Figure 15) using the color gauge invariant generalized parton model (CGI-GPM) show significant expected asymmetries in the forward region (\(x_{F}>0.2\)) whereas for backward \(x_{F}\) the asymmetry is compatible with zero. However, the size of the asymmetry depends strongly on the parameterization used for the parton model.
Figure 16: (a) Invariant mass spectra of random combinations of pions and kaons and those from \(D^{0}\) decays for \(x_{F}\geq 0.2\). (b) Selections based on the vertex detector used to suppress combinatorial background.
Figure 15: (a) Estimated transverse single spin asymmetry of inclusive D meson productions at the SPD using color-gauge-invariant generalized parton model (CGI-GPM). The first two panels (a) and (b) show the dependence on different sets of parameters of the Gluon Sivers Function. (c)
High precision measurements of the secondary vertex using silicon-based central trackers can help reduce the random combinatorial background. Figure 16a shows the relative sizes of the background and signal in the pion-kaon invariant mass spectra intended for \(D^{0}\) decay reconstruction for one year of data at the SPD. Figure 16b illustrates the effect of the vertex detector selections in reducing the background. Monte Carlo simulation based studies are in progress to reduce the background further, improving the figure of merit and making the measurements viable for comparison with theoretical estimates.
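The way a displaced-track selection suppresses the combinatorial background can be sketched as follows; the track attributes, dictionary keys and cut values are illustrative assumptions, not the SPD reconstruction interface or selection.

```python
import math
from itertools import product

PION_MASS, KAON_MASS = 0.1396, 0.4937  # GeV/c^2


def _mass(p1, m1, p2, m2):
    """Invariant mass of two tracks from their 3-momenta (GeV/c) and mass hypotheses."""
    e = math.sqrt(m1 ** 2 + sum(c * c for c in p1)) + math.sqrt(m2 ** 2 + sum(c * c for c in p2))
    psum = [a + b for a, b in zip(p1, p2)]
    return math.sqrt(max(e * e - sum(c * c for c in psum), 0.0))


def d0_candidates(pions, kaons, min_ip_cm=0.005, mass_window=(1.75, 1.98)):
    """Select displaced pi+ K- pairs as D0 candidates.

    Each track is a dict {'p': (px, py, pz) in GeV/c, 'ip': impact parameter
    with respect to the primary vertex in cm}; keys and cuts are placeholders.
    """
    selected = []
    for pi, k in product(pions, kaons):
        # Displaced-track requirement: rejects prompt combinatorial pairs.
        if pi["ip"] < min_ip_cm or k["ip"] < min_ip_cm:
            continue
        m = _mass(pi["p"], PION_MASS, k["p"], KAON_MASS)
        if mass_window[0] < m < mass_window[1]:
            selected.append(m)
    return selected
```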
As can be observed (Figures 15a, 15b), the two sets of parameters used to describe the Gluon Sivers Function (GSF) in the theoretical calculations predict peak asymmetries differing by an order of magnitude [32]. D'Alesio parameters [25] predict asymmetries of the size of 1% whereas SIDIS1 parameters [26] predict asymmetries of \(\sim 17\%\). SPD measurements can be extremely helpful in reducing such parameter dependence with high enough statistical precision. From the recent Monte Carlo studies of neutral D mesons, the projected statistical uncertainties of the transverse single spin asymmetries for one year of data show (Figure 17) that measurements at the SPD will provide enough precision to be able to reduce such strong model dependence of theoretical calculations and provide valuable data points for future extraction of the Gluon Sivers Function.
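The projected statistical precision quoted above follows from simple counting statistics; the sketch below gives the standard estimate \(\delta A_{N}\simeq 1/(P\sqrt{N})\), diluted by the signal purity when background is present. The formula and the numbers in the usage line are illustrative placeholders, not the SPD projection itself.

```python
import math


def asymmetry_precision(n_signal, polarization, background_fraction=0.0):
    """Approximate statistical uncertainty on a single spin asymmetry.

    delta A ~ 1 / (P * sqrt(N_total)), further diluted by the signal purity
    when a background fraction is present (rough counting estimate only).
    """
    n_total = n_signal / max(1.0 - background_fraction, 1e-9)
    purity = n_signal / n_total
    return 1.0 / (polarization * purity * math.sqrt(n_total))


# e.g. 1e5 reconstructed D0 in a bin, 70% beam polarization, 50% background
print(asymmetry_precision(1e5, 0.7, background_fraction=0.5))
```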
### Deuteron Measurements
The SPD will be a unique laboratory to access information about the unpolarized and polarized structure of deuterons as it will have the capacity to collide polarized deuterons over a range of energies.
Comparisons of unpolarized gluon PDFs of deuterons and that of protons (Figure 18a) show steep deviations above \(x>0.6\) indicating non baryonic contributions. High precision cross-section measurements at the SPD can be compared with theoretical calculations to test the predictions and the size of such deviations.
Figure 17: Projected statistical uncertainty of \(D^{0}\) measurements at the SPD for one year of data.
Tensor polarization of quarks in deuterons has formerly been accessed via asymmetry measurements in DIS experiments (at HERMES). However, Figure 18b shows that the DGLAP energy evolution of PDFs suggests that at a higher energy scale (i.e. \(Q^{2}=30\) GeV\({}^{2}\)) a non-zero tensor polarized gluon component is possible. Vector and tensor single spin asymmetry measurements at the SPD can test such predictions from perturbative QCD calculations.
## 4 Discussion
The SPD experiment at the NICA collider facility at JINR is going to be a unique laboratory to provide a large variety of possible measurements from collisions of polarized proton and deuteron beams over a range of energies and luminosities. In the early stage of operations, measurements at the SPD will probe a wide swathe of interesting physics phenomena encompassing spin effects in low energy nucleon collisions, hyperon and hypernuclei formation, threshold production of charmonia, multi-parton scattering and multi-quark correlations. In the later stage of operations, the SPD experiment will focus on its most prominent goal of accessing the gluon contents inside protons and deuterons via measurements of unpolarized cross-sections and various spin asymmetries in the production of different probe particles.
Physics programs at the SPD aim to test various phenomena at low to medium energies and provide high precision data to improve present understanding of nucleon structure in general and various spin structures in particular. Results will test QCD in general and will specifically focus on providing data in a kinematic range not probed well as yet to access the gluon content of the nucleons.
Results of detailed Monte Carlo studies are presented in the current work for all three flagship channels of measurements at the SPD aimed at probing gluon content of the nucleons.
For prompt photons, statistical re-weighting technique of PDFs illustrates the impact of the helicity asymmetry measurements on the \(\Delta g(x)\) for one year of data at the SPD.
For charmonium (\(J/\Psi\)) also, work presented here illustrates the impact of the measurements of double helicity asymmetries. For both measurements, results presented here demonstrate that the SPD will have a significant impact in improving our knowledge of the helicity PDF in the large Bjorken-\(x\) region as expected from the design and proposal.
Figure 18: (a) Comparisons of the gluon contents of deuterons and protons inside deuterons ([33]). (b) Tensor polarized gluon PDF from DGLAP energy evolution of quark/anti-quark PDFs ([23]).
For the open-charm channel, the work presented in this article shows the significant improvement over the early designs brought by the inclusion of the MAPS silicon detector as the central tracking device. Results presented here illustrate the effect of the high precision secondary vertex reconstruction in reducing the combinatorial background, which is orders of magnitude larger than the signal. The projected statistical uncertainties of the transverse single spin asymmetries have been shown to be small enough to discriminate between the strongly model-dependent descriptions of the Gluon Sivers Function.
Fixed target Deep Inelastic Scattering experiments have estimated [34, 35] separate Sivers and Collins contributions to the transverse single spin asymmetries, but \(pp\) collider experimental results have so far lacked the precision to separate the two effects. For certain probes (i.e. meson production), the SPD will allow investigations of the contributions of the Sivers and Collins effects in the single transverse spin asymmetries. A recent analysis [36] of TMD asymmetries measured in various SIDIS experiments (COMPASS, HERMES) and collider experiments (BRAHMS, STAR) has attempted for the first time to extract the Quark Sivers Function. Works [38, 37] studying the gluon TMD distributions and their contribution to the transverse spin asymmetry measurements of produced hadrons point out the lack of experimental data in this budding field of interest. Attempts to extract the gluon Sivers distribution will require data from different kinematic ranges. At present RHIC is the only proton-proton collider capable of colliding polarized beams. In the future, the SPD will be able to provide some of the much needed data for such phenomenological global analyses aimed at extracting gluon TMD distributions.
## 5 Conclusion
The SPD is an international collaboration involving 32 institutes from 14 countries and boasts about three hundred members so far. The collaboration is still growing and is open to participation of experts from different parts of the world.
The conceptual design report (CDR) [16] of the experiment was published in early 2021 and was reviewed by the JINR Program Advisory Committee (PAC) in January 2022. Favourable reports from the PAC made it possible for the collaboration to move to the next step of producing a detailed technical design report (TDR) [29].
A tentative schedule expects building of the first stage of the detector to commence in 2026 and possibly take first data sometime around 2028. After a couple of years of data at lower energy and luminosity for the first stage of physics goals, the SPD is scheduled to move to the next stage of upgrades with a focus towards measurements accessing gluon components inside nucleons and light nuclei.
**Acknowledgements**
We would like to thank Alexander Korzenev from the Joint Institute for Nuclear Research for his contributions to the hardware designs of the SPD detector and Alexey Zhemchugov from the Joint Institute for Nuclear Research and Vladimir Andreev from the Lebedev Physical Institute RAS for their valuable contributions in the software infrastructure and simulated data reconstruction.
|
2303.01903 | Prophet: Prompting Large Language Models with Complementary Answer
Heuristics for Knowledge-based Visual Question Answering | Knowledge-based visual question answering (VQA) requires external knowledge
beyond the image to answer the question. Early studies retrieve required
knowledge from explicit knowledge bases (KBs), which often introduces
irrelevant information to the question, hence restricting the performance of
their models. Recent works have resorted to using a powerful large language
model (LLM) as an implicit knowledge engine to acquire the necessary knowledge
for answering. Despite the encouraging results achieved by these methods, we
argue that they have not fully activated the capacity of the blind LLM as the
provided textual input is insufficient to depict the required visual
information to answer the question. In this paper, we present Prophet -- a
conceptually simple, flexible, and general framework designed to prompt LLM
with answer heuristics for knowledge-based VQA. Specifically, we first train a
vanilla VQA model on a specific knowledge-based VQA dataset without external
knowledge. After that, we extract two types of complementary answer heuristics
from the VQA model: answer candidates and answer-aware examples. Finally, the
two types of answer heuristics are jointly encoded into a formatted prompt to
facilitate the LLM's understanding of both the image and question, thus
generating a more accurate answer. By incorporating the state-of-the-art LLM
GPT-3, Prophet significantly outperforms existing state-of-the-art methods on
four challenging knowledge-based VQA datasets. To demonstrate the generality of
our approach, we instantiate Prophet with the combinations of different VQA
models (i.e., both discriminative and generative ones) and different LLMs
(i.e., both commercial and open-source ones). | Zhou Yu, Xuecheng Ouyang, Zhenwei Shao, Meng Wang, Jun Yu | 2023-03-03T13:05:15Z | http://arxiv.org/abs/2303.01903v3 | # Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering
###### Abstract
Knowledge-based visual question answering (VQA) requires external knowledge beyond the image to answer the question. Early studies retrieve required knowledge from explicit knowledge bases (KBs), which often introduces irrelevant information to the question, hence restricting the performance of their models. Recent works have sought to use a large language model (i.e., GPT-3 [3]) as an implicit knowledge engine to acquire the necessary knowledge for answering. Despite the encouraging results achieved by these methods, we argue that they have not fully activated the capacity of GPT-3 as the provided input information is insufficient. In this paper, we present Prophet--a conceptually simple framework designed to **prompt** GPT-3 with answer **heuristics** for knowledge-based VQA. Specifically, we first train a vanilla VQA model on a specific knowledge-based VQA dataset without external knowledge. After that, we extract two types of complementary answer heuristics from the model: answer candidates and answer-aware examples. Finally, the two types of answer heuristics are encoded into the prompts to enable GPT-3 to better comprehend the task thus enhancing its capacity. Prophet significantly outperforms all existing state-of-the-art methods on two challenging knowledge-based VQA datasets, OK-VQA and A-OKVQA, delivering 61.1% and 55.7% accuracies on their testing sets, respectively.
## 1 Introduction
Recent advances in deep learning have enabled substantial progress in visual question answering (VQA) which requires a machine to answer free-form questions by reasoning about given images. Benefiting from large-scale vision-language pretraining, the state-of-the-art methods have even surpassed human level on several representative benchmarks [1, 43, 51]. Despite the success of these methods, their reasoning abilities are far from satisfactory, especially when _external knowledge_ is required to answer the questions. In this situation, the task of knowledge-based VQA is introduced to validate models' abilities to leverage external knowledge. Early knowledge-based VQA benchmarks additionally provide structured knowledge bases (KBs) and annotate required knowledge facts for all the questions [40, 41]. More recently, benchmarks emphasizing on _open-domain_ knowledge have been established [29, 32], which means KBs are no longer provided and any external knowledge resource can be used for answering. We focus on the task with open-domain knowledge in this paper.
A straightforward solution for knowledge-based VQA is to retrieve knowledge entries from explicit KBs, _e.g._,
Figure 1: Conceptual comparisons of three knowledge-based VQA frameworks using a frozen GPT-3 model [3]. While PICa [46], KAT [11], and REVIVE [22] directly feed the caption (C) and question (Q) into GPT-3 as the prompt, we argue that the information they provide for GPT-3 is insufficient thus cannot fully activate GPT-3’s potential. In contrast, our Prophet learns a vanilla VQA model without external knowledge to produce _answer heuristics_, which endows GPT-3 with richer and more task-specific information for answer prediction.
Wikipedia and ConceptNet [23]. Then a KB-augmented VQA model performs joint reasoning over the retrieved knowledge, image, and question to predict the answer [7, 8, 28, 45, 54]. However, the performance of these retrieval-based approaches is limited for two reasons: (i) the required knowledge may not be successfully retrieved from the KBs; and (ii) even if the required knowledge is retrieved, plenty of irrelevant knowledge is inevitably introduced, which hampers the learning of VQA models.
Apart from those studies using explicit KBs, another line of research resorts to pretrained large language models, _e.g_., GPT-3 [3], as implicit knowledge engines for knowledge acquisition. A pioneering work by PICa employs the frozen GPT-3 model to answer the question with formatted prompt as its input [46]. Given a testing image-question pair, PICa first translates the image into a caption using an off-the-shelf captioning model. The question, caption, and a few in-context examples are then integrated into a textual prompt that can induce GPT-3 to predict the answer directly. Thanks to the powerful knowledge reasoning ability of GPT-3, PICa achieves significant performance improvements compared to those retrieval-based methods using explicit KBs. Inspired by PICa, KAT [11] and REVIVE [22] learn KB-augmented VQA models to exploit both the implicit knowledge from GPT-3 and explicit knowledge from KBs for answer prediction. The synergy of the two knowledge resources brings further improvements to their models. Despite the promising results achieved by these methods, they have not fully activated GPT-3 due to the following limitations:
1. The generated captions cannot cover all the necessary information in the image. Consider the example in Figure 1: the caption "a group of people walk in a city square" contributes nothing to answering the question "what fruit comes from these trees". In this situation, GPT-3 has to make an aimless and biased guess to answer the question.
2. GPT-3 employs a few-shot learning paradigm that requires a few in-context examples to adapt to new tasks. Therefore, the choice of these examples is critical to model performance. As reported in [46], all its example selection strategies achieve far inferior performance to the oracle strategy that uses the similarity of ground-truth answers.
We ask: _Is it possible to endow GPT-3 with some **heuristics** to enhance its capacity for knowledge-based VQA?_
In this paper, we present **Prophet**--a conceptually simple framework designed to **prompt** GPT-3 with answer **he**uristics for knowledge-based VQA. By answer heuristics, we mean some promising answers that are presented in a proper manner in the prompt. Specifically, we introduce two types of answer heuristics, namely _answer candidates_ and _answer-aware examples_, to overcome the limitations in (i) and (ii), respectively. Given a testing input consisting of an image and a question, the answer candidates refer to a list of promising answers to the testing input, where each answer is associated with a confidence score. The answer-aware examples refer to a list of in-context examples, where each example has a similar answer to the testing input. Interestingly, these two types of answer heuristics can be simultaneously obtained from any vanilla VQA model trained on a specific knowledge-based VQA dataset. A schematic of Prophet is illustrated at the bottom of Figure 1.
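To make these two heuristics concrete before the formal description in Section 3, the sketch below shows one plausible way to read them off a trained VQA model: top-scoring answers from its softmax output as candidates, and training examples whose predicted answers match the test prediction as answer-aware examples. This is our illustrative simplification (the field names and the answer-matching criterion are assumptions), not the authors' implementation.

```python
import heapq


def answer_candidates(answer_probs, vocab, k=10):
    """Top-k answers with confidence scores from a VQA model's softmax output."""
    top = heapq.nlargest(k, range(len(answer_probs)), key=lambda i: answer_probs[i])
    return [(vocab[i], answer_probs[i]) for i in top]


def answer_aware_examples(test_prediction, train_set, n=16):
    """Pick training examples whose predicted answer matches the test prediction.

    train_set: list of dicts {'question': ..., 'caption': ..., 'answer': ...,
    'prediction': ...}; the field names are illustrative placeholders.
    """
    matched = [ex for ex in train_set if ex["prediction"] == test_prediction]
    return (matched + train_set)[:n]   # pad with arbitrary examples if too few
```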
Without bells and whistles, Prophet surpasses all previous state-of-the-art single-model results on the challenging OK-VQA and A-OKVQA datasets [29, 32], including the heavily-engineered Flamingo-80B model trained on 1.8B image-text pairs [1]. Moreover, Prophet is friendly to most researchers, as our results can be reproduced using a single GPU and an affordable number of GPT-3 invocations.
## 2 Related Work
**Visual Question Answering (VQA).** VQA has been of growing interest over the last few years. Recent studies in VQA research can be roughly divided into the following categories: better visual features [2, 15, 33, 52], more powerful model architectures [13, 17, 48, 49, 50], and more effective learning paradigms [53, 5, 19, 4, 25, 21, 35]. Most current state-of-the-art VQA methods employ the Transformer architecture [37]. By incorporating vision-language pretraining on large-scale datasets, they have approached or even surpassed human-level performance on several representative benchmarks [1, 42, 43, 47, 51]. Besides these studies on general-purpose VQA, there is also a growing trend towards exploring more granular VQA tasks with specific reasoning skills, _e.g_., neural-symbolic reasoning [16, 14] and knowledge utilization [29, 30, 40].
**Knowledge-based VQA.** The core of this task lies in knowledge acquisition and integration. Early explorations parse the inputs into structured queries and retrieve supporting knowledge from fixed knowledge bases (KBs) to obtain the answers [41, 40]. As the provided knowledge resources are not sufficient to represent general knowledge, subsequent research mainly focuses on acquiring explicit knowledge from multiple open-domain knowledge resources, _e.g_., ConceptNet [23], Wikipedia [38], and Google Images [45]. This retrieved knowledge is integrated with the image-question pair for answer prediction [8, 45, 27]. Motivated by the promising capacities of large language models (_e.g_., GPT-3 [3]) in knowledge reasoning, recent state-of-the-art approaches regard GPT-3 as an implicit knowledge engine. They either utilize it to get answer prediction directly [46] or to extract answer candidates with evidence to improve
answer prediction [11, 22]. Similar to [46], our Prophet uses GPT-3 to predict answers directly. We believe the few-shot learning capability of GPT-3 has not been fully activated and this motivates us to prompt GPT-3 with answer heuristics.
**In-context learning.** Unlike the _pretrain-then-finetune_ paradigm for language models like BERT [6], GPT-3 introduces a novel in-context few-shot learning paradigm. To adapt to a new task, GPT-3 only needs to concatenate a few examples of the task with the input as the _prompt_ at inference time and requires no parameter updates. This appealing property has inspired research on training multimodal few-shot learners [1, 36]. Empirical studies show that a huge model (_e.g._, 80B parameters in Flamingo [1]) is required for effective few-shot learning, which makes reproducing their results unaffordable for most researchers.
## 3 The Prophet Framework
Our Prophet is a conceptually simple two-stage framework. In the answer heuristics generation stage, a vanilla VQA model is learned to generate two types of answer heuristics, _i.e_., answer candidates and answer-aware examples (detailed in §3.2). In the heuristics-enhanced prompting stage, the answer heuristics, question, and caption are integrated into a formatted prompt to instruct GPT-3 to predict an answer (detailed in §3.3). An overview of the Prophet framework is depicted in Figure 2.
### Preliminaries
Before presenting the Prophet, we briefly introduce the in-context learning paradigm developed by GPT-3 and its adaptation to knowledge-based VQA by PICa [46].
GPT-3 is an autoregressive language model pretrained on a tremendous dataset. During inference, in-context few-shot learning formulates a new downstream task as a text sequence generation task on the frozen GPT-3 model. Given a testing input \(\mathbf{x}\), its target \(\mathbf{y}\) is predicted conditioned on a formatted prompt \(\mathbf{p}(\mathbf{h},\mathcal{E},\mathbf{x})\), where \(\mathbf{h}\) refers to a prompt head, _aka_ instruction, that describes the task, \(\mathcal{E}=\{\mathbf{e}_{1},\mathbf{e}_{2},...,\mathbf{e}_{n}\}\) corresponds to \(n\) in-context examples. Denoting the target \(\mathbf{y}=(y^{1},y^{2},...,y^{L})\) as a text sequence of \(L\) tokens, at each decoding step \(l\), we have:
\[y^{l}=\underset{\hat{y}^{l}}{\text{argmax}}\,p_{\text{GPT-3}}(\hat{y}^{l}|\bm {p},y^{<l}) \tag{1}\]
where each in-context example \(\mathbf{e}_{i}=(\mathbf{x}_{i},\mathbf{y}_{i})\) contains an input-target pair of the task, which is constructed manually or sampled from the training set.
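To make the decoding rule in Eq. (1) concrete, the sketch below shows greedy (argmax) generation conditioned on a fixed prompt. It is only an illustration: `next_token_logprobs` is a hypothetical interface to the frozen language model, not an actual GPT-3 API call.

```python
def greedy_decode(prompt_tokens, next_token_logprobs, eos_id, max_len=16):
    """Greedy decoding of the target y conditioned on the prompt p (cf. Eq. 1).

    `next_token_logprobs(tokens)` is a hypothetical callable that returns the
    frozen LM's log-probabilities over the vocabulary for the next token.
    """
    context = list(prompt_tokens)   # the formatted prompt p(h, E, x)
    output = []
    for _ in range(max_len):
        logprobs = next_token_logprobs(context)                      # p(y^l | p, y^{<l})
        token = max(range(len(logprobs)), key=logprobs.__getitem__)  # argmax over tokens
        if token == eos_id:
            break
        output.append(token)
        context.append(token)       # condition the next step on y^{<l}
    return output
```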
To adapt GPT-3 to address the knowledge-based VQA task, the key is to design the appropriate prompts. Given a question \(q\) and an image \(v\) as inputs, the VQA task aims to predict a target answer \(a\). Since GPT-3 does not understand images intrinsically, the image needs to be translated into a caption \(c\) using an off-the-shelf captioning model. PICa formulates the testing input \(\mathbf{x}\) as the following template:
Context: \(c\)\(\backslash\)n Question: \(q\)\(\backslash\)n Answer:
where the variables marked in blue will be substituted by specific testing inputs. \(\backslash\)n stands for a carriage return in the template. Accordingly, each in-context example \(\mathbf{e_{i}}\) is formulated into a similar template as follows:
Context: \(c_{i}\)\(\backslash\)n Question: \(q_{i}\)\(\backslash\)n Answer: \(a_{i}\)
where \(c_{i}\), \(q_{i}\), and \(a_{i}\) refer to an image-question-answer triplet collected from the training set. The complete prompt
Figure 2: **Our Prophet framework** has two stages: answer heuristics generation and heuristics-enhanced prompting. In the answer heuristics generation stage, a vanilla VQA model trained on the knowledge-based VQA dataset is employed to generate two types of answer heuristics, _i.e._, answer candidates and answer-aware examples. In the heuristics-enhanced prompting stage, the answer heuristics, question, and caption are integrated into a formatted prompt to instruct GPT-3 to predict an answer. As shown in the example, both answer heuristics contribute to the answer of “helium”.
of PICa consists of a fixed prompt head, a few in-context examples, and a testing input. This prompt is fed into GPT-3 for answer prediction.
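For illustration, a minimal sketch of how such a PICa-style prompt can be assembled from the templates above; the function names and the placeholder prompt head below are our own and are not taken from PICa's released code.

```python
def format_block(caption, question, answer=None):
    """One template block; the answer slot stays blank for the testing input."""
    block = f"Context: {caption}\nQuestion: {question}\nAnswer:"
    return block + (f" {answer}" if answer is not None else "")

def build_pica_prompt(prompt_head, examples, test_caption, test_question):
    """prompt = fixed head + n in-context examples + testing input."""
    blocks = [prompt_head]
    blocks += [format_block(c, q, a) for (c, q, a) in examples]
    blocks.append(format_block(test_caption, test_question))
    return "\n\n".join(blocks)

# Usage sketch with a placeholder head and a single made-up example:
prompt = build_pica_prompt(
    "Please answer the question according to the context.",
    [("a dog on a sofa", "what animal is this", "dog")],
    "a group of people walk in a city square",
    "what fruit comes from these trees",
)
```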
Our Prophet inherits the pipeline of PICa. In addition, we introduce answer heuristics into the prompt structure to better activate the capacity of GPT-3, which leads to more accurate answers.
### Stage-1. Answer Heuristics Generation
We introduce two types of answer heuristics: answer candidates and answer-aware examples. Given a testing input consisting of an image and a question, the answer candidates refer to a list of promising answers to the testing input, where each answer is associated with a confidence score. The answer-aware examples refer to a list of in-context examples, where each example has similar answers to the testing input.
Interestingly, these two types of answer heuristics can be obtained simultaneously from any vanilla VQA model trained on the knowledge-based VQA task.
Denote a VQA dataset as \(\mathcal{D}=\{(v_{i},q_{i},a_{i})\}_{i=1}^{M}\), where \(v_{i},q_{i},a_{i}\) refer to the image, question, and answer, respectively. The most frequent answers in the training set form an answer vocabulary \(\mathcal{W}=\{w_{j}\}_{j=1}^{S}\). A vanilla VQA model \(\mathcal{M}\) is learned from \(\mathcal{D}\) to perform an \(S\)-way classification over the answers. Generally, the VQA model can be separated into two submodels, _i.e._, a backbone \(\mathcal{M}_{b}\) and a classification head \(\mathcal{M}_{h}\). The backbone \(\mathcal{M}_{b}\) acts as an encoder to fuse multimodal inputs \(v\) and \(q\) and obtain a fused feature \(z\):
\[z=\mathcal{M}_{b}(v,q) \tag{2}\]
The classification head \(\mathcal{M}_{h}\) simply adopts a linear layer followed by a sigmoid function to project the fused feature \(z\) into a prediction vector \(y\in\mathbb{R}^{S}\) over the answer vocabulary:
\[y=\mathcal{M}_{h}(z) \tag{3}\]
where \(y_{[i]}\) denotes the \(i\)-th element of \(y\), representing the confidence score for answer \(w_{i}\). Based on the above definitions, we explain how to generate the two types of answer heuristics below. Note that although the learned VQA model \(\mathcal{M}\) does not incorporate any external knowledge, it can be used for knowledge-based VQA when trained properly. We regard it as a reference model and compare its performance to Prophet in the experiments to show the effectiveness of GPT-3 for knowledge-based VQA.
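A minimal PyTorch-style sketch of this two-part decomposition is given below; the real MCAN backbone is far more involved, and the module here only fixes the interface implied by Eqs. (2)-(3).

```python
import torch
import torch.nn as nn

class VanillaVQAModel(nn.Module):
    """Backbone M_b fuses (v, q) into z; head M_h maps z to answer scores y."""
    def __init__(self, backbone, feat_dim, num_answers):
        super().__init__()
        self.backbone = backbone                  # e.g. an MCAN-style encoder
        self.head = nn.Linear(feat_dim, num_answers)

    def forward(self, v, q):
        z = self.backbone(v, q)                   # Eq. (2): fused multimodal feature
        y = torch.sigmoid(self.head(z))           # Eq. (3): per-answer confidence scores
        return z, y
```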
**Answer candidates.** Given a testing input \((v,q)\), we obtain its prediction vector \(y\) from Eq.(3). After that, we select the top-\(K\) answers with the highest scores:
\[\mathcal{I}_{\text{AC}}=\underset{j\in\{1,2,\dots,S\}}{\text{argTopK}}\;y_{[j]} \tag{4}\]
where \(\mathcal{I}_{\text{AC}}\) denotes an index set of the top-\(K\) answer candidates. The answer candidates \(\mathcal{C}\) are defined as follows:
\[\mathcal{C}=\{(w_{j},y_{[j]})\;|\;j\in\mathcal{I}_{\text{AC}}\} \tag{5}\]
where \(w_{j}\) and \(y_{[j]}\) are an answer candidate and its confidence score, respectively. To make the formats of the in-context examples and testing input consistent, for each example \(\mathbf{e_{i}}\) we also calculate and provide a set of answer candidates \(\mathcal{C}_{i}\).
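A short sketch of the candidate extraction in Eqs. (4)-(5), assuming the prediction vector `y` is a 1-D tensor over the answer vocabulary:

```python
import torch

def answer_candidates(y, vocab, k=10):
    """Return the top-K (answer, confidence) pairs from a prediction vector y."""
    scores, indices = torch.topk(y, k)            # argTopK over the answer vocabulary
    return [(vocab[j], float(s)) for j, s in zip(indices.tolist(), scores.tolist())]
```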
**Answer-aware examples.** Several previous studies have shown that the choice of in-context examples is crucial for GPT-3's few-shot learning performance [24, 46]. Their results motivate us to devise an _answer-aware_ example selection strategy.
Given a testing input \((v,q)\) and any training input \((v_{i},q_{i})\), we can obtain their corresponding fused features \(z\) and \(z_{i}\) from Eq.(2) using the trained model. Since the fused features are linearly projected for answer prediction, we conjecture that these fused features lie in a _latent answer space_ that contains rich semantics of the answers to the given image-question pairs. If \(z\) and \(z_{i}\) are close in the latent space, they are more likely to share similar answers and image-question inputs.
We calculate the cosine similarity of the fused feature between the testing input and each training input, then select top-\(N\) nearest neighbors in the latent space as the answer-aware examples:
\[\mathcal{I}_{\text{AE}}=\underset{i\in\{1,2,\dots,M\}}{\text{argTopN}}\; \frac{z^{T}z_{i}}{\|z\|_{\text{2}}\|z_{i}\|_{\text{2}}} \tag{6}\]
where \(\mathcal{I}_{\text{AE}}\) is an index set of the top-\(N\) similar samples in \(\mathcal{D}\). The answer-aware examples \(\mathcal{E}\) are defined as follows:
\[\mathcal{E}=\{(v_{i},q_{i},a_{i})\;|\;i\in\mathcal{I}_{\text{AE}}\} \tag{7}\]
Note that the fused features of the training inputs can be computed and stored beforehand, allowing efficient answer-aware example selection.
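The selection in Eqs. (6)-(7) reduces to a nearest-neighbour search in the latent answer space. A sketch, assuming the training-set features have been precomputed as described:

```python
import torch
import torch.nn.functional as F

def answer_aware_example_indices(z_test, z_train, n=16):
    """Top-N training indices by cosine similarity of fused features (Eq. 6).

    z_test : (d,) fused feature of the testing input.
    z_train: (M, d) fused features of the training set, precomputed offline.
    """
    sims = F.cosine_similarity(z_test.unsqueeze(0), z_train, dim=1)   # (M,) similarities
    return torch.topk(sims, n).indices.tolist()                        # argTopN
```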
### Stage-2. Heuristics-enhanced Prompting
In this stage, we use the obtained answer heuristics, _i.e._, answer candidates \(\mathcal{C}\) and answer-aware examples \(\mathcal{E}\), to obtain a heuristics-enhanced prompt that facilitates the few-shot learning capacity of GPT-3 for knowledge-based VQA.
A prompt consists of a prompt head, a set of in-context examples, and a testing input. The prompt head describes the VQA task in natural language. We adopt the prompt head designed in PICa and supplement it with a new description of the answer candidates. Although we encourage GPT-3 to generate answers according to the answer candidates, we also allow it to explore broadly and generate answers beyond the candidates. The complete format of our prompt head is shown in the yellow box of Figure 2.
Our in-context examples are derived from the obtained \(N\) answer-aware examples \(\mathcal{E}=\{\mathbf{e}_{1},\mathbf{e}_{2},...,\mathbf{e}_{N}\}\). Based on PICa's template in §3.1, for each example \(\mathbf{e}_{i}\) we introduce its answer candidates \(\mathcal{C}_{i}\) by adding _one_ extra line as follows:
\begin{tabular}{|l|} \hline Context: \(c_{i}\)\(\backslash\)n Question: \(q_{i}\)\(\backslash\)n \\ Candidates: \(w_{j_{1}}\)(\(y_{[j_{1}]}\)), \(w_{j_{2}}\)(\(y_{[j_{2}]}\)),...,\(w_{j_{K}}\)(\(y_{[j_{K}]}\)) \(\backslash\)n \\ Answer: \(a_{i}\) \\ \hline \end{tabular}
where \(j_{1},j_{2},\cdots,j_{K}\) correspond to the actual indices of the elements in \(\mathcal{C}_{i}\). Each answer candidate \(w_{j_{k}}\) is paired with its confidence score \(y_{[j_{k}]}\) in brackets. The confidence scores additionally convey the reliability of the corresponding answer candidates, which helps GPT-3 focus more on the promising candidates and be more tolerant of the less relevant ones. For the testing input, the template is similar to that for the in-context examples, except that the answer slot is left blank for GPT-3 to fill in.
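A sketch of how one such heuristics-enhanced block can be rendered as text; the exact spacing and score formatting below are our own choices, not necessarily those used in the released prompts.

```python
def format_prophet_block(caption, question, candidates, answer=None):
    """One in-context example (or the testing input when answer=None).

    `candidates` is a list of (answer, confidence) pairs from stage-1.
    """
    cand_str = ", ".join(f"{w}({s:.2f})" for w, s in candidates)
    block = (f"Context: {caption}\n"
             f"Question: {question}\n"
             f"Candidates: {cand_str}\n"
             f"Answer:")
    return block + (f" {answer}" if answer is not None else "")
```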
To better exploit the available examples, we use the multi-query ensemble strategy [46]. Specifically, we increase the number of answer-aware examples to \(N\times T\) to obtain \(T\) parallel prompts, where each prompt still contains \(N\) examples. By prompting GPT-3 \(T\) times, we obtain \(T\) answer predictions. Majority voting is then performed over the \(T\) predictions to determine the final answer. The effects of different \(N\) and \(T\) will be verified in the experiments.
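The ensemble step amounts to querying GPT-3 once per prompt and taking a majority vote, as sketched below; `query_gpt3` stands in for a completion call with temperature 0 and is not a real API signature.

```python
from collections import Counter

def multi_query_answer(prompts, query_gpt3):
    """Majority vote over T predictions, one per heuristics-enhanced prompt."""
    predictions = [query_gpt3(p).strip().lower() for p in prompts]
    return Counter(predictions).most_common(1)[0][0]
```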
## 4 Experiments
We evaluate the performance of Prophet on two prevalent knowledge-based VQA datasets: OK-VQA [29] and A-OKVQA [32]. We conduct comprehensive ablation experiments to explore the effectiveness of Prophet. By taking the ablation results into account, we perform thorough comparisons of Prophet and state-of-the-art methods.
### Datasets
**OK-VQA** is a commonly used knowledge-based VQA dataset [29]. The dataset contains 9K and 5K image-question pairs for training and testing, respectively. All questions are manually filtered to ensure that outside knowledge is required to answer the questions. Each data sample is annotated with ten open-ended answers. The accuracy computed by the soft scores is used as the evaluation metric [10]. We use the 1.1 version of OK-VQA in the experiments.
**A-OKVQA** is currently the largest knowledge-based VQA dataset [32]. The dataset is split into three subsets: 17K training, 1K validation, and 7K testing. Each question is annotated with ten open-ended answers for direct answer (DA) evaluation. In addition, it provides a multiple choice (MC) evaluation to ask models to choose the correct answer from four choices.
### Implementation Details
We use the MCAN-large [49] as our default VQA model to generate answer heuristics. To improve the model capability, we modify the original MCAN model by: (i) replacing the original bottom-up-attention region-based features with the grid-based features extracted from CLIP's visual encoder with a RN50\(\times\)64 backbone [31]; and (ii) replacing the original LSTM network with a pretrained BERT-large model [6].
Similar to [28], we apply the transfer learning paradigm to further enhance the model capability. The model is first pretrained on the VQAv2 dataset [10] and Visual Genome dataset [18]. To prevent data contamination, we remove from the pretraining dataset those samples whose images appear in the testing split of OK-VQA. After that, the pretrained model is further finetuned on the training split of OK-VQA to obtain our final VQA model. Note that the answer vocabulary of the pretrained model (with 3,129 answers) is quite different from the vocabulary of OK-VQA. To bridge this gap, we merge the answer vocabulary of OK-VQA1 with the existing vocabulary, resulting in an expanded answer vocabulary with 4,477 answers for model finetuning. This model is trained on a _single_ Nvidia RTX 3090 GPU, which is affordable for most researchers.
Footnote 1: Similar to [2], we collect answers that appear more than eight times in the training set of OK-VQA, resulting in 2,794 answers.
To show the improvements of the above strategies over the original MCAN model, we report the accuracies on the testing set of OK-VQA as follows:
\begin{tabular}{c c c} from scratch, original model [49] & from scratch, improved model & transfer learning, improved model \\ \hline 31.5 & 35.6 & 53.0 \\ \end{tabular}

More details are provided in the supplementary material.
During the prompting stage, we follow PICa in using OSCAR+ as the captioning model [52]. Unless otherwise noted, we set the number of answer candidates \(K\)=10, the number of in-context examples \(N\)=16, and the number of queries \(T\)=5 as our default settings. The version of GPT-3 used in our experiments is text-davinci-002. The sampling temperature is set to 0.
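For convenience, the default settings stated above can be summarised in a single, purely illustrative configuration dictionary:

```python
# Default Prophet settings for OK-VQA, as described in this section (illustrative only).
PROPHET_DEFAULTS = {
    "vqa_model": "MCAN-large (CLIP RN50x64 grid features, BERT-large text encoder)",
    "captioning_model": "OSCAR+",
    "num_answer_candidates_K": 10,
    "num_in_context_examples_N": 16,
    "num_queries_T": 5,
    "gpt3_engine": "text-davinci-002",
    "sampling_temperature": 0,
}
```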
The settings and strategies for OK-VQA can be directly transferred to A-OKVQA to address its DA task. For the MC task, we follow the strategy in [32] to project the predicted answer to the nearest answer choice. Moreover, we design a Prophet variant for the MC task. It uses a slightly different prompt by adding the multiple choices to in-context examples and testing input, and instructs GPT-3 to _choose_ the correct one from four choices.
### Ablation Studies
We conduct ablation experiments for Prophet on OK-VQA using the default settings above. Results shown in
Table 1 and Figure 3 are discussed in detail below.
**Prompting _vs._ retrieval.** Prophet uses a prompting-based paradigm to predict the answer based on a set of promising answer candidates. In contrast, a previous work, MAVEx [45], exploits answer candidates but adopts a retrieval-based paradigm that searches knowledge from external KBs to determine the answer. As both Prophet and MAVEx train a VQA model to generate answer candidates (stage-1), we can directly compare the two paradigms (stage-2). In Table 1a, we show the performance of the two paradigms in terms of stage-1 accuracy and final accuracy, respectively.
For a fair comparison, we re-implement the VQA model used in MAVEx, _i.e._, ViLBERT [25], to generate answer heuristics for our Prophet. From the results, we can see that based on the same VQA model, our Prophet outperforms MAVEx by a large margin (44.97% _vs._ 40.28%), showing the superiority of our prompting-based paradigm over MAVEx's retrieval-based paradigm in external knowledge acquisition and integration.
**Capability of VQA models.** In Table 1b we study how VQA models of different capabilities impact the performance of Prophet. To better control the model capability, we use the same MCAN model trained with four visual features: region-based Bottom-Up [2] and VinVL [52] features and grid-based CLIP features from two backbones (ViT-L/14 and RN50\(\times\)64) [31]. Results show that more powerful VQA models (reflected in the stage-1 accuracies) lead to better performance of Prophet, as they provide answer heuristics of higher quality. Combining the results in Table 1a, we also observe that more powerful VQA models achieve smaller relative improvements from GPT-3, which can be explained by the intrinsic diminishing-return property. As a by-product, we verify that the visual features are important to the performance of knowledge-based VQA, which is consistent with the observations in [22]. The models with CLIP-based visual features significantly outperform those with region-based features, indicating that CLIP's visual features contain richer visual knowledge due to large-scale pretraining.
We have observed a significant performance improvement of Prophet over its corresponding MCAN model in stage-1 (60.84% _vs._ 53.04%). To better understand this improvement, we conduct a statistical analysis of Prophet's prediction behaviors. As Prophet takes \(K\) answer candidates from MCAN as inputs, we define three prediction behaviors for Prophet: "keep top-1", "in top 2-\(K\)", and "beyond top-\(K\)". All the testing samples can be categorized into one of the three classes. The statistical results in Figure 3 show that: (i) for 68.1% of the testing samples (the green slice), Prophet keeps the top-1 predictions of MCAN. These samples achieve a 69% accuracy and are
Table 1: **Ablation experiments for Prophet**. All the reported results are evaluated on the testing set of OK-VQA v1.1. The best result in each table is bolded and the result with the default settings is marked in gray.
Figure 3: We conduct a statistical analysis of Prophet’s prediction behaviors in terms of (a) distribution and (b) per-type accuracy. As Prophet takes \(K\) answer candidates from MCAN as inputs, we define three prediction behaviors for Prophet as follows: “keep top-1”, “in top 2-\(K\)”, and “beyond top \(K\)”. All the testing samples can be categorized into one of the three classes.
mostly easy samples; (ii) for 21.8% of the testing samples (the blue slice), Prophet selects answers from the top 2-\(K\) answer candidates. These samples are relatively hard: MCAN delivers a 24% accuracy while Prophet reaches a much higher 40% accuracy; (iii) for the remaining 10.1% of the testing samples (the yellow slice), Prophet predicts answers beyond the answer candidates2. These are the most difficult samples, on which MCAN delivers a 12% accuracy while Prophet achieves a remarkable 42% accuracy. In short, Prophet acts like a real _prophet_ that adaptively selects the essence and discards the dross from MCAN.
Footnote 2: The probability that Prophet's prediction is a combination of multiple candidates is so small that it can be neglected.
**Answer candidates.** Table 1c varies the number of answer candidates \(K\) from 0 to 10 to explore its effect on Prophet. For each testing sample, if the ground-truth answer is hit by one of the \(K\) answer candidates, we accumulate the soft score of that ground-truth answer3. The hit rate is calculated over the testing set by dividing the accumulated score by the number of samples.
Footnote 3: In practice, multiple ground-truth answers are provided. If multiple answers are hit simultaneously, we choose the answer with the largest soft score for accumulation.
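For clarity, the hit-rate computation described above (including the tie-breaking rule of footnote 3) can be sketched as follows; the data layout is hypothetical.

```python
def candidate_hit_rate(samples, k=10):
    """Hit rate of the top-K answer candidates over the testing set.

    Each sample is (candidates, gt_scores): `candidates` is the ranked list of
    candidate answer strings, and `gt_scores` maps each ground-truth answer to
    its soft score. If several ground-truth answers are hit, the largest soft
    score is accumulated.
    """
    total = 0.0
    for candidates, gt_scores in samples:
        hit_scores = [s for a, s in gt_scores.items() if a in candidates[:k]]
        if hit_scores:
            total += max(hit_scores)
    return total / len(samples)
```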
From the results, we can see that: (i) without any answer candidates, Prophet's accuracy drops by 6.4 points (\(K\)=0 _vs._\(K\)=1), showing the importance of answer candidates in Prophet; (ii) as the number of answer candidates increases, the hit rate and final accuracy grow accordingly but exhibit a tendency to saturate. This is because the quality of the answer candidates eventually saturates as \(K\) increases; (iii) when \(K\)=1, the final accuracy is even higher than the hit rate (56.04% _vs._ 53.04%), which implies that GPT-3 has a strong capability to correct wrong answer candidates while keeping the correct ones.
**Example selection strategy.** To show the effectiveness of our answer-aware example selection strategy, we compare it to other example selection strategies in Table 1d. The compared strategies include: (a) _rand_: examples that are randomly selected; (b) _ques + img_: examples that are selected based on the joint similarity of question and image features, which is used in PICa; (c) _fused_: our default strategy that selects examples based on the similarity of fused features; (d) _fused + ques + img_: a combination of our default strategy and PICa's strategy; and (e) _answer logits_: examples that are selected based on the similarity of answer logits obtained in Eq.(3). Besides the final accuracy, we also report the hit rate of the answers within the selected examples for each strategy.
The results show that the accuracy is positively correlated with the hit rate of answers, which verifies our hypothesis that answer-aware examples contribute significantly to the performance of Prophet. Compared with other strategies, our default strategy (c) achieves the best performance with the highest hit rate. Strategy (d), which integrates other information (ques + img) into (c), leads to worse performance due to the introduction of irrelevant and noisy information. Finally, strategy (e) reports slightly worse performance than (c). We conjecture this is because the answer logits have lost too much information about the input question and image, which is also useful for GPT-3 to perform knowledge reasoning.
**Numbers of examples and queries.** Table 1e contains the ablation studies for the numbers of examples and queries. We choose different numbers of examples \(N\in\{0,1,8,16,20\}\) for each query and different numbers of queries \(T\in\{1,5\}\), respectively. The results show that the performance of Prophet improves with the increase of \(N\) and \(T\), which is consistent with the results in PICa. By increasing \(T\) from 1 to 5, the entries with larger \(N\) enjoy greater performance improvements at the expense of linearly increasing overheads.
Interestingly, the Prophet variant with \(N\)=0 delivers worse performance than the VQA model in stage-1 (49.97% _vs._ 53.04%), even though answer candidates are provided. Meanwhile, when given one example (\(N\)=1), the Prophet variant distinctly surpasses the VQA model (56.75% _vs._ 53.04%). This suggests the necessity of few-shot in-context examples for GPT-3 to activate its capability to adapt to the knowledge-based VQA task.
**Prompt contents.** In Table 1f, we ablate the prompt contents in the default settings by: (b) removing the prompt head; (c) removing the confidence scores for answer candidates; (d) removing image captions; and (e) adding predicted tags from external models [46].
The results lead to the following observations: First, the confidence scores are of critical importance to the performance of our Prophet. This is because they carry the necessary information for GPT-3 to understand the answer candidates. Second, without image captions, Prophet still works steadily. This reflects the fact that our answer heuristics in prompts already provide sufficient information for Prophet to solve the task. Third, the prompt head is of less importance, indicating that GPT-3 is capable of understanding the task directly from the in-context examples. Finally, introducing extra information like object tags leads to a slight performance drop, which is contrary to the results in PICa. We conjecture this information has already been encoded in answer heuristics implicitly.
### Main Results
We use most of the default settings for the comparisons below, except that the number of examples \(N\) is set to 20.
**Comparative results on OK-VQA.** Table 2 contains the comparisons of our Prophet and existing state-of-the-art methods on OK-VQA. The table is split into three sections.
The first section lists the retrieval-based methods leveraging external KBs [8, 9, 27, 28, 45, 54]. The second section contains the methods that are directly pretrained on a large-scale multimodal corpus [1, 26]. The last section shows the methods that incorporate the large language model GPT-3, which is publicly available via an online API [11, 22, 46].
Our Prophet belongs to the last section. It outperforms all the compared methods by a distinct margin. Prophet is 13.1 points higher than PICa [46] when both methods use GPT-3 as the only knowledge resource. This confirms our hypothesis that the capacity of GPT-3 has not been fully activated in previous studies. Compared to KAT [11] and REVIVE [22], which utilize GPT-3 and other external KBs together in sophisticated systems, our Prophet is much simpler and more effective. Moreover, KAT and REVIVE need to use GPT-3 to process all the training samples for their model training, which significantly increases the costs. In contrast, our Prophet only uses GPT-3 at inference time, which is more economical. Compared to the Flamingo-80B [1], Prophet delivers a 3.3 point improvement and is more resource-efficient from the perspective of reproducibility4.
Footnote 4: Flamingo-80B is trained on 1,536 TPUv4 for 15 days which is unaffordable for most researchers, but Prophet uses one RTX-3090 to train a VQA model for 4 days and a certain number of GPT-3 invocations.
**Comparative results on A-OKVQA.** Table 3 contains the comparative results on the challenging A-OKVQA dataset. We compare our Prophet to the strong baselines in [32] and the current state-of-the-art method Unified-IO [26]. The results show the superiority of our Prophet over its counterparts on both the DA and MC tasks, reflecting the effectiveness and generalization of our method. Furthermore, we also provide a Prophet variant called Prophet-MC, which is specifically designed for the MC task. Specifically, we slightly modify the prompt in Prophet by adding the information of multiple choices into the in-context examples and testing input, and instruct GPT-3 to _choose_ the correct one from four choices. More details are provided in the supplementary material. Compared to the original Prophet, Prophet-MC achieves significantly higher accuracy on the MC task, showing the enormous potential of Prophet to be applied to other related tasks.
## 5 Conclusion
We present Prophet, a conceptually simple framework which uses GPT-3 as the knowledge engine for knowledge-based VQA. To better activate the few-shot learning capacity of GPT-3, we introduce a novel paradigm to prompt GPT-3 with answer heuristics. Extensive ablations, comparative experiments, and comprehensive analyses on two challenging datasets show the superiority of Prophet over all existing state-of-the-art methods, including the heavily-engineered Flamingo-80B model. Notably, Prophet is implemented with limited resources--a single GPU and an affordable number of GPT-3 invocations. We hope that our work will serve as a solid baseline to inspire future research on the knowledge-based VQA task and beyond.
## Acknowledgment
This work was supported in part by the National Key R&D Program of China (2020YFB1406701), in part by the NSFC (62125201), in part by the Fundamental Research Funds for the Provincial Universities of Zhejiang (GK229909299001-001), in part by the NSFC (62072147, 62020106007, 61836002), and in part by the Zhejiang Provincial Natural Science Foundation of China (LR22F020001, DT23F020007).
\begin{table}
\begin{tabular}{l c} method & accuracy \\ \hline _methods with external knowledge bases_ & \\ Mucko [54] & 29.2\({}^{*}\) \\ ConceptBERT [9] & 33.7\({}^{*}\) \\ KRISP [28] & 38.9 \\ Visual Retriever-Reader [27] & 39.2 \\ MAVEx [45] & 40.3 \\ TRiG [8] & 49.4 \\ UnifER [12] & 42.1 \\ \hline _methods with multimodal pretraining_ & \\ Unified-IO (2.8B) [26] & 54.0 \\ Flamingo (80B) [1] & 57.8 \\ \hline _methods with GPT-3 API_ & \\ PICa [46] & 48.0 \\ KAT\({}^{\dagger}\)[11] & 53.1 \\ REVIVE\({}^{\dagger}\)[22] & 56.6 \\
**Prophet (ours)** & **61.1** \\ \end{tabular}
\end{table}
Table 2: **Comparisons to the state-of-the-art methods on OKVQA**. The compared methods are split into three groups based on their knowledge resources and usages. \({}^{*}\): accuracy is evaluated on OK-VQA v1.0. \({}^{\dagger}\): method needs to query GPT-3 during training.
\begin{table}
\begin{tabular}{l|c c|c c} method & \multicolumn{2}{c|}{DA} & \multicolumn{2}{c}{MC} \\ & val & test & val & test \\ \hline ClipCap [32] & 30.9 & 25.9 & 56.9 & 51.4 \\ ViLBERT [32] & 30.6 & 25.9 & 49.1 & 41.5 \\ LXMERT [32] & 30.7 & 25.9 & 51.4 & 41.6 \\ KRISP [32] & 33.7 & 27.1 & 51.9 & 42.2 \\ GPV-2 [32] & 48.6 & 40.7 & 60.3 & 53.7 \\ Unified-IO [26] & - & 45.2 & - & - \\ \hline
**Prophet** & **58.2** & **55.7** & 59.3 & 57.3 \\
**Prophet-MC** & - & - & **76.4** & **73.6** \\ \end{tabular}
\end{table}
Table 3: **Comparisons to previous results on A-OKVQA**. DA and MC refer to the direct-answer and multiple-choice tasks, respectively. Prophet-MC is a variant of Prophet that is specifically designed for the MC task. |
2306.15170 | Gathering Galaxy Distances in Abundance with Roman Wide-Area Data | The extragalactic distance scale is fundamental to our understanding of
astrophysics and cosmology. In recent years, the surface brightness fluctuation
(SBF) method, applied in the near-IR, has proven especially powerful for
measuring galaxy distances, first with HST and now with a new JWST program to
calibrate the method directly from the tip of the red giant branch (TRGB). So
far, however, the distances from space have been gathered slowly, one or two at
a time. With the Roman Space Telescope, we have the opportunity to measure
uniformly high-quality SBF distances to thousands of galaxies out to hundreds
of Mpc. The impact of these data on cosmology and galaxy studies depends on the
specifics of the survey, including the filter selection, exposure depth, and
(especially) the sky coverage. While the baseline HLWAS survey in four filters
plus the grism would yield useful data, the impact would be limited by the
relatively small area. A more optimal approach would concentrate on the most
efficient passband (F146), adopt an exposure time sufficient to measure good
quality distances well out into the Hubble flow, and then maximize the sky
coverage within the total time constraints. Grism observations over the same
area can provide the needed information on redshifts and spectral energy
distributions for compact sources, while colors for larger objects can be
obtained from lower resolution surveys. The proposed plan will enable accurate
determination of the physical properties of thousands of nearby galaxies, an
independent measure of the Hubble constant $H_0$ with negligible statistical
error, and competitive constraints on $S_8{\,=\,}\sigma_8(\Omega_m/0.3)^{0.5}$.
The resulting data set will be a phenomenal resource for a wide range of
studies in astrophysics and cosmology. | John P. Blakeslee, Michele Cantiello, Michael J. Hudson, Laura Ferrarese, Nandini Hazra, Joseph B. Jensen, Eric W. Peng, Gabriella Raimondo | 2023-06-27T03:08:26Z | http://arxiv.org/abs/2306.15170v2 | # Gathering Galaxy Distances in Abundance with Roman Wide-Area Data
###### Abstract
The extragalactic distance scale is fundamental to our understanding of astrophysics and cosmology. In recent years, the surface brightness fluctuation (SBF) method, applied in the near-IR, has proven especially powerful for measuring galaxy distances, first with HST and now with a new JWST program to calibrate the method directly from the tip of the red giant branch (TRGB). So far, however, the distances from space have been gathered slowly, one or two at a time. With the Roman Space Telescope, we have the opportunity to measure uniformly high-quality SBF distances to thousands of galaxies out to hundreds of Mpc. The impact of these data on cosmology and galaxy studies depends on the specifics of the survey, including the filter selection, exposure depth, and (especially) the sky coverage. While the baseline HLWAS survey in four filters plus the grism would yield useful data, the impact would be limited by the relatively small area. A more optimal approach would concentrate on the most efficient passband (F146), adopt an exposure time sufficient to measure good quality distances well out into the Hubble flow (\(z\gtrsim 0.03\)), and then maximize the sky coverage within the total time constraints. Grism observations over the same area can provide the needed information on redshifts and spectral energy distributions for compact sources, while colors for larger objects can be obtained from lower resolution surveys. The proposed plan will enable accurate determination of the physical properties of thousands of nearby galaxies, an independent measure of the Hubble constant \(H_{0}\) with negligible statistical error, and competitive constraints on \(S_{8}=\sigma_{8}(\Omega_{m}/0.3)^{0.5}\). The resulting data set will be a phenomenal resource for a wide range of studies in astrophysics and cosmology.
**Roman Core Community Survey:** High Latitude Wide Area Survey
**Scientific Categories:** galaxies - large-scale structure of the universe: cosmological parameters
## 1 Measuring the Universe
The fields of extragalactic astronomy and observational cosmology began in earnest a century ago with the identification of Cepheid variables in spiral nebulae (Hubble, 1925), leading to the discovery of the expanding universe (Lemaitre, 1927; Hubble, 1929). The distances at the time were crude: Hubble's data set had four galaxies in the Virgo cluster, which he took to be at 2 Mpc. But as the distances improved, so did our understanding of the universe. By the turn of the millennium, relative distances from Type Ia supernovae (SN Ia), corrected for decline rate, led to the discovery that the universe was not only expanding but accelerating (Riess et al., 1998; Perlmutter et al., 1999), while the Cepheid-based calibration of a variety of distance indicators constrained the Hubble constant \(H_{0}\) to within 10% (Ferrarese et al., 2000; Freedman et al., 2001).
Soon afterwards, results from WMAP on the cosmic microwave background (CMB) seemed to confirm the \(\Lambda\)CDM "concordance cosmology" with \(H_{0}\approx 70\) km s\({}^{-1}\) Mpc\({}^{-1}\)(Bennett et al., 2003). But cracks in this edifice began to show about a decade later when the early universe CMB measurements and the late-universe SN Ia distances had improved enough that the margins of error on \(H_{0}\) no longer overlapped. This "Hubble tension" has grown progressively worse and now exceeds \(5\sigma\) in significance, with the latest SN Ia-Cepheid analysis giving \(H_{0}=73.0\pm 1.0\)(Riess et al., 2022), as compared to \(H_{0}=67.4\pm 0.5\)(Planck Collaboration et al., 2020) predicted from analysis of the CMB (see the review by Abdalla et al., 2022). This may point towards physics beyond the standard cosmological model (e.g., Di Valentino et al., 2021), but another analysis of the SN Ia distances, using tip of the red giant branch (TRGB) distances for calibration, finds \(H_{0}=69.8\pm 1.8\)
consistent with the CMB prediction (Freedman, 2021). Clearly, we need other independent routes, not involving Cepheids or SN Ia, to \(H_{0}\) in the local universe.
One promising path involves the surface brightness fluctuations (SBF) method calibrated from the TRGB. A recent work reports \(H_{0}=73.3\pm 2.5\) from 63 SBF distances out to 100 Mpc observed with WFC3/IR on Hubble (Blakeslee et al., 2021; Jensen et al., 2021). The calibration was mainly based on Cepheids, a precarious scaffolding for SBF, which works best for early-type galaxies. However, a new JWST Cycle 2 program will establish a firmer footing for the method using NIRCam to measure TRGB and SBF distances for an optimally selected set of 14 nearby ellipticals. This will enable a fully independent value of \(H_{0}\) with a precision rivaling that of SN Ia calibrated via Cepheids. To bring this approach to full fruition will require hundreds of SBF distances spread across the sky and reaching to at least \(z\sim 0.03\), where bulk flows are thought to be negligible.
## 2 Galaxy Properties and Dark Matter
Of course, reliable distances tell us about more than "just" cosmology. They are essential for converting observed properties into physical quantities such as size, mass, luminosity and energy. Yet, except for very nearby, resolved systems, they are notoriously difficult to estimate, with occasional "factor-of-two" controversies (e.g., Schweizer et al., 2008; Trujillo et al., 2019). In their review of black hole scaling relations, Kormendy and Ho (2013) point out that distance errors dominate the uncertainty for many black hole mass estimates, even though authors neglect it in their final quoted errors. And if distance is a major source of error in black hole mass, which scales linearly with \(d\), it is almost always the dominant error for galaxy luminosity, which scales as \(d^{2}\). This can have important implications for understanding the nature of some systems.
To take one example, the diffuse galaxy NGC 1052-DF2 was claimed by van Dokkum et al. (2018) to be devoid of dark matter, based on an SBF distance of \(\sim\) 20 Mpc. A subsequent study argued that the galaxy had a relatively normal dark matter content based on a distance of 13 Mpc, estimated mainly from the globular clusters (Trujillo et al., 2019). Thus, the interpretation was wildly different, depending on the distance. A subsequent measurement of the tip of the red giant branch (TRGB) yielded \(d=22.1\)\(\pm\) 1.2 Mpc (Shen et al., 2021), consistent with the SBF distance \(d=20.4\)\(\pm\) 2.0 Mpc (Blakeslee and Cantiello, 2018).
The remarkable thing is that the TRGB distance used 40 HST orbits, while the SBF distance was based on a single orbit. Unlike other precision methods (Cepheids, TRGB, SN Ia, masers), SBF requires only modest depth and no monitoring. Besides the WFC3/IR \(H_{0}\) study mentioned above, SBF has been used with HST to study the structure of nearby galaxy clusters (Mei et al., 2007; Blakeslee et al., 2009), convert the observed "shadow" of the M 87 supermassive black hole into a physical size and mass (EHT Collaboration, 2019), measure the most precise distance to the host galaxy of the gravitational wave source GW170817 (Cantiello et al., 2018), and explore possible nonlinearity in the SN Ia peak luminosity versus decline rate (Garnavich et al., 2022).
However, with HST, optical and near-IR SBF measurements have accumulated one pointing at a time in the course of a dozen GO programs over two decades. With JWST, the exceptional imaging capabilities make it possible to establish a rock-solid calibration for the method and extend the range to twice that reached with HST. But in most cases, the JWST/NIRCam field of view also only accommodates one target at a time. Consequently, it is best suited for determining precise distances for specific individual targets, rather than "harvesting" SBF distances in the thousands. For this purpose, we require the Roman Observatory, guided by a well-defined wide-area survey observing strategy.
## 3 "Notional" Roman HLWAS Numbers
Roman Observatory, with its Wide Field Instrument, presents unprecedented opportunities for distance studies using SBF to constrain cosmology and galaxy properties. Although bands at the red end of the optical spectrum minimize the intrinsic scatter in the SBF method, near-IR bands like \(J,H,K\) offer several advantages. The fluctuations themselves are inherently brighter in the near-IR, with at least ten times higher amplitude in \(K\) than \(I\) (e.g., Jensen et al., 1998). The near-IR also gives a much more favorable contrast compared to globular clusters, the main contaminant in measuring SBF distances for giant ellipticals (as discussed below). Finally, the effects of residual dust contamination are greatly reduced. For all these reasons, recent space-based SBF studies have focused on the near-IR.
To illustrate, the ACS Virgo and Fornax Cluster Surveys yielded SBF distances for over 130 galaxies in these two clusters, with one HST orbit dedicated to each galaxy (Blakeslee et al., 2009). This enabled an exquisite calibration of the stellar population dependence of the method, along with a precise value of the relative distance of the clusters. With Roman/WFI we can measure a similar number of galaxies in the more distant Coma cluster with only \(\sim\) 7 pointings, and with a similar exposure time per pointing (i.e., \(\sim\) 5% of the total time) because of the brightness of the SBF signal in the near-IR and the wide area of WFI instrument.
Roman's High Latitude Wide Area Survey (HLWAS) promises to revolutionize this field. For a given survey specification, we wish to quantify both the maximum distance \(d_{\rm Max}\) to which SBF measurements can be made and the number of reliable galaxy distances. For giant ellipticals, contamination of the power spectrum by globular clusters (GCs) is the main limiting factor (Moresco et al., 2022). Thus, \(d_{\rm Max}\) is the distance to which the GCs can be detected (at 5\(\sigma\)) and removed to
a faint enough limit, and the residual contamination reliably estimated, so that the uncertainty in the correction drops below the intrinsic scatter in the method.
For instance, in the \(I\) band, where the peak of the GC luminosity function (GCLF) is at \(M_{I}\approx-8.0\) AB mag, accurate SBF measurements require detecting and removing sources to \(+0.5\) mag fainter than the GCLF peak, or \(M_{I}\approx-7.5\) AB. In the \(H\) band, the same relative level of GC contamination can be reached with a detection limit relative to the GCLF peak of about \(-1.0\) mag (i.e., 1 mag _brighter_ than the peak; Jensen et al., 2021). We estimate the near-IR GCLF peak absolute magnitudes based on Nantais et al. (2006), converted to AB. In addition, because they are projected against the bright galaxy background, we have found that the \(5\sigma\) detection limit for the GCs is on average \(\sim 0.5\) mag brighter than for isolated point sources. With these assumptions, and an adopted space density of early-type galaxies, we can estimate \(d_{\rm Max}\) and the expected yield of SBF distances for a given survey design.
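As a rough illustration of this bookkeeping, the distance-modulus arithmetic can be sketched as below. The numerical inputs in the example call are placeholders for a survey point-source limit and a near-IR GCLF peak, not the values adopted for Table 1.

```python
def sbf_dmax_mpc(m_lim_5sigma, M_gclf_peak, delta_rel=-1.0, background_penalty=0.5):
    """Rough maximum SBF distance implied by the GC-removal requirement.

    m_lim_5sigma : 5-sigma point-source limit of the survey (AB mag).
    M_gclf_peak  : absolute AB magnitude of the GCLF peak in the same band.
    delta_rel    : required limit relative to the GCLF peak
                   (about -1.0 mag in the near-IR, +0.5 mag in the I band).
    background_penalty : mag lost against the bright galaxy background.
    """
    M_req = M_gclf_peak + delta_rel             # faintest GC magnitude to reach
    m_eff = m_lim_5sigma - background_penalty   # effective apparent limit
    mu = m_eff - M_req                          # distance modulus
    return 10 ** (mu / 5.0 + 1.0) / 1.0e6       # parsec -> Mpc

# Placeholder numbers, for illustration only:
print(round(sbf_dmax_mpc(m_lim_5sigma=26.5, M_gclf_peak=-8.3), 1), "Mpc")
```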
To estimate the number of early-type galaxies suitable for SBF measurement, we first use the 2MASS Redshift Survey (2MRS), which contains 43,000 galaxies with \(K_{s}<11.75\) mag covering 91% of the sky (Huchra et al., 2012). For reference, an \(L^{*}\) galaxy will be included in the 2MRS for \(d\lesssim 135\) Mpc. The 2MRS includes the morphological \(T\)-type, and we select only galaxies with \(T\leq-1\), indicating early-type, and with an absolute magnitude \(K_{s}<-20\) mag, estimated from the redshift. To improve completeness for \(L^{*}\) galaxies at distances of interest, we repeat the calculations using the 2M\(++\) catalog (Lavaux and Hudson, 2011), which augments the 2MRS with deeper data over much of the sky. The 2M\(++\) does not provide \(T\)-type, so we assume the same early-type percentage (38%) as in the 2MRS for our adopted absolute magnitude limit. Finally, we assume a flat distribution of galaxies on the sky, as the eventual location of the HLWAS is unknown.
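A schematic of this selection, with hypothetical column names for a 2MRS-like catalogue; the scaling to the survey footprint assumes the flat sky distribution mentioned above.

```python
import math

def count_sbf_targets(catalog, d_max_mpc, area_fraction):
    """Count early-type SBF targets (T <= -1, M_Ks < -20 mag) within d_max,
    scaled to the surveyed fraction of the sky.

    Each entry of `catalog` is assumed to carry 'Ks' (apparent mag),
    'T' (morphological type), and 'd_mpc' (redshift distance).
    """
    count = 0
    for gal in catalog:
        if gal["d_mpc"] > d_max_mpc or gal["T"] > -1:
            continue
        M_Ks = gal["Ks"] - 5.0 * math.log10(gal["d_mpc"] * 1.0e6) + 5.0  # absolute mag
        if M_Ks < -20.0:
            count += 1
    return count * area_fraction
```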
For the observational details, we first adopt the "notional" HLWAS as specified online for the F106, F129, and F158 bands. We omit the lower efficiency F184 band and instead report expectations for the broad, high-throughput F146 band with a similar exposure time. The first four rows of Table 1 give the estimated maximum distances and expected numbers of suitable SBF targets, along with all the input assumptions, for each of these bands with the notional 2000 deg\({}^{2}\) area coverage.
The numbers are impressive. Even in the \(J\) band, \(\sim\!300\) galaxy distances reaching out to \(\sim\!110\) Mpc would significantly reduce the statistical error on the present value of \(H_{0}\) from SBF. However, limiting the coverage to \(\sim\!5\%\) of the sky in a single direction opens the possibility for systematic errors due to flow motions, clustering, and potentially other forms of cosmic variance. In the final section, we propose a more optimal survey design for constraining cosmological parameters.
## 4 Optimizing the survey strategy
There are two prominent "tensions" in cosmology: the Hubble tension, discussed above, and the \(S_{8}\) tension, where \(S_{8}\equiv\sigma_{8}(\Omega_{m}/0.3)^{0.5}\) quantifies the level of matter inhomogeneity in the universe. In both cases, the tension is between the predicted value extrapolated from the CMB (assuming \(\Lambda\)CDM) and the value measured in the local universe. These are the biggest problems in cosmology today; they may well have a common underlying explanation (e.g., Abdalla et al., 2022).
The strongest evidence for the \(S_{8}\) tension comes from weak lensing (e.g., Amon et al., 2023), but results from peculiar velocities (Boruah et al., 2020; Said et al., 2020) point in the same direction, with larger uncertainties. Different weak lensing surveys may share common systematics, such as intrinsic alignments; thus, it is critical to confirm the \(S_{8}\) tension using diverse methods. For peculiar velocity studies, it is most important to maximize the sample size and volume, while keeping the distance errors at a level comparable to the peculiar velocities themselves. SBF distance errors are typically 5-6%, or \(\sim\!400\) km s\({}^{-1}\) at 100 Mpc. This is vastly better than typical errors of 20-25% from other galaxy-based methods such as the Fundamental Plane or Tully-Fisher.
The final row of Table 1 shows the expected results for an illustrative wide survey covering a quarter of the sky in F146, the most efficient of the WFI bands, to a depth similar to those envisioned for the four filters in the notional HLWAS shown in the top part of the table. This strategy would deliver a peculiar velocity sample with 2500 to 5000 galaxies, reaching out to 150 Mpc with 6% error. Such a data set would be unprecedented in its combination of precision and sample size. SN Ia distances have similar precision but are much rarer, with only about 500 available within the same volume. Following the methodology of Boruah et al. (2020), we predict uncertainties on \(S_{8}\) from this hypothetical SBF distance sample to be below 2%, limited by cosmic variance uncertainties in the density field. This is competitive with the best current weak lensing results.
Our proposal then is to cover as wide an area as possible using Roman's most efficient filter to a depth where detector noise becomes negligible. Analysis of the S/N curve for F146 suggests \(\sim\!100\) s per exposure; a 3-point dither pattern then gives 5 min per pointing. Comprehensive grism data would provide redshifts and SEDs for compact sources; colors for larger objects can be obtained from ground-based optical and near-IR surveys. While a \(\pi\)-sterradian HLWAS may be overly ambitious, even coverage of 10% of the sky would greatly reduce systematics from cosmic variance. This approach would yield high-quality distances for thousands of galaxies, an independent measure of \(H_{0}\) with negligible statistical error, and competitive constraints on \(S_{8}\). The data set would be an enormously rich resource for a wide range of studies in astrophysics and cosmology. |
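To make the time budget concrete, a rough estimate under stated assumptions (the WFI active field of view is taken to be about 0.28 deg\({}^{2}\), and slew/readout overheads are ignored):

```python
def survey_time_hours(sky_fraction, fov_deg2=0.28, minutes_per_pointing=5.0):
    """Rough observing time for a wide F146 survey.

    sky_fraction         : fraction of the full sky (41253 deg^2) covered.
    fov_deg2             : assumed WFI active field of view per pointing.
    minutes_per_pointing : e.g. three ~100 s dithered exposures (no overheads).
    """
    area_deg2 = sky_fraction * 41253.0
    pointings = area_deg2 / fov_deg2
    return pointings * minutes_per_pointing / 60.0

# Illustrative: ~10% of the sky at 5 minutes per pointing.
print(round(survey_time_hours(0.10)), "hours")   # roughly 1200 hours
```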
2305.03242 | Resequencing the Hubble sequence and the quadratic (black hole
mass)-(spheroid stellar mass) relation for elliptical galaxies | One of the most protracted problems in astronomy has been understanding the
evolution of galaxy morphology. Much discussion has surrounded how lenticular
galaxies may form a bridging population between elliptical and spiral galaxies.
However, with recourse to a galaxy's central black hole mass, accretion-built
spiral galaxies have emerged as the bridging population between low-mass
lenticular galaxies and the dusty merger-built lenticular galaxies contiguous
with elliptical galaxies and `brightest cluster galaxies' in the black
hole/galaxy mass diagram. Spiral galaxies, including the Milky Way, appear
built from gas accretion and minor mergers onto what were initially lenticular
galaxies. These connections are expressed as a new morphology sequence, dubbed
the `Triangal', which subsumes elements of the Hubble sequence and the van den
Bergh trident and reveals the bridging nature of the often overlooked ellicular
galaxies. Furthermore, a quadratic black hole/galaxy mass relation is found to
describe ordinary elliptical galaxies. The relation is roughly parallel to the
quadratic-like relations observed for the central spheroidal component of
spiral galaxies, dust-rich lenticular galaxies, and old dust-poor lenticular
galaxies. The brightest cluster galaxies are offset according to expectations
from an additional major merger. The findings have implications for feedback
from active galactic nuclei, mapping morphology into simulations, and
predicting gravitational wave signals from colliding supermassive black holes.
A new galaxy speciation model is presented. It disfavours the `monolithic
collapse' scenario for spiral, dusty lenticular, and elliptical galaxies. It
reveals substantial orbital angular momentum in the Universe's first galaxies
and unites dwarf and ordinary `early-type' galaxies. | Alister W. Graham | 2023-05-05T02:00:57Z | http://arxiv.org/abs/2305.03242v1 | Resequencing the Hubble sequence and the quadratic (black hole mass)-(spheroid stellar mass) relation for elliptical galaxies
###### Abstract
One of the most protracted problems in astronomy has been understanding the evolution of galaxy morphology. Much discussion has surrounded how lenticular galaxies may form a bridging population between elliptical and spiral galaxies. However, with recourse to a galaxy's central black hole mass, accretion-built spiral galaxies have emerged as the bridging population between low-mass lenticular galaxies and the dusty merger-built lenticular galaxies contiguous with elliptical galaxies and 'brightest cluster galaxies' in the black hole/galaxy mass diagram. Spiral galaxies, including the Milky Way, appear built from gas accretion and minor mergers onto what were initially lenticular galaxies. These connections are expressed as a new morphology sequence, dubbed the 'Triangal', which subsumes elements of the Hubble sequence and the van den Bergh trident and reveals the bridging nature of the often overlooked ellicular galaxies. Furthermore, a quadratic black hole/galaxy mass relation is found to describe ordinary elliptical galaxies. The relation is roughly parallel to the quadratic-like relations observed for the central spheroidal component of spiral galaxies, dust-rich lenticular galaxies, and old dust-poor lenticular galaxies. The brightest cluster galaxies are offset according to expectations from an additional major merger. The findings have implications for feedback from active galactic nuclei, mapping morphology into simulations, and predicting gravitational wave signals from colliding supermassive black holes. A new galaxy speciation model is presented. It disfavours the 'monolithic collapse' scenario for spiral, dusty lenticular, and elliptical galaxies. It reveals substantial orbital angular momentum in the Universe's first galaxies and unites dwarf and ordinary 'early-type' galaxies.
keywords: galaxies: bulges - galaxies: elliptical and lenticular, cD - galaxies: structure - galaxies: interactions - galaxies: evolution - (galaxies:) quasars: supermassive black holes
## 1 Introduction
Since the first spiral 'nebula' was discovered (Rosse, 1850), astronomers have pondered their formation and connection with other nebulae (Alexander, 1852; Roberts, 1895; Aitken, 1906; Jeans, 1919). Indeed, how the extragalactic nebulae, the 'island universes'1 of Curtis (1917), nowadays referred to as galaxies, may evolve and transform from their primordial incarnation remains an active and open topic of research (e.g., Boselli and Gavazzi, 2006; Park et al., 2008; van den Bosch et al., 2008; Cameron and Pettit, 2012; Conselice, 2014; Schawinski et al., 2014). With roots in the 1700s, today's most well-known galaxy sequence was stripped of its evolutionary pathways almost as soon as it was conceived a century ago. As revealed in the well-referenced papers by Hart and Berendzen (1971) and Way (2013), see also Graham (2019b), credit for the E-to-S (Hubble, 1926) and E-S0-S (Hubble, 1936) 'Hubble sequence' resides with many.
Footnote 1: von Humboldt (1845) used the German word Weltinsel (world island) to refer to our Galaxy and everything floating in space. Mädler (1846) subsequently referred to the nebulae as “world islands”, which Mitchell (1847) translated and adapted into “island universes”.
Jeans (1919) popularised the notion of amorphous round/elliptical-shaped nebulae evolving into ringed/spiral nebulae. Such a concept originated from the 18th century 'nebular hypothesis' in which elliptical-shaped nebulae were thought to rotate and throw off rings and spiral arms at the expense of a dwindling central bulge. Building on this, Reynolds (1920) introduced the essence of the early-to-late type spiral sequence, based in part2 on the dominance of the bulge, with this central concentration later adopted as criteria by Lundmark (1925, 1926) and Hubble (1926). The notion of an initial (early) and latter (late) type of
nebulae was initially entertained by Hubble3 but later disfavoured by many in the early-to-mid 1920s, including Hubble (1926). Nonetheless, the early- and late-type nomenclature was retained. The practice of quantifying the roundness of a 'regular' nebula -- encompassing the early-type galaxies -- with designations ranging from 1 for round nebulae to 5 for elongated nebulae had been used since Herschel (1847). While Hubble expanded upon this practice, he also sought to distil the key elements from the detailed scheme of Wolf (1908).4 Furthermore, Curtis (1918) had recently introduced the barred versus non-barred designation following the murky identification of bars by Knox-Shaw (1915). This led to the bifurcation of the S galaxies that Hubble (1926) included in his first table. Hart & Berendzen (1971, their footnote 42) and van den Bergh (1997) remind us that Jeans (1928) was the first to express this visually, as a Y-shaped diagram before Hubble (1936) turned it sideways to give us the so-called 'tuning fork', with the intermediary spindle/lenticular class, i.e., the armless disc galaxies introduced by Reynolds (1925), being the S0 galaxies at the junction.
Footnote 3: As noted by Hart & Berendzen (1971), in 1923, in preparation for the second International Astronomical Union, Hubble submitted a manuscript to Slipher in which he wrote “there is some justification in considering the elliptical nebulae as representing an earlier stage of evolution” and he also now listed them _before_ the spiral nebulae, reversing the spiral-spindle/ovate order in his original scheme (Hubble, 1922).
Footnote 4: While the nebulae classification scheme of Wolf (1908) is not in use, Charles Wolf is remembered through Wolf-Rayet stars (Wolf & Rayet, 1867), the central stars of _planetary_ nebulae (Wright, 1914).
The longevity of the Jeans/Reynolds/Hubble sequence of galaxies, encapsulated by the tuning fork -- taught in most introductory astronomy courses -- is perhaps surprising given that doubt over the direction of potential morphological transformations led Hubble (1926) to stop short of claiming that this was an evolutionary sequence (Hart & Berendzen, 1971). However, a wealth of additional characteristics was subsequently grafted onto the sequence and encoded by de Vaucouleurs (1959) and Buta et al. (2007), and this does reflect some of the formation histories of galaxies (Buta, 2013).
While some galaxies, and their spheroidal component, are known to be built by collisions (e.g., Naab & Burkert, 2003; Merritt, 2006; Conselice et al., 2022), vital clues from the coevolution of their massive black holes (BHs) have only now come to light. It has recently been revealed how gas-poor, aka 'dry', mergers of S0 galaxies (Reynolds, 1925; van den Bergh, 2009) have created the offset population of E galaxies (including brightest cluster galaxies, BCGs) in the diagram of BH mass, \(M_{\rm bh}\), versus spheroid3 stellar mass, \(M_{\rm *,sph}\) (Graham & Sahu, 2023a). The merger-induced transition of stars from ordered rotating discs in S0 galaxies into a somewhat chaotic 'dynamically hot' spheroidal-shaped swarm, coupled with the steep \(M_{\rm bh}\)-\(M_{\rm *,sph}\) mass scaling relation for S0 galaxies, explains why, at a given mass, the E galaxies have an order-of-magnitude lower \(M_{\rm bh}/M_{\rm *,sph}\) ratio than the S0 galaxies, as first observed by Sahu et al. (2019a).
Footnote 4: Here, the spheroidal component of a galaxy refers to either a central bulge of a disc galaxy or the bulk of an E galaxy.
Feedback from 'active galactic nuclei' (e.g., Salpeter, 1964; Silk & Rees, 1998; Heckman & Best, 2014) has typically been heralded as the driving force behind the BH/galaxy scaling relations (e.g., Magorrian et al., 1998; Ferrarese & Merritt, 2000; Gebhardt et al., 2000; Graham et al., 2001), with the initially small scatter about the \(M_{\rm bh}\)-(stellar velocity dispersion, \(\sigma\)) relation taken as proof. However, it is now apparent that dry mergers, rather than BH feedback, have dictated the behaviour of the E galaxies in the \(M_{\rm bh}\)-\(M_{\rm *,sph}\) diagram (Graham & Sahu, 2023a). Furthermore, the virial theorem, coupled with the best measurements of spheroid size and stellar mass, has revealed how dry mergers explain why the \(M_{\rm bh}\)-\(\sigma\) relation does not have much scatter at the high-mass end where the E galaxies reside (Graham, 2023a).
Inspecting _Hubble Space Telescope_ (_HST_) images available at the Hubble Legacy Archive (HLA)5, it has also recently been revealed that there are two populations of S0 galaxy: dust-poor and dust-rich (Graham, 2023b). Based on the results in Appendix A, this may mirror a division previously detected as low- and high-luminosity S0 galaxies (van den Bergh, 1990), which needed to be explained and integrated into a joint evolutionary and galaxy morphology classification scheme. The existence of two populations helps explain a century of confusion, different formation scenarios, and physical properties for S0 galaxies (Aguerri, 2012). The dusty S0 galaxies are major-merger remnants that involved gas and star formation, referred to as 'wet' mergers. As the star formation fades, these S0 galaxies will migrate across the 'green valley' and on to the 'red sequence' in diagrams of colour versus stellar mass (Powell et al., 2017). The S galaxies are observed to reside between the dust-poor and dust-rich S0 galaxies in the \(M_{\rm bh}\)-\(M_{\rm *,sph}\) diagram. Furthermore, the S galaxies are known to have been built, or rather renovated, by minor mergers, which may encompass the accretion of gas clouds from surrounding HI (e.g., Block et al., 2007; Clover et al., 2010; Wang et al., 2015), the devouring of satellite galaxies, and the capture of dwarf galaxies (e.g., Searle & Zinn, 1978; Pickering et al., 1997; Gallagher, 2010; Garling et al., 2018; Li et al., 2018; Kruijssen et al., 2019; Mao et al., 2021). While such gravitational disturbances may invoke a spiral pattern (Julian & Toomre, 1966), too large a merger is likely to dynamically overheat the disc, destroy or prevent any spiral, and produce a dusty S0 galaxy such as NGC 3108 (Hau et al., 2008) or Centaurus A (Ebneter & Balick, 1983), which, in time, may resemble something more like the Sombrero galaxy.
Given that the BCGs are likely to be E galaxies built by multiple mergers (e.g., Laine et al., 2003), the E galaxy population is explored more thoroughly here. The BCGs may be offset in the \(M_{\rm bh}\)-\(M_{\rm *,sph}\) diagram from the (non-BCG, or simply 'ordinary') E galaxies. An offset will not be observed if the E galaxies follow the near-linear \(M_{\rm bh}\)-\(M_{\rm *,sph}\) relation they have been thought to follow for a quarter of a century (Magorrian et al., 1998; Kormendy & Ho, 2013; Saglia et al., 2016). However, for a steeper than linear non-BCG E galaxy \(M_{\rm bh}\)-\(M_{\rm *,sph}\) relation, the addition of two ordinary E galaxies' stars and their central BHs would lead to a merger-induced jump, referred to as 'punctuated equilibrium' (Graham & Sahu, 2023a), taking them off the ordinary E galaxy \(M_{\rm bh}\)-\(M_{\rm *,sph}\) relation. This jump is detected here, and, for the first time, all of the merger-built morphological transformations are shown to map into a triangular-like diagram revealing fundamental connections between the galaxy types. The 'Triangal', presented herein, supersedes the Hubble sequence by (i) redrawing the connections and (ii) including evolutionary pathways. It is, nonetheless, a development
based on the works of many, in particular van den Bergh (1976, 1990).
## 2 Data: Ordinary elliptical galaxies versus BCGs
The data for this investigation consists of published BH masses, spheroid (and galaxy) stellar masses, and the galaxies' morphological type, including whether the E galaxies are BCG or cD8 (Graham & Sahu 2023a,b), and S0 galaxy 'dust bin' (Graham 2023b). For ease of reference, the cD and BCG will often collectively be referred to as BCG in the text. The spheroid masses were obtained from careful9 multicomponent decompositions, which separate bars and inner discs from spheroids and other galaxy components. X/(peanut shell)-shaped structures were captured by a Fourier analysis of the isophotal shapes (Carter 1978; Ciambur 2015; Ciambur & Graham 2016) and effectively folded back into the bar component. As such, components that some may call a 'pseudobulge' or a false bulge are not considered the spheroid. The spheroid may, however, have a Sersic index of less than 2\(\pm\)0.5, a divide that has been questioned (e.g., Graham 2013, 2019a). For each galaxy, the decomposition has been plotted and published (Savorgnan & Graham 2016; Sahu et al. 2019a; Davis et al. 2019; Graham & Sahu 2023b).
Footnote 8: cD galaxies may be the first or second brightest galaxy in a cluster. They have a centrally-dominant location, and as such, they are immersed at the centre of the intracluster light, which appears as a diffuse halo (Conroy et al. 2007).
Footnote 9: Rather than blindly fitting multiple Sérsic functions, (galaxy component)-specific functions were fit after inspecting the images and consulting with the literature, including kinematic maps.
## 3 Results
The current focus explores separating the spheroid-dominated E and ES,e galaxies10 into BCGs and non-BCGs (Graham & Sahu 2023b). Given that galaxy groups can be small in number (3-5), a brightest group galaxy (BGG) may be an ordinary E galaxy, a dusty S0 galaxy or occasionally an S galaxy if no major merging has occurred. As such, the BGGs tend not to distinguish themselves in the \(M_{\rm bh}\)-\(M_{\rm *,sph}\) diagram. As indicated in Section 1, the author has been evolving this diagram piecewise to make the changes more digestible and to better emphasise the importance of galaxy morphology and origin. Readers familiar with only a single regression line in this diagram may like to review figure 1 in Graham & Sahu (2023b), which separates the galaxies into S, S0 and E types, and figure 4 in Graham (2023b), which further separates the S0 galaxies into dust-poor and dust-rich bins. These 'dust bins' or classes are illustrated in figure 1 of Graham (2023b).11 In what follows,
Figure 1: Morphologically-aware \(M_{\rm bh}\)-\(M_{\rm *,sph}\) diagram. The ten cD and BCG (including the dusty S0 galaxy NGC 1316 and the ES,e galaxy NGC 1275), along with NGC 3377 and NGC 6251, were excluded from the fit to the (remaining) 24 E/ES,e galaxies (right-most solid red line: Eq. 1). The one BCG above this line is NGC 4486. The (non-BCG) E galaxy with the highest BH mass is NGC 1600. The dashed red line represents the BCG. The lines for the (S0 and S) disc galaxies have come from Graham (2023b). Using the S0 galaxy ‘dust bins’ (Graham 2023b), the left-most red solid line represents S0 galaxies without visible signs of dust (dust = N), while the orange dashed line additionally includes S0 galaxies with only a nuclear dust disc or ring (dust = n). The green dashed line is the orange dashed line shifted horizontally by an arbitrary log(3.5) \(\approx\) 0.54 dex, while the solid green line is a fit to the dusty (dust = Y) S0 galaxies, excluding those with only a little widespread dust (dust = y). The blue line represents the S galaxy data. Labelled galaxies were excluded from the Bayesian analyses. From left to right, the logarithmic slopes are: 2.39\(\pm\)0.81 (red line); 2.70\(\pm\)0.77 (dashed orange line); 2.27\(\pm\)0.48 (blue line); 3.69\(\pm\)1.51 (solid green line); 2.70\(\pm\)0.77 (dashed green line); and 2.00\(\pm\)0.25 (red solid and dashed line). Full equations are in Table 1, and the arrows are explained in the main text.
the BCGs (most of which are E galaxies) are separated from the ordinary (non-BCG) E galaxies.
Footnote 11: The \(M_{\rm{bh}}\)-\(M_{\rm{orb}}\) relation is also found to be \(M_{\rm{bh}}\)-\(M_{\rm{orb}}\) relation.
Figure 1 shows the ordinary E galaxies (including the ES,e galaxies described and identified in Graham & Sahu 2023b) and ten BCGs in the \(M_{\rm bh}\)-\(M_{\rm *,sph}\) diagram. Due to their ability to skew the result, two apparent outliers -- NGC 3377 (ES,e) and NGC 6251 (E), which are marked in Figure 1 and discussed in Appendix B along with other interesting outliers -- are excluded from the Bayesian analysis (method described in Davis et al. 2019) performed here on the ordinary E and ES,e galaxies. This analysis yields12
Footnote 12: Including NGC 6251 gives a logarithmic slope of 1.90\(\pm\)0.25, while including NGC 3377 and NGC 6251 gives a logarithmic slope of 1.65\(\pm\)0.22.
\[\log(M_{\rm bh}/M_{\odot})=(2.00\pm 0.25)[\log(M_{\rm *,sph}/M_{\odot})-11.32]+(8.84\pm 0.15). \tag{1}\]
This quadratic relation is dramatically different to the near-linear relation previously thought to define the coevolution of E galaxies and their BHs (Magorrian et al. 1998).
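As a quick illustration of how Eq. 1 can be applied, the following minimal Python sketch evaluates the median relation for a hypothetical spheroid mass; the function name and the example input are ours for illustration only, and the quoted uncertainties on the slope and intercept are ignored.

```python
def log_mbh_from_sph(log_msph, slope=2.00, pivot=11.32, intercept=8.84):
    """Median BH mass from Eq. 1 (non-BCG E/ES,e galaxies); masses in log10 solar units."""
    return slope * (log_msph - pivot) + intercept

# A hypothetical 10^11 M_sun spheroid gives log10(M_bh/M_sun) ~ 8.2, i.e. M_bh ~ 1.6e8 M_sun
print(log_mbh_from_sph(11.0))
```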
The bulk of the ten BCGs are seen to reside to the right of Eq. 1 in Figure 1. This is readily explained if, on average, they predominantly formed from the major merger of two E or ES,e galaxies. The situation could equally represent the merger of several S0 galaxies or some other suitable combination. The steeper than linear nature of the ordinary E/ES,e \(M_{\rm bh}\)-\(M_{\rm *,sph}\) relation (Eq. 1) results in dry mergers forming BCGs that reside to the right of, rather than on, Eq. 1. More data are required to obtain a reliable fit for the distribution of the BCGs. Therefore, a simple mass doubling from a major dry merger has been used to define the dashed red line in Figure 1, which appears broadly representative of the distribution of the BCGs.
Differing from the single S0 galaxy \(M_{\rm bh}\)-\(M_{\rm *,sph}\) relation presented in Graham & Sahu (2023a), the S0 galaxies are placed in four 'dust bins' following Graham (2023b). These bins are denoted as follows: N for no visible dust; n for only a nuclear dust disc/ring; y for weak widespread dust; and Y for strong, widespread dust features. As alluded to in the Introduction, this was established in Graham (2023b) by looking at colour images in the HLA. Roughly, nuclear discs are less than a few hundred parsecs in (radial) size, while widespread features may be 2 to 3 or more kpc in (radial) size. To a certain degree, one can expect a general sequence of increasing dust from the low-mass S0 galaxies to the spiral galaxies and on to the (wet merger)-built S0 galaxies, some of which were previously ultraluminous infrared galaxies (ULIRGs: Komossa et al. 2003; Dasyra et al. 2006). This stems from the trickle of star formation in spiral galaxies and the starbursts which formed the dusty S0 galaxies, in which metals condensed out of the interstellar medium.
The dust-poor S0 galaxies need not be gas-poor, and some may contain expansive HI envelopes (e.g., van Zee et al. 1995). Massive reservoirs of hydrogen gas are known to surround some low-mass and low surface brightness galaxies (e.g., Hoffman et al. 1993; Impey & Bothun 1997; Blitz & Robishaw 2000). Low surface brightness galaxies are also known to be metal-poor (e.g., McGaugh 1994). Such gas clouds may remain indefinitely unless an angular-momentum-robbing gravitational disturbance drives them inward to fuel a galactic metamorphosis or a passing neighbour leads to a gas bridge. These gas clouds could instead be removed via several well-known mechanisms within a group or cluster environment. The latter processes will leave a dust-poor S0 galaxy, while the first may build an S galaxy. In passing, it is noted that a dust-poor S0 galaxy's present-day mass function of stars will be a truncated and modified form of the initial mass function from 10-13 Gyrs ago. With stellar mass loss due to winds and supernovae ejecta, coupled with ram-pressure stripping within a group/cluster environment, the galaxies' stellar masses will reduce over time. Thus, the stellar orbits within the galaxies' discs will slightly expand from their initial configuration. Coupled with a faded stellar population, the surface brightnesses will be reduced. That is, this 'bloating' (in the plane) of the disc -- after consumption of the available gas at the formation epoch -- adds to the dimness of local (\(z\sim 0\)) low surface brightness and ultra-diffuse disc galaxies (and dwarf spheroidal-shaped galaxies).
In Figure 1, relations for the S0 galaxies with either no dust or strong dust features are included for reference with the relation for non-BCG E galaxies. Equations are provided in Table 1.
Connected with the \(B/T\) ratios, several \(M_{\rm bh}\)-\(M_{\rm *,gal}\) relations are shown in Appendix A. As no galaxy decompositions are required, these may be more amenable for studying ensembles of BH mergers and the associated ocean of gravitational waves they produce (Shannon et al. 2015; Amaro-Seoane et al. 2023).
## 4 Discussion
### Making tracks
Figure 1 reveals several \(M_{\rm bh}\)-\(M_{\rm *,sph}\) scaling relations, illustrating the march of galaxies to larger masses and different morphological types. There are fitted relations for non-dusty S0 galaxies (Graham 2023b), S galaxies (Davis et al. 2019; Graham & Sahu 2023a), and ordinary E/ES,e galaxies (Eq. 1). In addition, there is the trend for dusty S0 galaxies (Graham 2023b), which is offset from the relation defined by non-dusty S0 galaxies, and the trend for BCGs,
which we have just seen is offset from the relation defined by the non-BCG E/ES,e galaxies. Collectively, the adjacent relations track a sequence of increasing chaos, albeit with spirals blooming along the way. The increasing entropy, revealed through the growth of 'dynamically hot' spheroids at the expense of ordered rotating discs, results in convergence towards pure E galaxies. This evolution between and along the relations could be quantified with a chaos parameter, such as the \(B/T\) stellar mass ratio or dynamical mass ratio: \(\sigma^{2}R_{\rm sph}/V^{2}h_{\rm disc}\), where \(R_{\rm sph}\) is a suitable radius for the spheroid, \(V\) is the disc rotation at some outer radius, and \(h_{\rm disc}\) is the disc scale-length.
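For concreteness, the dynamical version of such a chaos parameter can be evaluated with the short sketch below; the function and the round, Milky-Way-like input values are ours, chosen only to illustrate that the ratio is dimensionless.

```python
def dynamical_chaos_ratio(sigma_kms, r_sph_kpc, v_kms, h_disc_kpc):
    """Dimensionless spheroid-to-disc dynamical ratio sigma^2 R_sph / (V^2 h_disc)."""
    return (sigma_kms ** 2 * r_sph_kpc) / (v_kms ** 2 * h_disc_kpc)

# Hypothetical values: sigma = 200 km/s, R_sph = 2 kpc, V = 220 km/s, h_disc = 3 kpc
print(round(dynamical_chaos_ratio(200.0, 2.0, 220.0, 3.0), 2))  # ~0.55
```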
The relatively dust-poor S0 galaxies on the left-hand side of Figure 1 might be quasi-primordial if frozen in time due to a cluster environment which stripped away their gas, eroded their satellites, and inhibited galaxy mergers due to high-speed passages within the cluster swarm. In contrast, the dusty S0 galaxies may be built from wet S+S mergers (lower blue arrow) and wet S0+S mergers (middle and upper blue arrows). Such mergers can result in gas clouds shocking against each other, falling inwards to perhaps form a new disc, and sparking galaxy-centric bursts of dusty star formation. The relatively cool atmospheres of asymptotic giant branch stars rich in carbon and oxygen, or exploding supernovae, previously sprayed metals into the interstellar medium. Initially, most of these elements stay in a gas phase, although some quickly condense into dust particles as the stellar winds/ejecta expand and cool. These refractory dust grain cores can grow substantial mantles as they enter dense metal-enriched gas clouds within the cooling interstellar medium (Draine, 2003). Indeed, the high metallicity, high dust content, and high density of neutral gas will aid the gas cooling, molecule formation, and cloud condensation. A sequence of increasing dust-to-Hi with metallicity, [O/H], can be seen in, for example, Engelbracht et al. (2008, their figure 6). The result is a somewhat distinct population of dusty, high-mass S0 galaxies.
The few dust-poor S0 galaxies overlapping with the S galaxies might be former S galaxies (Rathore et al., 2022) which have lost their dust, gas, and spiral density wave due to entering a harsh cluster environment (Yagi et al., 2010). Their satellites may then effectively evaporate, thereby contributing to the intracluster light rather than building up (the bulge of) the central galaxy (Conroy et al., 2007). Such evolution, or rather stagnation, is often expressed in terms of S galaxies fading to become S0 galaxies.
Finally, the grey and black arrows in Figure 1 denote major dry mergers, in which \(M_{\rm *,sph}\) can increase more than \(M_{\rm bh}\) if some of the progenitor galaxies' disc stars get folded into the newly wedded galaxy's spheroidal component. These arrows show transitions from S0 to E to BCG (the upper set of arrows) and from S0 to ES,e to E to BCG (the lower set of arrows). For example, the black arrow pair above NGC 1322 reflects a major dry merger of two S0 galaxies with a \(B/T\) ratio of 0.5; the end product represents a doubling of the BH mass and a quadrupling of the spheroid stellar mass. The merger remnant is an E galaxy if the orbital angular momentum cancels. Should the net angular momentum of the system not cancel, then the system will not make it across to the E sequence but fall short, forming either an ES galaxy with an intermediate-scale disc or an S0 galaxy with a dominant disc. Once a galaxy's stellar mass is great enough, at \(M_{*}\gtrsim 10^{11}\,{\rm M}_{\odot}\), it is not uncommon for galaxies to be immersed in a million-degree corona, which destroys and removes the dust and cooler star-forming gas (Benson et al., 2003; Draine, 2003, 2004; McNamara and Nulsen, 2012).
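The arrow arithmetic can be made explicit with a small sketch; it assumes two identical progenitors, simple addition of BH and stellar masses, and that every progenitor disc star ends up in the remnant spheroid (the function is ours, for illustration only).

```python
import math

def dry_merger_arrow(bt_ratio):
    """Shift (in dex) in the log(M_bh)-log(M_*,sph) plane for an equal-mass dry merger
    of two identical disc galaxies whose discs are fully folded into the remnant spheroid."""
    d_log_mbh = math.log10(2.0)               # the two BHs simply add
    d_log_msph = math.log10(2.0 / bt_ratio)   # remnant spheroid inherits both galaxies in full
    return d_log_mbh, d_log_msph

# B/T = 0.5: the BH mass doubles (+0.30 dex) while the spheroid mass quadruples (+0.60 dex)
print(dry_merger_arrow(0.5))
```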
Overlaying the galaxies' morphological type onto the \(M_{\rm bh}\)-\(M_{\rm *,sph}\) diagram has painted a new picture of the accretion history of galaxies. This augmented scaling diagram reveals the major mergers, such as (i) the transition from dust-poor S0 galaxies to dusty S0 galaxies built from wet mergers and having high \(M_{\rm *,gal}/M_{\rm bh}\) ratios, (ii) the dry merger of S0 galaxies to produce ES and E galaxies, and (iii) the merger of E galaxies (and massive S0 galaxies) to produce BCGs. Furthermore, the \(M_{\rm bh}\)-\(M_{\rm *,sph}\) diagram suggests that gas accretion and minor mergers onto dust-poor S0 galaxies have created the spiral galaxies. A cleaner representation of the data and tracks in Figure 1 is summarised through a simplified schematic in Figure 2. It captures the major trends and transitions. This is also shown pictorially in Figure 3. The S galaxy NGC 4151 shown there is a Seyfert with faint, wispy arms extending beyond the displayed frame. It is reminiscent of UGC 6614 (not in sample), which has a low surface brightness disc with extended wispy arms and a prominent bulge with an active galactic nucleus (Schombert, 1998; Das et al., 2006). UGC 6614 is HI-rich and may have partially grown by accreting a dwarf galaxy (Pickering et al., 1997).
For the S galaxies, the spiral-arm winding angle correlates with the BH mass (Seigar et al., 2008; Davis et al., 2017), offering a view of the changing S galaxy morphology along the S galaxy \(M_{\rm bh}\)-\(M_{\rm *,sph}\) relation. The low-mass S galaxies with low \(B/T\) ratios (see Appendix A) tend to have loosely wound spiral arms, while the opposite is observed at high masses. The spiral patterns in S galaxies form in discs; that is, a disc is first required. The formation of the S galaxies, sandwiched between the dust-poor and dust-rich S0 galaxies in the \(M_{\rm bh}\)-\(M_{\rm *,sph}\) diagram, suggests that gas accretion and minor mergers may be required for their emergence. A consequence is that our Milky Way galaxy was likely an S0 galaxy in the past before it merged with the _Gaia_-Sausage-Enceladus satellite galaxy 10 Gyr ago. However, if the mass ratio of merging galaxies is too close to 1, the outcome may be more destructive, producing an S0 galaxy like Centaurus A. Indeed, such an outcome is expected when the Milky Way eventually collides with the Andromeda galaxy in several Gyrs (van der Marel et al., 2012).13
Footnote 13: Given the expected dusty nature of the merger product, a more apt name than “Milkomeda” may be “Dustomeda”.
Suppose the more massive of the original, currently dust-poor, S0 galaxies had more satellites than the lower mass dust-poor S0 galaxies -- akin to the increased numbers of dark matter subhalos
Figure 2: Morphologically-aware \(M_{\rm bh}\)-\(M_{\rm *,sph}\) schematic. The progression of BH mass and ‘bulge’ mass, i.e., the stellar mass of the spheroidal component of galaxies.
observed in simulations (Moore et al., 1999; Ishiyama et al., 2013). In that case, satellite capture and integration into the central S0 galaxy may build more massive bulges in the more massive dust-poor S0 galaxies. Such growth may also contribute to the trend of the \(B/T\) ratio along the (Sa-Sb-Sc) S galaxy sequence (Graham & Worley, 2008) and, in turn, contribute to a tightening of the winding of the spiral arms generated by the density waves (Lin & Shu, 1964). Such _harvesting_ of satellites may help complete the picture of galaxy evolution, explaining why Sa galaxies have bigger \(B/T\) ratios than the less massive Sc galaxies and perhaps partly explaining the missing satellite problem (D'Onghia et al., 2010). The merger-driven galaxy evolution revealed by the BH mass scaling diagrams (Figure 1) should also aid our understanding of the (dark matter halo mass)-(galaxy stellar mass) relations as a function of galaxy type (Brouwer et al., 2021; Posti & Fall, 2021).
Due to dynamical friction (Chandrasekhar & von Neumann, 1943; Baranov & Batrakov, 1974; Tremaine et al., 1975; Inoue, 2011; Arca-Sedda & Capuzzo-Dolcetta, 2014), albeit with competing evaporative effects (Ostriker et al., 1989; Madrid et al., 2017), the currently dust-poor S0 galaxies that once resided, and may still reside, in richer globular clusters system are expected to have imprisoned more globular clusters at their centre (Capuzzo-Dolcetta, 1993). They should, therefore, contain a more massive nuclear star cluster (Arca-Sedda & Capuzzo-Dolcetta, 2014; Leaman & van de Ven, 2022). Unlike in gas-poor globular clusters, BHs can feed and grow at the centres of galaxies -- and perhaps rapidly so (Davies et al., 2011) --, and a (black hole)-(nuclear star cluster) mass relation exists (Graham, 2020). This relation may have its origins in the dust-poor S0 galaxy sequence.
### Acquisitions and mergers
Almost as soon as the first S galaxy was discovered (Rosse, 1850), it was suggested that a tidal encounter with another galaxy might produce spiral-like 'tidal arms' (Roche, 1850; Alexander, 1852; Aitken, 1906; Hoyle, 1951). In addition, infalling perturbers may induce a (transient, Baba et al., 2018; Sellwood, 2011) spiral pattern (Julian & Toomre, 1966; Dubinski et al., 2008; Kazantzidis et al., 2009) or a bar (Steinmetz & Navarro, 2002) -- which may aid some longer-lasting grand-design spirals -- and provide gas for ongoing star formation. To date, the Milky Way galaxy appears to have had at least one significant merger 10 Gyr ago, with the Gaia Sausage-Enceladus satellite (Helmi et al., 1999; Belokurov et al., 2018; Helmi et al., 2018; Gallart et al., 2019), and perhaps an even greater merger before that (Horta et al., 2021). This is in addition to many increasingly lesser mergers involving, for example, the Sagittarius and Canis Major satellite galaxies (Martinez-Delgado et al., 2001; Kruijssen et al., 2019), and perhaps explaining Gould's Belt (Bekki, 2009a). Indeed, data from the ESA Gaia satellite has revealed that the disc of our Galaxy is notably unsettled (Antoja et al., 2018; Gaia Collaboration et al., 2018; Bland-Hawthorn & Tepper-Garcia, 2021). Furthermore, disrupted dwarf or satellite galaxies are now routinely seen around S galaxies (Martinez-Delgado et al., 2008, 2010; Javanmardi et al., 2016; Mao et al., 2021).
The extent to which the low-mass S0 galaxies have 'harvested' systems from their neighbourhood, and undergone star formation, could be substantial given the factor of four difference in galaxy stellar mass seen between the dust-poor S0 galaxies and the spiral galaxies at fixed BH mass (see Appendix A). However, there may have been considerable retardation of growth in the dust-poor S0 galaxies located in clusters, and groups, due to a curtailed supply of gas and stripping of stars (Gunn & Gott, 1972; Kawata & Mulchaey, 2008; Bekki, 2009b). Furthermore, the early-type spiral (Sa/Sb) galaxies with big bulges may have been built by a major merger followed by disc-building (Steinmetz & Navarro, 2002; Hammer et al., 2009). In contrast, the ES,b galaxies may not have experienced the subsequent disc re-growth that these early-type S galaxies did.
It stands to reason that our Milky Way galaxy, perhaps first recognised as a spiral system 170 years ago (Alexander, 1852), was not born an S galaxy but was previously an S0 galaxy. This conclusion is supported by a myriad of stellar chemical and kinematic information (Matsuno et al., 2019; Di Matteo et al., 2020). The abundance of disc galaxies seen at high redshifts by Ferreira et al. (2022) using the _James Webb Space Telescope (JWST)_ also supports a picture of early disc galaxy formation. S galaxies would then consist of an old S0 galaxy disc, in which a thin disc has formed, and a spiral emerged (Yuan et al., 2017), albeit with the ongoing competition with mergers (and spiral arms) which can dynamically heat and thicken a disc (Toth & Ostriker, 1992; Dubinski
Figure 3: Intergalactic speciation. Panel a) Dust-poor S0 galaxy NGC 4762 (HST Prop. 9401. PI. PCote. F850LP/F475W ACS/WFC). Panel b) S galaxy NGC 4151 (HST Prop. 13765. PI. B.Peterson. F814W/F350LP WFC3/UVIS. STScI/NASA, ESA, Joseph DePasquale). Panel c) Dust-rich S0 galaxy NGC 4594 (NASA and the Hubble Heritage Team. STScI/AURA). Panel d) E galaxy NGC 1407 (HST Prop. 9427. PI. W.Harris. F814W/F435W ACS/WFC). The white stripe in panel A is due to the camera join.
et al., 2008; Kawata et al., 2018). It would be interesting to learn, through JWST observations, when the spiral patterns formed in the S0 galaxies, presumably marking when a cold gas disc formed and a density wave emerged from the differential rotation of the galaxy disc.
Cosmological simulations reveal an abundance of satellites contributing to the growth of galaxies (e.g., Chua et al., 2017; Engler et al., 2021; Dillamore et al., 2022). The star cluster Nikhul, in the disturbed S galaxy NGC 4424, is thought to represent the remains of a disrupted dwarf early-type galaxy that may be delivering a BH into NGC 4424 (Graham et al., 2021), and potentially generating gravitational waves should a second massive BH already reside there (Brown et al., 2007; Mandel & Gair, 2009). Perhaps the Gaia Sausage-Enceladus satellite stream also contained a migrant BH brought into our Galaxy.
In Figure 4, the dS0 galaxies are added to the low-mass end of the dust-poor S0 (not S) galaxy sequence, where one observes the spiralless disc galaxies with small bulges and \(M_{*}<10^{10}\,{\rm M}_{\odot}\) (see Appendix A). If galaxies like IC 335, NGC 4452, and NGC 5866 (not in sample) are edge-on, near-bulgeless S0 galaxies, then some face-on examples may resemble low surface brightness galaxies, including ultra-diffuse galaxies (UDGs: Sandage & Binggeli, 1984; Henkel et al., 2017). They may follow the curved size-(stellar mass) relation for early-type galaxies (Graham, 2019) to lower masses and larger galaxy half-light radii.14 Some dS0 galaxies have been detected with faint spiral arms (Jerjen et al., 2000; Barazza et al., 2002; Graham et al., 2003b). Rather than being faded dwarf S galaxies, which are rare (Sandage & Binggeli, 1984), they may be dwarf lenticular (dS0) galaxies attempting the transition to a late-type S galaxy (Corbin & Vacca, 2002; Graham et al., 2017) but retarded by lower numbers of satellites and reduced gas accretion.
Footnote 14: In the absence of a centrally concentrated bulge, the galaxy size becomes the disc size.
## 5 Placing developments in context
One of the most well-known diagrams in astronomy is the 'Hubble sequence', also known as the 'Hubble tuning fork', reviewed in Graham (2019) along with other galaxy morphology schemata. The sequence was initially thought to be evolutionary, based on the 'nebular hypothesis' from the 1700s. A legacy of that hypothesis is the 'early-type' and 'late-type' galaxy nomenclature used over the past century for the E/S0 and S galaxies, respectively, plus the early- and late-type spiral galaxy designation (e.g., Lundmark, 1925, p.867). Originally, the Sa/Sb galaxies were thought to form before the Sc/Sd galaxies (Jeans, 1919; Reynolds, 1921, 1925). The S0 galaxies were introduced later and positioned before the Sa galaxies, or rather, between the E and Sa galaxies to give the sequence: (E0-E3)-S0-(Sa-Sc), where the ellipticity of the E galaxies is denoted by \(1-b/a\), with \(b/a\) the observed axis ratio and thus apparently round galaxies labelled E0. However, in terms of an increasing mass build-up, Figure 1 (and Figure 2) reveal how one encounters the so-called 'late-type S' galaxies (Sc/Sd), known to have smaller BH masses, before the 'early-type S' galaxies (Sa/Sb), known to have larger BH masses. Although, within the 'down-sizing' scenario (Cowie et al., 1996), the higher mass S galaxies might finish forming first.
This section provides a brief overview of the significant advances which have led to the emergence of a new, triangular-like galaxy sequence presented in Figure 4 and detailed further in Figure 1. Dubbed the 'Triangal', it reveals the morphological connections and, for the first time, the merger-induced evolutionary pathways responsible for galactic speciation. It identifies, and recognises the significance of, the dust-poor and dust-rich S0 galaxies.
Figure 4 reflects that a galaxy's collisional record is evident from its morphology. As for the underlying, merger-driven, morphology-dependent \(M_{\rm bh}\)-\(M_{\rm *,sph}\) scaling relations, BHs may now seem somewhat akin to passengers, carried along by major mergers in which the redistribution of disc stars, and the increase in orbital entropy, leads to the step-change creation, i.e., 'punctuated equilibrium', of more massive spheroids and the transition to a new species of galaxy. Major wet mergers are, however, also associated with star formation and BH growth. Gaseous processes may be restricted to producing movement along the individual quadratic or steeper \(M_{\rm bh}\)-\(M_{\rm *,sph}\) relations, yielding a kind of 'gradualism' rather than evolution off any morphology-dependent relation.
Derived from the morphologically-aware \(M_{\rm bh}\)-\(M_{\rm *,sph}\) diagram (Figure 1), the schematic in Figure 4 captures elements of not just the 'Hubble sequence' but also the van den Bergh trident, introduced within the Revised David Dunlap Observatory system (van den Bergh, 1976), which was later re-expressed as the ATLAS\({}^{\rm 3D}\) method (Cappellari et al., 2011). Since the van den Bergh trident was introduced, there have been several significant developments, two of which trace back to work by Sidney van den Bergh. First was the realisation that many early-type galaxies contain discs (Capaccioli, 1990; Rix & White, 1990) such that low-luminosity15 early-type galaxies are S0 galaxies rather than E galaxies (van den Bergh, 1990). The abundance of rotating discs was later witnessed through kinematic information (Graham et al., 1998; Emsellem et al., 2011). Second was the realisation that there are two subtypes of S0 galaxy: low- and high-luminosity (van den Bergh, 1990), with the origin of the high-luminosity S0 galaxies now known to be due to wet mergers (Graham, 2023b). These dusty high-luminosity S0 galaxies are not faded S galaxies (Spitzer & Baade, 1951) -- an idea which partly motivated the van den Bergh trident (van den Bergh, 1976) -- but S+S (Bekki, 1998; Naab & Burkert, 2003; Querejeta et al., 2015) or S+S0 or (cold-gas rich but dust-poor) S0+S0 merger remnants. This accounts for why they are more massive than the S galaxy population (van den Bergh, 1990; Burstein et al., 2005). The sequence of dust-poor S0 galaxies seen in Figure 1 is also not faded S galaxies but rather failed S galaxies that never were.
Footnote 15: Absolute magnitude \(M_{B}>-20\) mag, H\({}_{0}\)=50 km s\({}^{-1}\) Mpc\({}^{-1}\).
The S0 galaxies are not the lynchpin they were initially thought to be (Reynolds, 1925; Hubble, 1936); that description seems more apt of the ES galaxies (Liller, 1966), which are both 'fast rotators' and 'slow rotators', backtracking on themselves in the modified spin-ellipticity diagram for galaxies (Bellstedt et al., 2017). The ES galaxies are a bridging population between the E and S0 galaxies, while the S galaxies are now a bridging population between the dust-poor and dust-rich S0 galaxies.
Recognising that S0 galaxies are not simply a single bridging population (E0-E3)-S0-(Sa-Sc), nor are they a single low-(spiral strength) side to the disc galaxy distribution of \(B/T\) ratio and spiral strength (van den Bergh, 1976), alleviates a long-standing mystery. While the dusty S0 galaxies are a merger-built bridging population between the S and E galaxies, the non-dusty S0 galaxies form both a low-mass extension of the E galaxies -- along a sequence of changing \(B/T\) ratio and specific angular momentum (Bender, 1988;
Capaccioli & Caon 1992) -- and provide a population for accretion, minor mergers, and the development of spiral structures.
In addition to the \(B/T\) ratio (Figure 19) -- known to correlate with the bulge mass (e.g. Graham & Worley 2008) -- future work will explore the location in the \(M_{\rm bh}\)-\(M_{\rm *,sph}\) diagram of S galaxies with different arm strengths (van den Bergh 1976). After checking if systems with weak/anemic arms preferentially reside on one side of the S galaxy sequence, the location of disc galaxies with strong/weak/no bars (de Vaucouleurs 1959) will be examined. Furthermore, it may be insightful to explore if other features, such as ring-shape versus spiral-shape (de Vaucouleurs 1959) or the wealth of fine detail captured by the 'Comprehensive de Vaucouleurs revised Hubble-Sandage' system (Buta et al. 2007), occur in galaxies preferentially occupying a specific part of the diagram.
## Acknowledgments
This paper is dedicated to the memory of Troy Charles Smith (1973 February 8-2023 February 2), a great friend and neighbour to interact with, and whose "but what about..." remarks would prompt one to query the status quo. The author warmly thanks Drs Denis W. Coates and Vale Keith Thompson, formerly at Monash University, for past discussions. Part of this research was conducted within the Australian Research Council's Centre of Excellence for Gravitational Wave Discovery (OzGrav) through project number CE170100004. This work has used the NASA/IPAC Infrared Science Archive (IRSA), the NASA Extragalactic Database (NED), NASA's Astrophysics Data System (ADS) Bibliographic Services, and the Hubble Legacy Archive (HLA).
## 6 Data availability
The data for this investigation consists of published (Graham & Sahu 2023a) black hole masses, spheroid (and galaxy) stellar masses, along with the galaxies' morphological type, including whether the E galaxies are BCG or cD (Graham & Sahu 2023b). The spheroid masses are obtained from published multicomponent decompositions, which separate bars and inner discs from the spheroids (Savorgnan & Graham 2016; Davis et al. 2019; Sahu et al. 2019a; Graham & Sahu 2023b).
|
2307.15259 | Square Functions for Ritt Operators in $L^1$ | $T$ is a Ritt operator in $L^p$ if $\sup_n n\|T^n-T^{n+1}\|<\infty$. From
\cite{LeMX-Vq}, if $T$ is a positive contraction and a Ritt operator in $L^p$,
$1<p<\infty$, the square function
$\left( \sum_n n^{2m+1} |T^n(I-T)^{m+1}f|^2 \right)^{1/2}$ is bounded. We
show that if $T$ is a Ritt operator in $L^1$, \[Q_{\alpha,s,m}f=\left( \sum_n
n^{\alpha} |T^n(I-T)^mf|^s \right)^{1/s}\] is bounded in $L^1$ when $\alpha+1<sm$,
and examine related questions on variational and oscillation norms. | Jennifer Hults, Karin Reinhold-Larsson | 2023-07-28T01:59:26Z | http://arxiv.org/abs/2307.15259v3 | # Square Functions for Ritt Operators in \(L^{1}\)
###### Abstract
\(T\) is a Ritt operator in \(L^{p}\) if \(\sup_{n}n\|T^{n}-T^{n+1}\|<\infty\). From [10], if \(T\) is a positive contraction and a Ritt operator in \(L^{p}\), \(1<p<\infty\), the square function \(\left(\sum_{n}n^{2m+1}|T^{n}(I-T)^{m+1}f|^{2}\right)^{1/2}\) is bounded. We show that if \(T\) is a Ritt operator in \(L^{1}\),
\[Q_{\alpha,s,m}f=\left(\sum_{n}n^{\alpha}|T^{n}(I-T)^{m}f|^{s}\right)^{1/s}\]
is bounded in \(L^{1}\) when \(\alpha+1<sm\), and examine related questions on variational and oscillation norms.
## 1 Introduction
Let \((X,\beta,m)\) be a non-atomic, separable probability space and \(\tau\) an invertible measure-preserving transformation on \((X,\beta,m)\). A probability measure \(\mu\) on \(\mathbb{Z}\) defines the contraction operator \(\tau_{\mu}f(x)=\sum_{k}\mu(k)f(\tau^{k}x)\), for \(x\in X\), \(f\in L^{p}(X)\), with \(p\geq 1\). In [2], Bellow, Jones and Rosenblatt showed that if the measure \(\mu\) satisfies the bounded angular condition
\[\sup_{|t|<1/2}\frac{|1-\hat{\mu}(t)|}{1-|\hat{\mu}(t)|}<\infty,\]
then both the maximal function \(Mf=\sup_{n\geq 1}|\tau_{\mu}^{n}f|\) and the square function \(Sf=(\sum_{n}n|\tau_{\mu}^{n}f-\tau_{\mu}^{n+1}f|^{2})^{1/2}\) are bounded operators in \(L^{p}\); and the powers \(\tau_{\mu}^{n}f(x)\) converge almost everywhere for every \(f\in L^{p}\) (\(1<p<\infty\)).
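As a numerical illustration (not taken from [2]), the following sketch evaluates the angular ratio on a grid for two toy measures: a lazy symmetric walk, which is centered with finite second moment and keeps the ratio bounded, and the plain symmetric walk, whose ratio blows up as \(|t|\to 1/2\). The function name, grid, and test measures are our own choices.

```python
import numpy as np

def angular_ratio(mu, t):
    """|1 - mu_hat(t)| / (1 - |mu_hat(t)|) with mu_hat(t) = sum_k mu(k) exp(-2*pi*i*k*t)."""
    mu_hat = sum(p * np.exp(-2j * np.pi * k * t) for k, p in mu.items())
    return np.abs(1 - mu_hat) / (1 - np.abs(mu_hat))

t = np.linspace(1e-4, 0.5 - 1e-4, 2000)   # avoid t = 0, where mu_hat = 1
lazy = {-1: 0.25, 0: 0.5, 1: 0.25}        # bounded angular ratio (identically 1 here)
simple = {-1: 0.5, 1: 0.5}                # unbounded: the denominator vanishes near t = 1/2
print(angular_ratio(lazy, t).max(), angular_ratio(simple, t).max())
```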
\[\|\{\tau_{\mu}^{n}f\}\|_{v(s)} =\sup_{\{n_{k}\}}\left(\sum_{k}|\tau_{\mu}^{n_{k}}f-\tau_{\mu}^{n _{k+1}}f|^{s}\right)^{1/s}\text{ and }\] \[\|\{\tau_{\mu}^{n}f\}\|_{o(s)} =\left(\sum_{k}\sup_{\{m_{k}\leq n,m\leq m_{k+1}\}}|\tau_{\mu}^{ n}f-\tau_{\mu}^{m}f|^{s}\right)^{1/s},\]
where the supremum in the latter is taken over all non-decreasing sequences \(\{m_{k}\}\) of positive integers, are bounded in \(L^{2}\) for any \(s>2\).
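To make the variation norm concrete, here is a small, self-contained sketch (our own, for illustration) that computes the \(v(s)\) norm of a finite numerical sequence by brute-force dynamic programming over all increasing index chains.

```python
def s_variation(a, s):
    """||a||_{v(s)} = sup over increasing indices n_1 < n_2 < ... of
    (sum_k |a_{n_k} - a_{n_{k+1}}|^s)^{1/s}, via O(N^2) dynamic programming."""
    best = [0.0] * len(a)   # best[j]: largest sum of |differences|^s over chains ending at j
    for j in range(len(a)):
        for i in range(j):
            best[j] = max(best[j], best[i] + abs(a[j] - a[i]) ** s)
    return max(best) ** (1.0 / s) if a else 0.0

# For a monotone sequence and s >= 1 the supremum is attained by the two endpoints alone:
print(s_variation([1.0, 0.5, 0.25, 0.125], s=3))   # ~0.875
```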
The natural question was whether properties of the contraction operator \(\tau_{\mu}\) also hold for other types of contractions. Let \(Y\) be a Banach space. We say that \(T\in\mathcal{L}(Y)\) satisfies the **resolvent condition** if there exists a constant \(C\) such that
\[|z-1|\|(T-z)^{-1}\|\leq C\text{ for all }|z|>1.\]
Ritt [17] proved that the resolvent condition implies the operator is power bounded, \(\sup_{n\geq 1}\|T^{n}\|<\infty\). Nagy and Zemanek [15], and independently Lyubich [14] (see also [16, 8]), showed that the resolvent condition, together with \(\sigma(T)\) contained in the closed unit disk, is equivalent to the operator \(T\) being power bounded and satisfying \(\sup_{n}n\|T^{n}-T^{n+1}\|<\infty\).
**Definition 1.1**.: \(T\in\mathcal{L}(Y)\) _is a Ritt operator if \(T\) is power bounded and \(\sup_{n}n\|T^{n}-T^{n+1}\|<\infty\)._
It turns out that \(T\) is a Ritt operator if there is \(\gamma\in(0,\pi/2)\) such that its spectrum is included in the closure of a Stolz domain \(B_{\gamma}\) of the unit disk, that is, the interior of the convex hull of \(1\) and the disc \(D(0,\sin\gamma)\). In other words, \(\sigma(T)\) satisfies the bounded angular condition. See [8, 14, 16, 15].
**Lemma 1.2**.: \(T\) _is a Ritt operator if and only if there exists an angle \(\gamma\in(0,\pi/2)\) such that \(\sigma(T)\subset B_{\gamma}\) and for any \(\beta\in(\gamma,\pi/2)\), the set \(\{(z-1)(z-T)^{-1}:z\not\in\bar{B}_{\beta}\}\) is bounded._
This characterization allowed Blunck [3] to prove an interpolation theorem for Ritt operators.
**Theorem 1.3**.: _(Theorem 1.1 [3]) Let \(p,q\geq 1\) and \(T\in\mathcal{L}(L^{p})\) be power bounded and Ritt in \(L^{p}\). If \(T\) is power bounded on \(L^{q}\) then \(T\) is power bounded and Ritt on \(L^{r}\) for any \(r\) strictly between \(p\) and \(q\)._
Thus, for \(T\) an \(L^{1}-L^{\infty}\) contraction, if \(T\) is Ritt in one \(L^{p}\), \(p\geq 1\), it is Ritt in all \(L^{q}\), \(1<q<\infty\). In particular, if \(T\) is Ritt in \(L^{2}\) it is Ritt in all \(L^{p}\), \(1<p<\infty\). Hence, the operator being Ritt in \(L^{1}\) does not immediately follow from being Ritt in another \(L^{p}\).
With the resolvent condition, Le Merdy [8] developed an \(H^{\infty}\) functional calculus for Ritt operators, yielding new insights regarding the convergence of powers and maximal and square function estimates [8, 9, 10]. In particular, Le Merdy and Xu [10] (Prop. 4.1, Thm. 4.4 & Thm. 5.6) established the following implications for Ritt operators in \(L^{p}\), \(1<p<\infty\), connecting the Ritt property with the convergence of a square function.
**Theorem 1.4**.: _Let \((X,m)\) be a \(\sigma\)-finite measure space, \(1<p<\infty\), and \(T\) a positive contraction of \(L^{p}(X,m)\). If \(\sup_{n}n\|T^{n}-T^{n+1}\|<\infty\), then, for any fixed integer \(m\geq 0\) and any real number \(s>2\),_
* \(\left(\sum_{n}(n+1)^{2m+1}|T^{n}(I-T)^{m+1}f|^{2}\right)^{1/2}\)_,_
* \(\|\{T^{n}f\}\|_{v(s)}\)_, and more generally_ \(\|\{n^{m}T^{n}(I-T)^{m}f\}\|_{v(s)}\)_,_
* _for any increasing sequence_ \(\{n_{k}\}\)_,_ \(\|\{T^{n}f\}\|_{o(2)}\)_, and more generally_ \(\|\{n^{m}T^{n}(I-T)^{m}f\}\|_{o(2)}\)_,_
_are bounded in \(L^{p}\)._
Cohen, Cuny and Lin [4] completed the study by proving the equivalence of the Ritt property, a spectral condition, and the boundedness of the square function.
**Theorem 1.5**.: _Let \((X,m)\) be a \(\sigma\)-finite measure space, \(1<p<\infty\) and \(T\) a positive contraction on \(L^{p}(X,m)\). Then the following are equivalent:_
* \(\sup_{n}n\|T^{n}-T^{n+1}\|<\infty\)_,_
* _there exists a constant_ \(C_{p}>0\) _such that_ \(\|(\sum_{n}n|T^{n}f-T^{n+1}f|^{2})^{1/2}\|_{p}\leq C_{p}\|f\|_{p}\)_,_
* _there exists a closed Stolz region_ \(\Sigma\) _and a constant_ \(K>0\) _such that_ \(\|u(T)\|\leq K\sup_{z\in\Sigma}|u(z)|\) _for every rational function_ \(u\) _with poles outside_ \(\Sigma\)_._
Results in \(L^{1}\) turned out to be more elusive. Bellow and Calderon [1] showed that for \(\tau_{\mu}\), the maximal function \(Mf\) is weak (1,1) for centered measures \(\mu\) with finite second moment. Such measures have bounded angular ratio. Losert [12, 13] constructed measures \(\mu\) without bounded angular ratio for which pointwise convergence of \(\tau_{\mu}^{n}\) failed. Wedrychowicz [18] gave conditions for \(Mf\) to be weak (1,1) for measures with bounded angular ratio but without a finite second moment.
Next we extend \(\tau_{\mu}\) to a broader setting.
**Definition 1.6**.: _Let \(T\in\mathcal{L}(Y)\) be a power bounded linear operator and \(\mu\) a finite signed measure on the integers. If \(supp(\mu)\not\subset\mathbb{Z}^{+}\cup\{0\}\), we assume \(T\) is invertible and \(\sup_{n\in\mathbb{Z}}\|T^{n}\|<\infty\). We define the operator induced by \(\mu\) as_
\[T_{\mu}f=\sum_{k}\mu(k)T^{k}f.\]
Note that with the restrictions on the operator \(T\), \(T_{\mu}f\) is well defined for any \(f\in Y\). In most of the paper, the Banach space is \(L^{1}\) or \(L^{p}\) depending on the context.
Dungey [6] proved that \(T_{\mu}\) is Ritt in \(L^{1}\) for measures supported on the positive integers satisfying a spectral property (M1) that guarantees bounded angular ratio.
**Theorem 1.7**.: _(Dungey Theorem 4.1 [6]) Let \(\mu\) be a probability measure on \(\mathbb{Z}^{+}\) for which there exists \(0<\alpha<1\) such that (i) \(|Re\,\hat{\mu}(t)|\leq 1-c|t|^{\alpha}\), and (ii) \(|\hat{\mu}^{\prime}(t)|\leq c\,|t|^{\alpha-1}\) for \(0<|t|\leq 1/2\). Then \(T_{\mu}\) is Ritt in \(L^{1}\)._
Inspired by [18] and [6], Cuny [5] considered more general spectral conditions (M2) which apply to measures with support on \(\mathbb{Z}\) and yield weak type inequalities.
**Theorem 1.8**.: _[_5_]_ _Let \(\mu\) be a probability measure on \(\mathbb{Z}\) such that its Fourier transform \(\hat{\mu}(t)=\sum_{k}\mu(k)e^{-2\pi ikt}\) is twice continuously differentiable on \(0<|t|<1\) and such that there exists a continuous function \(h(t)\) on \(|t|\leq 1\) with \(h(0)=0\), \(h(-t)=h(t)\), continuously differentiable on \(0<|t|<1\) satisfying the following conditions: (i) \(|\hat{\mu}(t)|\leq 1-c\,h(t)\), (ii) \(|t\,\hat{\mu}^{\prime}(t)|\leq c\,h(t)\), (iii) \(|\hat{\mu}^{\prime}(t)|\leq c\,h^{\prime}(t)\), and (iv) \(|t\,\hat{\mu}^{\prime\prime}(t)|\leq c\,h^{\prime}(t)\). Let \(T\) be the shift in \(\ell^{1}(\mathbb{Z})\). Then \(m(x\in\mathbb{Z}:\sup_{n}|T^{n}_{\mu}f(x)|>\lambda)\leq\frac{C}{\lambda}\|f\|_{1}\) and \(T_{\mu}\) is weak-\(\ell^{1}\)-Ritt: \(m(x\in\mathbb{Z}:\sup_{n}n^{m}|(T^{n}_{\mu}-T^{n+m}_{\mu})f(x)|>\lambda)\leq\frac{C_{m}}{\lambda}\|f\|_{1}\) for any \(f\in\ell^{1}(\mathbb{Z})\). If in addition, \(h\) satisfies (v) \(h(t)\leq cth^{\prime}(t)\) for \(0<t<1\), then \(\sup_{n}n\|\mu^{n}-\mu^{n+1}\|_{l^{1}}<\infty\)._
If \(0<\alpha<1\), we define \((I-T)^{\alpha}\) by considering the series expansion for
\[(1-x)^{\alpha}=1-\sum_{k\geq 1}g(\alpha,k)x^{k},\]
for \(|x|\leq 1\). That is, \((I-T)^{\alpha}=I-\sum_{k\geq 1}g(\alpha,k)T^{k}\). The coefficients satisfy \(g(\alpha,k)=\frac{\alpha|\alpha-1|\ldots|\alpha-k+1|}{k!}\geq 0\) and \(\sum_{k}g(\alpha,k)=1\). See [6], [11]. In other words, \(I-(I-T)^{\alpha}=T_{\nu_{\alpha}}\) where \(\nu_{\alpha}\) is the probability measure on \(\mathbb{Z}^{+}\) with
\[\nu_{\alpha}(k)=g(\alpha,k). \tag{1.1}\]
By separating the integer part from the fractional part, we can define \((I-T)^{m}\) for any real \(m>0\). In the particular case of \(T_{\mu}\), we note that
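A short numerical sketch of these coefficients (our own illustration) uses the recurrence \(g(\alpha,k+1)=g(\alpha,k)\,(k-\alpha)/(k+1)\), which follows from the product formula, and checks that they are non-negative and sum to \(1\).

```python
def g_coeffs(alpha, kmax):
    """g(alpha,k), k = 1..kmax, from (1-x)^alpha = 1 - sum_{k>=1} g(alpha,k) x^k."""
    g = [alpha]                                   # g(alpha,1) = alpha
    for k in range(1, kmax):
        g.append(g[-1] * (k - alpha) / (k + 1))   # g(alpha,k+1) = g(alpha,k)(k-alpha)/(k+1)
    return g

g = g_coeffs(alpha=0.5, kmax=100000)
print(min(g) >= 0.0, sum(g))   # True, and the partial sum creeps towards 1 (the tail is ~ k^{-alpha})
```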
\[T^{n}_{\mu}(I-T_{\mu})^{m}f=T_{\nu_{n,m}}f\]
where \(\nu_{n,m}\) is a signed measure on \(\mathbb{Z}\) satisfying \(\hat{\nu}_{n,m}(t)=\hat{\mu}^{n}(t)(1-\hat{\mu}(t))^{m}\).
**Definition 1.9**.: _For \(s,m>0\), define the "generalized square functions" (associated with the operator \(T\)) as:_
\[Q_{\alpha,s,m}f=Q^{T}_{\alpha,s,m}f=\left(\sum_{n=1}^{\infty}n^{\alpha}|T^{n} (I-T)^{m}f|^{s}\right)^{1/s}.\]
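The definition is straightforward to evaluate in a toy setting. The sketch below is ours: finite stochastic matrices merely stand in for the positive contractions of interest, \(m\) is taken to be a positive integer, and the sum is truncated. The parameters are chosen so that \(sm>\alpha+1\), matching the regime of the main theorem below.

```python
import numpy as np

def Q(T, f, alpha, s, m, n_max=2000):
    """Truncated Q_{alpha,s,m} f = (sum_{n=1}^{n_max} n^alpha |T^n (I-T)^m f|^s)^{1/s},
    for a stochastic matrix T acting on a vector f and integer m >= 1."""
    g = np.linalg.matrix_power(np.eye(T.shape[0]) - T, m) @ f   # (I-T)^m f
    acc = np.zeros_like(f, dtype=float)
    for n in range(1, n_max + 1):
        g = T @ g                                               # g is now T^n (I-T)^m f
        acc += n ** alpha * np.abs(g) ** s
    return acc ** (1.0 / s)

# Toy example with alpha = 1, s = 3, m = 1 (so s*m > alpha + 1)
T = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
f = np.array([1.0, 0.0, 0.0])
print(Q(T, f, alpha=1, s=3, m=1))
```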
To prove the boundedness of \(Q^{T}_{\alpha,s,m}f\) for general Ritt operators in \(L^{1}\), we first study the case when \(T=T_{\mu}\). In this case, we work with probability measures for which \(T_{\mu}\) is Ritt in \(L^{1}\).
We say that a measure \(\nu\) on \(\mathbb{Z}\) satisfies condition M1 if \(\hat{\nu}\) is continuously differentiable on \(0<|t|<1\), and there exists \(a>0\) such that \(|\hat{\nu}(t)|\leq 1-c|t|^{a}\) (for some \(c>0\)) and \(|\hat{\nu}^{\prime}(t)|\lesssim|t|^{a-1}\), for \(0<|t|<1\). We say \(\nu\) satisfies condition M2 if it satisfies conditions (i) to (v) of Theorem 1.8.
Either condition guarantees the measure has bounded angular ratio. Indeed, if \(|\hat{\mu}(t)|\leq 1-c\,h(t)\) and \(|\hat{\mu}^{\prime}(t)|\leq c\,h^{\prime}(t)\), then
\[|1-\hat{\mu}(t)|=\Big{|}\int_{0}^{t}\hat{\mu}^{\prime}(u)\,du\Big{|}\leq c\int_{0}^{|t|}h^{\prime}(u)\,du=c\,h(t)\leq 1-|\hat{\mu}(t)|.\]
There are many examples of measures satisfying M1 or M2. If \(\mu\) is a centered measure with finite second moment, then it satisfies M1 with \(a=2\). The next example, due to Dungey [6], exhibits a non-centered measure without a finite first moment.
**Example 1.10**.: _For fixed \(0<\alpha<1\), let \(\nu_{\alpha}\) be the probability measure on \(\mathbb{Z}^{+}\) defined in (1.1). Then \(\hat{\nu}_{\alpha}(t)=1-(1-e^{2\pi it})^{\alpha}\). By Proposition 3.3 of [5], \(\nu_{\alpha}\) has bounded angular ratio and satisfies property M1. There are \(c,c^{\prime}>0\) such that_
\[|\hat{\nu}_{\alpha}(t)|\leq 1-\frac{|1-\hat{\nu}_{\alpha}(t)|}{c}\leq 1-c^{\prime}|t|^{\alpha}.\]
_Moreover, with this property and Theorem 1.7, (Theorem 1.1 [6]), \(T_{\nu_{\alpha}}=I-(I-T)^{\alpha}\) is a Ritt operator._
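A quick numerical sanity check of this bound (our own, with an arbitrary grid and \(\alpha=1/2\)) is the following; the computed infimum of the ratio plays the role of the constant \(c^{\prime}\).

```python
import numpy as np

alpha = 0.5
t = np.linspace(1e-4, 0.5, 4000)
nu_hat = 1 - (1 - np.exp(2j * np.pi * t)) ** alpha   # \hat{nu}_alpha(t) from Example 1.10
ratio = (1 - np.abs(nu_hat)) / np.abs(t) ** alpha    # should stay bounded away from 0
print(ratio.min() > 0, ratio.min())
```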
More examples of measures satisfying M2 can be found in [5]. Now we are ready for the main result.
**Theorem 1.11**.: _Let \(\mu\) be a probability measure on \(\mathbb{Z}\) satisfying condition M1 or M2. If \(m>0\), and \(sm>\alpha+1\), then \(Q_{\alpha,s,m}^{T_{\mu}}f\) is a bounded operator in \(L^{1}\)._
_Remark_.: Note that, by Theorem 1.5, \(Q_{1,2,1}f\) is bounded in \(L^{p}\) for any \(p>1\), but in \(L^{1}\), \(Q_{1,s,1}f\) is bounded for \(s>2\). And in general, when \(m=\alpha\), \(Q_{m,s,m}f\) is bounded in \(L^{1}\) for \(s>1+1/m\).
Also, by Theorem 1.4, \(Q_{2m+1,2,(m+1)}f\) is bounded in \(L^{p}\) for any \(p>1\). But in \(L^{1}\), \(Q_{2m+1,s,(m+1)}f\) bounded for any \(s>2\), and \(Q_{\alpha,2,(m+1)}f\) bounded in \(L^{1}\) for any \(\alpha<2m+1\). When \(m=1\), \(Q_{\alpha,s,1}f\) is bounded in \(L^{1}\) for \(s>1+\alpha\).
_Open question_.: Is \(Q_{\alpha,s,m}f\) bounded in \(L^{1}\) for \(sm=\alpha+1\)? Is it weak (1,1) in \(\ell^{1}\)?
Proposition 6.1 and Corollary 6.2 in [4] show that for a power bounded Ritt operator \(T\) in \(L^{p}\) (\(1<p<\infty\)), \(\sup_{n\geq 1}n^{m}\|T^{n}(I-T)^{m}\|<\infty\). And, by Theorem 1.8, \(\sup_{n}n^{m}|T^{n}_{\mu}(I-T_{\mu})^{m}f|\) is weak (1,1) (when \(T\) is the shift operator in \(\ell^{1}\)). The next corollary shows that for a factor slightly below \(n^{m}\), the maximal function is bounded.
**Corollary 1.12**.: _Let \(\mu\) be as in 1.11 and \(0\leq\alpha<m\). Then_
1. \(\sup_{n}n^{\alpha}|T^{n}_{\mu}(I-T_{\mu})^{m}f|\) _is bounded in_ \(L^{1}\)_._
2. _Let_ \(1<p<\infty\) _and_ \(T\) _a positive contraction on_ \(L^{p}(X)\)_. If_ \(T\) _is a Ritt operator in_ \(L^{p}\)_, then_ \(\sup_{n}\frac{n^{m}}{\sqrt{\ln n}}|T^{n}(I-T)^{m}f|\) _is bounded in_ \(L^{p}\)_._
_In either case, \(\lim_{n\to\infty}n^{\alpha}|T^{n}_{\mu}(I-T_{\mu})^{m}f|=0\) in norm and a.e._
Theorem 1.4 showed that \(\|n^{m}T^{n}_{\mu}(I-T_{\mu})^{m}f\|_{v(s)}\) and \(\|n^{m}T^{n}_{\mu}(I-T_{\mu})^{m}f\|_{o(s)}\) are bounded in \(L^{p}\), for \(s>2\) and \(p>1\), even in the case \(m=0\). In \(L^{1}\), we obtained the following variations and oscillations results.
**Proposition 1.13**.: _Let \(\mu\) be as in 1.11. Let \(s\geq 1\), \(m>0,\) and \(\beta\geq 0\) be fixed. If \(\beta=0\) or if \(s(m-\beta)>1\), then both \(\|n^{\beta}T^{n}_{\mu}(I-T_{\mu})^{m}f\|_{v(s)}\) and \(\|n^{\beta}T^{n}_{\mu}(I-T_{\mu})^{m}f\|_{o(s)}\) are bounded in \(L^{1}\)._
_Open question._ Are there values of \(\beta>-1\) and \(s>0\) for which \(\|n^{\beta}T_{\mu}^{n}f\|_{v(s)}\) and \(\|n^{\beta}T_{\mu}^{n}f\|_{o(s)}\) are bounded in \(L^{1}\)?
The result in Proposition 1.13 stays shy of the case \(m=0\). The next result handles cases of differences along subsequences with increasing gaps.
**Proposition 1.14**.: _Let \(\mu\) be as in 1.11, \(\{n_{k}\}\) an increasing sequence such that \(n_{k+1}-n_{k}\sim n_{k}^{\alpha}\) for some \(0<\alpha<1\), and \(0\leq\beta<1-\alpha\). Then_
1. \(\left(\sum_{k}n_{k}^{\beta s}|T_{\mu}^{n_{k}}f-T_{\mu}^{n_{k+1}}f|^{s}\right)^ {1/s}\) _is bounded in_ \(L^{1}\) _for_ \(s>(1-\alpha)/(1-\alpha-\beta)\)_. In particular,_ \(\left(\sum_{k}|T_{\mu}^{n_{k}}f-T_{\mu}^{n_{k+1}}f|^{s}\right)^{1/s}\) _is bounded in_ \(L^{1}\) _for_ \(s>1\)_;_
2. \(\left(\sum_{k}n_{k}^{\beta s}\|\{T_{\mu}^{n}f:n_{k}\leq n<n_{k+1}\}\|_{v(s)}^{ s}\right)^{1/s}\) _is bounded in_ \(L^{1}\) _for_ \(s>1/(1-\alpha-\beta)\)_;_ _and_ \(\left(\sum_{k}n_{k}^{\beta s}\max_{n_{k}\leq n<n_{k+1}}|T_{\mu}^{n}f-T_{\mu}^{ n_{k}}f|^{s}\right)^{1/s}\) _is bounded in_ \(L^{1}\) _for_ \(s>(1-\alpha)/(1-\alpha-\beta)\)_._
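One concrete family satisfying the gap hypothesis \(n_{k+1}-n_{k}\sim n_{k}^{\alpha}\) (our illustration, not from the paper) is \(n_{k}=\lceil k^{1/(1-\alpha)}\rceil\); the quick check below confirms that the gaps grow like \(n_{k}^{\alpha}\).

```python
import numpy as np

alpha = 0.5
k = np.arange(1, 100001)
n = np.ceil(k ** (1.0 / (1.0 - alpha))).astype(np.int64)   # n_k = ceil(k^{1/(1-alpha)}), here k^2
gaps = np.diff(n)
print((gaps / n[:-1] ** alpha)[-5:])    # the ratio settles near 1/(1-alpha) = 2
```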
We conclude by noticing that any Ritt operator in \(L^{1}\) inherits the same properties as those obtained for \(T_{\mu}\).
**Theorem 1.15**.: _Let \((X,m)\) be a \(\sigma\)-finite measure space and \(S\) a Ritt operator in \(L^{1}(X)\), \(m>0\) and \(s\geq 1\). Then_
1. \(Q_{\alpha,s,m}^{S}f\) _is bounded in_ \(L^{1}\) _for_ \(sm>\alpha+1\)_;_
2. \(\sup_{n}n^{\alpha}|S^{n}(I-S)^{m}f|\) _is bounded in_ \(L^{1}\) _for any_ \(\alpha<m\)_;_
3. _both_ \(\|n^{\beta}S^{n}(I-S)^{m}f\|_{v(s)}\) _and_ \(\|n^{\beta}S^{n}(I-S)^{m}f\|_{o(s)}\) _are bounded in_ \(L^{1}\) _if_ \(\beta=0\) _or_ \(\beta>0\) _and_ \(s(m-\beta)>1\)_;_
4. _for any increasing sequence_ \(\{n_{k}\}\) _such that_ \(n_{k+1}-n_{k}\sim n_{k}^{\alpha}\) _for some_ \(\alpha\in(0,1)\)_, then (i)_ \(\left(\sum_{k}n_{k}^{\beta s}|S^{n_{k}}f-S^{n_{k+1}}f|^{s}\right)^{1/s}\) _and_ \(\left(\sum_{k}n_{k}^{\beta s}\max_{n_{k}\leq n<n_{k+1}}|S^{n}f-S^{n_{k}}f|^{s} \right)^{1/s}\) _are bounded in_ \(L^{1}\) _for_ \(s>(1-\alpha)/(1-\alpha-\beta)\)_; and_ _(ii)_ \(\left(\sum_{k}n_{k}^{\beta s}\|\{S^{n}f:n_{k}\leq n<n_{k+1}\}\|_{v(s)}^{s} \right)^{1/s}\) _is bounded in_ \(L^{1}\) _for_ \(s>1/(1-\alpha-\beta)\)_._
Proof.: By Theorem 1.3 in [6], there exists a power bounded operator \(T\) and \(\gamma\in(0,1)\) such that \(S=I-(I-T)^{\gamma}=T_{\nu_{\gamma}}\), where \(\nu_{\gamma}\) is the probability measure on \(\mathbb{Z}^{+}\) defined in (1.1). Then Theorem 1.11, Corollary 1.12, and Propositions 1.13 and 1.14 apply to \(S\).
## 2 Proofs of Results
In these notes, \(c\) and \(C\) denote constants whose values may change from one instance to the next. We write \(e(x)=e^{2\pi ix}\). For \(0\leq x,y\), we say \(x\lesssim y\) if there exists a constant \(c>0\) such that \(x\leq cy\).
**Lemma 2.1**.: _Let \(\Delta_{n}\) be a sequence of (finite) signed measures on \(\mathbb{Z}\). Let_
\[A=\int_{|t|<1/2}\frac{1}{|t|}\left(\sum_{n}|\hat{\Delta}_{n}(t)|^ {s}\right)^{1/s}dt,\quad\ C=\sum_{k\neq 0}\frac{1}{|k|}\left(\sum_{n}|\hat{ \Delta}_{n}(1/|k|)|^{s}\right)^{1/s},\] \[B=\int_{|t|<1/2}|t|\left(\sum_{n}|\hat{\Delta}_{n}^{\prime\prime} (t)|^{s}\right)^{1/s}dt,\ D=\sum_{k\neq 0}\frac{1}{|k|^{2}}\left(\sum_{n}|\hat{ \Delta}_{n}^{\prime}(1/|k|)|^{s}\right)^{1/s}\] \[\tilde{B}=\int_{|t|<1/2}\ln|t|^{-1}\left(\sum_{n}|\hat{\Delta}_{n }^{\prime}(t)|^{s}\right)^{1/s}dt,\ \ E=\left(\sum_{n}|\Delta_{n}(0)|^{s}\right)^{1/s}.\]
_If \(A,B,C,D\) and \(E\) are all finite, or \(A,\tilde{B},C\) and \(E\) are all finite, then, for any \(f\in X\),_
\[\left\|\left(\sum_{n}|T_{\Delta_{n}}f|^{s}\right)^{1/s}\right\|\lesssim\|f\|.\]
Proof.: Let \(\kappa=\sup_{n\geq 1}\|T^{n}\|\). Without loss of generality, we assume \(\kappa=1\).
\[\left\|\left(\sum_{n}|T_{\Delta_{n}}f|^{s}\right)^{1/s}\right\|= \left\|\left(\sum_{n}\left|\sum_{k}\Delta_{n}(k)T^{k}f\right|^{s}\right)^{1/s}\right\|\] \[\leq c\left\|\left(\sum_{n}\left|\sum_{k\neq 0}\int_{|t|<1/2|k|}\hat{\Delta}_{n}(t)e(kt)dt\,T^{k}f\right|^{s}\right)^{1/s}\right\|\] \[+c\left\|\left(\sum_{n}\left|\sum_{k\neq 0}\int_{1/2|k|<|t|<1/2}\hat{\Delta}_{n}(t)e(kt)dt\,T^{k}f\right|^{s}\right)^{1/s}\right\|\] \[+c\left\|\left(\sum_{n}|\Delta_{n}(0)|^{s}\right)^{1/s}|f|\right\|\] \[= c(\mathrm{I}+\mathrm{II}+E\|f\|).\]
For the first term we have,
\[\mathrm{I}= \left\|\left(\sum_{n}\left|\sum_{k\neq 0}\int_{|t|<1/2|k|}\hat{ \Delta}_{n}(t)e(kt)dt\,T^{k}f\right|^{s}\right)^{1/s}\right\|\] \[\leq \|f\|\sum_{k\neq 0}\left(\sum_{n}\left|\int_{|t|<1/2|k|}\hat{ \Delta}_{n}(t)e(kt)dt\right|^{s}\right)^{1/s}\] \[\leq \|f\|\int_{|t|<1/2}\frac{1}{|t|}\left(\sum_{n}\left|\hat{\Delta}_ {n}(t)\right|^{s}\right)^{1/s}\,dt=A\|f\|.\]
For the second term we have,
\[\mathrm{II}\leq \left\|\left(\sum_{n}\left|\sum_{k\neq 0}\int_{1/2|k|<|t|<1/2}\hat{ \Delta}_{n}(t)e(kt)dt\,T^{k}f\right|^{s}\right)^{1/s}\right\|\] \[\leq \left\|\sum_{k\neq 0}\left(\sum_{n}\left|\int_{1/2|k|<|t|<1/2} \hat{\Delta}_{n}(t)e(kt)dt\right|^{s}\right)^{1/s}|T^{k}f|\right\|\] \[\leq \|f\|\sum_{k\neq 0}\left(\sum_{n}\left|\int_{1/2|k|<|t|<1/2} \hat{\Delta}_{n}(t)e(kt)dt\right|^{s}\right)^{1/s}.\]
The integrand decomposes as
\[\left|\int_{1/|k|<|t|<1/2}\hat{\Delta}_{n}(t)e(kt)dt\right|\leq \left|\int_{1/|k|<|t|<1/2}\hat{\Delta}_{n}^{\prime}(t)\frac{e(kt)}{2\pi k}dt\right|\] \[+\left|\frac{\hat{\Delta}_{n}(1/|k|)e(k/|k|)}{2\pi k}-\frac{\hat{\Delta}_{n}(-1/|k|)e(-k/|k|)}{2\pi k}\right|\] \[\leq \left|\int_{1/|k|<|t|<1/2}\hat{\Delta}_{n}^{\prime\prime}(t)\frac{e(kt)}{4\pi^{2}k^{2}}dt\right|\] \[+\left|\frac{\hat{\Delta}_{n}(1/|k|)e(k/|k|)}{2\pi k}-\frac{\hat{\Delta}_{n}(-1/|k|)e(-k/|k|)}{2\pi k}\right|\] \[+\left|\frac{\hat{\Delta}_{n}^{\prime}(1/|k|)e(k/|k|)}{4\pi^{2}k^{2}}-\frac{\hat{\Delta}_{n}^{\prime}(-1/|k|)e(-k/|k|)}{4\pi^{2}k^{2}}\right|\] \[= \mathrm{II}_{1}+\mathrm{II}_{2}+\mathrm{II}_{3}.\]
Using the first inequality
\[\sum_{k\neq 0}\left(\sum_{n}\left|\int_{1/2|k|<|t|<1/2}\hat{ \Delta}_{n}^{\prime}(t)\frac{e(kt)}{2\pi k}dt\right|^{s}\right)^{1/s}\] \[\lesssim \sum_{k\neq 0}\frac{1}{k}\int_{1/2|k|<|t|<1/2}\left(\sum_{n}\left| \hat{\Delta}_{n}^{\prime}(t)\right|^{s}\right)^{1/s}\,dt\] \[\lesssim \int_{0<|t|<1/2}\ln|t|^{-1}\,\left(\sum_{n}\left|\hat{\Delta}_{n} ^{\prime}(t)\right|^{s}\right)^{1/s}\,dt=\tilde{B},\]
and using the second one,
\[\sum_{k\neq 0}\left(\sum_{n}\left|\int_{1/2|k|<|t|<1/2}\hat{ \Delta}_{n}^{\prime\prime}(t)\frac{e(kt)}{4\pi^{2}k^{2}}dt\right|^{s}\right)^{ 1/s}\] \[\lesssim \sum_{k\neq 0}\frac{1}{k^{2}}\int_{1/2|k|<|t|<1/2}\left(\sum_{n} \left|\hat{\Delta}_{n}^{\prime\prime}(t)\right|^{s}\right)^{1/s}\,dt\] \[\lesssim \int_{0<|t|<1/2}|t|\,\left(\sum_{n}\left|\hat{\Delta}_{n}^{ \prime\prime}(t)\right|^{s}\right)^{1/s}\,dt=B.\]
Thus, using the first inequality, \(\mathrm{II}\lesssim(\tilde{B}+C)\|f\|\), and using the second, \(\mathrm{II}\lesssim(B+C+D)\|f\|\).
The proof of this lemma can be adapted for the following setting.
**Lemma 2.2**.: _Let \(\Delta_{n}\) and \(T\) as in Lemma 2.1, \(\{n_{k}\}\) an increasing sequence and \(I_{k}=[n_{k},n_{k+1})\). Let_
\[A=\int_{|t|<1/2}\frac{1}{|t|}\left(\sum_{k}n_{k}^{\beta}\max_{n \in I_{k}}|\hat{\Delta}_{n}(t)|^{s}\right)^{1/s}dt,\,C=\sum_{l\neq 0}\frac{1}{|l|} \left(\sum_{k}n_{k}^{\beta}\max_{n\in I_{k}}|\hat{\Delta}_{n}(1/|l|)|^{s} \right)^{1/s},\] \[B=\int_{|t|<1/2}|t|\left(\sum_{k}n_{k}^{\beta}\max_{n\in I_{k}}| \hat{\Delta}_{n}^{\prime\prime}(t)|^{s}\right)^{1/s}dt,\,D=\sum_{l\neq 0}\frac{1}{|l| ^{2}}\left(\sum_{k}n_{k}^{\beta}\max_{n\in I_{k}}|\hat{\Delta}_{n}^{\prime}(1/ |l|)|^{s}\right)^{1/s},\] \[\tilde{B}=\int_{|t|<1/2}\ln|t|^{-1}\left(\sum_{n}|\hat{\Delta}_{n }^{\prime}(t)|^{s}\right)^{1/s}dt,\,E=\left(\sum_{k}n_{k}^{\beta}\max_{n\in I _{k}}|\Delta_{n}(0)|^{s}\right)^{1/s}.\]
_If either \(A,B,C,D,E\) are all finite, or \(A,\tilde{B},C,E\) are all finite, then, for any \(f\in X\),_
\[\left\|\left(\sum_{k}n_{k}^{\beta}\max_{n\in I_{k}}|T_{\Delta_{n}}f|^{s} \right)^{1/s}\right\|\lesssim\|f\|.\]
Proof of Theorem 1.11:
Assume \(\mu\) satisfies condition M2. We apply Lemma 2.1 with \(\Delta_{n}\) the measure on the integers defined by \(\hat{\Delta}_{n}=n^{\alpha/s}\hat{\mu}^{n}(1-\hat{\mu})^{m}\), that is \(T_{\Delta_{n}}=n^{\alpha/s}T_{\mu}^{n}(I-T_{\mu})^{m}\). The case \(\alpha<-1\) is immediate, so we address the cases \(\alpha\geq-1\).
For \(\alpha>-1\),
\[\sum_{n}n^{\alpha}|\hat{\mu}(t)|^{ns}\leq\sum_{n}n^{\alpha}(1-c\,h(t))^{ns} \lesssim\frac{1}{h(t)^{\alpha+1}}.\]
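For completeness (this intermediate step is not spelled out in the argument), the last bound follows from the elementary estimate \(\sum_{n\geq 1}n^{\alpha}r^{n}\lesssim(1-r)^{-(\alpha+1)}\), valid for \(0<r<1\) and \(\alpha>-1\), applied with \(r=(1-c\,h(t))^{s}\):
\[\sum_{n}n^{\alpha}(1-c\,h(t))^{ns}\lesssim\frac{1}{\bigl(1-(1-c\,h(t))^{s}\bigr)^{\alpha+1}}\leq\frac{1}{(c\,h(t))^{\alpha+1}}\lesssim\frac{1}{h(t)^{\alpha+1}},\]
since \(1-(1-x)^{s}\geq x\) for \(s\geq 1\) and \(0\leq x\leq 1\).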
For \(\alpha=-1\),
\[\sum_{n}\frac{1}{n}|\hat{\mu}(t)|^{ns}\leq\sum_{n}\frac{1}{n}(1-ch(t))^{ns}=| \ln(1-(1-ch(t))^{s})|\lesssim\frac{1}{h(t)^{\gamma}}\]
for any \(\gamma>0\).
Note that
\[|1-\hat{\mu}(t)|\leq c\int_{0}^{|t|}h^{\prime}(s)ds=ch(t).\]
Following Lemma 2.1, we estimate
\[A=\int_{|t|<1/2}\frac{1}{|t|}\left(\sum_{n}\left|\hat{\Delta}_{n }(t)\right|^{s}\right)^{1/s}\] \[= \int_{|t|<1/2}\left(\sum_{n}n^{\alpha}|\hat{\mu}(t)|^{ns}\right) ^{1/s}\frac{|1-\hat{\mu}(t)|^{m}}{|t|}\] \[\lesssim \begin{cases}\int_{0<t<1/2}h(t)^{m-(\alpha+1)/s-1}\,h^{\prime}(t )\,dt&\text{ for }\alpha>-1\\ \int_{0<t<1/2}h(t)^{m-\gamma/s-1}\,h^{\prime}(t)\,dt&\text{ for }\alpha=-1.\end{cases}\]
When \(\alpha>-1\), the integral is finite for \(sm>\alpha+1\), and when \(\alpha=-1\), the integral is finite for \(m>0\) since we can choose \(\gamma\) arbitrarily small, say \(\gamma=sm/2\).
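Indeed, writing \(h(t)=\int_{0}^{|t|}h^{\prime}(s)\,ds\) as above (so that \(h\) is nonnegative, increasing, and \(h(0)=0\)), the substitution \(u=h(t)\) gives, with \(\delta=m-(\alpha+1)/s>0\),
\[\int_{0}^{1/2}h(t)^{\delta-1}\,h^{\prime}(t)\,dt=\int_{0}^{h(1/2)}u^{\delta-1}\,du=\frac{h(1/2)^{\delta}}{\delta}<\infty,\]
and the same computation applies with \(\delta=m-\gamma/s\) in the case \(\alpha=-1\).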
Similarly for E, with \(sm>\alpha+1\) (or \(0<\gamma<sm/2\) when \(\alpha=-1\)),
\[E=\left(\sum_{n}\left|\Delta_{n}(0)\right|^{s}\right)^{1/s}\leq \left(\sum_{n}n^{\alpha}\int_{|t|<1/2}|\hat{\mu}(t)|^{ns}|1-\hat{ \mu}(t)|^{sm}dt\right)^{1/s}\] \[\lesssim \begin{cases}\left(\int_{|t|<1/2}h(t)^{(sm-(\alpha+1))}dt\right) ^{1/s}<\infty,&\text{ for }\alpha>-1\\ \left(\int_{|t|<1/2}h(t)^{(sm-\gamma)}dt\right)^{1/s}<\infty,&\text{ for } \alpha=-1.\end{cases}\]
With \(B_{n}\) the measure on the integers defined by \(\hat{B}_{n}=\hat{\mu}^{n}(1-\hat{\mu})^{m}\), so that \(T_{B_{n}}=T_{\mu}^{n}(I-T_{\mu})^{m}\),
\[|\hat{B}_{n}^{\prime}(t)|=|n\hat{\mu}^{n-1}(t)(\hat{\mu}^{\prime}(t))(1-\hat{ \mu}(t))^{m}-\hat{\mu}^{n}(t)\hat{\mu}^{\prime}(t)(1-\hat{\mu}(t))^{m-1}|,\]
and
\[|\hat{B}^{\prime\prime}_{n}(t)|= |n(n-1)\hat{\mu}^{n-2}(t)(\hat{\mu}^{\prime}(t))^{2}(1-\hat{\mu}(t) )^{m}+n\hat{\mu}^{n-1}(t)\hat{\mu}^{\prime\prime}(t)(1-\hat{\mu}(t))^{m}\] \[-2nm\hat{\mu}^{n-1}(t)(\hat{\mu}^{\prime}(t))^{2}(1-\hat{\mu}(t))^ {m-1}\] \[+m(m-1)\hat{\mu}^{n}(t)(1-\hat{\mu}(t))^{m-2}(\hat{\mu}^{\prime}(t ))^{2}\] \[-m\hat{\mu}^{n}(t)(1-\hat{\mu}(t))^{m-1}\hat{\mu}^{\prime\prime}(t )|\] \[\lesssim n^{2}|\hat{\mu}^{n-2}(t)|\frac{h^{\prime}(t)}{|t|}h^{m+1}(t)+n| \hat{\mu}^{n-1}(t)|\frac{h^{\prime}(t)}{|t|}h^{m}(t)\] \[+|\hat{\mu}^{n}(t)|\frac{h^{\prime}(t)}{|t|}h^{m-1}(t).\]
Thus, for \(\alpha>-1\),
\[\left(\sum_{n}|\hat{\Delta}^{\prime\prime}_{n}(t)|^{s}\right)^{1/s}=\left(\sum_{n}n^{\alpha}|\hat{B}^{\prime\prime}_{n}(t)|^{s}\right)^{1/s}\lesssim h(t)^{(m-1)-(\alpha+1)/s}\frac{|h^{\prime}(t)|}{|t|}.\]
Since \((1+\alpha)<sm\),
\[B=\int_{|t|<1/2}|t|\,\left(\sum_{n}|\hat{\Delta}^{\prime\prime}_{n}(t)|^{s} \right)^{1/s}dt\lesssim\int_{0<t<1/2}h(t)^{(m-1)-(\alpha+1)/s}\,h^{\prime}(t )\,dt<\infty.\]
When \(\alpha=-1\), choosing \(0<\gamma<m\), the estimate is
\[\int_{|t|<1/2}|t|\,\left(\sum_{n}|\hat{\Delta}^{\prime\prime}_{n}(t)|^{s} \right)^{1/s}dt\lesssim\int_{0<t<1/2}h(t)^{m-\gamma-1}\,h^{\prime}(t)\,dt<\infty.\]
For the remaining terms, we show the case \(\alpha>-1\) and note that the estimates for \(\alpha=-1\) follow similar arguments. For \((1+\alpha)<sm\), we have
\[C=\sum_{k\neq 0}\frac{1}{|k|}\left(\sum_{n}|\hat{\Delta}_{n}(1/|k |)|^{s}\right)^{1/s}\] \[\lesssim \sum_{k\neq 0}\frac{1}{|k|}\left(\sum_{n}n^{\alpha}|\hat{\mu}(1/|k|) |^{ns}|1-\hat{\mu}(1/|k|)|^{sm}\right)^{1/s}\] \[\lesssim \sum_{k>0}\frac{1}{k}h(1/k)^{m-(\alpha+1)/s}\leq c+\int_{0}^{1/2} \frac{h(t)^{m-(\alpha+1)/s}}{t}\,dt<\infty.\]
Note that
\[\left(\sum_{n}|\hat{\Delta}^{\prime}_{n}(1/|k|)|^{s}\right)^{1/s}=\left(\sum _{n}n^{\alpha}|\hat{B}^{\prime}_{n}(1/|k|)|^{s}\right)^{1/s}\lesssim h(1/k)^{m -(\alpha+1)/s}h^{\prime}(1/k).\]
Then
\[D= \sum_{k\neq 0}\frac{1}{|k|^{2}}\left(\sum_{n}|\hat{\Delta}^{\prime} _{n}(1/|k|)|^{s}\right)^{1/s}\] \[\lesssim \sum_{k\neq 0}\frac{1}{|k|^{2}}h(1/k)^{m-(\alpha+1)/s}h^{\prime}(1/k )<\infty.\]
If instead of condition M2 we require M1, that is, \(|\hat{\mu}(t)|\leq 1-c|t|^{a}\) and \(|\hat{\mu}^{\prime}(t)|\lesssim|t|^{a-1}\), we only need estimates for the first derivative and the term \(\tilde{B}\) in Lemma 2.1,
\[\tilde{B}= \int_{|t|<1/2}\ln|t|^{-1}\,\left(\sum_{n}|\hat{\Delta}^{\prime} _{n}(t)|^{s}\right)^{1/s}dt\] \[\lesssim \int_{0<t<1/2}\ln|t|^{-1}t^{a(m-(\alpha+1)/s)-1}\,dt<\infty,\]
as long as \(ms>\alpha+1\). The computations of the other terms (A,C and E) are the same as above but substituting \(h(t)\) with \(t^{a}\) and \(h^{\prime}(t)\) with \(t^{a-1}\). The case \(\alpha=-1\) also follows from similar arguments. \(\Box\)
_Proof of Corollary 1.12:_
An application of Abel's summation yields
\[n^{\alpha}|T^{n}_{\mu}(I-T_{\mu})^{m}f|\leq \sum_{k=0}^{n-1}((k+1)^{\alpha}-k^{\alpha})|T^{k}_{\mu}(I-T_{\mu} )^{m}f|\] \[+\sum_{k=1}^{n}k^{\alpha}|T^{k-1}_{\mu}(1-T_{\mu})^{m+1}f|\] \[\lesssim \sum_{k=0}^{n-1}(k+1)^{\alpha-1}|T^{k}_{\mu}(I-T_{\mu})^{m}f|\] \[+\sum_{k=1}^{n}k^{\alpha}|T^{k-1}_{\mu}(1-T_{\mu})^{m+1}f|\] \[\lesssim Q_{\alpha-1,1,m}f+Q_{\alpha,1,m+1}f.\]
By Theorem 1.11, both generalized square functions on the right are bounded in \(L^{1}\) for \(\alpha<m\) and \(m>0\). Therefore \(\sup_{n}n^{\alpha}|T^{n}_{\mu}(I-T_{\mu})^{m}f|\) is also bounded in \(L^{1}\).
When \(1<p<\infty\) we have
\[n^{\alpha}|T^{n}(I-T)^{m}f|\] \[\leq \left(\sum_{k=0}^{n-1}(k+1)^{2m-1}|T^{k}(I-T)^{m}f|^{2}\right)^{1/2 }\left(\sum_{k=1}^{n}\frac{1}{k^{1+2(m-\alpha)}}\right)^{1/2}\] \[+\left(\sum_{k=1}^{n}k^{2m+1}|T^{k-1}(1-T)^{m+1}f|^{2}\right)^{1/2 }\left(\sum_{k=1}^{n}\frac{1}{k^{1+2(m-\alpha)}}\right)^{1/2}.\]
If \(\alpha<m\),
\[n^{\alpha}|T^{n}(I-T)^{m}f|\lesssim Q_{2m-1,2,m}f+Q_{2m+1,2,m+1}f,\]
which, by Theorem 1.4, are bounded in \(L^{p}\).
If \(\alpha=m\),
\[n^{\alpha}|T^{n}(I-T)^{m}f|\lesssim\sqrt{\ln n}(Q_{2m-1,2,m}f+Q_{2m+1,2,m+1}f),\]
and it follows that \(\sup_{n>1}\frac{n^{m}|T^{n}(I-T)^{m}f|}{\sqrt{\ln n}}\) is bounded in \(L^{p}\).
Proof of Proposition 1.13: Let \(\Delta_{n,m}f=T_{\mu}^{n}(I-T_{\mu})^{m}f\), and let \(\{n_{k}\}\) be any increasing sequence.
\[D_{k,\beta}=n_{k}^{\beta}\Delta_{n_{k},m}-n_{k+1}^{\beta}\Delta_{n_{k+1},m}=n_ {k}^{\beta}\sum_{r=n_{k}}^{n_{k+1}-1}\Delta_{r,m+1}-(n_{k+1}^{\beta}-n_{k}^{ \beta})\Delta_{n_{k+1},m}\]
\[\left(\sum_{k}|D_{k,\beta}f|^{s}\right)^{1/s}\lesssim \sum_{k}\sum_{r=n_{k}}^{n_{k+1}-1}r^{\beta}|\Delta_{r,m+1}f|+ \left(\sum_{k}n_{k}^{s\beta}|\Delta_{n_{k},m}f|^{s}\right)^{1/s}\]
Thus,
\[\|n^{\beta}\Delta_{n,m}f\|_{v(s)}\lesssim Q_{\beta,1,m+1}f+Q_{\beta s,s,m}f\]
which, by Theorem 1.11, are bounded in \(L^{1}\) for \(s(m-\beta)>1\).
For \(n_{k}\leq n\leq n_{k+1}\),
\[\max_{n_{k}\leq n\leq n_{k+1}}\left|n^{\beta}\Delta_{n,m}f-n_{k}^{\beta}\Delta_{n_{k},m}f\right|^{s}\leq\sum_{r=n_{k}}^{n_{k+1}-1}r^{\beta}|\Delta_{r,m+1}f|+\sum_{n=n_{k}+1}^{n_{k+1}}n^{s\beta}|\Delta_{n,m}f|^{s}\]
Thus,
\[\|n^{\beta}\Delta_{n,m}f\|_{o(s)}\lesssim Q_{\beta,1,m+1}f+Q_{\beta s,s,m}f.\]
is bounded in \(L^{1}\) for \(s(m-\beta)>1\).
When \(\beta=0\),
\[\|\Delta_{n,m}f\|_{v(s)}\lesssim Q_{0,1,m+1}f\quad\text{ and }\quad\|\Delta_{n,m}f\|_{o(s)} \lesssim Q_{0,1,m+1}f,\]
are bounded in \(L^{1}\) for \(m>0\).
Proof of Proposition 1.14: Let \(\mu\) satisfy condition M2. The case for M1 is similar. We apply Lemma 2.1 with \(\Delta_{k}\) the measure on the integers defined by \(\hat{\Delta}_{k}=n_{k}^{\beta}(\hat{\mu}^{n_{k}}-\hat{\mu}^{n_{k+1}})\), that is \(T_{\Delta_{k}}f=n_{k}^{\beta}(T_{\mu}^{n_{k}}f-T_{\mu}^{n_{k+1}}f)\).
We estimate, for \(\gamma>\alpha\),
\[\sum_{k}n_{k}^{s\gamma}(n_{k+1}-n_{k})|\hat{\mu}(t)|^{n_{k}s}\lesssim\frac{1}{h (t)^{s\gamma+1}}.\]
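A short justification of this estimate, using only that \(n_{k+1}-n_{k}\sim n_{k}^{\alpha}\) with \(\alpha\in(0,1)\) (so that \(n_{k+1}\leq Cn_{k}\) for some constant \(C\)): for every \(n\in[n_{k},n_{k+1})\) we have \(n_{k}^{s\gamma}\leq n^{s\gamma}\) and \(|\hat{\mu}(t)|^{n_{k}s}\leq(1-ch(t))^{n_{k}s}\leq(1-ch(t))^{ns/C}\), hence
\[\sum_{k}n_{k}^{s\gamma}(n_{k+1}-n_{k})|\hat{\mu}(t)|^{n_{k}s}\leq\sum_{k}\sum_{n=n_{k}}^{n_{k+1}-1}n^{s\gamma}(1-ch(t))^{ns/C}\leq\sum_{n\geq 1}n^{s\gamma}(1-ch(t))^{ns/C}\lesssim\frac{1}{h(t)^{s\gamma+1}},\]
by the estimate used at the beginning of the proof of Theorem 1.11.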
\[\mathrm{A}= \int_{|t|<1/2}\frac{1}{|t|}\left(\sum_{k}n_{k}^{\beta s}|\hat{\mu}(t)|^{sn_{k}}\,|1-\hat{\mu}(t)^{(n_{k+1}-n_{k})}|^{s}\right)^{1/s}dt\] \[\lesssim \int_{|t|<1/2}\frac{|1-\hat{\mu}(t)|}{|t|}\left(\sum_{k}n_{k}^{\beta s}(n_{k+1}-n_{k})^{s}|\hat{\mu}(t)|^{sn_{k}}\right)^{1/s}dt\] \[\lesssim \int_{|t|<1/2}\frac{|1-\hat{\mu}(t)|}{|t|}\left(\sum_{k}n_{k}^{(\alpha+\beta-\alpha/s)s}(n_{k+1}-n_{k})|\hat{\mu}(t)|^{sn_{k}}\right)^{1/s}dt\] \[\lesssim \int_{|t|<1/2}\frac{h(t)}{|t|}\left(\frac{1}{h(t)^{s(\alpha+\beta-\alpha/s)+1}}\right)^{1/s}\,dt\lesssim\int_{|t|<1/2}\frac{h^{\prime}(t)}{h(t)^{(\alpha+\beta)+(1-\alpha)/s}}\,dt<\infty,\]
for \(\alpha+\beta+(1-\alpha)/s<1,s>1\). Similarly,
\[\mathrm{E}\leq \left(\sum_{k}n_{k}^{(\alpha+\beta)s}\int_{|t|<1/2}|\hat{\mu}(t)| ^{n_{k}s}|1-\hat{\mu}(t)|^{s}dt\right)^{1/s}\] \[\lesssim \left(\int_{|t|<1/2}h(t)^{s(1-(\alpha+\beta)-(1-\alpha)/s)}dt \right)^{1/s}<\infty.\]
For the next term,
\[n_{k}^{-\beta}\hat{\Delta}_{k}^{\prime\prime}(t)= n_{k}(n_{k}-1)\hat{\mu}^{n_{k}-2}(t)(\hat{\mu}^{\prime}(t))^{2}-n_{k+ 1}(n_{k+1}-1)\hat{\mu}^{n_{k+1}-2}(t)(\hat{\mu}^{\prime}(t))^{2}\] \[+n_{k}\hat{\mu}^{n_{k}-1}(t)\hat{\mu}^{\prime\prime}(t)-n_{k+1} \hat{\mu}^{n_{k+1}-1}(t)\hat{\mu}^{\prime\prime}(t)\] \[= n_{k}(n_{k}-1)\hat{\mu}^{n_{k}-2}(t)(1-\hat{\mu}^{n_{k+1}-n_{k} }(t))(\hat{\mu}^{\prime}(t))^{2}\] \[+[n_{k}(n_{k}-1)-n_{k+1}(n_{k+1}-1)]\hat{\mu}^{n_{k+1}}(\hat{\mu} ^{\prime}(t))^{2}\] \[+n_{k}\hat{\mu}^{n_{k}-1}(t)(1-\hat{\mu}^{n_{k+1}-n_{k}}(t))\hat{ \mu}^{\prime\prime}(t)\] \[+(n_{k}-n_{k+1})\hat{\mu}^{n_{k+1}-1}(t)\hat{\mu}^{\prime\prime}( t).\]
Estimating
\[|n_{k}(n_{k}-1)-n_{k+1}(n_{k+1}-1)|\leq (n_{k+1}^{2}-n_{k}^{2})+(n_{k+1}-n_{k})\sim n_{k}^{\alpha}(n_{k+1}+n_{k})+n_{k}^{\alpha}\] \[\leq n_{k}^{\alpha}(2n_{k}+n_{k}^{\alpha})\lesssim n_{k}^{1+\alpha},\]
we have
\[|\hat{\Delta}^{\prime\prime}_{k}(t)|\lesssim \left[n_{k}^{2+\alpha+\beta}\hat{\mu}^{n_{k}-2}(t)h(t)^{2}+n_{k}^{1+ \alpha+\beta}|\hat{\mu}(t)|^{n_{k+1}-1}h(t)\right.\] \[\left.+n_{k}^{1+\alpha+\beta}|\hat{\mu}(t)|^{n_{k+1}-2}h(t)+n_{k}^ {\alpha+\beta}|\hat{\mu}(t)|^{n_{k+1}-1}\right]\frac{h^{\prime}(t)}{|t|}.\]
Thus
\[\left(\sum_{k}|\hat{\Delta}^{\prime\prime}_{k}(t)|^{s}\right)^{1/s}\lesssim\frac{1}{h(t)^{(\alpha+\beta)+(1-\alpha)/s}}\frac{h^{\prime}(t)}{|t|}.\]
Then, for \(\alpha+\beta+(1-\alpha)/s<1\),
\[\mathrm{B}=\int_{|t|<1/2}|t|\,\left(\sum_{k}|\hat{\Delta}^{\prime\prime}_{k}(t)|^{s}\right)^{1/s}\,dt\lesssim\int_{0<t<1/2}\frac{h^{\prime}(t)}{h(t)^{(\alpha+\beta)+(1-\alpha)/s}}\,dt<\infty,\]
and
\[\mathrm{C}= \sum_{l\neq 0}\frac{1}{|l|}\left(\sum_{k}|\hat{\Delta}_{k}(1/|l|)|^{s}\right)^{1/s}\lesssim\sum_{l\neq 0}\frac{1}{|l|}\left(\sum_{k}n_{k}^{(\alpha+\beta)s}|\hat{\mu}(1/|l|)|^{n_{k}s}|1-\hat{\mu}(1/|l|)|^{s}\right)^{1/s}\] \[\lesssim \sum_{l\neq 0}\frac{h(1/|l|)^{1-(\alpha+\beta)-(1-\alpha)/s}}{|l|}<\infty.\]
For the last term, we have
\[n_{k}^{-\beta}|\hat{\Delta}^{\prime}_{k}(1/|l|)|\leq n_{k}|\hat{\mu}^{n_{k}-1}(1/|l|)|\,\left|1-\hat{\mu}(1/|l|)^{n_{k+1}-n_{k}}\right|\,|\hat{\mu}^{\prime}(1/|l|)|\] \[+(n_{k+1}-n_{k})\left|\hat{\mu}(1/|l|)\right|^{n_{k+1}-1}\,|\hat{\mu}^{\prime}(1/|l|)|\] \[\lesssim n_{k}^{1+\alpha}|\hat{\mu}^{n_{k}-1}(1/|l|)|\,|\hat{\mu}^{\prime}(1/|l|)||1-\hat{\mu}(1/|l|)|\] \[+n_{k}^{\alpha}|\hat{\mu}^{n_{k+1}-1}(1/|l|)|\,|\hat{\mu}^{\prime}(1/|l|)|.\]
Then
\[\left(\sum_{k}|\hat{\Delta}^{\prime}_{k}(1/|l|)|^{s}\right)^{1/s}\lesssim\frac{h^{\prime}(1/|l|)}{h(1/|l|)^{(\alpha+\beta)+(1-\alpha)/s}},\]
and
\[\mathrm{D}\lesssim\sum_{l>0}\frac{1}{l^{2}}\frac{h^{\prime}(1/|l|)}{h(1/l)^{( \alpha+\beta)+(1-\alpha)/s}}<\infty\]
for \(0<\alpha+\beta+(1-\alpha)/s<1\).
Now, let \(I_{k}=[n_{k},n_{k+1})\).
\[\sum_{k}n_{k}^{\beta s}\|\{T_{\mu}^{n}f-T_{\mu}^{n_{k}}f:n\in I_{ k}\}\|_{v(s)}^{s}\leq c\sum_{k}n_{k}^{\beta s}\sum_{n\in I_{k}}|T_{\mu}^{n}f-T_{\mu}^{n_{k}} f|^{s}\] \[\leq c\sum_{k}n_{k}^{\beta s}(n_{k+1}-n_{k})\max_{n\in I_{k}}|T_{\mu} ^{n}f-T_{\mu}^{n_{k}}f|^{s}.\]
Using Lemma 2.2 and arguments similar to the above,
\[\sum_{k}n_{k}^{\beta s}\|\{T_{\mu}^{n}f-T_{\mu}^{n_{k}}f:n\in I_{k}\}\|_{v(s)}^{s}\]
is bounded in \(L^{1}\) for \(1<s(1-\alpha-\beta)\), and
\[\sum_{k}n_{k}^{\beta s}\max_{n\in I_{k}}|T_{\mu}^{n}f-T_{\mu}^{n_{k}}f|^{s}\]
is bounded in \(L^{1}\) for \(1-\alpha<s(1-\alpha-\beta)\).
|
2304.14417 | Mapping Inequalities in Activity-based Carbon Footprints of Urban
Dwellers using Fine-grained Human Trajectory Data | Effective climate mitigation strategies in cities rely on understanding and
mapping urban carbon footprints. One significant source of carbon is a product
of lifestyle choices and travel behaviors of urban residents. Although previous
research addressed consumption- and home-related footprints, activity-based
footprints of urban dwellers have garnered less attention. This study relies on
deidentified human trajectory data from 5 million devices to examine the
activity-based carbon footprint in Harris County, Texas. Our analysis of the
heterogeneity of footprints based on places visited and distance traveled
reveals significant inequality: 10% of users account for 88% of
visitation-based footprints and 71% of distance-traveled footprints. We also
identify the influence of income on activity-based carbon footprint gap of
users related to their travel behavior and lifestyle choices, with high-income
users having larger footprints due to lifestyle choices, while low- to
medium-income users' footprints are due to limited access. Our findings
underscore the need for urban design adjustments to reduce carbon-intensive
behaviors and to improve facility distribution. Our conclusions highlight the
importance of addressing urban design parameters that shape carbon-intensive
lifestyle choices and facility distribution, decisions which have implications
for developing interventions to reduce carbon footprints caused by human
activities. | Akhil Anil Rajput, Yuqin Jiang, Sanjay Nayak, Ali Mostafavi | 2023-04-26T19:44:56Z | http://arxiv.org/abs/2304.14417v1 | Mapping Inequalities in Activity-based Carbon Footprints of Urban Dwellers using Fine-grained Human Trajectory Data
###### Abstract
Effective climate mitigation strategies in cities rely on understanding and mapping urban carbon footprints. One significant source of carbon is a product of lifestyle choices and travel behaviors of urban residents. Although previous research addressed consumption- and home-related footprints, activity-based footprints of urban dwellers have garnered less attention. This study relies on deidentified human trajectory data from 5 million devices to examine the activity-based carbon footprint in Harris County, Texas. Our analysis of the heterogeneity of footprints based on places visited and distance traveled reveals significant inequality: 10% of users account for 88% of visitation-based footprints and 71% of distance-traveled footprints. We also identify the influence of income on activity-based carbon footprint gap of users related to their travel behavior and lifestyle choices, with high-income users having larger footprints due to lifestyle choices, while low- to medium-income users' footprints are due to limited access. Our findings underscore the need for urban design adjustments to reduce carbon-intensive behaviors and to improve facility distribution. Our conclusions highlight the importance of addressing urban design parameters that shape carbon-intensive lifestyle choices and facility distribution, decisions which have implications for developing interventions to reduce carbon footprints caused by human activities.
## 1 Introduction
Cities function as hubs of economic growth and social transformation; however, rapid urbanization challenges the attainment of environmental sustainability and climate mitigation [1, 2]. Normal life activities contributing to a city's carbon footprint comprise three components (Fig. 1): (1) home activity-based footprint; (2) activity-based footprint, and (3) consumption-based footprint. Home-activity-based carbon footprint captures consumption of any form of energy used within the home [3]. Activity-based carbon footprint refers to an individual's interaction with the built environment through travel outside their homes [4, 5]. Consumption-based carbon footprint refers to all products consumed or used in the course of daily life. Carbon emissions are generated during the production and transportation of these goods [6, 7, 8]. Among these three components of residents' life activity carbon footprint, the literature has paid far greater attention to home-activity-based and consumption-based carbon footprints; thus, our understanding of activity-based carbon footprints is rather limited. Activity-based carbon footprint is driven primarily by residents' lifestyle and travel patterns. The literature shows that patterns of human lifestyle mobility in cities are influenced by urban forms and structures [9, 10, 11]. Thus urban design and development, such as facility distribution, can influence activity-based carbon footprints. Accordingly, understanding ramifications of patterns of carbon footprints associated with individuals'
activities and mobility in urban areas can inform decision makers and urban planners in the development of targeted strategies and interventions to reduce urban residents' carbon footprint and to promote more effective mitigation strategies.
Studies examining activity-based carbon footprints use survey-based data to learn of activities individuals engage in; however, time-based assessments from survey data fail to fully capture patterns of human visitation to points of interest and the associated travel distance, both of which influence the extent of carbon footprint. This gap can be addressed through the use of observational fine-grained human trajectory data. Accordingly, this study seeks to address four research questions: (1) To what extent do activity-based carbon footprints of individuals vary across different residents?; (2) To what extent does an activity-based carbon footprint gap exist in cities?; (3) What proportion of individuals account for the majority of activity-based carbon footprint in cities?; and (4) To what extent do activity-based carbon footprint profiles of individuals vary based on income? To answer these research questions, we investigate the heterogeneity in activity-based carbon footprint (including both visitation-based footprint and distance traveled by users) in Harris County (Houston metropolitan area), Texas. Our analysis is based on high-resolution user-level waypoint data collected by INRIX, a location-based data provider; building polygons from Microsoft; points of interest location and attribute data from SafeGraph; and US Census data. By examining the patterns of carbon emissions associated with visitation and distance traveled, our goal is to shed light on the factors contributing to the heterogeneity in users' carbon footprints and their implications for urban design and development, as well as for climate mitigation strategies. It is important to note that although the total carbon footprint of an individual depends on an activity-based footprint derived from traveling and visitation to POIs, home activities, and consumption, our study focuses only on evaluating activity-based footprint, for which little knowledge and prior work exists.
Our results reveal intriguing patterns in the distribution of activity-based carbon footprints based on visitation and distance traveled, indicating substantial heterogeneity (inequality) among users. We observed that a small percentage of users account for the majority of contributions to activity-based carbon footprint. This heterogeneity may be influenced by factors such as city structure, lifestyle patterns influenced by income, distribution of facilities, and accessibility to POIs and facilities. The results show that, unlike consumption-based and home-activity-based footprints, for which the primary contributors to the footprint gap are high-income individuals [12; 13; 14], low- and medium-income groups are among the highest contributors to the activity-based carbon footprint. The high activity-based footprint of high-income individuals, however, is attributable mainly to their lifestyle choices (type of POIs visited), while the high activity-based footprint of low-income individuals is due to the need for traveling longer distances to POIs to obtain necessities. These findings show that the significant carbon footprint gap among individuals in cities can inform integrated urban design strategies in which urban development plans and projects could target interventions for reducing the carbon footprint gap to achieve climate mitigation goals in cities.
Figure 1: Illustration of the components of residents’ life-activity carbon footprint: (a) home-activity-based footprint, is related to in-house activities such as energy consumption (b) activity-based footprint, which accounts for the emissions generated during travel between home and destination points of interest; visitation-based footprint, representing emissions resulting from activities during visitation to destination points of interest; and (c) consumption-based carbon footprint, which encompasses the emissions associated with the consumption of goods and services during home activities. This study focuses on activity-based carbon footprints of individuals.
The remainder of this paper is organized as follows: Section 2 provides information on relevant literature related to carbon footprint evaluation in cities, and Section 3 describes the data and methods used in our analysis, including the datasets, data preprocessing, and the analytical approach. Section 4 presents the results, focusing on the heterogeneity in distance traveled and POI visitation footprint, as well as the disproportionate impact of user groups on activity-based carbon footprint and distance traveled. Section 5 discusses the implications of our findings for understanding the drivers of carbon emissions and informing policy and planning efforts to promote sustainable urban development.
## 2 Related Work
### Carbon Footprints of Individuals and Households
Households and individuals play a significant role in global carbon emissions [15; 16; 17; 18]. Research has shown that households contribute to over half of national greenhouse gas emissions in various countries [19; 20]. To understand and mitigate household-level carbon footprints, numerous studies have investigated carbon footprint patterns at the household or individual levels. Generally, carbon footprint at these levels can be categorized into direct and indirect emissions. Direct carbon footprint refer to direct energy consumption, such as heating, cooling, and electricity usage by appliances. Indirect emissions refer to greenhouse gases emitted by other entities producing goods and services households and individuals consume [21; 22; 23]. For instance, when an individual purchases a product, the indirect carbon footprint comprises the emissions generated during the production, transportation, storage, and sale of that product.
Previous studies have assessed the carbon footprint of individuals or households based on their expenditure records, taking into account both direct and indirect emissions that result from the consumption of goods and services by individuals or households. Expenditure-related studies relying primarily on survey data have been conducted in multiple countries, including China [24; 25; 26; 27], Japan [28; 23], Belgium [29], Germany [30], Norway [31; 32], and the United States [12; 33; 34]. Expenditure-related studies focus primarily on the consumption-related carbon footprint of households and individuals but do not capture activity-based carbon footprints. Multi-regional input-output (MRIO) models and related databases have also been used extensively to understand consumption activities. MRIO data records economic flows across multiple regions or nations. The construction of MRIO tables allows researchers to track carbon emissions generated concomitant with economic flows [35; 36; 37; 38; 39; 40].
Another stream of research focuses on activity-based carbon footprints based on the time-use perspective. Time-based studies examine time as a unified unit, enabling the comparison of carbon footprint by different individual-level activities within the same time unit [41; 42; 5]. Activity-based carbon footprint studies based on time-use perspective estimate carbon emissions levels for activities during the same time unit (usually an hour or fifteen minutes). With a common temporal duration, activities can be compared for their respective carbon footprint. Time-use perspective studies enable a better understanding of activity-based carbon footprints due to individual-level lifestyles [43; 44]. This method has been implemented in studies in Japan [42; 28], China [45; 46], Finland [43], France [47], Austria [5], Britain [41], and the United States [48]. These studies found that activities such as eating out, personal care, and commuting, had high carbon footprints [41; 44; 5; 45].
While the time-use studies have demonstrated the importance of capturing and analyzing activity-based carbon footprints, a significant limitation across time-use studies is the nature of their data collection methods. Most of these studies rely heavily on survey data, wherein participants self-report their time usage. Gathering time usage information through surveys is a labor-intensive process, and the self-reported data may not be accurate and representative, leading to an incomplete measurement of activity types and thus, inaccurate estimation of the activity-based footprint. To address these limitations, this study uses a human mobility dataset based on cell phone location service. Using this more comprehensive record of individuals' movement trajectories, the type and frequency of POIs visited, as well as distance traveled by each user can be accurately measured to compute the extent of activity-based carbon footprint (the total of visitation-based footprint and total distance traveled). Additionally, the human trajectory dataset provides data at a finer temporal resolution, enabling a deeper understanding of the characteristics of people's activity patterns and associated carbon footprints.
### Carbon Footprint Gap
Carbon footprint gap refers to the differences in the extent of carbon footprint of individuals and households. Various demographic and socioeconomic factors have been found to correlate with household carbon footprint, including income, gender, age, household composition, car ownership, employment [17; 49; 50]. Generally, household income exhibits a strong positive correlation with carbon emissions, as higher-income households tend to have a higher demand for consumption. This relationship has been observed in Japan [17], China [51; 20], Norway [31], and the United States
[12; 52]. For the consumption-based footprint, the existing studies show a significant carbon footprint gap. The term carbon elites has been used to refer to higher-income households and individuals whose consumption of goods accounts for a greater portion of carbon footprint; however, regarding activity-based footprint, limited insights exist regarding the extent of carbon footprint gap and whether a similar carbon footprint elitism exists.
Urban design can also affect how people interact with their surroundings, impacting their carbon footprint; however, urbanization presents mixed effects on carbon emissions [7; 53; 54]. On one hand, dense metropolitan areas can use less energy by encouraging people to use facilities like public transportation [55; 56; 30; 57]. The availability of a wider range of commodities and their ease of accessibility, on the other hand, might result in a rise in consumption, ultimately resulting in a rise in carbon emissions [30; 8]. The examination of activity-based carbon footprint of individuals can inform about differences among carbon footprints of individuals and possible associations with urban design characteristics.
## 3 Data and Methods
This section provides an overview of the data used, the preprocessing approach for cleaning and merging the datasets, and the methods adopted for evaluating user carbon impact. In this study, we used human trajectory data from Harris County (Houston metropolitan area), Texas. We chose Harris county as a region of study for three reasons: first, Harris County is the most populous county in Texas and the third most populous in the United States [58]. Harris County also has a diverse population with a range of lifestyles and travel patterns. Second, Harris County is home to a large number and diverse type of points of interest (POIs), such as businesses, institutions, healthcare, and recreational facilities, which are relevant for measuring activity-based carbon footprint. Third, Harris County has a significant carbon footprint due to its high levels of transportation-related emissions [59], making it an important area for study in terms of carbon reduction and sustainability.
Due to the volume of data required, privacy concerns, the complexity of analysis, and computational cost, we limited the study to one metropolitan city. Nevertheless, the methodology can be applied to any spatial area at any resolution. To evaluate user lifestyle and travel patterns, we collected anonymized waypoint travel data from INRIX [60]. We also collected building polygons from Microsoft [61], SafeGraph points of interest location and attribute data [62], and census data to link the travel stop locations to POIs or to a spatial unit, such as census tract. The integration of these diverse data sources allows delineation of types of POIs visited and distance traveled from each individual user's movement trajectory for a more holistic understanding of the relationship between user lifestyle, travel patterns, and activity-based carbon footprint.
### Datasets
This study leverages a variety of unique datasets that facilitate the capture of human lifestyle and travel patterns to delineate activity-based carbon footprints. The primary dataset contains high-resolution user-level waypoint data, obtained from INRIX, a private location intelligence company that provides anonymized location-based data and analytics, ensuring the privacy of individuals. This dataset stands out for two reasons: first, it provides accurate coordinates for the trajectory for each trip, with INRIX collecting vehicle coordinates every few seconds, resulting in a high spatiotemporal resolution for a more granular analysis of travel patterns and their corresponding carbon footprints. Second, INRIX data was collected for the entire Houston metropolitan area (Harris County) over a three-month period (February through April 2016) at all times of the day, resulting in a dataset of more than 26 million trip records.
To associate the stop locations from the waypoint data to POIs, we used Microsoft building footprint data. This dataset consists of building boundaries in a shapefile format, generated from Bing imagery using open-source toolkits and machine-learning techniques. While this dataset accurately identifies the building footprint or boundary, it does not identify the type of building or service it provides. To address this limitation, we incorporated the SafeGraph POI dataset, which offers precise POI coordinates for most brands and categorizes them by type of business. In this study, we focused on major categories to facilitate easier generalizability.
To categorize stop points corresponding to high-, medium-, or low-income neighborhoods, we collected US Census data from census.gov at the census-tract resolution. This data includes census-tract boundaries and the corresponding median household income for each area, which serves as a proxy for the income characteristics of that particular region. We used census data from the year 2020, assuming that income and major demographics remain relatively stable over time. This approach enables a more accurate understanding of the relationship between lifestyle, travel patterns, and carbon footprint across different income groups, ultimately informing about carbon footprint gap among various income groups.
### Data Preprocessing
The data preprocessing stage was a crucial step in our study, involving several stages, including data cleaning, merging, and aggregation. We first focused on cleaning the high-resolution user-level waypoint data obtained from INRIX, which contained more than 26 million trip records for 6.5 million devices in Harris County spanning three months. The data cleaning process entailed the removal of trips that both originated and concluded outside our designated study region (Harris County) resulting in a dataset spanning 20 million trips across 5.16 million devices. Figure 2 illustrates the steps followed during preprocessing.
We first combined SafeGraph POIs and building footprint data if a POI coordinate was located within a building footprint. This resulted in an enriched building footprint dataset containing additional attributes for POIs, such as brand name, POI type, category, subcategory, and other attributes along with the building boundary information. For the purpose of this study, we used only the category information for POIs, which classified POIs into high-, medium-, and low- footprint activities. This classification will be discussed in further detail in the next section. We then filtered out all building polygons that were not linked to any SafeGraph POIs. These excluded building polygons likely represent residential buildings that are not relevant to this study.
The next step involved the linking of all the stop points from our waypoint data to building polygons by considering a 10-meter buffer around each building polygon. A buffer of 10 meters along the building footprint boundary was employed to account for stop points related to a POI that, due to GPS error, were not located within a building footprint. As the distance between the road system and POIs typically exceeds 10 meters, this method prevents the incorrect classification of stop points to POIs. In cases where the specific POI could not be determined for a stop point, such as brands inside a mall or a building with multiple stores, a random POI was selected to designate that stop point. This approach does not introduce bias, as most co-located POIs tend to belong to similar categories and thus have comparable visitation-based carbon footprints. We also linked stop points to census tract polygons if the stop points were situated within them, thereby associating stop points with different income categories based on the median household income in the census tract.
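As an illustration of this linking step, the following GeoPandas sketch shows one way it could be implemented; the GeoDataFrames `stops`, `buildings` (already enriched with the SafeGraph attributes), and `tracts`, as well as all column names, are illustrative placeholders rather than the actual names used in our pipeline.

```python
import geopandas as gpd

# Project all layers to a metric CRS (UTM zone 15N covers Harris County)
# so that the 10-meter buffer is expressed in meters.
stops = stops.to_crs(epsg=32615)
buildings = buildings.to_crs(epsg=32615)
tracts = tracts.to_crs(epsg=32615)

# Expand each building footprint by 10 m to absorb GPS error around entrances.
buffered = buildings.copy()
buffered["geometry"] = buffered.geometry.buffer(10)

# Attach POI attributes to every stop point that falls inside a buffered footprint.
stops_poi = gpd.sjoin(
    stops,
    buffered[["geometry", "poi_category", "footprint_level"]],
    how="left",
    predicate="within",
)
# If a stop falls inside several buffered footprints, keep one match arbitrarily
# (co-located POIs tend to share a category, as noted above).
stops_poi = stops_poi[~stops_poi.index.duplicated(keep="first")]

# Attach the census-tract median household income to every stop point.
stops_full = gpd.sjoin(
    stops_poi.drop(columns="index_right"),
    tracts[["geometry", "median_hh_income"]],
    how="left",
    predicate="within",
)
```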
Given the vast dataset comprising 26 million trip records, a total of 65,000 POIs, and 750 plus census-tract polygons that required linking through spatial operations such as buffering, the preprocessing stage was executed on a powerful 64-core and 500-GB RAM system to accommodate the high computational requirements for processing the data. One novel aspect of this study lies not only in the data preprocessing itself but in the optimization and parallelization of the code, which significantly reduced the computational time.
### POI Classification
As previously mentioned, individuals' carbon footprints are driven primarily by their activities, which in this study are categorized based on user visits to POIs and distance traveled. To evaluate users' carbon footprint according to their visitation activity, we first classified POIs into high, medium, and low carbon-footprint activities. These categorizations were primarily based on a framework developed by [5]. We also referred to studies from [46, 41, 42] that conducted similar categorizations. For instance, we designated outdoor activities, such as visiting a park, as low-carbon intensive since the activity itself does not significantly contribute to the carbon footprint. Conversely, activities such as dining at a restaurant were considered high-carbon footprint activities.
Figure 2: Overview of the data preprocessing step. Raw user waypoints are merged and linked to building polygons and points of interest.
Following the data preprocessing stage, which involved linking each stop point in a trip to a POI if the stop point was in close proximity, we obtained approximately 160 categories of POIs. Based on their closest functional resemblance, these POIs were manually reclassified into 15 categories, as shown in Table 1. If a POI did not correspond to any of these categories, it was assigned an "others" classification, and the POI carbon footprint was labeled manually. After categorizing the POIs into these 17 groups, we used the total footprint values captured in studies [5, 46, 41, 42] to classify the POIs into low, medium, and high carbon footprints. To aggregate the footprint impact of each trip, we assigned numerical values of 1, 3, and 5 to low, medium, and high carbon footprint categories, respectively. The literature indicated that the footprint of activities in the high category was at least twice that of medium and low footprint activities. This approach ensures that a visit to a high-footprint POI does not equate to visits to a low- and medium-footprint POI. Further details will be discussed in the methods section.
### Methods
The preprocessing of the above-described data resulted in a refined dataset in which the stop points of every trip were linked to a POI, if in close proximity, and linked to the corresponding census tract to determine income attributes. With this refined dataset, we first evaluated the distance traveled in every trip made by a user. Next, we assessed the types of POIs visited and stop locations. By examining the types of POI visitations and frequency of visits, we gained insights into the visitation-based carbon footprint of each user and the income of the locality where these POIs visits or stop locations are situated. Figure 3 depicts an overview of the steps for specifying visitation-based and distance-traveled carbon footprint of each user in this study.
In the next step, we aggregated all metrics at the weekly level for each user, meaning that for every user, a week's worth of trips were aggregated, summing up all the POIs visited and the total distance traveled. The weekly aggregation of POI visitations and distance traveled provides a typical activity-based footprint of users since the variations of POI visitations and trips across weeks are insignificant. Specific methods of aggregation will be discussed in more detail later in this section. This weekly aggregation approach was chosen to better capture the lifestyles of people in a city, as weekdays and weekends exhibit different mobility patterns and lifestyle activities [63, 64, 65, 66]. By aggregating on a weekly basis, we encapsulated all the variability associated with the weekday-weekend effect, allowing for a more comprehensive representation of the overall lifestyle and behavior of users.
#### 3.4.1 Visitation-based user carbon footprint
From the preprocessed data, we have visitation records for each user's trip for the months of February through April. For each trip, we classified the POIs visited into high-, medium-, and low-footprint categories based on their categories, as discussed in the previous section. We then evaluated the effective carbon footprint for each user trip using the Equation 1:
\begin{table}
\begin{tabular}{|c|l|l|} \hline
**S.No.** & **POI Category** & **Carbon Footprint** \\ \hline
1 & business & low \\ \hline
2 & construction & high \\ \hline
3 & eating & high \\ \hline
4 & education & low \\ \hline
5 & entertainment & high \\ \hline
6 & fitness & low \\ \hline
7 & gardening & mid \\ \hline
8 & home-related & mid \\ \hline
9 & hotel & high \\ \hline
10 & infrastructure & high \\ \hline
11 & manufacturing & high \\ \hline
12 & medical & high \\ \hline
13 & outdoor & low \\ \hline
14 & repairs & mid \\ \hline
15 & self-care & mid \\ \hline
16 & shopping & low \\ \hline
17 & transportation & high \\ \hline \end{tabular}
\end{table}
Table 1: POI Categories and their corresponding carbon footprint levels. POIs that did not fall into any of these 17 categories were labeled manually.
\[CF_{v}=\sum_{i=1}^{n}w_{i}\times f_{i} \tag{1}\]
where, \(CF_{v}\) is the visitation-based carbon footprint for a trip made by a user, \(w_{i}\) is the weight assigned to each POI category based on their carbon footprint, and \(f_{i}\) is the frequency of POIs visited in each category during the trip.
The values of \(w_{i}\) for low, medium, and high footprint categories are 1, 3, and 5, respectively.
For example, if a user visits two low-footprint POIs (such as a dog park and gym) and one high-footprint POI (such as a restaurant) during a trip, the total footprint will be seven (2x1 + 1x5 = 7). It should be noted that here that we account only for the footprint based on the types of POIs visited, so if a user does not visit any POI on that trip, then the visitation activity-based carbon footprint for that trip will be zero. We then aggregated these footprint values for each user for all trips at a weekly resolution. This means that a user's visitation-based carbon footprint will represent the weekly footprint that represents a user's lifestyle. Mathematically, it is represented by Equation 2:
\[CF_{w}=\sum_{j=1}^{m}CF_{v_{j}} \tag{2}\]
where, \(CF_{w}\) is the weekly visitation-based carbon footprint for a user, \(CF_{v_{j}}\) is the visitation-based carbon footprint for the \(j^{th}\) trip made by the user during the week, and \(m\) is the total number of trips made by the user during the week.
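The two equations above can be sketched in a few lines of pandas; the stop-level table `stops_full` and its columns `user_id`, `week`, `trip_id`, and `footprint_level` (the low/mid/high label of Table 1) are illustrative assumptions rather than the actual names used in our pipeline.

```python
import pandas as pd

# Weights w_i for the low/medium/high footprint levels of Table 1.
WEIGHTS = {"low": 1, "mid": 3, "high": 5}

# Stops that are not linked to any POI contribute nothing (weight 0).
stops_full["w"] = stops_full["footprint_level"].map(WEIGHTS).fillna(0)

# Equation (1): visitation-based footprint of each trip, CF_v = sum_i w_i * f_i.
cf_trip = (
    stops_full.groupby(["user_id", "week", "trip_id"])["w"]
    .sum()
    .rename("cf_v")
    .reset_index()
)

# Equation (2): weekly visitation-based footprint of each user.
cf_week = (
    cf_trip.groupby(["user_id", "week"])["cf_v"]
    .sum()
    .rename("cf_w")
    .reset_index()
)
```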
#### 3.4.2 Distance-based carbon footprint
Transport has direct and indirect carbon emissions which contribute to a user's carbon footprint. To account for this contribution in the evaluation of activity-based footprints, we also calculated the distance traveled by each user in a trip and, subsequently, the total distance traveled by a user in a week. We first calculated the distance between consecutive stop locations using the Haversine formula, which considers the earth's curvature. We then summed up the distances between all consecutive stop locations to obtain the total distance traveled in a trip. Using this method, we then calculated the total distance traveled by each user in a week by aggregating the distances traveled in all trips. This approach allowed us to accurately estimate the distance-based carbon footprint of each user by considering their transportation activities in addition to their visitation-based carbon footprint.
For a trip, the distance traveled by a user is calculated as the sum of the distances between consecutive stop locations using Equation 3:
\[DT_{trip}=\sum_{i=1}^{n-1}dist(loc_{i},loc_{i+1}) \tag{3}\]
Figure 3: Conceptual illustration of methods employed in the study. We first collected fine-grained waypoint data for users in Harris County and added location-based attributes to stop points, such as type of POI, activity-based footprint, average household income category, and distance traveled. Then we evaluated the spatiotemporal linkages between carbon footprint and other attributes.
where \(n\) is the total number of stop locations in a trip, \(loc_{i}\) is the latitude and longitude of the \(i^{th}\) stop location, and \(dist(loc_{i},loc_{i+1})\) is the distance between consecutive stop locations calculated using the Haversine formula.
For a week, the total distance travelled by a user is calculated by aggregating the distances traveled in all their trips using Equation 4:
\[DT_{week}=\sum_{j=1}^{m}DT_{trip_{j}} \tag{4}\]
where \(m\) is the total number of trips made by the user during a week, and \(DT_{trip_{j}}\) is the distance traveled by the user in the \(j^{th}\) trip.
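A sketch of Equations (3) and (4) with a vectorized Haversine distance; the column names (`lat`, `lon`, `timestamp`, `week`) are again illustrative assumptions, and distances are returned in kilometers.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance (km) between consecutive stop locations."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

# Order stops within each trip, then measure every consecutive leg.
stops_full = stops_full.sort_values(["user_id", "trip_id", "timestamp"])
stops_full["prev_lat"] = stops_full.groupby(["user_id", "trip_id"])["lat"].shift()
stops_full["prev_lon"] = stops_full.groupby(["user_id", "trip_id"])["lon"].shift()
stops_full["leg_km"] = haversine_km(
    stops_full["prev_lat"], stops_full["prev_lon"],
    stops_full["lat"], stops_full["lon"],
)

# Equation (3): per-trip distance (the first stop of a trip has no leg; NaN is skipped).
dt_trip = (
    stops_full.groupby(["user_id", "week", "trip_id"])["leg_km"]
    .sum()
    .rename("dt_trip")
    .reset_index()
)

# Equation (4): weekly distance traveled per user.
dt_week = (
    dt_trip.groupby(["user_id", "week"])["dt_trip"]
    .sum()
    .rename("dt_week")
    .reset_index()
)
```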
By considering both the visitation activity-based carbon footprint and the distance-based footprint of users, we can obtain a comprehensive understanding of the users' overall activity-based carbon footprint for purposes of evaluating carbon footprint gap among users.
#### 3.4.3 Activity-based carbon footprint gap
To investigate the variation of activity-based carbon footprint across users and the extent of footprint gap, we utilized the median household income of the locations visited by a user as a proxy for the user's income level. We first determined the median household income for each location using publicly available data from the US Census Bureau. Then, for each user, we assigned an income category based on the median household income of the locations visited by the user. Users who visited locations with a median household income of less than $45,000 were categorized as low income, those with a median household income between $45,000 and $100,000 were categorized as medium income, and those who visited locations with a median household income of more than $100,000 were categorized as high income. We did this by assigning an income category to each trip based on the income category of the POIs visited with the highest frequency during that trip.
To gain insights into the extent of the activity-based carbon footprint gap, we analyzed the distribution of users across different income categories and their corresponding carbon footprint and distance-traveled categories. This allowed us to identify any patterns or trends in the data, highlighting the variation of users' lifestyle choices and travel behavior across different income levels. We computed the proportion of low-, medium-, and high-income users in each of the low-, medium-, and high-visitation-based carbon footprint categories, as well as the proportion of users in each income group in the low-, medium-, and high-distance-based footprint categories. By comparing these proportions, we were able to discern whether users from different income groups have different activity-based carbon footprint profiles and to evaluate the extent of the activity-based carbon footprint gap. We set the thresholds for the high and low categories at the top 10th percentile and the bottom 50th percentile of the respective visitation-based and distance-based carbon footprint values. A linear split was not used because the values had an exponential distribution.
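This categorization can be sketched as follows, assuming an illustrative per-user table `users` that holds the weekly aggregates `cf_w` and `dt_week` from the earlier sketches and a per-user `income_usd` value derived from the median household income of the tracts visited.

```python
import pandas as pd

# Income categories from the median household income of the tracts a user visited.
income_bins = [0, 45_000, 100_000, float("inf")]
users["income_cat"] = pd.cut(
    users["income_usd"], bins=income_bins, labels=["low", "medium", "high"]
)

# Footprint categories: bottom 50% -> low, top 10% -> high, remainder -> medium.
def percentile_category(values):
    lo, hi = values.quantile(0.50), values.quantile(0.90)
    return pd.cut(
        values,
        bins=[-float("inf"), lo, hi, float("inf")],
        labels=["low", "medium", "high"],
    )

users["visit_cf_cat"] = percentile_category(users["cf_w"])
users["distance_cat"] = percentile_category(users["dt_week"])
```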
## 4 Results
### Heterogeneity in distance traveled and POI visitation footprint
The analysis results reveal intriguing patterns in the distribution of activity-based carbon footprints of urban dwellers based on visitation and distance traveled, indicating a substantial degree of heterogeneity and carbon footprint gap among users.
The results, as illustrated in Fig. 4, show that both visitation-based and distance-based carbon footprints exhibit similar patterns of distribution in terms of the frequency of users in different levels of carbon footprint. Fig. 4 (a) presents the frequency plot of visitation-based carbon footprint versus the number of users on a log scale. The plot shows that the frequency of users decreases rapidly with higher levels of carbon footprint, suggesting that a small percentage of users has significantly higher contributions to visitation-based carbon footprint than the rest. The majority of users have a lower activity-based carbon footprint, but a few users are responsible for a disproportionately large share of visitation-based carbon footprint, highlighting the gap in the users' visitation-based carbon footprint. Similarly, Fig. 4 (b) displays the frequency plot of distance-based carbon footprint versus the number of users on a log scale. The frequency of users also decreases exponentially with higher levels of distance-based carbon footprint. This result indicates that a small group of users travels longer distances and thus has a significantly higher distance-based carbon footprint. This finding further emphasizes the gap in users' activity-based carbon footprint, with a few users having the greatest activity-based carbon footprint.
These results suggest a high degree of heterogeneity and gap in users' activity-based carbon footprint (both visitation-based and distance-based), which has important implications for understanding and addressing the drivers of carbon
emissions. By identifying the factors contributing to this heterogeneity, policymakers and urban planners can develop targeted interventions to promote more sustainable lifestyles and reduce the carbon footprint of urban residents. The heterogeneity in users' carbon footprints may be influenced by factors such as city structure, income, distribution of facilities, and accessibility to different modes of transportation.
### Activity-based carbon footprint gap across users
The results from the stacked bar plot in Fig. 5 highlight the contribution of users in different footprint categories to the total activity-based carbon footprint and distance traveled. In particular, as shown in Fig. 5 (a), among the users, 11% are responsible for a substantial 88% of the total activity-based carbon footprint, indicating a disproportionately high impact on carbon emissions from their activity. Moreover, 31% of users contribute to only 12% of the total footprint, indicating a relatively lower impact. Interestingly, 58% of the users have a carbon footprint of 0, suggesting a considerable portion of the population with minimal or no impact on carbon emissions from their activity. This could be due to visitations that do not correspond to travel to any POI, such as home to office and back. Also, one limitation of the dataset is that, due to user anonymization, device IDs may change, making it difficult to evaluate the weekly travel profile of an individual. This finding highlights the heterogeneity in the carbon footprint among users, with a small fraction of users contributing significantly to the total emissions, while a larger fraction has minimal impact. In terms of distance traveled by users in a week over all the trips, the stacked bar plot in Fig. 5 (b) reveals a similar trend. 10% of users account for 71% of the total distance traveled, indicating a significant contribution to overall travel behavior and carbon emissions. On the other hand, 39% of users contribute to only 27% of the total distance traveled, suggesting a relatively lower impact. Notably, 51% of the users only have a minimal contribution of 3% to the total distance traveled, indicating a significant heterogeneity in travel behavior and associated carbon impact arising from the choice of travel across the user population.
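The contribution shares reported in this section can be reproduced from the per-user weekly totals with a simple top-share computation; the sketch below reuses the illustrative `users` table introduced earlier.

```python
import numpy as np

def share_of_top(values, top_fraction=0.10):
    """Share of the total contributed by the top `top_fraction` of users."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]   # largest contributors first
    k = max(1, int(np.ceil(top_fraction * len(v))))      # size of the top group
    return v[:k].sum() / v.sum()

top10_visit_share = share_of_top(users["cf_w"])       # compare with the ~88% figure
top10_distance_share = share_of_top(users["dt_week"]) # compare with the ~71% figure
```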
These findings suggest that there are distinct patterns of heterogeneity in both activity-based carbon footprint and distance traveled among users, which may be influenced by factors such as individual lifestyle choices, travel behavior, and activity patterns. Further analysis is needed to better understand the underlying factors driving these patterns of heterogeneity and their implications for carbon emissions in the context of user lifestyles in the city. A city's structure and layout can influence the distribution of facilities, services, and employment opportunities, influencing the travel and lifestyle patterns of individuals. In cities with a more dispersed layout, users may need to travel long distances to access essential services or to reach their workplaces, leading to higher carbon emissions. To understand if the observed behavior is in part influenced by demographic attributes of the census tracts where the POIs exist, we also evaluated the dependence of the distance traveled and activity-based carbon footprint on income in the next section.
Figure 4: Distribution of POI visitation-based and distance-based carbon footprint of users accounting for 99.9% of activity. (a) Frequency plot of visitation-based carbon footprint versus number of users, displayed on a log scale. The plot shows the distribution of visitation-based carbon footprints for different frequency levels of users. The frequency of users with different carbon footprints follows a decaying trend that is more exponential in nature. (b) Frequency plot of distance-based carbon footprint versus number of users, displayed on a log scale. The plot shows the distribution of distance-based carbon footprints for different frequency levels of users. Similar to the visitation-based carbon footprint, the distribution of distance-based carbon footprints also follows a decaying trend that is more exponential rather than linear. The results indicate that both visitation-based and distance-based carbon footprints exhibit similar patterns of distribution with respect to the frequency of users.
### High footprint users: Choice or force?
We evaluated the distribution of users across different income, visitation-based carbon footprint, and distance-traveled categories. We observed a significant difference in the visitation-based carbon footprint and distance traveled by users based on their income category. The heatmap plot in Fig. 6 (a) shows the distribution of users across different income and visitation-based carbon footprint categories. It can be seen that a majority of high-income users fall into the low-carbon footprint category, while a higher percentage of low-income users fall into the high-carbon footprint category. This finding is intriguing, as one might expect that high-income users typically have more resources and opportunities to engage in activities with a higher carbon footprint, hence contribute more to carbon footprint. However, the results indicate that high-income users follow lifestyles that include more visits to low-footprint POIs. On the other hand, low-income users may visit POIs more frequently due to having fewer resources and more constrained access. [11] showed that lower-income users visit grocery stores more frequently since they might not have resources to buy all their needs from the closest facility and in one visit. Greater frequency of POI visits and longer distance traveled translate into higher activity-based carbon footprints. Similarly, Fig. 6 (b) shows the distribution of users across different income and distance-traveled categories. These results indicate that income significantly influences travel distances. Fig. 6 (c) shows that a significant portion of users travel short or moderate distances while maintaining a low carbon footprint, suggesting better access to POIs associated with low-footprint activities. On the other hand, some users have a high carbon footprint despite traveling short or moderate distances, which highlights the importance of interventions for reducing carbon-intensive lifestyle choices.
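The heatmaps in Fig. 6 (a) and (b) correspond to normalized cross-tabulations of the categorical labels defined in Section 3.4.3; a minimal sketch, again using the illustrative `users` table:

```python
import pandas as pd

# Row-normalized shares: within each income group, the fraction of users falling
# into each footprint / distance category (rows sum to 1).
visit_heatmap = pd.crosstab(users["income_cat"], users["visit_cf_cat"], normalize="index")
distance_heatmap = pd.crosstab(users["income_cat"], users["distance_cat"], normalize="index")
```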
Fig. 6 (d) and (e) provide a clearer picture of the contribution of the low-, medium-, and high-income categories to the different distance-traveled and visitation-based carbon footprint categories. It can be observed that a higher percentage of high-income users fell into the low-distance-traveled category, while a higher percentage of low-income users fell into the high-distance-traveled category. Interestingly, only a small fraction of high-income users travel longer distances, more than five times smaller than the corresponding fractions for the low- and medium-income categories. This result could highlight the disparities in facility distribution and limited access in low-income neighborhoods, requiring users to travel longer distances. Users in all income categories exhibit nearly the same proportion of low visitation-based footprint, indicating that a low visitation-based footprint is not strongly influenced by income category. However, users from higher-income areas contribute less than half as much to the high visitation-based footprint category as those from low- and medium-income areas. This could be partly because high-income users may have greater flexibility in their lifestyle choices, such as shopping locally or engaging in low-carbon visitation activities, and also better access to facilities, which can
Figure 5: Contribution of users to activity-based carbon footprint and distance traveled. (a) The stacked bar plot shows the contribution of users in high-, medium-, and low- footprint categories to the total activity-based carbon footprint. Among the users, 11% are responsible for 88% of the total carbon footprint, while 31% contribute to 12% of the total footprint. Interestingly, 58% of the users have a carbon footprint of 0, indicating low or no impact on the carbon emissions from their activity. (b) The stacked bar plot depicts a similar trend with distance traveled. Here, 10% of users account for 71% of the total distance traveled, while 39% contribute to 27% of the total distance. Notably, 51% of the users only have a minimal contribution of 3% to the total distance traveled, suggesting a significant heterogeneity in the travel behavior and carbon emissions among the user population.
contribute to reduced travel distances and a lower distance-based carbon footprint. On the other hand, low-income users may have limited choices and limited access, resulting in longer travel distances and a higher carbon footprint. Additionally, the uneven distribution of facilities in low-income neighborhoods may force users to travel longer distances to access essential services, amenities, and recreational activities, whereas high-income neighborhoods may have better access to such facilities, resulting in shorter travel distances. These observations suggest that urban design decisions related to facility distribution and access could influence the lifestyle of low-income populations in ways that force them to exhibit high activity-based carbon footprint behaviors. Hence, sustainable urban design strategies focusing on equitable facility distribution and improving access could also have positive effects on reducing the activity-based carbon footprint of lower-income residents [67; 68; 69; 70].
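The category-level comparisons in Fig. 6 amount to cross-tabulations of users over income and footprint bins. The sketch below illustrates, under stated assumptions, how such a table could be assembled; the synthetic data, column names, and tercile-based binning are illustrative choices of ours, not the study's actual pipeline.

```python
import numpy as np
import pandas as pd

# Hypothetical per-user summary; the column names are assumptions for illustration.
rng = np.random.default_rng(0)
users = pd.DataFrame({
    "median_income": rng.lognormal(11, 0.4, size=5000),    # census-tract income proxy
    "visit_footprint": rng.exponential(50, size=5000),     # visitation-based footprint (kg CO2)
})

# Bin both variables into low / medium / high categories (terciles, as an assumption).
users["income_cat"] = pd.qcut(users["median_income"], 3, labels=["low", "medium", "high"])
users["footprint_cat"] = pd.qcut(users["visit_footprint"], 3, labels=["low", "medium", "high"])

# Percentage of all users in each of the nine income x footprint cells (Fig. 6a-style heatmap).
heatmap = pd.crosstab(users["income_cat"], users["footprint_cat"], normalize="all") * 100
print(heatmap.round(1))

# Within each income category, the share falling into each footprint category (Fig. 6d/e-style bars).
stacked = pd.crosstab(users["income_cat"], users["footprint_cat"], normalize="index")
print(stacked.round(2))
```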
## 5 Concluding Remarks
The activity-based carbon footprint is one of the least understood and studied components of the carbon footprint of urban dwellers. Addressing this important gap, in this study we analyzed users' activity-based carbon footprint using high-resolution waypoint data in the context of Harris County, Texas. Our findings provide valuable insights into the user lifestyles that shape the extent of visitation-based and distance-based carbon footprints of individuals and into the carbon footprint gap among residents. Our results show a disproportionately high activity-based carbon footprint from a small fraction of users, with 11% of users responsible for 88% of the total visitation-based carbon footprint, and 10% of users accounting for 71% of the distance-based carbon footprint. These results highlight a high degree of heterogeneity and a significant carbon footprint gap in both visitation-based and distance-based carbon footprint among users in Harris County, Texas. According to the results, a small percentage of urban residents have significantly higher contributions to the activity-based carbon footprint, while a larger fraction of residents has the lowest footprint. The observed heterogeneity and footprint gap may be influenced by factors such as city structure, access, distribution of facilities, and accessibility to different modes of transportation. These findings underscore the importance of mapping and analyzing activity-based carbon footprint in urban sustainability and climate mitigation studies, plans, and actions, to enable evaluation of the extent to which urban design and development patterns shape the magnitude and distribution of the activity-based carbon footprint of urban residents.
Figure 6: Relationship between income, carbon footprint, and distance traveled by the users. (a) shows a heatmap representing the percentage of users in each of the nine categories, obtained by combining the three income categories with the three carbon footprint categories. The colors and values of the heatmap indicate the percentage of users falling in each of the nine categories. (b) and (c) show similar plots for income versus distance traveled and distance traveled versus carbon footprint, respectively. Plots (d) and (e) are stacked bar plots that illustrate the fraction of users in each income category that contribute to different distance-traveled categories and carbon-footprint categories, respectively.
This study has multiple novel and significant contributions. First, this study is among the first attempts to map and analyze activity-based carbon footprint of urban residents. The majority of the existing literature focuses primarily on consumption-based and home-activity-based carbon footprint dimensions; limited attention has been paid to activity-based carbon footprint and its distribution among urban residents. The findings of this study provide novel and important insights into the extent and distribution of activity-based carbon footprint and the significant gap across different residents. Second, departing from the time-based approach to examining the activity-based carbon footprint of individuals and households from survey data, this study utilizes fine-grained human trajectory data to specify visitations to POIs and distance traveled with high precision and granularity and for a very large sample of urban users. Third, the study and findings show a novel application of fine-grained human mobility and trajectory data for examining urban sustainability problems beyond the current applications in transportation and urban studies. Fourth, the methodology used in this study for data preprocessing and analysis to convert fine-grained human trajectory datasets into POI visitation count and distance traveled provides a computationally efficient approach that could be adopted in future studies. Through these contributions, this study advances the understanding of urban sustainability by better examining human lifestyle and mobility that shape the carbon footprint of urban dwellers.
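As a rough illustration of the preprocessing step described above, converting waypoint trajectories into per-user distance traveled and POI visitation counts, a minimal sketch is given below. The haversine leg-summing, the assumed pre-matched `poi_id` column, and all column names are hypothetical stand-ins for the study's actual matching rules.

```python
import numpy as np
import pandas as pd

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between consecutive waypoints."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = np.sin((lat2 - lat1) / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

# Hypothetical waypoint table: one GPS fix per row, ordered in time within each user.
waypoints = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "lat": [29.76, 29.77, 29.78, 29.74, 29.75],
    "lon": [-95.37, -95.36, -95.35, -95.40, -95.39],
    "poi_id": [None, "grocery_17", None, "gym_03", "gym_03"],   # assumed pre-matched POI stops
})

# Distance traveled per user: sum of leg lengths between consecutive fixes.
waypoints["prev_lat"] = waypoints.groupby("user_id")["lat"].shift()
waypoints["prev_lon"] = waypoints.groupby("user_id")["lon"].shift()
waypoints["leg_km"] = haversine_km(waypoints["prev_lat"], waypoints["prev_lon"],
                                   waypoints["lat"], waypoints["lon"])
distance_per_user = waypoints.groupby("user_id")["leg_km"].sum()

# POI visitation counts per user (rows matched to a POI).
visits_per_user = waypoints.dropna(subset=["poi_id"]).groupby("user_id")["poi_id"].count()
print(distance_per_user, visits_per_user, sep="\n")
```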
The findings from this study also have important implications for decision makers, urban planners, and city managers. By understanding the extent and distribution of users' activity-based carbon footprint resulting from lifestyle patterns and travel behavior, targeted strategies can be developed to reduce the overall carbon footprint. For example, improving access and decentralized facility distribution could reduce both visitation-based and distance-based footprints of urban residents. More equitable distribution of facilities and amenities in low- and medium-income neighborhoods could reduce disparities in carbon footprint and travel distances among users from different income categories.
This study and its findings also set the stage for future research directions. For example, future studies could examine the extent and distribution of activity-based carbon footprints among users across different cities to specify the extent to which urban form and structure would shape the extent of activity-based carbon footprints and the gaps across different groups of residents. The advancement of understanding of activity-based carbon footprints in cities will move us closer to integrated urban design solutions that would promote sustainability, equity, and climate mitigation.
## Acknowledgments
This material is based in part upon work supported by the National Science Foundation under CRISP 2.0 Type 2 No. 1832662 grant and the Texas A&M University X-Grant 699. The authors also would like to acknowledge the data support from INRIX. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, Texas A&M University, or INRIX.
## Data Availability
Some of the datasets used in this study are not publicly available under the legal restrictions of the data provider. Interested readers can request waypoint data from INRIX provided here ([https://inrix.com/products/](https://inrix.com/products/)) and POI data from SafeGraph from here ([https://www.safegraph.com/products/places](https://www.safegraph.com/products/places)).
## Code Availability
The code supporting this study's findings is available from the corresponding author upon request.
|
2304.02275 | Topology mediated organization of E.coli chromosome in fast growth
conditions | Recent experiments have been able to visualise chromosome organization in
fast-growing E.coli cells. However, the mechanism underlying the
spatio-temporal organization remains poorly understood. We propose that the DNA
adopts a specific polymer topology as it goes through its cell cycle. We
establish that the emergent entropic forces between polymer segments of the
DNA-polymer with modified topology, leads to chromosome organization as seen
in-vivo. We employ computer simulations of a replicating bead spring model of a
polymer in a cylinder to investigate the problem. Our simulation of the
overlapping cell cycles not only show successful segregation, but also
reproduces the evolution of the spatial organization of the chromosomes as
observed in experiments. This manuscript in addition to our previous work on
slowly growing bacterial cells, shows that our topology-based model can explain
the organization of chromosomes in all growth conditions. | Shreerang Pande, Debarshi Mitra, Apratim Chatterji | 2023-04-05T07:44:03Z | http://arxiv.org/abs/2304.02275v4 | # Entropy mediated organization of _E.coli_ chromosome in fast growth conditions
###### Abstract
Recent experiments have been able to visualise chromosome organization in fast-growing _E.coli_ cells. However, the mechanism underlying the spatio-temporal organization remains poorly understood. We propose that the DNA adopts a specific polymer topology as it goes through its cell cycle. We establish that the emergent entropic forces between polymer segments of the DNA-polymer with modified topology lead to chromosome organization as seen _in-vivo_. We employ computer simulations of a replicating bead-spring model of a polymer in a cylinder to investigate the problem. Our simulation of the overlapping cell cycles not only shows successful segregation, but also reproduces the evolution of the spatial organization of the chromosomes as observed in experiments. This manuscript, in addition to our previous work on slowly growing bacterial cells, shows that our topology-based model can explain the organization of chromosomes in all growth conditions.
bacterial chromosome organization | entropic organization of bead-spring model of polymers | polymers in confinement |
It is vital for the living cell to make a copy of its DNA, and segregate it to the two halves of the cell, before the cell can divide (1, 2). These essential processes have been extensively studied for one of the simplest single-celled organisms, the _E.coli_ bacterium. As the chromosomes replicate and segregate thereafter, the mechanism of spatio-temporal organization of the chromosomes remains controversial [(3, 4, 5, 6, 7, 8, 9)]. Unlike in higher organisms, the bacterial cell does not have dedicated protein machinery to transfer its two daughter chromosomes to the two halves of the cell (10). _E.coli_ is a rod-shaped bacterium, whose chromosome occupies the central region named the nucleoid. The bacterium does not have a nucleus. The segregation of the daughter chromosomes happens simultaneously as replication is in progress (2). In contrast, in eukaryotes, segregation of daughter chromosomes by the mitotic spindle occurs after replication is complete. Most bacterial cells have just one chromosome, and each chromosome is a ring polymer (11). The chromosomes of the bacteria _E. coli_ and _C.crescentus_ consist of a single ring polymer with 4.6 million and 4 million base-pairs (BPs), respectively [(12, 13, 14)].
In _E.coli_ and other bacteria, replication begins at a site called _oriC_ and ends at the _dif_ locus of the _ter_ macrodomain, proceeding along the two arms of the ring polymer simultaneously (5, 7, 15). Approximately 1000 base pairs (BPs) are replicated per second on average by the replication forks (RFs) moving along each arm of the chromosome (16, 17). By controlling the growth medium, the doubling time \(\tau\) of _E.coli_ bacterial cells can be varied from as low as 20 minutes to 3 hours or more (18). The cell cycle typically consists of three periods. The B period refers to the time between the birth of the cell and the start of replication. Once replication starts, the cell enters the so-called C period, which lasts until replication is complete. Thereafter, the cell remains in the D phase till cell division occurs (18, 19). The bacterial cells are said to be under fast growth if \(\tau<\tau_{C}+\tau_{D}\), where \(\tau_{C}\) and \(\tau_{D}\) refer to the C and D periods, respectively.
In fast-growing cells, the B period is absent, implying that the cells are continuously replicating and segregating. The conundrum of how a cell can divide in 20 minutes, even though the minimum replication time is 40 minutes, was resolved by Helmstetter and Cooper and others (18), who showed that a daughter cell is newly born with partially replicated grand-daughter DNA. Thus a second round of replication begins even before the first round of replication is complete. Thereby, it is possible for the daughter cell to complete the rest of the replication (and segregation) of the grand-daughter DNAs thereafter, and divide into two grand-daughter cells in a time interval less than \(\tau_{C}^{min}=40\) minutes. Refer Figure 1 for a schematic of how the multiple rounds of replication proceed with overlapping cell cycles. Thus, the chromosome in fast-growth conditions undergoes multi-fork replication, with the process of replication occurring simultaneously at more than two replication forks (18, 20).
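As a quick consistency check of this bookkeeping, the fraction of the next-generation chromosomes already replicated when a cell is born follows from the three periods alone. The helper below is only a sketch of that arithmetic, using the values of the modelled experiment quoted later (see Fig. 1): \(\tau_{C}=55\) min, \(\tau_{D}=44\) min and \(\tau=55\) min.

```python
def replication_fraction_at_birth(tau_C, tau_D, tau):
    """Fraction of the daughter chromosomes already replicated when a new cell is born.

    Replication of a chromosome destined for a daughter cell starts tau_C + tau_D minutes
    before that daughter's birth, while the present cell is itself born tau minutes before
    that division; hence tau_C + tau_D - tau minutes of replication have elapsed at birth
    (clipped to the [0, tau_C] window).
    """
    elapsed = tau_C + tau_D - tau
    return max(0.0, min(elapsed, tau_C)) / tau_C

# Values of the modelled experiment: C period 55 min, D period 44 min, doubling time 55 min.
print(replication_fraction_at_birth(55, 44, 55))   # -> 0.8, i.e. cells are born with 80%-replicated daughter chromosomes
```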
It is accepted that for _E.coli_ chromosomes, entropic forces between the ring polymers play a significant role in the segregation of daughter chromosomes (12, 21, 22, 23, 24, 25, 26, 27), though proteins like MukBEF also play a critical role in the process (6, 28). Moreover, researchers have used Fluorescent In Situ Hybridization (FISH) experiments to track the positions of multiple DNA loci at different points in the cell's life cycle, i.e. while the replication and segregation of the bacterial chromosome is in progress,
## 1 Significance Statement
_E.coli_ cells grow with overlapping cell cycles in all but the slowest growth conditions. The fast growing bacteria can have four or more copies of the replicating DNA of different lengths. This makes the spatial segregation and the subsequent organization of the DNA a difficult task with two rounds of replication going on simultaneously. We show how the principles of entropy maximization of topologically modified confined ring DNA-polymers can achieve this. The topology is modified by introducing cross-links (emulating the effects of linker-proteins) between specific segments. Our simulation reproduces the emergent organization of chromosomes as seen _in-vivo_. Thus polymer physics principles, previously used to understand chromosome organization in slow growing _E.coli_ cells, also resolves DNA-organization mechanisms in more complex scenarios.
both in fast and slow growth conditions [20, 29, 30]. For slow growth, it is observed that the _oriC_ is initially found at the mid-cell position, and after about 20 minutes into the C-period the two _oriC_s move to the quarter and three-quarter positions along the cell long axis [29, 31]. The positions of the _oriCs_ are measured from one of the pole positions. Correspondingly, the _dif-ter_ locus remains delocalized within the cell at the start of the C period, but eventually moves to the mid-cell position at the end of the replication process. Other loci also move to their respective 'home positions' as segregation proceeds [29]. The mechanism by which the different genomic loci identify their cellular addresses within the cell and then move to that position at the appropriate time of the cell cycle remained an open question. In our previous work with slow growth conditions, we showed that this DNA organization is obtained by the adoption of a suitably modified polymer topology, realized through long-range contacts which are mediated by MukBEF or rrn-operons acting on the DNA contour [31]. As a consequence, internal loops of the polymer segments are formed. The different loops entropically repel each other and occupy different segments of the cell along the long axis, and thereby also localize the different loci which are part of the loops.
In fast growth conditions with multifork replication, the existence of four (or more) chromosomes at different stages of the replication process makes the segregation and faithful division of chromosomes a more complex task. If there existed an active machinery which directs the newly replicated chromosomes in opposite directions to segregate them, then the daughter and the mother chromosomes might still end up being spatially overlapped in this complex life cycle [20]. This is because overlapping rounds of replication take place, and each chromosome (of each generation) must move in an appropriately different direction so that it occupies a specific region in the cell without being spatially proximal to the chromosomes of the previous/subsequent generations. Therefore, an entropic model without any active segregating machinery which achieves this purpose is an attractive idea with a minimal number of assumptions.
Fig. 1: **Schematic of the cell cycle:** Given a specific growth medium and other parameters, _E.coli_ grows with different C and D periods and doubling time \(\tau\). In the specific case of the experiments that we are modeling [20], the C-period is \(\tau_{C}=55\) minutes (min), and the D-period is \(\tau_{D}=44\) min. This implies that the total time taken for a daughter chromosome to be produced by replication and thereafter, post segregation, be part of a new daughter cell is 99 min. The cells are in fast growth conditions with overlapping cell cycles; thereby, the doubling time \(\tau\) is also 55 min. We explain the scenario explicitly in the figure for the benefit of the reader. In the schematic, a round of cell division takes place at time \(t=0\). This implies that after another \(\tau=55\) mins, i.e. at \(t=55\) mins, another cell division will take place to form daughter cells from the mother, as shown. We set the convention that the cell born at time \(t=0\) is called the mother cell (M-cell) and the cells born at \(t=55\) mins are the daughters (D-cells). However, for the pair of daughter-chromosomes (green) which divide into the two D-cells at \(t=55\) min, their replication started \(\tau_{C}+\tau_{D}=99\) minutes earlier. This implies that the two (green) _oriCs_ of the D-chromosomes were formed 99 minutes before the birth of the two D-cells, i.e. at \(t=-44\) mins. Let's follow the "green" chromosomes from the start of replication at time \(t=-44\) min. The _oriC_ of the blue chromosome in the grandmother-cell (GM-cell) starts a new round of replication at \(t=-44\) mins, forming two green _oriCs_. The two RFs will start moving along the arms of the blue chromosome to form two complete green chromosomes after \(\tau_{C}\) mins, i.e. at \(t=11\) mins. Meanwhile, the GM-cell has divided to form two M-cells at \(t=0\), and we follow the cell which has the green chromosome. The other M-cell (shown in dashed line) has the red (black) chromosomes, which are fully equivalent to the blue (& green) chromosomes. But we color it differently to distinguish it from the green chain. Note that at time \(t=0\), the M-cells are newly born with only 20% of the blue chromosome (M-chromosome) and 80%-formed green D-chromosomes. From \(t=11\) to \(t=55\) mins (D-period), we have two green _dif-ters_ connected to each other before cell division at \(t=55\) mins. Moreover, at \(t=11\) mins, four new orange _oriCs_ have formed from the two green _oriCs_, as a new round of replication starts. The D-cells are born with only \(20\%\) green D-chromosome and a pair of \(80\%\)-formed orange grand-daughter chromosomes. In the legend, we show the colors of the M (blue, red), D (green, black), and GD (orange) chromosomes. To visualize the different stages of the cell cycle the reader may refer to the section titled "Movies" in the SI (videos "Vid-1" and "Vid-2").
The question, however, is whether the same entropic mechanism which explains loci organization in slow growth also explains the organization of segregating chromosomes in fast growth. In this paper, we establish that it is indeed the case and demonstrate that the same mechanism can also be used to obtain the organization of loci along the cell long axis in fast growth conditions with overlapping replication cycles, and establish quantitative consistency with previously published experimental data [(20)]. Thus our proposed mechanism is likely a generic mechanism for chromosome organization in bacterial DNA, as we have also previously quantitatively matched our model results with the organization of loci as shown by HiC and FISH data for the chromosome of the bacterium _C.crescentus_ [(32)].
_Model and mechanism:_ We use the bead-spring model of a polymer with 500 monomers to model a single chromosome (with 4.6 million base pairs) within a cylinder, while multifork replication is in progress. The cylinder length doubles in small steps over the course of the simulation, while the diameter remains fixed, as observed for _E.coli_ cells _in vivo_. The length \(a\) of the springs connecting the centers of neighbouring beads is the unit of length, and a polymer is confined within a cylinder of diameter \(7a\) (\(\equiv 1\mu\)m, the typical diameter of the cell), while the cylinder length doubles from \(21a\) (\(\equiv 3\mu\)m) to \(42a\) as our simulation proceeds. We use Monte Carlo (MC) simulations to update the positions of the monomers, where one Monte Carlo step (MCS) consists of \(N\) attempts to update the positions of the \(N\) monomers, chosen at random. Since we model replication and thereby add monomers at the RFs at regular intervals, \(N\) keeps increasing as the simulation proceeds. To update monomer positions, a trial move is made to displace the monomer in a random direction, and the move is accepted or rejected using the Metropolis criterion, which depends on the resulting energy change, choosing \(k_{B}T=1\). The polymer keeps changing conformation as the simulation proceeds, as the monomers undergo local diffusive motion. A description of the model can be found in [(31)]. We suitably adapt our previous model for fast growth conditions; refer Supplementary information SI-(1-3) for details.
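A minimal sketch of the Monte Carlo move described above, assuming harmonic springs of rest length \(a\), a hard cylindrical wall, and Metropolis acceptance at \(k_{B}T=1\); the spring stiffness, trial step size, and the omission of excluded-volume interactions are simplifications of ours, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
a = 1.0                       # spring rest length, the unit of length
R, L = 3.5 * a, 21.0 * a      # cylinder radius (7a diameter) and initial length
k_spring = 50.0               # assumed spring stiffness (in units of k_B T / a^2)
step = 0.2 * a                # assumed maximum trial displacement

def local_spring_energy(pos, bonds, i):
    """Harmonic energy of the springs attached to monomer i."""
    e = 0.0
    for j, k in bonds:
        if i == j or i == k:
            r = np.linalg.norm(pos[j] - pos[k])
            e += 0.5 * k_spring * (r - a) ** 2
    return e

def inside_cylinder(p):
    return p[0] ** 2 + p[1] ** 2 <= R ** 2 and 0.0 <= p[2] <= L

def mc_step(pos, bonds):
    """One Monte Carlo step (MCS): N single-monomer trial moves with Metropolis acceptance."""
    N = len(pos)
    for _ in range(N):
        i = rng.integers(N)
        trial = pos[i] + rng.uniform(-step, step, size=3)
        if not inside_cylinder(trial):
            continue                                  # reject moves that leave the confining cylinder
        e_old = local_spring_energy(pos, bonds, i)
        old = pos[i].copy()
        pos[i] = trial
        dE = local_spring_energy(pos, bonds, i) - e_old
        if dE > 0 and rng.random() >= np.exp(-dE):    # Metropolis criterion at k_B T = 1
            pos[i] = old                              # reject the move
    return pos

# Minimal demonstration on a small ring polymer (the paper uses a 500-monomer ring).
N = 20
theta = 2 * np.pi * np.arange(N) / N
pos = np.column_stack([np.cos(theta), np.sin(theta), L / 2 + 0.1 * np.sin(2 * theta)])
bonds = [(i, (i + 1) % N) for i in range(N)]
for _ in range(200):
    pos = mc_step(pos, bonds)
```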
For our previous study [(31)] in slow growth conditions, we had introduced chromosomal loops by bridging specific loci along the chain contour of the daughter DNAs, after the RFs had crossed the corresponding loci of the mother DNA. These loops are created in our simulations by introducing additional springs which cross-link specific pairs of monomers along the chromosome contour. These cross-links (CLs) mimic the role of bridging proteins [(8)], such that distant genomic segments become spatially proximal. A loop or an internal ring is formed by the DNA segment which lies in between the specific loci. These internal loops entropically repel each other, which implies that each loop can take a larger number of conformations if they do not overlap with each other. Thus configurations where there is minimal overlap between loops have lower free energy, assuming conditions of local equilibrium, though we are aware that a cell is an energy-consuming, driven, non-equilibrium system. This entropic repulsion between loops consequently induces the loops to occupy different segments of the cylinder along the long axis. Thereby one obtains emergent organization of the chromosome loci which belong to particular loops. We posit that processes like loop extrusion make transient loops, which give rise to the TADs observed in the Hi-C maps, whereas some loops are more long-lived and give rise to the macro-domain structure of Hi-C maps. Consequently, these result in the spatio-temporal localization of tagged loci to points along the cell long axis.
Previously, we used two polymer architectures named Arc-2 and Arc-2-2 (31) to unravel the mechanism [(32)] of loci organization in slow growth conditions; refer Fig.2. Moreover, though the RFs were made to move along the chain contour (train track model) as replication progressed, we observed that the RFs remain spatially localized near the cell center. This spontaneously emerges in our simulations as a consequence of the chosen architecture. This is consistent with the replication factory model in spirit, and thus reconciles consequences from two contradicting models pertaining to the motion of RFs. In this paper, we use the same Arc-2-2 architecture for our model of replicating chromosomes to study organization in fast-growth conditions. We model the chromosome replication and segregation over one doubling time \(\tau\) inside a growing cylinder (_E.coli_ cell). The simulation starts from the state right after cell division (equivalent to the state shown at time \(t=0\) in Fig.1), such that two new cells are just born from
Fig. 2: **Schematic of the different architectures:** The schematic shows the Arc-2-2 topology of the DNA-polymer with 500 monomers. It is a ring polymer (Arc-0), thus monomer 1 is joined to monomer 500. We label monomer 1 as _oriC_ and monomer 250 as _dif-ter_. In addition, we suggest that monomers 125 & 375 are cross-linked to monomer 1 by bridging proteins to create the Arc-2 architecture of the polymer. Two overlapping polymers in the Arc-2 architecture have enhanced forces of segregation, as compared to the ring polymer. In addition, they also show loci localization as seen in FISH experiments, but the contact map obtained from simulations only partially matched the HiC data of the _E.coli_ chromosome. Additionally, if monomers 136 & 218, as well as 282 & 364, are cross-linked, we obtain the architecture named Arc-2-2. Simulations with the Arc-2-2 architecture reproduced the results obtained with Arc-2, and also showed macro-domain organization as seen in the HiC maps. In this manuscript, we use these same architectures to model mother and daughter chromosomes in cells with overlapping life cycles. Organization of tagged loci as seen in FISH experiments spontaneously emerges from our simulations. In the simulations presented in this paper, the daughter chromosomes are in the Arc-2-2 topology at time \(t=0\) (refer Fig.1), whereas the grand-daughters adopt the Arc-2 topology, after the relevant monomers are replicated and become available for cross-linking by springs. _In-vivo_, we expect active processes to contribute to bringing the relevant segments into spatial proximity before they are bridged by the relevant proteins; these are not being modeled presently. Thus, though the cross-linking is instantaneous after replication of specific monomers in our simulations, it will not be so _in-vivo_.
their parent cell. We follow the replication of chromosomes in just one cell, and the simulation ends just before the cell is ready to divide into two daughter cells. The newly born mother cell (M-cell) at \(t=0\) has an 80% partially replicated mother M-chromosome, i.e. there are two D-chromosomes (marked in green in the schematic) with 400 monomers each, and 20% of the mother chromosome (marked in blue) with 100 monomers; refer Fig.1. Thereby, there are two _oriCs_ at the start of the simulations. Refer SI-2 for details of how we model replication, and SI-4 for details of how we initialise the system.
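For concreteness, the Arc-2-2 connectivity of a single 500-monomer chromosome (Fig.2) can be written down as a bond list, i.e. the ring bonds plus the four cross-links; the sketch below is only a bookkeeping illustration, using the 1-based monomer indices of Fig.2.

```python
def arc_2_2_bonds(n=500):
    """Bond list (pairs of 1-based monomer indices) for one chromosome in the Arc-2-2 topology of Fig.2."""
    ring = [(i, i + 1) for i in range(1, n)] + [(n, 1)]   # Arc-0: plain ring, monomer 500 joined to monomer 1
    arc2_crosslinks = [(1, 125), (1, 375)]                 # bridges to oriC, forming Loop-1 and Loop-2
    arc22_crosslinks = [(136, 218), (282, 364)]            # additional bridges, forming Loop-3 and Loop-4
    return ring + arc2_crosslinks + arc22_crosslinks

bonds = arc_2_2_bonds()
print(len(bonds))   # 500 ring bonds + 4 cross-links = 504
```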
During the course of our simulation, the two RFs reach the _dif-ter_ loci such that the mother is completely replicated into 2 green D-chromosomes, and the cell enters the D-period. Simultaneously, a new round of replication starts such that the two (green) D-_oriCs_ each divide into two (orange) GD-_oriCs_. The simultaneous start of the second round is a consequence of the choice of \(\tau\) and \(\tau_{C}\) in our model, which have been adapted from (20). For modeling replication, we add monomers at a fixed rate of 1 monomer every \(2\times 10^{5}\) MCS at each RF. Moreover, to mimic the role played by topoisomerase within the living cell, we allow topological constraint release at regular intervals at the rates we used previously, i.e. every \(10^{4}\) MCS. We track the positions of all the (available) _oriCs_ and other monomers as the simulation proceeds. We do not model cell division. We have also outlined the mechanism of topological constraint release in our simulations in SI-5.
Though we use Monte Carlo simulations for our investigation of chromosome organization as the cell goes through its life cycle, the simulation is quintessentially a non-equilibrium simulation scheme. In the simulations, we (a) allow polymer chains to cross each other to release topological constraints, (b) add monomers to the simulation box at regular intervals at different points along the contour, i.e. at the positions of the RFs, to mimic replication and the formation of two chains from one, (c) add cross-links at certain stages of the simulation, and lastly (d) increase the length of the cylinder as the simulation proceeds. These are energy-consuming non-equilibrium processes inside the cell, and these steps break detailed balance in the simulation. MC is used primarily to model diffusion of monomers and the exploration of different conformations of polymers in a confined space, assuming local equilibrium [(33, 34)]. While modelling the cell cycle, we do not map Monte Carlo steps (MCS) directly to time in terms of minutes, but rather in terms of the stage of replication.
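The bookkeeping of these non-equilibrium steps amounts to a simple event schedule layered on top of the diffusive MC sweeps. The sketch below only illustrates that scheduling logic with the intervals quoted in the text (one monomer per fork every \(2\times 10^{5}\) MCS, constraint release every \(10^{4}\) MCS, \(5\times 10^{7}\) MCS per run); the cylinder-growth interval and the empty routines are placeholders of ours, not the paper's implementation.

```python
TOPO_RELEASE_EVERY = 10**4      # topological constraint release (mimics topoisomerase), from the text
REPLICATE_EVERY = 2 * 10**5     # one monomer added at each active replication fork, from the text
GROW_EVERY = 10**5              # assumed interval for elongating the cylinder in small steps
TOTAL_MCS = 5 * 10**7           # length of one run, from the text

def mc_sweep(state):                          # placeholder: N single-monomer Metropolis moves
    pass

def release_topological_constraints(state):   # placeholder: allow chain crossings at a controlled rate
    pass

def add_monomer_at_forks(state):              # placeholder: advance every active RF by one monomer
    pass

def elongate_cylinder(state):                 # placeholder: cylinder length doubles over the run, diameter fixed
    pass

def run_cell_cycle(state):
    """Skeleton event schedule for one simulated cell cycle (no cell division is modelled)."""
    for mcs in range(1, TOTAL_MCS + 1):
        mc_sweep(state)
        if mcs % TOPO_RELEASE_EVERY == 0:
            release_topological_constraints(state)
        if mcs % REPLICATE_EVERY == 0:
            add_monomer_at_forks(state)
        if mcs % GROW_EVERY == 0:
            elongate_cylinder(state)
    return state
```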
## 1 Results
**Localization of _oriC, dif-ter_**. As in experiments, we follow the trajectories of particular monomers along the chain contour as the cell cycle proceeds. These monomers correspond to the
Figure 4: **Schematic of the tagged loci:** Schematic of the chromosome loci tagged in experiments and the corresponding monomer indices for a \(500\)-monomer chain. Experimentally, the circular chromosome is tagged at different sections, where different loci along the chain contour are denoted in terms of minutes. The inner circle in the schematic corresponds to the loci fluorescently labeled in the experiment (20). The outer circle denotes the monomer indices corresponding to these labels in our model system, where the _oriC_ is denoted by monomer index 1, and _dif-ter_ by monomer index 250. As multifork replication proceeds, there can be more than one locus of a given label within the cylinder (cell) at a particular stage of the cell cycle.
Figure 3: **Snapshots of simulations for Arc-2-2 and Arc-0:** The figure shows representative simulation snapshots taken at the last stage of the cell cycle before cell division, for the Arc-2-2 (top frame) and Arc-0 (bottom frame) polymer topologies. The green and red small spheres are the monomers of the daughter chromosomes. We note that for Arc-2-2 the grand-daughter chromosomes (shown in orange, violet, blue and deep-green) get spatially localized to specific regions along the long axis, while for Arc-0 the grand-daughter chromosomes show greater spatial overlaps with each other. The four big red spheres represent the four _oriCs_, whereas the two big orange spheres are the two _dif-ter_ loci, which are connected to each other as is seen _in-vivo_; in the simulation they are of the same size as the other monomers. In the subsequent sections we show that loci show localization patterns for Arc-2-2, while for Arc-0 they show no such localization patterns.
same genomic loci which have been tracked in the experiments of (20); also refer Fig.4. We quantify organization by plotting their spatial distributions in five equally divided intervals of the life cycle, as in experiments. The probability distributions of the positions of specific monomers at different stages of replication/segregation can be directly compared with data from experiments. The experimental data for specific genomic loci has been provided in Fig.5 (20) for ready comparison with data from simulations. We average our data over 50 independent runs, corresponding to a life cycle in 50 cells.
As mentioned earlier, we start the simulations with the birth of a new cell, corresponding to \(t=0\) of Fig.1. It is known that the two replicated _dif-ter_ loci remain linked to each other and are localized in the middle of the parent cell (M-cell) up to just before cell division. To mimic this phenomenon in our model, we keep the _dif-ter_ monomer tethered to one of the poles of the cylinder (corresponding to the new pole of the cell), and release the tether just at the start of the simulation. Before we start our simulations, we equilibrate the two D-chromosomes connected at the two RFs (located at monomer indices 200 and 300, assuming completion of 80% replication on both arms) on the M-chromosome. The equilibration run is for \(3\times 10^{7}\) MCS while allowing for topological constraint release. In this stage, we have the CLs which create Loop-1 and Loop-2 in each of the D-chromosomes, while the mother M-chromosome has the CLs which result in Loop-3 and Loop-4; refer Fig.2 and other details in SI-4. Note that we model a life cycle where \(\tau_{D}/\tau_{C}=0.8\) and \(\tau/\tau_{C}=1\).
Soon after the simulation starts, the _dif-ter_ monomer (the 250-th monomer) will be near the end of the cylinder where it had been tethered during equilibration. Thus the probability distribution shows non-zero values near the ends of the cylinder in the first 20% of the life cycle. But as the RFs move towards the _dif-ter_ loci and cross the 218-th monomer, we introduce CLs between the newly introduced 218-th monomer and the 136-th monomer to form Loop-3 (and correspondingly Loop-4) for both the D-chromosomes. Two double-helix DNA chains are formed from the leading and lagging strands _in-vivo_. However, such details are beyond the scope of our coarse-grained model. We re-christen the monomers of the parent DNA as monomers of one of the newly replicated DNAs, instead of introducing new monomers, after the RF has passed the corresponding loci of the parent. The monomers being added at the RFs form the other newly replicated DNA-polymer. Thus, in our model the two D-chromosomes keep elongating at the cost of the length of the M-chromosome, as seen _in-vivo_. Refer SI-3 for a schematic diagram to better understand how we implement the cross-linking in our simulations.
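The re-christening step can be illustrated with a small bookkeeping sketch: when a fork passes a mother monomer, that monomer is relabelled as belonging to one daughter, and a freshly added monomer stands in for the copy on the other daughter. The data layout and the placement of the new monomer are assumptions made purely for illustration.

```python
import numpy as np

def replicate_at_fork(positions, chain_labels, fork_monomer, rng):
    """One replication event at a fork (illustrative data layout).

    positions    : (N, 3) array of monomer coordinates.
    chain_labels : list of length N labelling each monomer's chromosome ('M', 'D1', 'D2', ...).
    fork_monomer : index of the mother monomer the fork has just passed.

    The mother monomer is re-christened as a D1 monomer, and a new monomer is appended
    close to it to serve as the corresponding D2 monomer, as described in the text.
    """
    chain_labels = list(chain_labels)
    chain_labels[fork_monomer] = "D1"                                      # re-christen the parent monomer
    new_position = positions[fork_monomer] + 0.1 * rng.standard_normal(3)  # place the copy nearby
    positions = np.vstack([positions, new_position])
    chain_labels.append("D2")
    return positions, chain_labels

rng = np.random.default_rng(3)
pos = np.zeros((5, 3))
labels = ["M"] * 5
pos, labels = replicate_at_fork(pos, labels, fork_monomer=2, rng=rng)
print(labels)   # ['M', 'M', 'D1', 'M', 'M', 'D2']
```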
The entropic repulsion between the loops of the two D-chromosomes ensures the segregation of the two polymers into the two halves of the cylinder. This also relocates the _dif-ter_ loci to the middle of the elongating cylinder. As a consequence, we also observe a peak in the probability distribution of _dif-ter_ at the middle of the cylinder at all stages of the life cycle; refer first row of Fig.6. As the two D-chromosomes occupy two different halves of the cylinder, the _oriC_ moves to the mid position of each half cylinder, i.e. the quarter positions of the elongated cylinder, in the \((0-0.2)\) and \((0.2-0.4)\) stages of the cell life cycle, as seen _in-vivo_. This is due to the repulsion between internal loops within each D-chromosome. This organization emerges spontaneously in our simulations as a consequence of the adoption of the Arc-2-2 polymer topology, as we show in the second row of Fig.6.
**Comparing data from experiments & simulations:** Since the two D-chromosomes are connected at the _dif-ter_, the _dif-ter_ continues to remain at the cylinder center. This continues even when the next round of replication starts at the two _oriC_s of the D-chromosomes. When comparing our results to those of experiments, there are some caveats related to conventions of data presentation which need to be accounted for. Sometimes these bring up apparent differences between the data obtained from experiments and our simulations, the reasons for which have been explained in subsequent paragraphs. For example, we don't see two peaks in the _dif-ter_ distribution as we don't model cell division: compare the data in Fig.6 with the two-foci data in Fig.5 for the \((0.8-1)\) interval.
We have two _oriCs_ during the \((0-0.2)\) interval of the cell life cycle; thereafter, there are four _oriCs_ in our simulations. We can track the four _oriCs_ independently and plot their spatial probability distributions in the last four intervals of the cell cycle. However, in experiments, in the \((0.2-0.4)\) interval, the two just-replicated _oriCs_ cannot be distinguished in FISH data, since the newly replicated _oriCs_ segregate only after a certain interval of time, known as the cohesion time. Hence data for the spatial distribution of four _oriCs_ do not appear till the \(0.4-0.6\) interval of the life cycle. Consistent with the observation of cohesion time, the distributions of the _oriC_s overlap significantly in our simulations in the \(0.2-0.6\) interval of the life cycle. This overlap decreases as the _oriCs_ localize. Furthermore, in our modeling we do not have a scenario with three _oriCs_. But _in-vivo_, one can observe 3 _oriC_s in the cell, because the start of replication need not be perfectly synchronous (as in simulations). Moreover, segregation of the newly replicated _oriCs_ may proceed at slightly different rates due to inherent stochasticity in the active processes which govern cross-linking by binding proteins. This can give rise to different cohesion times for different pairs of replicated _oriC_s, which in turn can result in the observation of three _oriCs_. These reasons also hold true for the data on the spatial distribution of other loci presented later. Another important difference is that we can identify and track the loci of the GD-chromosomes and the D-chromosomes (shown in different colors), but experimentally one cannot distinguish between the two to identify whether a locus belongs to a D or GD-chromosome. In the experiments, one can count the number of cells with two or three or four fluorescent foci, and plot their spatial distributions.
Furthermore, in simulations we observe that the spatial distributions of the GD-_oriCs_ show relatively high occupancy values near the cylinder ends for the three intervals corresponding to the \(0.2-0.8\) stages of the life cycle, which is not seen in the experimental data. This is because _in-vivo_ the bacterial chromosome is condensed into a region called the nucleoid due to the presence of many other loops arising from transient cross-links. These have not been incorporated in the current study. These transient loops would help shift the distributions away from the cell boundaries. However, in the experimental plots we do observe rather broad _oriC_ distributions for the 2- and 3-foci data in the \(0.4-0.8\) interval of the cell cycle, and even wider distributions in the \(0.8-1\) interval for data with two foci, i.e. when the two foci cannot be distinguished. Thus, our results are not in contradiction to those obtained from experiments.
Figure 5: **Experimental data of the modelled system:** This figure has been adapted from previously published data in (20) (after having obtained requisite permissions) for aid of comparison with our results, presented in Fig.6 and Fig.7. The data shows frequency distributions for different fluorescently tagged loci at different stages of the cell cycle. The adapted panel (Appendix A of (20)) is titled "Cell long-axis histograms: information flows outward during multifork replication (except ter)"; the area under each curve reflects the fraction within the population, and the color of the lines matches the chromosome marker. Other details can be found in the text and in (20)
In simulations, the CLs between the 125-th and 375-th monomers and the _oriC_ are already present for two of the GD chromosomes, as they have been rechristened from the D-chromosomes. In the two other GD chromosomes, CLs are introduced in this stage of the life cycle, once the relevant monomers get replicated. This happens when the RFs proceed beyond the 125-th (and 375-th) monomer in the \(0.6-0.8\) interval of the life cycle in our simulations. Thereafter, one has four GD-chains, each with Loop-1 and Loop-2. As this occurs in the middle of the \(0.6-0.8\) stage of the life cycle, two of the _oriCs_ start to occupy the 1/8-th and 3/8-th positions in one half of the cell. Correspondingly, the other two _oriC_s in the other half of the cell occupy the 5/8-th and 7/8-th positions, leading to the appearance of four peaks at these positions. This is seen _in-vivo_ and in our simulations, and the localization occurs due to the entropic repulsion between the GD-loops. The peaks are enhanced in the next stage of the life cycle.
**Position of Replication Forks (RFs):** The confidence in our model is further strengthened by the reconciliation of the spatial distribution of the RFs from our model with _in-vivo_ results. We have complete positional information for the model RFs, as we know the positions of the RFs on the chain contour at different stages of the cell's life cycle; refer third row of Fig.6. In the \((0-0.2)\) stage, the two RFs on the M-chromosome move from the 200-th (and the 300-th) monomer to the _dif-ter_ position. Thereby there is a peak in the spatial distribution at the center of the cell, which the _dif-ter_ itself occupies after cell division, for reasons already explained. At the end of the \((0-0.2)\) interval, the replication of the M-chromosome is complete and one has two complete D-chromosomes connected at the _dif-ter_ in the Arc-2-2 architecture. The Arc-2-2 architecture ensures that the _oriC_s are at the quarter positions, as the two D-chromosomes occupy different halves of the cell (31).
Fig. 6: **Long axis distributions for _dif-ter_, _oriC_ and the replication forks:** We plot the spatial probability distributions \((P(z/L))\) of the positions of different loci, where \(z\) denotes the position along the long axis of the cylinder (cell), and \(L\) is the length of the cylinder at that stage of the simulation run. Data is shown for the _dif-ter_ locus (first row), the _oriC_ locus (second row) and the RFs (third row) for various intervals during the life cycle. The intervals are indicated at the top of each subfigure. In the first interval, there are only two data sets corresponding to the loci of the D-chromosomes. Thereafter, in subsequent intervals one may observe the spatial distribution data corresponding to both D and GD chromosomes, depending on when (and whether) that particular locus gets replicated. Once a particular locus gets replicated, the corresponding D monomer is renamed as a GD monomer (e.g. D1 is renamed to GD1), and a new monomer has also been introduced due to the replication protocol (labelled GD1'). The distribution for the renamed GD monomer (e.g. GD1) is normalised by a different number, as compared to the GD monomer that was recently introduced (i.e. GD1'). The normalization is different because the GD1 monomer was already present as the D1 monomer, even before it was replicated. The same labeling and normalization protocol holds true for the D2, GD2 and GD2' monomers. The _dif-ter_ locus gets replicated once. In the stage corresponding to \(0-0.2\) we have only the D1 _dif-ter_ locus. In the next stage the D1 locus has been replicated and there are two overlapping distributions corresponding to D1 and D2 (which remain cross-linked). The _oriC_ locus gets replicated at the start of the \((0.2-0.4)\) interval, and hence the plots show four different data sets. The localization of the _oriC_s and the _dif-ters_ can be visualized from our representative simulation video (refer to the section titled "Movies" in the SI). In the third row we plot the spatial distributions of the RFs as they are assumed to move from one monomer to the next along the chain contour. In the \((0-0.2)\) interval there are only two RFs, which move along the contour of the M-chromosome (shown in orange and blue). Thereafter, one has four RFs branching out from the two _oriC_s of the D-chromosomes, which get replicated to form the GD chromosomes. Note that the GD1 and GD1' chromosomes occupy one half of the cell while GD2 and GD2' occupy the other half. In the third row we show the spatial distributions of the RFs in different intervals of the life cycle. The RFs have been named in a specific way to clearly demarcate those which are traversing along different arms of the same chromosome. For instance, RF1 and RF1' are the two replication forks moving along the two arms of the mother chromosome, while RF2 and RF2' denote the RFs moving along D1, and correspondingly RF3 and RF3' move along D2.
At the start of the \((0.2-0.4)\) interval, replication of each of the two D-chromosomes begins from the two _oriC_s. Thus four new RFs start at the positions of the _oriC_s and begin creating the GD-chromosomes. They therefore start out from positions close to the quarter positions, as reported for experimental data obtained _in-vivo_. Thereafter, the RFs start moving along the two arms as in the train track model. Their positions are maintained around the quarter positions as the cell elongates, though they also get pushed out towards the cylinder ends by the unreplicated sections of the D-chromosomes. For the reasons mentioned above, the GD-_oriC_s are closer to the cell ends in the simulation; thereby the RFs show a higher propensity to be near the cell ends in the \((0.2-0.6)\) stage, though their distribution peaks near the quarter positions as in experiments.
The spatial distribution of RFs in the \((0.6-0.8)\) interval can again be observed and understood from Fig.6, as they move away from the _oriC_s along the contour towards the sites of the CLs (monomers 125 and 375) which make up Loop-1 and Loop-2. The CLs will be formed by linking monomer pairs \((1-125)\) and \((1-375)\), and thus after the CLs have formed at time \(0.7\tau\), the RFs will be spatially proximal to the _oriC_s. After Loop-1 and Loop-2 of GD1' (and correspondingly GD2') are formed at \(0.7\tau\), the _oriC_s of the GD-chromosomes move to the \(3/8\)-th and \(5/8\)-th positions. Thus the RFs will also be found at these positions.
In the last \((0.8-1)\) interval of the cell cycle, the RFs move towards the _dif-ters_ of the two D-chromosomes, starting out from the CL sites, as mentioned in the previous paragraph. Thus, there is a higher propensity for them to be near the cell center, but there are indications of some spatial separation along the long axis in the positions of the RFs in the data from our simulations, as seen in the third row of Fig.6. In contrast, the experimental data for cell age \(>0.6\) in Fig.5 clearly shows a prominent separation of peaks in the spatial distribution
Fig. 7: **Long axis distribution for other tagged loci:** We plot the spatial probability distributions \((P(z/L))\) of the position of different loci, where \(z\) denotes the position along the the long axis of the cylinder (cell), and \(L\) is the length of the cylinder at that stage of the simulation run. Data is shown for \(54.2^{\prime}\) locus (first row) and for \(45.1^{\prime}\) locus (second row), \(64.1^{\prime}\) locus (third row) \(79^{\prime}\) locus (fourth row) and \(74.1^{\prime}\) locus (fifth row), during the life cycle. The corresponding monomer indices are at the top of each row. The other plotting conventions are same as in Fig.6.
of RFs. To check the reason for this discrepancy between our data (averaged over 50 runs) and the experimental data, we plot the spatial distribution of the RFs from individual runs in the \((0.8-1)\) interval of the life cycle: refer SI-6 for data on the spatial positions of RFs from 50 individual runs. Many of these clearly show 3 to 4 peaks. Note that when we plot the averaged data shown in Fig.6 (3rd row), the separation of the peaks diminishes, since the positions of the RFs can be oriented differently relative to each other across different simulation runs. We compared the spatial distributions of the RF data obtained with the Arc-2-2 and the Arc-2 architectures in SI-6 and SI-7, respectively. In comparison, the data obtained using the Arc-2 architecture shows less separation between the peaks, which indicates that Loop-3 and Loop-4 (absent in the Arc-2 architecture) play a role in the separation of peaks seen in the spatial distribution of the RFs.
Next, to find the relevant correlations, we plot the spatial distributions of the centers of mass (COM) of Loop-3 and Loop-4 in the interval \((0.8-1)\tau\) in SI-8, and compare this distribution with the distribution of RFs for the Arc-2-2 architecture. We do this because at this stage of the life cycle the RFs are traversing along the contours of Loop-3 and Loop-4. We find a reasonable one-to-one correspondence between the two sets of plots, as shown in SI-9, in terms of the separation of the spatial distributions of the RFs and the COMs of Loops-3 and 4. This implies that the RFs get separated along the cell long axis as Loops-3 and 4 often occupy different regions. This may be a consequence of mutual repulsion between Loops-3 and 4, and further entropic interactions with Loops-1 and 2 of both the replicated chromosomes. All loops jostle for space to avoid each other and often end up interchanging positions.
The reader may question: if the above reasoning is correct, then why do the spatial distributions of the RFs in the \((0.2-0.4)\) interval (and in the subsequent intervals except the last one) not show four peaks? We remind the reader that in these stages of the life cycle the RFs mostly traverse along the contours of Loop-1 and Loop-2. They reach Loops 3 and 4 only in the last stage of the life cycle. Loop-3 and Loop-4 are different from Loops 1 and 2, since they are closer to the _dif-ter_ CL along the loop contour. As a consequence, Loops 3 and 4 have a greater propensity to interchange positions (to avoid spatial overlaps while exploring conformational microstates) along the cell long axis, as compared to Loops 1 and 2. The interchanging of loops is better visualised in SI-10. To have a complete understanding of the cause of repulsion between loops, and how the location of the loops along the chain contour affects the organization of loops with respect to each other, refer (32).
The spatial location at which each locus, once replicated, separates from its copy was studied in experiments (20). The distribution of these positions along the long axis for each locus is reproduced in Fig.5(b) for the aid of the reader. We can also obtain this distribution of "position of split" from our simulations, and we show it in SI-11, where we see a good match with experiments for certain loci. In SI-11, we further discuss why the 'split positions' of some loci as obtained by us differ from those seen in experiments.
**Data normalization: Experiments vs. Simulations:** There are some other caveats to be aware of. In simulations, we never see a pair of distinctly separated peaks in the _ter_ loci distributions, as we do not model cell division. However, the experimental data shows a finite probability for two _dif-ters_, even when the cell is in its \((0.6-0.8)\) stage of the life cycle (as deduced from the length of the cell), as well as in the interval \((0.8-1)\). In experiments, the cell length is used as a proxy for the age of the cell. These could give rise to discrepancies when analyzing data by image-processing software.
Moreover, there are differences in the methods of collecting data and normalizing the spatial distributions. In the given experimental data, the cells were first differentiated by their cell size to categorize the age of the cell. For each such category, the number of distinguishable foci in each cell is observed, and the spatial distribution data corresponding to each number of foci are normalised with respect to the number of cells in each of the sub-categories. The experimental images are unable to discern if the foci belong to the D-chromosome or the GD-chromosome. Furthermore, if the foci cannot be spatially resolved, a 4-foci cell may be erroneously categorized as a three-foci or a two-foci cell. The three-foci scenario may also arise due to asynchrony in the replication initiation process or stochasticity in the cohesion times. In the simulations we do not have asynchrony in the replication initiation process; however, there may be stochasticity in the cohesion times of the loci. But we have access to the position of each monomer at all stages of the simulation run. Thus, our normalization protocol is different from that adopted in experiments.
In simulations, we have 50 independent runs to collect data over the entire life cycle, and thereby precisely know the stage of the life cycle when a locus is replicated into two loci of the next generation. We thereby normalize by the precise number of micro-states relevant for a particular monomer, depending on when that monomer has been introduced within that interval of the life cycle. For each independent run having a total of \(5\times 10^{7}\) MCS, we store data to calculate distributions every \(3\times 10^{4}\) MCS. Thereby, we know the number of contributing micro-states in each stage of the life cycle for each locus.
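A small helper illustrating this normalization: for each locus and each fifth of the life cycle, the histogram of \(z/L\) is divided by the number of stored frames in which that locus actually exists, rather than by the total number of frames. The array layout below is an assumption made for illustration.

```python
import numpy as np

def locus_distribution(z_over_L, cycle_fraction, n_bins=30, intervals=5):
    """Spatial distribution P(z/L) of one locus in each fifth of the life cycle.

    z_over_L       : scaled long-axis position per stored frame (NaN where the locus does not yet exist).
    cycle_fraction : life-cycle stage (0..1) of each stored frame.
    """
    z_over_L = np.asarray(z_over_L, dtype=float)
    cycle_fraction = np.asarray(cycle_fraction, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    dists = []
    for k in range(intervals):
        mask = (cycle_fraction >= k / intervals) & (cycle_fraction < (k + 1) / intervals)
        z = z_over_L[mask]
        z = z[~np.isnan(z)]                  # frames in which this locus already exists
        counts, _ = np.histogram(z, bins=edges)
        total = counts.sum()                 # number of contributing microstates for this interval
        dists.append(counts / total if total else counts.astype(float))
    return edges, dists

# Toy usage with synthetic data: a locus that appears halfway through the life cycle.
frames = np.linspace(0, 1, 2000)
rng = np.random.default_rng(2)
z = np.where(frames < 0.5, np.nan, 0.25 + 0.05 * rng.standard_normal(2000))
edges, dists = locus_distribution(z, frames)
```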
**Spatial distribution of other loci:** We now look at the spatial distributions of the other loci tagged in the experiments. We check whether we obtain distributions similar to those obtained by (20); the relevant data has been reproduced in Fig.5 for comparison. Here we provide data for five such loci from the left arm in Fig.7 (as in (20)). The data for the loci on the right arm are given in SI-12.
For the locus marked as \(54.2^{\prime}\) (monomer 150), we see only two distributions for the \((0-0.8)\) intervals of the life cycle, as the monomers of the daughter chromosomes get replicated only at the end of the fourth interval. Our modeling data is in fair agreement with the experimental data. We find that there are four peaks towards the end of the life cycle. Experimentally, it is not possible to distinguish between the two GD loci near the end of the life cycle, as they cannot be spatially resolved if they are spatially proximal. In our simulations, we can uniquely identify the loci of each GD chromosome, and thereby we obtain four distinct spatial distributions, albeit overlapping ones. In experiments, the loci distributions (that we obtain) would appear as broad distributions having only two distinct peaks. In this sense, our spatial distributions of loci are in agreement with the experimentally obtained distributions (20).
The data obtained from simulations for the locus marked as \(45.1^{\prime}\) (monomer 200) also shows good agreement with the experimental data for 2 foci. We do not have four peaks for this locus in the \((0.8-1)\) interval since the simulations are stopped
Figure 8: **Radial organization of arms and radial distribution of loci:** Subfigure (a) shows \(\langle\cos(\theta)\rangle\), where \(\theta\) denotes the angle between the vectors \(\vec{I}_{1}\) and \(\vec{I}_{2}\) (refer text). A high negative value of \(\cos(\theta)\) indicates that the two loops (belonging to the two arms of the chromosome) lie on different cell halves along the radial axis. We have computed the average value of \(\cos(\theta)\) by considering \(1000\) microstates. We observe that the average \(\cos(\theta)\) value is more negative in the cases with smaller loops (within Loop-1 and Loop-2) as compared to the case without smaller loops. Subfigures (b-h) show the radial probability distributions of the monomers for several loci in the presence of smaller loops. These smaller loops are of size \(10\) monomers each and are placed along Loop-1 and Loop-2. We show here the data for the loci of the left arm, while the corresponding data for the right-arm loci have been provided in the Supplementary. We note that we obtain a bimodal distribution for some loci by the introduction of these smaller loops. The distributions we obtain match those found in SI-15. The experimentally obtained radial distributions for the loci have also been reproduced in SI-16 for aid of comparison. We do not obtain a match for the locus 54.2' and for some other loci on the right arm (presented in the SI). We also notice that we obtain a double-peaked distribution for the _oriC_ while the experimentally obtained distribution is single-peaked. However, we have explicitly checked that one can obtain an exact match with experiments by tuning the size and location of these loops along the chain contour. But this is outside the scope of this study. Here we have just established the mechanism by which one may obtain the bimodal peaks in the radial distribution plots of genomic loci.
just as this specific locus gets replicated. Correspondingly, the experimental data also does not have any contribution from this locus in the \((0.8-1)\) interval for the row with 4 foci. Furthermore, comparing our data in other intervals with the 2-foci data in Fig.5, the distribution of the locus \(45.1^{\prime}\) is peaked near the center as this locus is close to _dif-ter_. As the cell ages, the loci move away from each other, hence delocalizing from the center of the cell due to the presence of mutual repulsion between Loop-3 and Loop-4. But once Loop-1 and Loop-2 of the GD1' and GD2' chains are formed, they are pushed away from the poles in the \((0.6-0.8)\) interval. Thus, our data is consistent with the experimentally obtained data of Fig.5.
In the third row of Fig.7 we show data for the locus \(79^{\prime}\) (monomer 26). We note that the distributions from our simulations peak around the cell poles. This differs from the two-foci data from experiments in the \((0-0.4)\tau\) intervals, where this locus is found to be localized around the quarter positions. We have not incorporated the effects of transient loops in the current work. We presume that if we introduce transient loops in our simulations, this locus will stay away from the poles, because loops maintain distance from the walls to entropically explore conformations; this can lead to a better match with experimental data. In later parts of the cell cycle with more than two foci, the experimental distribution is broad, and four peaks are observed. We do have distributions which are spread out over the length of the cell, but we obtain a reasonable match with experimental data only in the \((0.8-1)\tau\) interval, where there are peaks around the quarter positions and at the poles.
We may reason along similar lines as to why the distributions obtained for the \(74^{\prime}\) locus are different from those seen in experiments. This locus again lies within Loop-1, and would remain away from the cell center (where the _dif-ter_ is located). While there are broad similarities between the two sets of data, we still have a high probability of finding the locus right at the poles, which differs from the experimental data.
**Radial distributions of loci:** Reference (20) also provides experimental data for the distributions of loci along the radial axes, i.e., directions perpendicular to the long axis of the cylinder (say, the \(z\) axis). We plot spatial distributions using the \(x\) or \(y\) coordinate of the loci, as both of these can be referred to as "radial" directions; refer to the figures in SI-13 and SI-14. We find that these distributions are in general broad and peaked at the middle of the cylinder, while the experimental data show double-peaked distributions for some loci.
We propose an outline of the mechanism by which one can obtain such bimodal distributions as seen in experiments. A suitable modification of the simple Arc-2-2 architecture results in the bimodal distribution of monomers (loci). We introduce 6 smaller subloops, of size 10 monomers each, in each of Loop-1 and Loop-2. These subloops are equally spaced with 10 monomers in between. The introduction of these subloops enhances the entropic repulsion between Loop-1 and Loop-2 along the short axis. As a consequence, the monomers belonging to those loops show a bimodal distribution. We show this through simulations of two Arc-2-2 polymers segregated in space, with subloops of size 10 monomers in Loop-1 and Loop-2. We do not incorporate replication, since we only outline the mechanism by which one may obtain such bimodal distributions. The existence of topologically associated domains (TADs) in Hi-C maps indicates the existence of smaller loops in the DNA-polymer (35-47). We take equally sized and spaced subloops as a simple proof-of-concept model to establish that the smaller loops help in the radial organization of the loci. These loops will appear to be transient _in-vivo_.
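As a schematic illustration of this construction (the absolute monomer indices below are placeholders, not values from our model), the subloops can be encoded as a list of permanent cross-links between monomer pairs:

```python
# Sketch of the subloop placement described above: 6 subloops of 10 monomers,
# separated by 10-monomer spacers, each realized as a cross-link between its
# first and last monomer.  Loop start index is a hypothetical placeholder.
def subloop_crosslinks(loop_start, n_subloops=6, subloop_size=10, spacer=10):
    """Return (i, j) monomer pairs to be cross-linked to form the subloops."""
    pairs = []
    pos = loop_start
    for _ in range(n_subloops):
        pairs.append((pos, pos + subloop_size - 1))  # close one subloop
        pos += subloop_size + spacer                 # move to the next subloop
    return pairs

# Hypothetical example: if Loop-1 started at monomer 300,
print(subloop_crosslinks(loop_start=300))
```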
To establish that Loop-1 and Loop-2 lie in different halves of the cylinder along the radial axis, we carry out the following calculation. We construct a vector \(\vec{l}_{1}\) joining the mid-point of the cylinder and the COM (center of mass) of Loop-1. Similarly, we construct another vector \(\vec{l}_{2}\) joining the center of the cylinder and the COM of Loop-2. Note that we only consider the \(\hat{x}\) and \(\hat{y}\) components of the vectors. The angle between the two vectors is denoted by \(\theta\). Then, if \(cos(\theta)\approx-1\), the two vectors are anti-parallel to each other. This implies that the two loops, Loop-1 and Loop-2 (belonging to different arms of the chromosome), lie in different cell halves along the short axis. As can be inferred from Fig.8(a), we observe that the introduction of smaller additional loops leads to the separation of arms along the short axis. We further note in Fig.8(b-h) that the introduction of subloops also leads to bimodal distributions of some loci along the short axis, similar to what is seen in (20). Other loci show broader radial distributions as compared to the radial distributions for Arc-2-2 polymers without smaller loops (refer to SI-13 & SI-14), even if they do not show bimodal distributions. To check the robustness of our conclusions, we conduct similar simulations with five smaller subloops within Loop-1 (and Loop-2), with 16 monomers in each of the subloops. This also shows separation of these arms, as can be inferred from the values of \(\langle cos(\theta)\rangle\); refer to Fig.8(a). This, however, fails to show bimodal radial distributions.
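A minimal sketch of this diagnostic is given below (for illustration only; the array shapes and the cylinder-center convention are assumptions of the example, not a transcription of our simulation code):

```python
# Arm-separation diagnostic: project the vectors from the cylinder mid-point to
# the centers of mass of Loop-1 and Loop-2 onto the x-y plane, then compute
# cos(theta) between them.
import numpy as np

def cos_theta(loop1_xyz, loop2_xyz, cylinder_center_xy=(0.0, 0.0)):
    """loop*_xyz: (N, 3) arrays of monomer coordinates (equal monomer masses assumed)."""
    c = np.asarray(cylinder_center_xy)
    l1 = loop1_xyz[:, :2].mean(axis=0) - c   # x-y components of COM vector of Loop-1
    l2 = loop2_xyz[:, :2].mean(axis=0) - c   # x-y components of COM vector of Loop-2
    return np.dot(l1, l2) / (np.linalg.norm(l1) * np.linalg.norm(l2))

# Averaging cos_theta over ~1000 stored micro-states gives <cos(theta)>;
# values close to -1 indicate that the two arms occupy opposite cell halves.
```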
Furthermore, one may also introduce smaller loops along the rest of the Arc-2-2 polymer (outside Loop-1 and Loop-2) to obtain the bimodal distribution of other monomers. In the experimental data of (20), some loci show single-peaked distributions while others show bimodal distributions. In our simulations too, some monomers show single-peaked distributions while others show bimodal distributions, although an exact match with experiments is not obtained. To obtain an exact match one needs to optimize the size and location of loops along the contour, which is outside the scope of the current manuscript.
## 2 Discussion
We establish that entropic repulsion between loops is a viable mechanism through which the chromosomes segregate and organise themselves. We show that the organisation of _oriC_, _dif-ter_ and other loci emerge spontaneously in our model, both along the longitudinal and the radial axis. Our model also successfully reconciles other experimental observations such as the spatial organization of replication forks. We have also shown that even though we have modeled the replication process in a train-track manner, the adoption of a particular polymer architecture by the chromosome results in the localisation of the RFs which supports (in spirit) the replication-factory model. Thus, the hypothesis first proposed by the authors of (20): "The position and dynamics of the replication forks are likely the consequence of the spatial organization of the chromosomes rather than vice-versa" is justified and the mechanism has been established by us. With this work and our past paper on the organization of _E.coli_ chromosomes in slow growth conditions using the Arc-2-2 architecture (31), we present a
unified model of _E.coli_ chromosome organization that is able to reconcile observations across different growth conditions.
Although our coarse-grained model is able to reconcile multiple experimental results, the model still has room for improvement in the future. The incorporation of crowders in our model [23, 48, 49] could condense the chromosome significantly and keep it away from the cylinder ends. One may also incorporate the effects of other transient cross-links in addition to the permanent (long-lived) cross-links that we have employed. Binding and loop-extrusion proteins, which associate and dissociate from the chromosome at different stages of the cell cycle, are likely to result in transient cross-links and further fine-tune the evolution of the organization. We mimic the action of the Topoisomerase enzymes by decreasing the monomer diameters at infrequent but regular intervals; this enzyme acts _in vivo_ to remove topological constraints locally along the chain contour. In our study, we use Monte Carlo simulations to realize local diffusion of polymer segments within a confining cylinder (cell), which results in emergent entropic repulsion between different loops of the DNA-polymer. Consequently, we obtain the evolution of the organization of the chromosome, even though the system is out of equilibrium.
We have outlined principles by which one may obtain the experimental data of [20] through our model of the DNA-polymer with a modified topology. The experiments were conducted on a specific growth medium of the bacterial cells, which determines the values of \(\tau_{C}\), \(\tau_{D}\) and the doubling time. We have adapted our model accordingly to establish a correspondence with the experiments of [20]. In future manuscripts, we shall communicate our results for a different choice of \(\tau_{C}\), \(\tau_{D}\), and the doubling time. We hope that our theoretical predictions can then be validated by experiments conducted using different growth media.
## 3 Acknowledgements
Authors acknowledge useful discussions with Arieh Zaritsky, Conrad Woldringh, Tejal Agarwal and Suckjoon Jun. A.C., with DST-SERB identification SQUID-1973-AC-4067, acknowledges funding by DST-India, project MTR/2019/000078 and CRG/2021/007824. A.C also acknowledges discussions in meetings organized by ICTS, Bangalore.
|
2302.00934 | High-dimensional variable clustering based on maxima of a weakly
dependent random process | We propose a new class of models for variable clustering called Asymptotic
Independent block (AI-block) models, which defines population-level clusters
based on the independence of the maxima of a multivariate stationary mixing
random process among clusters. This class of models is identifiable, meaning
that there exists a maximal element with a partial order between partitions,
allowing for statistical inference. We also present an algorithm depending on a
tuning parameter that recovers the clusters of variables without specifying the
number of clusters \emph{a priori}. Our work provides some theoretical insights
into the consistency of our algorithm, demonstrating that under certain
conditions it can effectively identify clusters in the data with a
computational complexity that is polynomial in the dimension. A data-driven
selection method for the tuning parameter is also proposed. To further
illustrate the significance of our work, we applied our method to neuroscience
and environmental real-datasets. These applications highlight the potential and
versatility of the proposed approach. | Alexis Boulin, Elena Di Bernardino, Thomas Laloë, Gwladys Toulemonde | 2023-02-02T08:24:26Z | http://arxiv.org/abs/2302.00934v3 | High-dimensional variable clustering based on sub-asymptotic maxima of a weakly dependent random process
###### Abstract
We propose a new class of models for variable clustering called Asymptotic Independent block (AI-block) models, which defines population-level clusters based on the independence of the maxima of a multivariate stationary mixing random process among clusters. This class of models is identifiable, meaning that there exists a maximal element with a partial order between partitions, allowing for statistical inference. We also present an algorithm for recovering the clusters of variables without specifying the number of clusters _a priori_. Our work provides some theoretical insights into the consistency of our algorithm, demonstrating that under certain conditions it can effectively identify clusters in the data with a computational complexity that is polynomial in the dimension. This implies that groups can be learned nonparametrically in a setting in which block maxima of a dependent process are only sub-asymptotic.
C 60G70; 62H05; 62M99
## 1 Introduction
Multivariate extremes arise when two or more extreme events occur simultaneously. These events are of prime interest for assessing natural hazards stemming from heavy rainfall, wind storms and earthquakes, since such hazards are driven by joint extremes of several meteorological variables. It is well known from Sklar's theory (Sklar, 1959) that multivariate distributions can be decomposed into two distinct parts: the analysis of marginal distributions and the analysis of the dependence structure described by the copula function. Results from multivariate extreme value theory show that the possible dependence structures of extremes satisfy certain constraints. Indeed, the dependence structure may be described in various equivalent ways (Beirlant et al., 2004; De Haan and Ferreira, 2006; Resnick, 2008): by the exponent measure (Balkema and Resnick, 1977), by the Pickands dependence function (Pickands, 1981), by the stable tail dependence function (Huang, 1992), by the madogram (Naveau et al., 2009; Boulin et al., 2022), and by the extreme value copula (Gudendorf and Segers, 2010).
While the modeling of univariate and low-dimensional extreme events has been well-studied, it remains a challenge to model multivariate extremes, particularly when multiple rare events may occur simultaneously. Recent research in this area has focused on connecting the study of multivariate extremes to modern statistical and machine learning techniques. This has involved the development of new methods for characterizing complex dependence structures between extreme observations, such as sparsity-based approaches (Goix, Sabourin, Clemencon, et al., 2015; Meyer and Wintenberger, 2021; Simpson, Wadsworth, and Tawn, 2020), conditional independence and graphical models (Engelke and Hitz, 2020; Gissibl and Kluppelberg, 2018; Segers, 2020), dimensionality reduction (Chautru, 2015; Drees and Sabourin, 2021), and clustering methods (Cooley and Thibaud, 2019; Fomichov and Ivanov, 2022; Janssen and Wan, 2020). Our work is aligned with this direction of
research as we propose a clustering algorithm for learning the dependence structure of multivariate extremes and, withal, to bridge important ideas from modern statistics and machine learning to the framework of extreme-value theory. Our approach is remotely related to extremal graphical models. The probabilistic framework of this paper can effectively be seen as a disconnected extremal graph where the connected components are mutually independent of each other (see for example section 8 in Engelke, Ivanov, and Strokorb 2022).
It is possible to perform clustering on \(\mathbf{X}_{1},\ldots,\mathbf{X}_{n}\), where \(n\) is the number of observations of a random vector \(\mathbf{X}\in\mathbb{R}^{d}\), through two different approaches: by partitioning the set of row indices \(\{1,\ldots,n\}\) or by partitioning the set of column indices \(\{1,\ldots,d\}\). The first problem is known as the data clustering problem, while the second is called the variable clustering problem, which is the focus of this paper. In data clustering, observations are drawn from a mixture distribution, and clusters correspond to different realizations of the mixing distribution, which is a distribution over all of \(\mathbb{R}^{d}\). In the framework of independent and identically distributed (i.i.d.) replications, (Pollard 1981) showed that \(k\)-means clustering is strongly consistent, and this result was replicated in the context of extremes by (Janssen and Wan 2020) for spherical \(k\)-means.
The problem of variable clustering (see, e.g., Bunea et al. 2020; Eisenach et al. 2020) involves grouping similar components of a random vector \(\mathbf{X}=(X^{(1)},\ldots,X^{(d)})\) into clusters. The goal is to recover these clusters from observations \(\mathbf{X}_{1},\ldots,\mathbf{X}_{n}\). Instead of clustering similar observations based on a dissimilarity measure, the focus is on defining cluster models that correspond to subsets of the components \(X^{(j)}\) of \(\mathbf{X}\in\mathbb{R}^{d}\). The goal is to cluster similar variables such that variables within the same cluster are more similar to each other than they are to variables in other clusters. Variable clustering is of particular interest in the study of weather extremes, with examples in the literature on regionalization (Bador et al. 2015; Bernard et al. 2013; Saunders, Stephenson, and Karoly 2021), where spatial phenomena are observed at a limited number of sites. A specific case of interest is clustering these sites according to their extremal dependencies. This can be done using techniques such as \(k\)-means or hierarchical clustering with a dissimilarity measure designed for extremes. However, the statistical properties of these procedures have not been extensively studied, and it is not currently known which probabilistic models on \(\mathbf{X}\) can be estimated using these techniques. In this paper, we consider model-based clustering, where the population-level clusters are well-defined, offering interpretability and a benchmark to evaluate the performance of a specific clustering algorithm.
The assumption that data are realizations of independent and identically distributed (i.i.d.) random variables is a fundamental assumption in statistical theory and modeling. However, this assumption is often unrealistic for modern datasets or the study of time series. Developing methods and theory to handle departures from this assumption is an important area of research in statistics. One common approach is to assume that the data are drawn from a multivariate stationary and mixing random process, which implies that the dependence between observations weakens over the trajectory. This assumption is widely used in the study of non-i.i.d. processes.
Our contribution is twofold. First, we develop a probabilistic setting for Asymptotic Independent block (AI-block) models to address the problem of clustering extreme values of the target vector. These models are based on the assumption that clusters of components of a multivariate random process are independent relative to their extremes. This approach has the added benefit of being amenable to theoretical analysis, and we show that these models are identifiable (see Theorem 1). Second, we motivate and derive an algorithm specifically designed for these models (see Algorithm (ECO)). We analyze its performance in terms of exact cluster recovery for minimally separated clusters, using a cluster separation metric (see Theorem 4). The issue is investigated in the context of nonparametric estimation using the block maxima method, where the block length is a tuning parameter.
NotationsAll bold letters \(\mathbf{x}\) correspond to vector in \(\mathbb{R}^{d}\). By considering \(B\subseteq\{1,\ldots,d\}\), we denote the \(|B|\)-subvector of \(\mathbf{x}\) by \(\mathbf{x}^{(B)}=(X^{(j)})_{j\in B}\). We define by \(\mathbf{X}\in\mathbb{R}^{d}\) a random vector with law \(H\) and \(\mathbf{X}^{(B)}\) a random subvector of \(\mathbf{X}\) with law
\[H^{(B)}(\mathbf{x}^{(B)})=H(\mathbf{1},\mathbf{x}^{(B)},\mathbf{1}),\quad(X^{ (j)})_{j\in B}\in[0,1]^{|B|},\]
where \((\mathbf{1},\mathbf{x}^{(B)},\mathbf{1})\) has its \(j\)th component equal to \(x^{(j)}\mathds{1}_{\{j\in B\}}+\mathds{1}_{\{j\notin B\}}\). In a similar way, we note \((\mathbf{0},\mathbf{x}^{(B)},\mathbf{0})\) the vector in \(\mathbb{R}^{d}\) which equals \(x^{(j)}\) if \(j\in B\) and \(0\) otherwise. When \(B=\{1,\ldots,d\}\), we will write \(H\) instead of \(H^{(\{1,\ldots,d\})}\). Classical inequalities of vectors such as \(\mathbf{x}>0\) should be understand componentwise. Weak convergence of processes are denoted by '\(\leadsto\)'. The notation \(\delta_{x}\) corresponds to the Dirac measure at \(x\). Let \(O=\{O_{g}\}_{g=1,\ldots,G}\) be a partition of \(\{1,\ldots,d\}\) into \(G\) groups and let \(s:\{1,\ldots,d\}\to\{1,\ldots,G\}\) be a variable index assignement function, thus \(O_{g}=\{a\in\{1,\ldots,d\}:s(a)=g\}=\{i_{g,1},\ldots,i_{g,d_{g}}\}\) with \(d_{1}+\cdots+d_{G}=d\). Using these notations, the variable \(X^{(i_{g},t)}\) should be read as the \(\ell\)th element from the \(g\)th cluster. Let \(\mathbf{X}^{(O_{g})}\), \(g\in\{1,\ldots,G\}\) be extreme value random vectors with \(\mathbf{X}=(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})})\), we say that \(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})}\) are independent if and only if
\[H(\mathbf{x})=\Pi_{g=1}^{G}H^{(O_{g})}\left(\mathbf{x}^{(O_{g})}\right),\quad \mathbf{x}\in\mathbb{R}^{d}.\]
The structure of this paper is as follows. In Section 2, we provide background on extreme-value theory and weakly dependent random processes, and describe the probabilistic framework of AI-block models. We show that these models are identifiable and provide a series of equivalent characterizations. In Section 3, we develop a new clustering algorithm for AI-block models and prove that it can recover the target partition with high probability. We provide a process that satisfies our probabilistic and statistical assumptions in Section 4, and compare our approach to existing state-of-the-art methods in Section 5. We illustrate the finite sample performance of our approach on simulated datasets in Section 6. The proofs of our main, auxiliary, and supplementary results are provided in A, B, and C of the supplementary material, respectively. Additional figures and numerical results are presented in C.3. Throughout the paper, readers will be directed to appendices B or C for additional materials when necessary. Otherwise, all the necessary materials can be found in A.
## 2 A model for variable clustering
### Background setting
Consider \(\mathbf{Z}=(Z^{(1)},\ldots,Z^{(d)})\) and \(\mathbf{Z}_{t}=(Z^{(1)}_{t},\ldots,Z^{(d)}_{t})\), where \(t\in\mathbb{Z}\) be respectively a \(d\)-dimensional random vector with law \(F\) and a strictly stationary multivariate random process distributed according to \(\mathbf{Z}\). For the process \((\mathbf{Z}_{t},t\in\mathbb{Z})\), let
\[\mathcal{F}_{k}=\sigma(\mathbf{Z}_{t},t\leq k),\quad\text{ and }\quad\mathcal{G}_{ k}=\sigma(\mathbf{Z}_{t},t\geq k),\]
be respectively the natural filtration and "reverse" filtration of \((\mathbf{Z}_{t},t\in\mathbb{Z})\). Many types of mixing conditions exist in the literature. The weakest among those most commonly used is called strong or \(\alpha\)-mixing. Specifically, for two \(\sigma\)-fields \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) of a probability space \((\Omega,\mathcal{A},\mathbb{P})\) the \(\alpha\)-mixing coefficient of a multivariate random process is defined for \(\ell\geq 1\)
\[\alpha(\ell)=\sup_{t\in\mathbb{Z}}\,\alpha\left(\mathcal{F}_{t},\mathcal{G}_{t+ \ell}\right), \tag{1}\]
where
\[\alpha\left(\mathcal{A}_{1},\mathcal{A}_{2}\right)=\sup_{A_{1}\in\mathcal{A}_{ 1},A_{2}\in\mathcal{A}_{2}}\left|\mathbb{P}(A_{1}\cap A_{2})-\mathbb{P}(A_{1}) \mathbb{P}(A_{2})\right|.\]
For any process \((\mathbf{Z}_{t},t\in\mathbb{Z})\), let
\[\beta(\mathcal{A}_{1},\mathcal{A}_{2})=\sup\frac{1}{2}\sum_{i,j\in I\times J} \left|\mathbb{P}(A_{i}\cap B_{j})-\mathbb{P}(A_{i})\mathbb{P}(B_{j})\right|,\]
where the sup is taken over all finite partitions \((A_{i})_{i\in I}\) and \((B_{j})_{j\in J}\) of \(\Omega\) with the sets \(A_{i}\) in \(\mathcal{A}_{1}\) and the sets \(B_{j}\) in \(\mathcal{A}_{2}\). The \(\beta\)-mixing (or completely regular) coefficient is defined as follows
\[\beta(\ell)=\sup_{t\in\mathbb{Z}}\beta(\mathcal{F}_{t},\mathcal{G}_{t+\ell}). \tag{2}\]
By considering
\[\varphi(\mathcal{A}_{1},\mathcal{A}_{2})=\sup_{A_{1},A_{2}\in\mathcal{A}_{1}\times\mathcal{A}_{2},\mathbb{P}(A_{1})\neq 0}\left|\mathbb{P}(A_{2}|A_{1})-\mathbb{P}(A_{2})\right|,\]
the \(\varphi\)-mixing coefficient is defined by
\[\varphi(\ell)=\sup_{t\in\mathbb{Z}}\varphi(\mathcal{F}_{t},\mathcal{G}_{t+ \ell}) \tag{3}\]
It should be noted that if the original process \((\mathbf{Z}_{t},t\in\mathbb{Z})\) satisfies an \(\alpha\)- or \(\beta\)- or \(\varphi\)-mixing condition, then the stationary process \((f(\mathbf{Z}_{t}),t\in\mathbb{Z})\) for a measurable function \(f\) also satisfies the same mixing condition. The \(\alpha\)-mixing rate, \(\beta\)-mixing rate, and \(\varphi\)-mixing rate of the stationary process are all bounded by the corresponding rate of the original process. In terms of their order, the three mixing coefficients are related as follows:
\[\alpha(\ell)\leq\beta(\ell)\leq\varphi(\ell). \tag{4}\]
This means that the \(\alpha\)-mixing coefficient is the weakest, followed by the \(\beta\)-mixing coefficient, and finally the \(\varphi\)-mixing coefficient is the strongest.
Let \(\mathbf{M}_{m}=(M_{m}^{(1)},\ldots,M_{m}^{(d)})\) be the vector of component-wise maxima, where \(M_{m}^{(j)}=\max_{i=1,\ldots,m}Z_{i}^{(j)}\). Consider a random vector \(\mathbf{X}=(X^{(1)},\ldots,X^{(d)})\) with distribution \(H\). A normalizing function \(a\) on \(\mathbb{R}\) is a non-decreasing, right continuous function that goes to \(\pm\infty\) as \(x\rightarrow\pm\infty\). In extreme value theory, a fundamental problem is to characterize the limit distribution \(H\) in the following limit:
\[\lim_{m\rightarrow\infty}\mathbb{P}\left\{\mathbf{M}_{m}\leq\mathbf{a}_{m}( \mathbf{x})\right\}=H(\mathbf{x}), \tag{5}\]
where \(\mathbf{a}_{m}=(a_{m}^{(1)},\ldots,a_{m}^{(d)})\) with \(a_{m}^{(j)},1\leq j\leq d\) are normalizing functions and \(H\) is a non-degenerate distribution. Typically, \(H\) is an extreme value distribution, and \(\mathbf{X}\) is a max-stable random vector with generalized extreme value margins. In this case, we can write:
\[\mathbb{P}\left\{\mathbf{X}\leq\mathbf{x}\right\}=\exp\left\{-\Lambda(E \setminus[0,\mathbf{x}])\right\},\]
where \(\Lambda\) is a Radon measure on the cone \(E=[0,\infty)^{d}\setminus\{\mathbf{0}\}\). When (5) holds with \(H\) an extreme value distribution, the vector \(\mathbf{Z}\) is said to be in max-domain of attraction of the random vector \(\mathbf{X}\) with law \(H\), denoted as \(F\in\mathcal{D}(H)\). In our context of a dependent process \((\mathbf{Z}_{t},t\in\mathbb{Z})\), the limit in (5) will in general be different from a multivariate extremal types distribution and further conditions over the regularity (or mixing conditions) are thus needed to obtain an extremal distribution. In particular, if the random process \((\mathbf{Z}_{t},t\in\mathbb{Z})\) is \(\alpha\)-mixing that is \(\alpha(\ell)\to 0\) as \(\ell\rightarrow\infty\), then a Fisher-Tippett-Gnedenko's type theorem holds for multivariate stationary random processes (see Theorem 4.2 of Hsing 1989).
The max-domain of attraction can be translated into terms of copulae. Let \(C_{m}\) be the copula of \(\mathbf{M}_{m}\). In this context, the domain of attraction condition can be written as:
\[\lim_{m\rightarrow\infty}C_{m}(\mathbf{u})=C(\mathbf{u}).\]
Under the same mixing condition, i.e., \(\alpha(\ell)\to 0\) as \(\ell\to\infty\), \(C\) is an extreme value copula and it can be expressed as follows for \(\mathbf{u}\in[0,1]^{d}\):
\[C(\mathbf{u})=\exp\left\{-L\left(-\ln(u^{(1)}),\ldots,-\ln(u^{(d)})\right) \right\},\]
where \(L\) is known as the stable tail dependence function (see Gudendorf and Segers, 2010 for an overview of extreme value copulae). As it is a homogeneous function of order \(1\), i.e., \(L(a\mathbf{z})=aL(\mathbf{z})\) for all \(a>0\), we have, for all \(\mathbf{z}\in[0,\infty)^{d}\),
\[L(\mathbf{z})=(z^{(1)}+\cdots+z^{(d)})A(\mathbf{t}),\]
with \(t^{(j)}=z^{(j)}/(z^{(1)}+\cdots+z^{(d)})\) for \(j\in\{2,\ldots,d\}\), \(t^{(1)}=1-(t^{(2)}+\cdots+t^{(d)})\), and \(A\) is the restriction of \(L\) into the \(d\)-dimensional unit simplex, viz.
\[\Delta_{d-1}=\{(v^{(1)},\ldots,v^{(d)})\in[0,1]^{d}:v^{(1)}+\cdots+v^{(d)}=1\}.\]
The function \(A\) is known as the Pickands dependence function and is often used to quantify the extremal dependence among the elements of \(\mathbf{X}\). Indeed, \(A\) satisfies the constraints \(1/d\leq\max(t^{(1)},\ldots,t^{(d)})\leq A(\mathbf{t})\leq 1\) for all \(\mathbf{t}\in\Delta_{d-1}\), with lower and upper bounds corresponding to the complete dependence and independence among maxima. For the latter, it is commonly said that the stationary random process \((\mathbf{Z}_{t},t\in\mathbb{Z})\) exhibits asymptotic independence, i.e., the multivariate extreme value distribution \(H\) in the max-domain of attraction is equal to the product of its marginal extreme value distributions.
### Proposed AI-block models
In this paper, we focus on the concept of asymptotic independence, which has been observed in various applications, such as the analysis of spatial precipitation patterns (see Lalancette et al., 2021; Le et al., 2018) and water discharges in river networks (Fomichov and Ivanovs, 2022). Motivated by these applications, we introduce a new class of models for variable clustering, AI-block models, in which population-level clusters are defined as groups of variables that are dependent within clusters and independent from other clusters relative to their extremes. Formally, the variables of the distribution in the domain of attraction of observed processes can be partitioned into \(G\) unknown clusters \(O=\{O_{1},\ldots,O_{G}\}\), such that variables within the same cluster are dependent, and the clusters are asymptotically independent. In this section, we focus on the identifiability of the model, specifically the existence of a unique maximal element with respect to a certain partial order on partition. We explicitly construct this maximal element, which corresponds to the thinnest partition where this property holds and serves as a target for statistical inference.
Let us consider \(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})}\) to be extreme value random vectors with extreme value copulae \(C^{(O_{1})},\ldots,C^{(O_{G})}\), respectively. Under the condition of independence between \(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})}\), the random vector \(\mathbf{X}=(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})})\) is again an extreme value random vector, and one can detail the expression of its extreme value copula. The formal statement of this result is given in the next proposition.
**Proposition 1**.: _Let \(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})}\) be independent extreme value random vectors with extreme value copulae \(C^{(O_{1})},\ldots,C^{(O_{G})}\). Then the function \(C\) defined as_
\[C:\ \ [0,1]^{d} \longrightarrow [0,1]\] \[\mathbf{u} \longmapsto \Pi_{g=1}^{G}C^{(O_{g})}(u^{(i_{g,1})},\ldots,u^{(i_{g,d_{g}})}),\]
_is an extreme value copula associated to the random vector \(\mathbf{X}=(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})})\)._
As a result, a random vector \(\mathbf{X}\) that exhibits asymptotic independence between extreme-valued subvectors therefore inherits this extreme-valued property. Using the definitions and notations so far introduced in this work, we now present the definition of our model.
**Definition 1** (Asymptotic Independent-block model).: Let \((\mathbf{Z}_{t},t\in\mathbb{Z})\) be a \(d\)-variate stationary random process with law \(F\) and \(\mathbf{X}\) a random vector with extreme value distribution \(H\). The random process \(\mathbf{Z}_{t}\) is said to follow an AI-block model if \(F\in D(H)\) and for every \(g\in\{1,\ldots,G\}\), \(\mathbf{X}^{(O_{g})}=(X^{(i_{g,1})},\ldots,X^{(i_{g,d_{g}})})\) are extreme value random vectors and \(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})}\) are independent, that is \(H=\Pi_{g=1}^{G}H^{(O_{g})}\).
Notice that, when \(G=1\), the definition of AI-block models reduces to requiring that the process \((\mathbf{Z}_{t},t\in\mathbb{Z})\) is in the domain of attraction of an extreme value distribution \(H\).
Following Bunea et al.2020, we introduce the following notation in our framework. We say that \(\mathbf{Z}\) follows an AI-block model with a partition \(O\), denoted \(\mathbf{Z}\sim O\). We define the set \(\mathcal{L}(\mathbf{Z})=\{O:O\text{ is a partition of }\{1,\ldots,d\}\text{ and } \mathbf{Z}\sim O\}\), which is nonempty and finite, and therefore has maximal elements. We introduce a partial order on partitions as follows: let \(O=\{O_{g}\}_{g}\) and \(\{S_{g^{\prime}}\}_{g^{\prime}}\) be two partitions of \(\{1,\ldots,d\}\). We say that \(S\) is a sub-partition of \(O\) if, for each \(g^{\prime}\), there exists \(g\) such that \(S_{g^{\prime}}\subseteq O_{g}\). We define the partial order \(\leq\) between two partitions \(O\) and \(S\) of \(\{1,\ldots,d\}\) as follows:
\[O\leq S,\text{ if }S\text{ is a sub-partition of }O. \tag{6}\]
For any partition \(O=\{O_{g}\}_{1\leq g\leq G}\), we write \(a\stackrel{{ O}}{{\sim}}b\) where \(a,b\in\{1,\ldots,d\}\) if there exists \(g\in\{1,\ldots,G\}\) such that \(a,b\in O_{g}\).
**Definition 2**.: For any two partitions \(O,S\) of \(\{1,\ldots,d\}\), we define \(O\cap S\) as the partition induced by the equivalence relation \(a\stackrel{{ O\cap S}}{{\sim}}b\) if and only if \(a\stackrel{{ O}}{{\sim}}b\) and \(a\stackrel{{ S}}{{\sim}}b\).
Checking that \(a\stackrel{{ O\cap S}}{{\sim}}b\) is an equivalence relation is straightforward. With this definition, we have the following interesting properties that lead to the desired result, the identifiability of AI-block models.
**Theorem 1**.: _Let \((\mathbf{Z}_{t},t\in\mathbb{Z})\) be a stationary random process, then the following properties hold:_
1. _Consider_ \(O\leq S\)_. Then_ \(\mathbf{Z}\sim S\) _implies_ \(\mathbf{Z}\sim O\)_,_
2. \(O\leq O\cap S\) _and_ \(S\leq O\cap S\)_,_
3. \(\mathbf{Z}\sim O\) _and_ \(\mathbf{Z}\sim S\) _is equivalent to_ \(\mathbf{Z}\sim O\cap S\)_,_
4. _The set_ \(\mathcal{L}(\mathbf{Z})\) _has a unique maximum_ \(\bar{O}(\mathbf{Z})\)_, with respect to the partition partial order_ \(\leq\) _in (_6_)._
The proof demonstrates that for any partition such that \(\mathbf{Z}\) follows an AI-block model, there exists a maximal partition, denoted by \(\bar{O}\), and its structure is intrinsic to the definition of the extreme value random vector \(\mathbf{X}\). This partition, which represents the thinnest partition of \(\mathbf{Z}\) where \(\mathbf{X}\) is independent per block, matches our expectations for a reasonable clustering target in these models. With a slight abuse of notation, we will refer to \(\bar{O}(\mathbf{Z})\) as \(\bar{O}\) throughout the rest of this paper.
### Extremal dependence structure for AI-block models
Under the conditions stated in Proposition 1, \(\mathbf{X}\) is an extreme value random vector with a stable tail dependence function \(L\). This function can be expressed in the following form:
\[L\left(z^{(1)},\ldots,z^{(d)}\right)=\sum_{g=1}^{G}L^{(O_{g})}\left(\mathbf{ z}^{(O_{g})}\right),\quad\mathbf{z}\in[0,\infty)^{d}, \tag{7}\]
where \(L^{(O_{1})},\ldots,L^{(O_{G})}\) are the stable tail dependence functions with copulae \(C^{(O_{1})},\ldots,C^{(O_{G})}\), respectively. This model is a specific form of the nested extreme value copula, as mentioned in the remark below and discussed in further detail in Hofert, Huser, and Prasad 2018.
**Remark 1**.: Equation (7) can be rewritten as
\[L(\mathbf{z})=L_{\Pi}\left(L^{(O_{1})}\left(z^{(O_{1})}\right),\ldots,L^{(O_{G })}\left(z^{(O_{G})}\right)\right),\]
where \(L_{\Pi}(z^{(1)},\ldots,z^{(G)})=\sum_{g=1}^{G}z^{(g)}\) is a stable tail dependence function corresponding to asymptotic independence. According to Proposition 1, \(C\) is an extreme value copula. Therefore, it follows that \(C\), which has the representation
\[C(\mathbf{u})=C_{\Pi}\left(C^{(O_{1})}(\mathbf{u}^{(O_{1})}),\ldots,C^{(O_{G} )}(\mathbf{u}^{(O_{G})})\right),\quad C_{\Pi}=\Pi_{g=1}^{G}u^{(g)},\]
is also a nested extreme value copula, as defined in Hofert, Huser, and Prasad 2018.
Equation (7) can be restricted to the simplex, allowing us to express the stable tail dependence function in terms of the Pickands dependence function. Specifically, the Pickands dependence function \(A\) can be written as a convex combination of the Pickands dependence functions \(A^{(O_{1})},\ldots,A^{(O_{G})}\) as follows:
\[A(t^{(1)},\ldots,t^{(d)}) =\frac{1}{z^{(1)}+\cdots+z^{(d)}}\left[\sum_{g=1}^{G}(z^{(i_{g,1} )}+\cdots+z^{(i_{g,d_{g}})})A^{(O_{g})}(\mathbf{t}^{(O_{g})})\right]\] \[=\sum_{g=1}^{G}w^{(O_{g})}(\mathbf{t})A^{(O_{g})}(\mathbf{t}^{(O _{g})})=:A^{(O)}(t^{(1)},\ldots,t^{(d)}), \tag{8}\]
with \(t^{(j)}=z^{(j)}/(z^{(1)}+\cdots+z^{(d)})\) for \(j\in\{2,\ldots,d\}\) and \(t^{(1)}=1-(t^{(2)}+\cdots+t^{(d)})\), \(w^{(O_{g})}(\mathbf{t})=(z^{(i_{g,1})}+\cdots+z^{(i_{g,d_{g}})})/(z^{(1)}+ \cdots+z^{(d)})\) for \(g\in\{2,\ldots,G\}\) and \(w^{(O_{1})}(\mathbf{t})=1-(w^{(O_{2})}(\mathbf{t})+\cdots+w^{(O_{G})}(\mathbf{ t}))\), \(\mathbf{t}^{(O_{g})}=(t^{(i_{g,1})},\ldots,t^{(i_{g,d_{g}})})\) where \(t^{(i_{g,\ell})}=z^{(i_{g,\ell})}/(z^{(i_{g,1})}+\cdots+z^{(i_{g,d_{g}})})\) and \((i_{g,\ell})\) designates the \(\ell\)th variable in the \(g\)th cluster for \(\ell\in\{1,\ldots,d_{g}\}\) and \(g\in\{1,\ldots,G\}\). As a convex combination of Pickands dependence functions, \(A\) is itself a Pickands dependence function (see page 123 of Falk, Husler, and Reiss 2010).
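As a simple sanity check of (8), consider the special case where every cluster is a singleton, \(G=d\) and \(O_{g}=\{g\}\). Then \(d_{g}=1\), each \(A^{(O_{g})}\equiv 1\), and \(w^{(O_{g})}(\mathbf{t})=t^{(g)}\), so that

\[A^{(O)}(\mathbf{t})=\sum_{g=1}^{d}t^{(g)}=1,\quad\mathbf{t}\in\Delta_{d-1},\]

which is the Pickands dependence function corresponding to total independence among the maxima.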
In the context of independence between extreme random variables, it is well-known that the inequality \(A(\mathbf{t})\leq 1\) holds for \(\mathbf{t}\in\Delta_{d-1}\), where \(A\) is the Pickands dependence function and equality stands if and only if the random variables are independent. This result extends to the case of random vectors, with the former case being a special case where \(d_{1}=\cdots=d_{G}=1\).
**Proposition 2**.: _Consider an extreme value random vector \(\mathbf{X}\in\mathbb{R}^{d}\) with Pickands dependence function \(A\). Let \(A^{(O)}\) be as defined in (8). For all \(\mathbf{t}\in\Delta_{d-1}\), we have:_
\[\left(A^{(O)}-A\right)(\mathbf{t})\geq 0,\]
_with equality if and only if \(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})}\) are independent._
This proposition states that the difference between the convex combinations of the Pickands dependence functions denoted by \(A^{(O)}\) and the Pickands dependence function \(A\) is always nonnegative. Equality holds if and only if the subvectors \(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})}\) are independent. The next proposition gives the form of the exponent measure when the random vectors \(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})}\) are independent.
**Proposition 3**.: _Suppose \(\mathbf{X}\) is an extreme-value random vector with exponent measure \(\Lambda\) concentrating on \(E\setminus[0,\mathbf{x}]\) where \(E=[0,\infty]^{d}\setminus\{\boldsymbol{0}\}\). The following properties are equivalent:_
1. _The vectors_ \(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})}\) _are independent._
2. _The vectors are blockwise independent: for every_ \(1\leq g<h\leq G\)__ \[\mathbf{X}^{(O_{g})}\ \mathrm{and}\ \mathbf{X}^{(O_{h})},\ \mathrm{are\ independent\ random\ vectors}.\]
3. _The exponent measure_ \(\Lambda\) _concentrates on_ \[\bigcup_{g=1}^{G}\{\boldsymbol{0}\}^{d_{1}}\times\cdots\times]0,\infty[^{d_{g }}\times\cdots\times\{\boldsymbol{0}\}^{d_{G}},\] (9) _so that for_ \(\mathbf{y}>\boldsymbol{0}\)_,_ \[\Lambda\left(\bigcup_{1\leq g<h\leq G}\left\{\mathbf{x}\in E,\exists a\in O_{ g},x^{(a)}>y^{(a)},\exists b\in O_{h},x^{(b)}>y^{(b)}\right\}\right)=0\] _._
These conditions generalize straightforwardly those stated in Proposition 5.24 of Resnick 2008 (see Exercise 5.5.1 of the aforementioned book or the Lemma in Strokorb 2020); we refer to Appendix B.1 for a proof. Furthermore, Equation (9) has a clear geometric meaning: the exponent measure concentrates only on the positive orthants where maxima are dependent; we refer to Fig. 3 in Appendix C.3.1 to clarify this statement.
In higher dimensions, the translation of asymptotic independence for random vectors can be computationally expensive and may require parametric assumptions to be tractable. Ideally, we would like a summary statistic that can be estimated empirically and that accurately reflects the underlying clusters. In extreme value theory, independence between the components \(X^{(1)},\ldots,X^{(d)}\) of an extreme-value random vector \(\mathbf{X}\in\mathbb{R}^{d}\) can be characterized in a useful way: according to Takahashi 1987, 1994, total independence of \(\mathbf{X}\) is equivalent to the existence of a vector \(\mathbf{x}=(x^{(1)},\ldots,x^{(d)})\in\mathbb{R}^{d}\) such that \(H(\mathbf{x})=H^{(1)}(x^{(1)})\ldots H^{(d)}(x^{(d)})\). In the following, we find conditions on \(F\) such that \(F\in D(H)\) and \(H=\Pi_{g=1}^{G}H^{(O_{g})}\) where \(H^{(O_{g})}\) are extreme value distributions for \(g\in\{1,\ldots,G\}\) to obtain a similar statement of Takahashi 1987, 1994 translated to our framework.
**Condition \(\mathcal{A}\)**.: There exist sequences \(r_{m}\), \(\ell_{m}\) such that the following statements hold:
1. \(r_{m}\to\infty\) and \(r_{m}=o(m)\),
2. \(\ell_{m}\to\infty\) and \(\ell_{m}=o(r_{m})\),
3. \((r_{m}/l_{m})\alpha(l_{m})=o(1)\), where the coefficient \(\alpha\) is given in (1).
**Theorem 2**.: _Suppose that \(H\) and \(H^{(O_{g})}\) are respectively \(d\) and \(d_{g}\) continuous extreme value distributions, for \(g\in\{1,\ldots,G\}\). Suppose that Conditions \(\mathcal{A}\) holds for \((\mathbf{Z}_{t},t\in\mathbf{Z})\), then_
\[\mathbb{P}\left\{\mathbf{M}_{m}\leq\mathbf{a}_{m}(\mathbf{x})\right\} \underset{m\to\infty}{\longrightarrow}\Pi_{g=1}^{G}H^{(O_{g})}(\mathbf{x}^{( O_{g})}),\quad\forall\mathbf{x}\in\mathbb{R}^{d},\]
_if and only if_
\[\mathbb{P}\{\mathbf{M}_{m}\leq\mathbf{a}_{m}(\mathbf{x})\}\underset{m\to \infty}{\longrightarrow}H(\mathbf{x}),\quad\forall\mathbf{x}\in\mathbb{R}^{d} \tag{10}\]
_and there exists a \(\mathbf{p}=(\mathbf{p}^{(O_{1})},\ldots,\mathbf{p}^{(O_{G})})\in\mathbb{R}^{d}\) such that \(0<H^{(O_{g})}(\mathbf{p}^{(O_{g})})<1\) and_
\[\mathbb{P}\left\{\mathbf{M}_{m}\leq\mathbf{a}_{m}(\mathbf{p})\right\} \underset{m\to\infty}{\longrightarrow}\Pi_{g=1}^{G}H^{(O_{g})}(\mathbf{p}^{( O_{g})}). \tag{11}\]
Notice that the condition in (10) is natural in the study of the multivariate dependence of extremes. Indeed, in AI-block models this condition is directly obtained when the subvectors \(\mathbf{Z}^{(O_{g})}\), \(g\in\{1,\ldots,G\}\), are in the max-domain of attraction of an extreme value distribution, as required in Definition 1. The random vector \(\mathbf{Z}\) is thus in the max-domain of attraction of a multivariate extreme value distribution which is the product of the distributions of the corresponding subvectors, as precisely written in Proposition 1.
The interested reader may find the proof in B.2. One direct application of this result in AI-block models is that \(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})}\) are independent if and only if there exists an extreme value distribution \(H\) such that the process \((\mathbf{Z}_{t},t\in\mathbb{Z})\) is in the max-domain of attraction and the following holds:
\[A\left(\frac{1}{d},\ldots,\frac{1}{d}\right)=\sum_{g=1}^{G}\frac{d_{g}}{d}A^{( O_{g})}\left(\frac{1}{d_{g}},\ldots,\frac{1}{d_{g}}\right).\]
**Definition 3** (Sum of Extremal COefficients (SECO)).: The extremal coefficient of an extreme value random vector \(\mathbf{X}\) is defined as (see Smith, 1990):
\[\theta:=\theta^{(\{1,\ldots,d\})}=d\,A(d^{-1},\ldots,d^{-1}), \tag{12}\]
where \(A\) is the Pickands dependence function. For a partition \(O=\{O_{1},\ldots,O_{G}\}\) of \(\{1,\ldots,d\}\), we also define the extremal coefficient of the subvectors \(\mathbf{X}^{(O_{g})}\) as \(\theta^{(O_{g})}=d_{g}A^{(O_{g})}(d_{g}^{-1},\ldots,d_{g}^{-1})\), where \(d_{g}=|O_{g}|\) is the size of the set \(O_{g}\) and \(A^{(O_{g})}\) is the Pickands dependence function of \(\mathbf{X}^{(O_{g})}\). Using these coefficients, we define the following quantity SECO as
\[\text{SECO}(O)=\sum_{g=1}^{G}\theta^{(O_{g})}-\theta. \tag{13}\]
The SECO is a measure that quantifies the deviation of the sum of extremal coefficient of the subvectors \(\mathbf{X}^{(O_{g})}\) from the extremal coefficient of the full vector \(\mathbf{X}\). When this measure is \(0\), it indicates that the subvectors \(\mathbf{X}^{(O_{1})},\ldots,\mathbf{X}^{(O_{G})}\) constitute an independent partition. This means that the SECO in (13) captures the asymptotic independent block structure of the random vector \(\mathbf{X}\), regardless of any distributional assumptions.
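For instance, for the finest partition into singletons, \(O=\{\{1\},\ldots,\{d\}\}\), each \(\theta^{(\{j\})}=1\) and therefore

\[\text{SECO}(O)=d-\theta\in[0,d-1],\]

which vanishes exactly when \(\theta=d\), i.e., when the components of \(\mathbf{X}\) are totally independent, consistent with the characterization that follows from Theorem 2 above.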
To perform statistical inference, we state a condition based on the extremal dependence of each cluster. This condition allows to present a simple, yet powerful, algorithm which compare the pairwise extreme dependence between components of the vector, enabling us to make informed conclusions about the dependence structures present in the data.
**Condition \(\mathcal{B}\)**.: For every \(g\in\{1,\ldots,G\}\), the extreme value random vector \(\mathbf{X}^{(\bar{O}_{g})}\), where \(\bar{O}_{g}\) is the maximal element of \(\mathcal{L}(\mathbf{Z})\), exhibits dependence between all components.
One sufficient condition to satisfy Condition \(\mathcal{B}\) is to suppose that exponent measures of the extreme value random vectors \(\mathbf{X}^{(\bar{O}_{g})}\) have nonnegative Lebesgue densities on the nonnegative orthant \([0,\infty)^{d_{g}}\setminus\{\mathbf{0}^{(\bar{O}_{g})}\}\) for every \(g\in\{1,\ldots,G\}\). The domination of the exponent measure by the Lebesgue measure is a prerequisite assumption to define conditional independence between nodes of an extremal graphical models, (see, e.g., Engelke and Hitz, 2020; Engelke and Volgushev, 2022) and Section 7 for a more detailed discussion on this condition). Various classes of tractable extreme value distributions satisfy Condition \(\mathcal{B}\). These popular models, commonly used for statistical inference, include the asymmetric logistic model (Tawn, 1990), the asymmetric Dirichlet model (Coles and Tawn, 1991), the pairwise Beta model (Cooley, Davis, and Naveau, 2010) or the Husler Reiss model (Husler and Reiss, 1989).
## 3 Consistent estimation of minimaly separated clusters
### Multivariate tail coefficient
Throughout this section assume that we observe copies \(\mathbf{Z}_{1}\ldots,\mathbf{Z}_{n}\) of the \(d\)-dimensional stationary random process \((\mathbf{Z}_{t},t\in\mathbb{Z})\), which is in the max-domain of attraction of \(\mathbf{X}\), an AI-block model as in Definition 1. The sample of size \(n\) of \((\mathbf{Z}_{t},t\in\mathbb{Z})\) is divided into \(k\) blocks of length \(m\), so that \(k=\lfloor n/m\rfloor\), the integer part of \(n/m\) and there may be a remaining block of length \(n-km\). For the \(i\)-th block, the maximum value in the \(j\)-th component is denoted by
\[M_{m,i}^{(j)}=\max\left\{Z_{t}^{(j)}\,:\,t\in(im-m,im]\cap\mathbb{Z}\right\}.\]
Let us denote by \(\mathbf{M}_{m,i}=(M_{m,i}^{(1)},\ldots,M_{m,i}^{(d)})\) the vector of the componentwise maxima in the \(i\)-th block. For a fixed block length \(m\), the sequence of block maxima \((\mathbf{M}_{m,i})_{i}\) forms a stationary process that exhibits the same regularity of the process \((\mathbf{Z}_{t},t\in\mathbb{Z})\). The distribution functions of block maxima are denoted by
\[F_{m}(\mathbf{x})=\mathbb{P}\left\{\mathbf{M}_{m,1}\leq\mathbf{x}\right\},\quad F_{m}^{(j)}(x^{(j)})=\mathbb{P}\left\{M_{m,1}^{(j)}\leq x^{(j)}\right\},\]
with \(\mathbf{x}\in\mathbb{R}^{d}\) and \(j\in\{1,\ldots,d\}\). Denote by \(U_{m,1}^{(j)}=F_{m}^{(j)}(M_{m,1}^{(j)})\) the unknown uniform margin of \(M_{m,1}^{(j)}\) with \(j\in\{1,\ldots,d\}\). Let \(C_{m}\) be the unique (as the margins of \(\mathbf{M}_{m,1}\) are continuous) copula of \(F_{m}\). In the present context of serial dependence, the domain of attraction condition reads as follows.
**Condition \(\mathcal{C}\).** There exists a copula \(C\) such that
\[\lim_{m\to\infty}C_{m}(\mathbf{u})=C(\mathbf{u}),\quad\mathbf{u}\in[0,1]^{d}.\]
One way to measure tail dependence for a \(d\)-dimensional extreme value random vector is through the use of the extremal coefficient, as defined in Equation (12). According to Schlather and Tawn 2002, the coefficient \(\theta\) can be interpreted as the number of independent variables that are involved in the given random vector. For \(x\in\mathbb{R}\), let \(\theta_{m}(x)\) be the coefficient of the vector of maxima \(\mathbf{M}_{m,1}\) defined by the following relation:
\[\mathbb{P}\left\{\bigvee_{j=1}^{d}U_{m,1}^{(j)}\leq x\right\}=\mathbb{P}\{U_ {m,1}^{(1)}\leq x\}^{\theta_{m}(x)}.\]
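Since the margins \(U_{m,1}^{(j)}\) are standard uniform, solving this relation for \(x\in(0,1)\) yields the explicit expression

\[\theta_{m}(x)=\frac{\ln\mathbb{P}\left\{\bigvee_{j=1}^{d}U_{m,1}^{(j)}\leq x\right\}}{\ln x}=\frac{\ln C_{m}(x,\ldots,x)}{\ln x},\]

since \(\mathbb{P}\{U_{m,1}^{(1)}\leq x\}=x\) and the probability in the numerator is the copula \(C_{m}\) evaluated on the diagonal.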
Under Condition \(\mathcal{C}\), the coefficient \(\theta_{m}(x)\) of the componentwise maxima \(\mathbf{M}_{m,1}\) converges to the extremal coefficient \(\theta\) of the random vector \(\mathbf{X}\), that is:
\[\theta_{m}(x)\underset{m\to\infty}{\longrightarrow}\theta,\quad\forall x\in \mathbb{R}.\]
It is worth noting that \(\theta\) is a constant since \(\mathbf{X}\) is a multivariate extreme value distribution. To generalize the bivariate madogram for the random vectors \(\mathbf{M}_{m,1}\) we follow the same approach as in Marcon et al. 2017; Boulin et al. 2022 and define:
\[\nu_{m}=\mathbb{E}\left[\bigvee_{j=1}^{d}U_{m,1}^{(j)}-\frac{1}{d}\sum_{j=1}^ {d}U_{m,1}^{(j)}\right],\quad\nu=\mathbb{E}\left[\bigvee_{j=1}^{d}H^{(j)}(X^ {(j)})-\frac{1}{d}\sum_{j=1}^{d}H^{(j)}(X^{(j)})\right]. \tag{14}\]
Condition \(\mathcal{C}\) implies that the distribution of \(\mathbf{M}_{m,1}\) is sub-asymptotically extreme valued. A common approach for estimating the extremal coefficient in this scenario consists of supposing that the
sample follows exactly the extreme value distribution and of considering \(\theta_{m}(x):=\theta_{m}\), a sub-asymptotic extremal coefficient that is constant in \(x\). Thus, we have
\[\theta_{m}=\frac{1/2+\nu_{m}}{1/2-\nu_{m}},\quad 1\leq\theta_{m}\leq d.\]
One issue with the sub-asymptotic extremal coefficient is that it is misspecified, as extreme value distributions only arise in the limit as the block size \(m\) tends to infinity, while in practice we must use a finite sample size. We study this misspecification error in Section 3.3. A plug-in estimation process can be obtained using:
\[\hat{\theta}_{n,m}=\frac{1/2+\hat{\nu}_{n,m}}{1/2-\hat{\nu}_{n,m}}, \tag{15}\]
where \(\hat{\nu}_{n,m}\) is an estimate of \(\nu_{m}\) obtained using:
\[\hat{\nu}_{n,m}=\frac{1}{k}\sum_{i=1}^{k}\left[\bigvee_{j=1}^{d}\hat{U}_{n,m, i}^{(j)}-\frac{1}{d}\sum_{j=1}^{d}\hat{U}_{n,m,i}^{(j)}\right], \tag{16}\]
and \((\hat{U}_{n,m,1}^{(j)},\dots,\hat{U}_{n,m,k}^{(j)})\) are the empirical counterparts of \((U_{m,1}^{(j)},\dots,U_{m,k}^{(j)})\) or, equivalently, scaled ranks of the sample. To establish the strong consistency of this estimator, certain conditions on the mixing coefficients must be satisfied.
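For concreteness, the estimation pipeline in (15)-(16) can be sketched in a few lines (a minimal illustration under the block-maxima construction described above, not the implementation used for the numerical experiments; function names are ours, and ranks are scaled by \(k\), one common convention):

```python
# Minimal sketch of the estimators in (15)-(16).  `data` is an (n, d) array of
# observations of the process, `m` is the block length.
import numpy as np

def block_maxima(data, m):
    n, d = data.shape
    k = n // m                                   # number of complete blocks
    return data[:k * m].reshape(k, m, d).max(axis=1)

def madogram_estimator(data, m):
    M = block_maxima(data, m)                    # (k, d) componentwise block maxima
    k = M.shape[0]
    ranks = np.argsort(np.argsort(M, axis=0), axis=0) + 1   # column-wise ranks in 1..k
    U = ranks / k                                # scaled ranks, i.e. empirical margins
    nu_hat = np.mean(U.max(axis=1) - U.mean(axis=1))        # Equation (16)
    theta_hat = (0.5 + nu_hat) / (0.5 - nu_hat)             # Equation (15)
    return nu_hat, theta_hat

# Applying madogram_estimator to two columns, data[:, [a, b]], gives the bivariate
# coefficient used in the next subsection.
```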
**Condition \(\mathcal{D}\)**.: Let \(m_{n}=o(n)\). The series \(\sum_{n\geq 1}\beta(m_{n})\) is convergent, where \(\beta\) is defined in (2).
For the sake of notational simplicity, we will write \(m=m_{n}\), \(k=k_{n}\). The convergence of the series of \(\beta\)-mixing coefficients in Condition \(\mathcal{D}\) is needed to obtain the strong consistency of \(\hat{\nu}_{n,m}\), which can be achieved through a sufficient condition of Glivenko-Cantelli type for almost sure convergence.
**Proposition 4**.: _Let \((\mathbf{Z}_{t},t\in\mathbb{Z})\) be a stationary multivariate random process. Under Conditions \(\mathcal{C}\) and \(\mathcal{D}\), the Madogram estimator in (16) is strongly consistent, i.e.,_
\[|\hat{\nu}_{n,m}-\nu|\underset{n\rightarrow\infty}{\overset{a.s.}{\longrightarrow}}0,\]
_with \(\nu\) the theoretical Madogram of the extreme value random vector \(\mathbf{X}\) given in (14)._
The consistency stated above is derived for data of fixed dimension \(d\) as the sample size \(n\) increases to infinity. In the following, we provide non-asymptotic bounds for the error \(|\hat{\nu}_{n,m}-\nu_{m}|\).
**Proposition 5**.: _Let \((\mathbf{Z}_{t},t\in\mathbb{Z})\) be a stationary process with algebraically decaying \(\varphi\)-mixing coefficients, \(\varphi(n)\leq\lambda n^{-\zeta}\), where \(\lambda>0\), \(\zeta>1\), and \(\varphi\) is defined in Equation (3). Then the following concentration bound holds_
\[\mathbb{P}\left\{|\hat{\nu}_{n,m}-\nu_{m}|\geq C_{1}k^{-1/2}+C_{2}k^{-1}+t \right\}\leq(d+2\sqrt{e})\exp\left\{-\frac{t^{2}k}{C_{3}}\right\},\]
_where \(k\) is the number of block maxima and \(C_{1}\), \(C_{2}\) and \(C_{3}\) are constants depending only on \(\zeta\) and \(\lambda\)._
In Proposition 4, we require \((\mathbf{Z}_{t},t\in\mathbb{Z})\) to be absolutely regular, or \(\beta\)-mixing, in order to apply the coupling lemma of Berbee 1979, which is sufficient for the asymptotic analysis. The non-asymptotic analysis in Proposition 5 is more stringent and requires the use of \(\varphi\)-mixing in order to apply Hoeffding and McDiarmid inequalities in a dependent setting, as described in Mohri and Rostamizadeh 2010; Rio 2017. By using the chain of inequalities in (4), the conditions in Proposition 5 therefore imply Conditions \(\mathcal{A}\) and \(\mathcal{D}\).
### Inference in AI-block models
In this section, we present an adapted version of the algorithm developed in Bunea et al. 2020 for clustering variables based on a metric on their covariances, known as CORD. Our adaptation uses the extremal correlation as a measure of dependence between the extremes of two variables.
The SECO in (13) can be written in the bivariate setting as
\[\text{SECO}(\{a,b\})=2-\theta(a,b),\]
where for notational convenience, \(\theta(a,b):=\theta^{(\{a,b\})}\) is the bivariate extremal coefficient between \(X^{(a)}\) and \(X^{(b)}\) as defined in (12). This metric has a range between \(0\) and \(1\), with the boundary cases representing asymptotic independence and comonotonic extremal dependence, respectively. In fact, the bivariate SECO is exactly equal to the extremal correlation \(\chi\) defined in Coles, Heffernan, and Tawn 1999 as
\[\chi(a,b)=\lim_{q\to 0}\chi_{q}(a,b),\text{ where }\chi_{q}(a,b)=\mathbb{P}\left\{H^{(a)}(X^{(a)})>1-q|H^{(b)}(X^{(b)})>1-q \right\},\]
whenever the limit exists. In particular, if \(\mathbf{X}\) is a multivariate extreme-value distribution, then \(\chi(a,b)=\chi_{q}(a,b)\) for \(q\in(0,1)\). In an AI-block model, the statement
\[\mathbf{X}^{(O_{g})}\perp\!\!\!\perp\mathbf{X}^{(O_{h})},\quad g\neq h,\]
is equivalent to
\[\chi(a,b)=\chi(b,a)=0,\quad\forall a\in O_{g},\forall\,b\in O_{h},\quad g\neq h. \tag{17}\]
Thus using Proposition 3, Condition \(\mathcal{B}\) and Equation (17), the extremal correlation is a sufficient statistic to recover clusters in an AI-block model. Indeed, by Condition \(\mathcal{B}\) and Equation (17), two variables \(X^{(a)}\) and \(X^{(b)}\) belong to the same cluster of an AI-block model if and only if \(\chi(a,b)>0\). For the estimation procedure, using tools introduced in the previous section, we give a sample version of the extremal correlation associated to \(M_{m,1}^{(a)}\) and \(M_{m,1}^{(b)}\) by
\[\hat{\chi}_{n,m}(a,b)=2-\hat{\theta}_{n,m}(a,b),\quad a,b\in\{1,\ldots,d\},\]
where \(\hat{\theta}_{n,m}(a,b)\) is the sampling version defined in (15) of \(\theta(a,b)\). The strong consistency of this estimate follows directly from Proposition 4.
Let us denote by \(\mathcal{X}=(\chi(a,b))_{a,b\in\{1,\ldots,d\}}\) the matrix of all extremal correlations and by \(\hat{\mathcal{X}}=(\hat{\chi}_{n,m}(a,b))_{a,b\in\{1,\ldots,d\}}\) its sampling version. We present an algorithm, named ECO (Extremal COrrelation), for estimating the partition \(\bar{O}\) using a dissimilarity metric based on the extremal correlation. This algorithm, outlined in Algorithm (ECO), does not require the specification of the number of groups \(G\), as it is automatically estimated by the procedure. The algorithmic complexity for computing the \(k\) vectors \(\hat{\mathbf{U}}_{n,m,i}=(\hat{U}_{n,m,i}^{(1)},\ldots,\hat{U}_{n,m,i}^{(d)})\) for \(i\in\{1,\ldots,k\}\) is of order \(O(dk\ln(k))\) (Cormen et al., 2022, Section 2). Given the empirical ranks, computing \(\hat{\mathcal{X}}\) and performing the algorithm require \(O(d^{2}\lor dn\ln(k))\) and \(O(d^{3})\) computations, respectively. So the overall complexity of the estimation procedure is \(O(d^{2}(d\lor k\ln(k)))\).
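The pseudo-code of Algorithm (ECO) is given separately; the snippet below is only a minimal sketch of one way a threshold-based grouping on \(\hat{\mathcal{X}}\) can be implemented (the pivot rule is one simple choice and is not claimed to reproduce the exact algorithm):

```python
# Illustrative sketch of a threshold-based grouping on the estimated
# extremal-correlation matrix; not a verbatim transcription of Algorithm (ECO).
import numpy as np

def threshold_clustering(chi_hat, tau):
    """chi_hat: (d, d) symmetric array of estimated extremal correlations; tau in (0, 1)."""
    d = chi_hat.shape[0]
    unassigned = set(range(d))
    clusters = []
    while unassigned:
        a = min(unassigned)                                       # pick a pivot variable
        group = {b for b in unassigned if b == a or chi_hat[a, b] > tau}
        clusters.append(sorted(group))
        unassigned -= group                                       # remove and repeat
    return clusters                                               # estimated partition
```

As stated above, the number of groups is determined automatically by such a procedure.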
In the following, we provide conditions ensuring that our algorithm is consistent.
**Theorem 3**.: _Consider the AI-block model as defined in Definition 1 under Condition \(\mathcal{B}\), and let \((\mathbf{Z}_{t},t\in\mathbb{Z})\) be a stationary multivariate random process. For a given \(\mathcal{X}\) and its corresponding estimator \(\hat{\mathcal{X}}\), if Conditions \(\mathcal{C}\) and \(\mathcal{D}\) hold, then for any \(\tau>0\)_
\[\mathbb{P}\left\{\hat{O}=\bar{O}\right\}=1,\quad\text{as }n\to\infty.\]
A key consideration is the choice of the threshold \(\tau\). If \(\tau\approx 0\), the algorithm is likely to return the single cluster \(\{1,\ldots,d\}\), while if \(\tau\approx 1\), it is likely to return the finest partition \(\{\{1\},\ldots,\{d\}\}\). However, this issue can be addressed through a non-asymptotic analysis of the algorithm, which we present in the next section.
### Estimation in growing dimensions
We provide an extension of Theorem 3 that allows estimation in the case of growing dimension, by adding non-asymptotic bounds on the probability of consistently estimating the maximal element \(\bar{O}\) of an AI-block model. Furthermore, this result provides guidance on how to choose \(\tau\) in Algorithm (ECO). The difficulty of clustering in AI-block models can be assessed via the size of the Minimal Extremal COrrelation (MECO) separation between two variables in the same cluster:
\[\text{MECO}(\mathcal{X}):=\min_{a\stackrel{{\mathcal{O}}}{{ \sim}}b}\chi(a,b).\]
In AI-block models, under Condition \(\mathcal{B}\), we always have \(\text{MECO}(\mathcal{X})>\eta\) with \(\eta=0\). However, a larger value of \(\eta\) is needed to consistently recover the partition \(\bar{O}\) from the observations.
We are now ready to state the main result of this section.
**Theorem 4**.: _We consider the AI-block model as defined in Definition 1 under Condition \(\mathcal{B}\), and let \((\mathbf{Z}_{t},t\in\mathbb{Z})\) be a \(d\)-variate stationary process that is algebraically \(\varphi\)-mixing, i.e., \(\varphi(n)\leq\lambda n^{-\zeta}\) with \(\lambda>0\) and \(\zeta>1\), where \(\varphi\) is defined in Equation (3). Define_
\[d_{m}=\max_{a\neq b}\left|\chi_{m}(a,b)-\chi(a,b)\right|.\]
_Let \((\tau,\eta)\) be parameters fulfilling_
\[\tau \geq d_{m}+C_{1}k^{-1/2}+C_{2}k^{-1}+C_{3}\sqrt{\frac{(1+\gamma) \ln(d)}{k}},\] \[\eta \geq d_{m}+C_{1}k^{-1/2}+C_{2}k^{-1}+C_{3}\sqrt{\frac{(1+\gamma) \ln(d)}{k}}+\tau,\]
_where \(C_{1},C_{2},C_{3}\) are universal constants depending only on \(\lambda\) and \(\zeta\), \(k\) is the number of block maxima, and \(\gamma>0\). For a given \(\mathcal{X}\) and its corresponding estimator \(\hat{\mathcal{X}}\), if \(\mathrm{MECO}(\mathcal{X})>\eta\), then the output of Algorithm (ECO) is consistent, i.e.,_
\[\mathbb{P}\left\{\hat{O}=\bar{O}\right\}\geq 1-2(1+\sqrt{e})d^{-2\gamma}.\]
Unsurprisingly, as Theorem 4 is not concerned with asymptotics, we did not actually assume Condition \(\mathcal{C}\). A link between \(\mathbf{Z}\) and \(\mathbf{X}\) is implicitly provided through the bias term \(d_{m}\), which measures the distance between \(\chi_{m}(a,b)\) and \(\chi(a,b)\). This quantity vanishes as \(m\to\infty\) when Condition \(\mathcal{C}\) holds.
Some comments on the implications of Theorem 4 are in order. At a high level, a larger dimension \(d\) and a larger bias \(d_{m}\) lead to a higher threshold \(\tau\): both make the partition recovery problem more difficult. This is reflected in the bound on the MECO value below which our algorithm cannot distinguish asymptotic independence from noise. Thus, as the dimension \(d\) increases, the dependence within clusters must be stronger for the algorithm to tell the two apart. In other words, for alternatives that are sufficiently separated from the asymptotic independence case, the algorithm is able to distinguish asymptotic independence from noise at the \(\sqrt{\ln(d)k^{-1}}\) scale. More quantitatively, our algorithm is able to recover clusters when the data dimension grows at a polynomial rate, i.e., \(d=o(n^{p})\) with \(p>1\), since \(\eta\) in Theorem 4 decreases with increasing \(n\).
**Remark 2**.: According to Theorem 4, when the bias term is zero (\(d_{m}=0\)), the threshold \(\tau\) is of order \(\sqrt{\ln(d)k^{-1}}\). Thresholding \(\hat{\chi}_{n,m}(a,b)\) at this level guarantees exact recovery if the separation MECO is at least \(2\tau\). These results are similar to those for the hard thresholding estimator in Gaussian sequence models, as demonstrated in Section 4.1 of Tsybakov 2014.
### Data-driven selection of the threshold parameter
The performance of Algorithm (ECO) depends crucially on the value of the threshold parameter \(\tau\). This threshold involves known quantities such as \(d\) and \(k\) and an unknown quantity \(d_{m}\) (see Theorem 4). For the latter, there is no simple way to choose the parameter optimally, as there is no simple way to determine how fast the convergence to the asymptotic extremal behavior is, or how far into the tail the asymptotic block dependence structure appears. Second-order conditions, which are commonly used in the literature to ensure convergence to the stable tail dependence function at a certain rate, are theoretically relevant (see Dombry and Ferreira 2019; Einmahl, Krajina, and Segers 2012; Fougeres, De Haan, and Mercadier 2015 for examples). However, finding the optimal value for the block length parameter remains a challenging task. In practice, it is advisable to use a data-driven procedure to select the threshold in Algorithm (ECO). We propose the following type of cross-validation for this purpose. The idea is to use the SECO criterion presented in Equation (13). Let \(\mathbf{Z}\sim O\); given a partition \(\hat{O}=\{\hat{O}_{g}\}_{g}\), we know from Theorem 2 that the SECO similarity given by
\[\mathrm{SECO}(\hat{O})=\sum_{g}\theta^{(\hat{O}_{g})}-\theta \tag{18}\]
is equal to \(0\) if and only if \(\hat{O}\leq\bar{O}\). We thus construct a loss function given by the SECO, which we evaluate over a grid of \(\tau\) values. The value of \(\tau\) at which the SECO similarity is minimal is also the value of \(\tau\) for which we consistently recover the clusters. Our procedure requires splitting the data into three subsamples: on one subsample, we construct a set of candidate partitions. The other two subsamples are used to estimate the extremal dependence coefficient for the sub-vectors \(\mathbf{X}^{(\hat{O}_{g})}\) of the candidate partition and for the whole distribution \(\mathbf{X}\). We denote by \(\hat{\theta}^{(\hat{O}_{g})}_{(1)}\) and \(\hat{\theta}_{(2)}\) the two madogram-based estimators, in the spirit of (15), of \(\theta^{(\hat{O}_{g})}\) and \(\theta\) in (18), based on the two independent samples of size \(n\). The cross-validation based estimator of the SECO in (18) is thus defined as
\[\widehat{\text{SECO}}(\hat{O})=\sum_{g}\hat{\theta}^{(\hat{O}_{g})}_{(1)}-\hat {\theta}_{(2)}. \tag{19}\]
Let \(\widehat{\mathcal{O}}\) be a collection of partitions computed with Algorithm (ECO) from one subsample by varying \(\tau\) around its theoretical optimal value, of order \(d_{m}+\sqrt{\ln(d)k^{-1}}\), on a fine grid. For any \(\hat{O}\in\widehat{\mathcal{O}}\), we evaluate our cross-validation SECO in (19). Proposition 6 offers theoretical support for this procedure for large \(n\). It shows that, in expectation, the minimum of the proposed criterion is asymptotically attained for subpartitions of \(\bar{O}\).
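A compact sketch of this data-driven selection is given below. It assumes a generic madogram-based estimator of the extremal coefficient (via \(\theta=(1/2+\nu)/(1/2-\nu)\), with \(\nu\) the multivariate F-madogram of the pseudo-observations) and takes user-supplied helpers `chi_matrix` and `cluster_fn` (for instance the `chi_hat` and `eco_like_clustering` functions sketched earlier); the exact estimators (15) and (19) and the split sizes used in the paper may differ.

```python
import numpy as np

def theta_hat(U):
    """Madogram-based extremal coefficient of a (k, d) array of pseudo-observations."""
    nu = np.mean(U.max(axis=1)) - 0.5          # multivariate F-madogram
    return (0.5 + nu) / (0.5 - nu)

def seco_hat(partition, U1, U2):
    """Cross-validated SECO (19): per-cluster coefficients on sample 1 minus the global one on sample 2."""
    return sum(theta_hat(U1[:, np.array(g)]) for g in partition) - theta_hat(U2)

def select_tau(U0, U1, U2, chi_matrix, cluster_fn, tau_grid):
    """Pick the threshold whose partition (fitted on U0) minimises the SECO loss on U1 and U2."""
    losses = {tau: seco_hat(cluster_fn(chi_matrix(U0), tau), U1, U2) for tau in tau_grid}
    return min(losses, key=losses.get), losses
```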
**Proposition 6**.: _We consider an AI-block model as in Definition 1 and the partial order \(\leq\) between two partitions in (6). If Conditions \(\mathcal{C}\) and \(\mathcal{D}\) hold, then_
\[\lim_{n\to\infty}\mathbb{E}\left[\widehat{\text{SECO}}(\bar{O})\right]<\lim_{ n\to\infty}\mathbb{E}\left[\widehat{\text{SECO}}(\hat{O})\right],\quad\hat{O} \not\leq\bar{O}.\]
Moreover, we establish the weak convergence of an estimator of \(\text{SECO}(O)\), where \(\mathbf{Z}\sim O\) (see Appendix C.2 for details).
## 4 Hypotheses discussion for a multivariate random persistent process
A trivial example of an AI-block model is given by a partition \(O\) such that each \(\mathbf{Z}^{(O_{g})}\) is in the domain of attraction of an extreme value random vector \(H^{(O_{g})}\), \(g\in\{1,\dots,G\}\), and such that \(\mathbf{Z}^{(O_{1})},\dots,\mathbf{Z}^{(O_{G})}\) are independent. In this simple model, independence between clusters already holds sub-asymptotically, hence asymptotically, and the particular dependence structure under study is not inherent to the tail behaviour of the random vector.
More interestingly, in this section we focus on a process where the dependence between clusters disappears in the tails of the distribution. To this aim, we recall a \(\varphi\)-algebraically mixing process; the interested reader is referred, for instance, to Bucher and Segers 2014. This process satisfies Conditions \(\mathcal{A}\) and \(\mathcal{D}\); with a bit more work, we show that Conditions \(\mathcal{B}\) and \(\mathcal{C}\) also hold.
Consider i.i.d. \(d\)-dimensional random vectors \(\mathbf{Z}_{0},\mathbf{\xi}_{1},\mathbf{\xi}_{2},\dots\) and independent i.i.d. Bernoulli random variables \(I_{1},I_{2},\dots\) with \(\mathbb{P}\{I_{t}=1\}=p\in(0,1]\). For \(t=1,2,\dots\), define the stationary random process \((\mathbf{Z}_{t},t\in\mathbb{Z})\) by
\[\mathbf{Z}_{t}=\mathbf{\xi}_{t}\delta_{1}(I_{t})+\mathbf{Z}_{t-1}\delta_{0}(I_{t}), \tag{20}\]
where we suppose, without loss of generality, that the process is defined for all \(t\in\mathbb{Z}\) using stationarity. The persistence of the process \((\mathbf{Z}_{t},t\in\mathbb{Z})\) arises from the repeated values in (20). Due to this persistence, \((\mathbf{Z}_{t},t\in\mathbb{Z})\) is \(\varphi\)-mixing with coefficient of order \(O((1-p)^{n})\) (see Lemma B.1 of Bucher and Segers 2014), hence algebraically mixing. Using Proposition 4.2 of Bucher and Segers 2014, if \(C_{1}\), i.e. the copula of \(\mathbf{Z}_{1}\), is in the copula domain of attraction of an extreme value copula \(C\), i.e.,
\[\left\{C_{1}\left(\{u^{(1)}\}^{1/m},\dots,\{u^{(d)}\}^{1/m}\right)\right\}^{m} \underset{m\to\infty}{\longrightarrow}C(u^{(1)},\dots,u^{(d)}),\quad\mathbf{u }\in[0,1]^{d},\]
then \(C_{m}\), the copula of the componentwise maxima of \(\mathbf{Z}_{1},\dots,\mathbf{Z}_{m}\), also converges to \(C\) as \(m\to\infty\). For \(\theta>0\) and \(\beta\geq 1\), let us consider the multivariate outer power transform of a Clayton copula defined as
\[C_{\theta,\beta}(\mathbf{u})=\left[1+\left\{\sum_{j=1}^{d}(\{u^{(j)}\}^{- \theta}-1)^{\beta}\right\}^{1/\beta}\right]^{-1/\theta},\quad\mathbf{u}\in[0,1]^{d}.\]
The copula of multivariate componentwise maxima of an i.i.d. sample of size \(m\) from a continuous distribution with copula \(C_{\theta,\beta}\) is equal to
\[\left\{C_{\theta,\beta}(\{u^{(1)}\}^{1/m},\ldots,\{u^{(d)}\}^{1/m})\right\}^{m}=C _{\theta/m,\beta}(u^{(1)},\ldots,u^{(d)}). \tag{21}\]
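The max-stability identity (21) can be verified numerically; the sketch below compares both sides of (21) at an arbitrary point \(\mathbf{u}\) (the values of \(\theta\), \(\beta\), \(m\) and the dimension are illustrative, not taken from the paper).

```python
import numpy as np

def outer_power_clayton(u, theta, beta):
    """Outer power transform of a Clayton copula, C_{theta, beta}(u)."""
    s = np.sum((u ** (-theta) - 1.0) ** beta) ** (1.0 / beta)
    return (1.0 + s) ** (-1.0 / theta)

theta, beta, m = 0.8, 1.5, 50
u = np.array([0.3, 0.6, 0.9])

lhs = outer_power_clayton(u ** (1.0 / m), theta, beta) ** m   # left-hand side of (21)
rhs = outer_power_clayton(u, theta / m, beta)                 # right-hand side of (21)
print(lhs, rhs)  # the two values agree up to floating-point error
```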
As \(m\to\infty\), this copula converges to the Gumbel copula with shape parameter \(\beta\geq 1\)
\[C_{0,\beta}(u^{(1)},\ldots,u^{(d)}):=\lim_{m\to\infty}C_{\theta/m,\beta}(u^{(1 )},\ldots,u^{(d)})=\exp\left[-\left\{\sum_{j=1}^{d}(-\ln u^{(j)})^{\beta} \right\}^{1/\beta}\right],\]
uniformly in \(\mathbf{u}\in[0,1]^{d}\). This result is initially stated in Proposition 4.3 in Bucher and Segers 2014 for the bivariate case. The extension to an arbitrary dimension implies no further arguments, the proof is thus omitted. Now let us consider the nested Archimedean copula given by
\[C_{\theta,\beta_{0}}\left(C_{\theta,\beta_{1}}^{(O_{1})}(\mathbf{u}^{(O_{1})}),\ldots,C_{\theta,\beta_{G}}^{(O_{G})}(\mathbf{u}^{(O_{G})})\right). \tag{22}\]
We aim to show that this copula is in the domain of attraction of an AI-block model. That is the purpose of the proposition stated below.
**Proposition 7**.: _Consider \(1\leq\beta_{0}\leq\min\{\beta_{1},\ldots,\beta_{G}\}\), then the nested Archimedean copula given in (22) is in the copula domain of attraction of an extreme value copula given by_
\[C_{0,\beta_{0}}\left(C_{0,\beta_{1}}^{(O_{1})}(\mathbf{u}^{(O_{1})}),\ldots, C_{0,\beta_{G}}^{(O_{G})}(\mathbf{u}^{(O_{G})})\right).\]
_In particular, taking \(\beta_{0}=1\) gives an AI-block model where the dependence within each extreme value random vector \(\mathbf{X}^{(O_{g})}\) corresponds to a Gumbel copula with shape parameter \(\beta_{g}\)._
From the last conclusion of Proposition 7, we obtain Condition \(\mathcal{C}\), that is, \((\mathbf{Z}_{t},t\in\mathbb{Z})\) in (20) is in the max-domain of attraction of an AI-block model. Noticing that the exponent measure of each cluster is absolutely continuous with respect to the Lebesgue measure, Condition \(\mathcal{B}\) is also valid.
**Remark 3**.: Notice that, using results from Bucher and Segers 2014, one can show that the bias \(d_{m}\) in Theorem 4 is of order \(1/m\) in the i.i.d. case, i.e. when \(p=1\); see Section A.3 for details.
## 5 Competitor clustering algorithms for extremes
In this section, we present some competitor algorithms: the spherical \(k\)-means (Chautru 2015; Fomichov and Ivanovs 2022; Janssen and Wan 2020), and \(k\)-means and hierarchical clustering using the madogram as a dissimilarity (Bador et al. 2015; Bernard et al. 2013; Saunders, Stephenson, and Karoly 2021). The performance of the spherical \(k\)-means and of hierarchical clustering will be compared with our Algorithm (ECO) in Section 6.
The \(k\)-means procedure is a way to identify distinct groups within a population. This procedure involves partitioning a set of data into \(G\) groups (to be consistent with our notation). To do this, we first choose cluster centers \(\psi_{1},\ldots,\psi_{G}\) for the points \(\mathbf{Z}_{1},\ldots,\mathbf{Z}_{n}\in\mathbb{R}^{d}\) in order to minimize
\[W_{n}:=\frac{1}{n}\sum_{i=1}^{n}\min_{g\in\{1,\ldots,G\}}d(\mathbf{Z}_{i},\psi _{g}),\]
where \(d:\mathbb{R}^{d}\times\mathbb{R}^{d}\to[0,\infty)\) is a distance function or, more generally, a dissimilarity function in \(\mathbb{R}^{d}\). The motivation is to identify cluster centers such that distances of the observations to their nearest
cluster center are minimized. Accordingly, all observations which are closest to the same cluster center are viewed as belonging to the same group.
While the original version of \(k\)-means uses the Euclidean distance, several alternative choices of \(d\) have been suggested. As the extremal dependence structure can be described with the angular measure \(S\) (see Resnick, 2008, Section 5 for details), a natural way to measure the distance between two points is by their angle. This corresponds to spherical \(k\)-means clustering, which is described as follows: for a given integer \(G\), solve the optimization problem
\[\frac{1}{n}\sum_{i=1}^{n}\min_{g\in\{1,\ldots,G\}}d(\mathbf{Y}_{i},\psi_{g}),\]
with \(\mathbf{Y}_{i}\) i.i.d. observations from \(\mathbf{Y}\), a random variable living on the unit sphere with law \(S\). Consistency results, with i.i.d. observations and for sufficiently many large observations, have been proved for this algorithm in Janssen and Wan (2020). The consistency result shows that the centroids obtained by minimizing the program above are close to the true centroids of the angular distribution.
In the framework of Bador et al. (2015); Bernard et al. (2013); Saunders et al. (2021), the madogram is considered as a dissimilarity measure. In the present context of the block maxima method, this criterion can be read as
\[W_{n}=\frac{1}{k}\sum_{i=1}^{k}\min_{g\in\{1,\ldots,G\}}\frac{1}{2}\left| \hat{\mathbf{U}}_{n,m,i}-\psi_{g}\right|=\int_{[0,1]^{d}}\min_{g\in\{1,\ldots, G\}}\frac{1}{2}\left|\mathbf{u}-\psi_{g}\right|d\hat{C}_{n,m}(\mathbf{u}),\]
where \(\hat{C}_{n,m}\) is the empirical copula defined as
\[\hat{C}_{n,m}(\mathbf{u})=\frac{1}{k}\sum_{i=1}^{k}\mathds{1}_{\{\hat{\mathbf{ U}}_{n,m,i}\leq\mathbf{u}\}},\quad\mathbf{u}\in[0,1]^{d}. \tag{23}\]
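For concreteness, the pseudo-observations \(\hat{\mathbf{U}}_{n,m,i}\) and the empirical copula (23) can be computed from componentwise ranks of the block maxima as sketched below; the rank convention is an assumption and may differ from the paper's exact definition of \(\hat{\mathbf{U}}\).

```python
import numpy as np

def pseudo_observations(M):
    """Componentwise rank-based pseudo-observations of a (k, d) array of block maxima."""
    k = M.shape[0]
    return (np.argsort(np.argsort(M, axis=0), axis=0) + 1) / k

def empirical_copula(U, u):
    """Empirical copula (23) evaluated at a point u in [0, 1]^d."""
    return np.mean(np.all(U <= u, axis=1))
```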
For a copula \(C_{m}\) in the domain of attraction of an extreme value copula \(C\), let \(\Psi=\{\psi_{1},\ldots,\psi_{G}\}\) be a set of cluster centers with \(\psi_{g}\in\mathbb{R}^{d}\), \(g\in\{1,\ldots,G\}\), and consider the averaged distance from any observation to the closest element of \(\Psi\) as
\[W(\Psi,C)=\int_{[0,1]^{d}}\min_{\psi\in\Psi}\frac{1}{2}|\mathbf{u}-\psi|dC( \mathbf{u}).\]
To the best of our knowledge, consistency results for the \(k\)-means procedure using the madogram have not yet been established. The following proposition aims to bridge this gap.
**Proposition 8**.: _Let \((\mathbf{Z}_{t},t\in\mathbb{Z})\) be a stationary multivariate random process with continuous univariate margins such that Conditions \(\mathcal{C}\) and \(\mathcal{D}\) hold. For each \(\hat{C}_{n,m}\) in (23) and a given value \(G\in\mathbb{N}\), denote by \(\Psi_{G}^{n}\) a random set which minimizes_
\[W(\Psi,\hat{C}_{n,m})=\int_{[0,1]^{d}}\min_{\psi\in\Psi}\frac{1}{2}|\mathbf{ u}-\psi|d\hat{C}_{n,m}(\mathbf{u}),\]
_among all sets \(\Psi\subset[0,1]^{d}\) with at most \(G\) elements. Accordingly, let us define \(\Psi_{G}\) as the optimal set when we replace \(\hat{C}_{n,m}\) by \(C\), and assume that for a given value of \(G\) the set \(\Psi_{G}\) is uniquely determined. Then \(\Psi_{G}^{n}\) converges almost surely to \(\Psi_{G}\) as \(n\to\infty\)._
From Proposition 8, the madogram appears to be a relevant dissimilarity for estimating the set of theoretical cluster centers with respect to the extreme value copula of \(\mathbf{X}\). As far as we know, the madogram has been used for clustering with the partitioning around medoids algorithm (Bador et al., 2015; Bernard et al., 2013) and with hierarchical clustering (Saunders et al., 2021). For computational convenience, only hierarchical clustering and spherical \(k\)-means are considered in Section 6.
## 6 Numerical results
In this section, we investigate the finite-sample performance of our algorithm for retrieving clusters in AI-block models. We consider a number of AI-block models of increasing complexity, and compare the performance of our algorithm with state-of-the-art methods from the literature: Hierarchical Clustering (HC) using the madogram as dissimilarity and the spherical \(k\)-means (SKmeans) algorithm. We design three resulting partitions in the limit model:
1. \(\mathbf{X}\) is composed of two blocks \(O_{1}\) and \(O_{2}\) of equal length, where \(\mathbf{X}^{(O_{1})}\) and \(\mathbf{X}^{(O_{2})}\) are extreme-value random vectors with a Logistic distribution and \(\beta_{1}=\beta_{2}=10/7\).
2. \(\mathbf{X}\) is composed of \(G=5\) blocks of random sizes \(d_{1},\ldots,d_{5}\) drawn from a multinomial distribution with parameters \(q_{g}=0.5^{g}\) for \(g\in\{1,\ldots,4\}\) and \(q_{5}=1-\sum_{g=1}^{4}q_{g}\). Each random vector is distributed according to a Logistic distribution with parameter \(\beta_{g}=10/7\) for \(g\in\{1,\ldots,5\}\).
3. We consider the same model as in 2, to which we add \(5\) singletons, giving \(10\) resulting clusters. Models with singletons are known to be the hardest to recover in the clustering literature.
We consider observations from the model (20) in Section 4, which we simulate with a nested Archimedean copula as in (22) using the method implemented in the copula R package (Marius Hofert and Martin Machler, 2011). The goal of our algorithm is to cluster \(d\) variables in \(\mathbb{R}^{n}\); thus, to make comparisons, we transpose the dataset for the \(k\)-means algorithm in order to obtain centroids in \(\mathbb{R}^{d}\). In contrast to our "blindfolded" algorithm, which automatically infers the number of clusters, we need to specify it for SKmeans and HC. These procedures with this wisely chosen parameter are called "oracles". Several simulation frameworks are considered and detailed in the following.
1. We first investigate the choice of the block length \(m\) (an intermediate sequence) used for estimation. We let \(m\in\{3,6,\ldots,30\}\) with a fixed sample size \(n=10000\) and \(k=\lfloor n/m\rfloor\).
2. We compute the performance of the structure learning method for varying sample size \(n\). Since the value of \(m\) required for consistent estimation is unknown in practice, we choose \(m=20\).
3. We show the relationship between the average SECO and the exact recovery rate of the method presented in Section 3.4. We use the case \(n=16000\), \(k=800\) and \(d=1600\) to study the "large \(k\), large \(d\)" regime of our cross-validation approach.
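As a complement to the setup above, observations from the persistent process (20) can be generated as in the following sketch; the innovation sampler is left abstract (in the experiments it would draw from the nested Archimedean copula (22), e.g. via the copula R package), and the uniform sampler shown below is only a placeholder.

```python
import numpy as np

def simulate_persistent_process(sample_innovation, n, p, rng=None):
    """Simulate Z_1, ..., Z_n from (20): Z_t = xi_t if I_t = 1, else Z_t = Z_{t-1}."""
    rng = np.random.default_rng() if rng is None else rng
    z_prev = sample_innovation(rng)              # plays the role of Z_0
    sample = []
    for _ in range(n):
        if rng.random() < p:                     # I_t = 1 with probability p
            z_prev = sample_innovation(rng)      # refresh with a new innovation xi_t
        sample.append(z_prev)
    return np.asarray(sample)

# Placeholder innovation sampler (independent uniforms; the experiments use copula (22) instead).
draw = lambda rng: rng.uniform(size=4)
Z = simulate_persistent_process(draw, n=10000, p=0.9)
```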
In the simulation study, we use the fixed threshold \(\tau=2\times(1/m+\sqrt{\ln(d)/k})\) for F1 and F2, since our theoretical results in Theorem 4 suggest the use of a threshold proportional to \(d_{m}+\sqrt{\ln(d)/k}\), and we can show, in the i.i.d. setting (where \(p=1\)), that \(d_{m}=O(1/m)\) (see details in Section A.3). For F3, we vary \(\tau\) around its theoretical optimal value on a fine grid.
Figure 1 presents all the results obtained from each experiment and framework considered in this numerical section. We plot the exact recovery rate for Algorithm (ECO) with dimensions \(d=200\) and \(d=1600\). In the "large \(d\)" setting with \(d=1600\), we consider the performance of the HC algorithm using the madogram as a dissimilarity measure and of the spherical \(k\)-means in the first two frameworks. Each experiment is performed using \(p=0.9\); we refer the reader to 3 for numerical results using \(p\in\{0.5,0.7,1.0\}\). As expected, the performance of our algorithm in F1 (see Figure 1, first row) initially increases in \(m\), reaches a peak, and then decreases. This phenomenon depicts a trade-off between the sub-asymptotic regime and the accuracy of inference. Indeed, a larger block length \(m\) induces a smaller bias as we approach the domain of attraction; however, the number of blocks \(k\) decreases accordingly, which implies a higher variance for the inference process. These joint phenomena explain the parabolic shape of the exact recovery rate of our algorithm for \(d\in\{200,1600\}\). Considering framework F2, the performance of our algorithm improves as the number of block maxima increases (see Figure 1, second row).
A classical pitfall for learning algorithms is the high-dimensional setting. Here, when the dimension increases from 200 to 1600, our algorithm consistently reports the maximal element \(\bar{O}\) with a reasonable number of blocks. This is in accordance with our theoretical findings, as the difficulty of clustering in AI-block models, as quantified by \(\eta\) in Theorem 4, scales at a rate of \(\sqrt{\ln(d)k^{-1}}\), which depends only mildly on the dimension \(d\). In framework F3, the numerical studies in Figure 1 (third row) show that the ranges of \(\tau\) values with high exact recovery percentages are also associated with low average SECO losses. This supports the data-driven choice of \(\tau\) proposed in Section 3.4.
We notice that the HC algorithm using the madogram as dissimilarity performs very well in each configuration, even when the inference is strongly biased, i.e., when the block length \(m\) is small and we are thus far from the domain of attraction. This can be explained by the fact that madograms are lower when \(a\overset{\bar{O}}{\sim}b\) and higher when \(a\overset{\bar{O}}{\not\sim}b\). This holds by construction of the madogram in the domain of attraction of \(\mathbf{X}\), but it remains true in our sub-asymptotic framework. Hence, by construction of the HC, i.e., by merging the two closest centroids in terms of madogram, we obtain the correct partitioning of \(\mathbf{X}\) even when the domain of attraction is not reached. By comparison, our algorithm returns one single cluster, i.e., the vector is declared completely dependent, when the block length \(m\) is too small and we are not yet in the domain of attraction of \(\mathbf{X}\). This behavior is desirable, as it corresponds to what is effectively observed: the whole vector is dependent. This is a leading argument for model-based clustering methods, which are designed for a specific model and whose inference remains coherent with the constructed target. One drawback of using HC with the madogram, as previously described, is the need to specify the number of groups \(G\) beforehand, which is not always straightforward. Despite this limitation, the HC procedure with the madogram performs well in retrieving clusters in AI-block models when the true number of clusters is known. Further research could be conducted to adapt our algorithm to a hierarchical design, as proposed by Lee, Deng, and Ning 2021 for the algorithm of Bunea et al. 2020.
For the same reasons as in the HC case, the SKmeans performs well in Experiments E1 and E2 for all considered values of \(m\). However, in Experiment E3 its performance drastically decreases. Furthermore, its exact recovery rate decreases as \(m\) increases, which is not desirable in extreme settings. Moreover, a rigorous method for choosing \(G\) is currently lacking, and it remains a hyperparameter that must be chosen by the statistician. When this hyperparameter is known and equal to the true value, clusters are correctly inferred in Experiments E1 and E2 by both the HC algorithm and the SKmeans, but not in Experiment E3 by the SKmeans. Our algorithm, with the threshold specified in Theorem 4, reaches the level of performance of the HC with madogram without specifying the number of clusters.
## 7 Conclusions
Our main focus in this work was to develop and analyze an algorithm for recovering clusters in AI-block models, and to understand how the dependence structure of maxima impacts the difficulty of clustering in these models. This is particularly challenging when dealing with high-dimensional data and weakly dependent observations that are sub-asymptotically distributed. In order to better understand these phenomena, we impose stronger assumptions on the extremal dependence structure in our theoretical analysis. Specifically, we assume asymptotic independence between blocks, which is the central assumption of AI-block models. This assumption allows us to study the effects of the dependence structure and to develop and analyze an efficient algorithm for recovering clusters in these AI-block models. This procedure can recover the clusters with high probability by setting
a threshold that is only logarithmic in the dimension \(d\). However, it is still of interest to relax assumptions to cover a wider range of scenarios and problems that might be encountered in practice. We outline some potential directions for further research below.
Figure 1: Simulation results with \(p=0.9\). From top to bottom: Framework F1, Framework F2, Framework F3. From left to right: Experiment E1, Experiment E2, Experiment E3. Exact recovery rate for our algorithm (red, diamond points), for the HC (blue, plus points) and the SKmeans (green, star points) for F1 and F2 across 100 runs. Dotted lines correspond to \(d=200\), solid lines to \(d=1600\). The threshold \(\tau\) is taken as \(2\times(1/m+\sqrt{\ln(d)/k})\). For F3, average SECO losses (red solid lines, circle points) and exact recovery percentages (blue dotted lines, diamond points) across 100 simulations. For better illustration, the SECO losses are standardized first by subtracting the minimal SECO loss in each figure, and the standardized SECO losses plus 1 are then plotted on the logarithmic scale.

In this paper, we find a bound for the minimal extremal correlation separation \(\eta>0\). A further goal is to find the minimum value \(\eta^{*}\) below which it is impossible, with high probability, to exactly recover \(\bar{O}\) by any method. This question can be formally expressed using Le Cam's theory as follows:
\[\inf_{\hat{O}}\sup_{\mathcal{X}\in\mathbb{X}(\eta)}\mathbb{P}_{\mathcal{X}}(\hat{O }\neq\bar{O})\geq\text{constant}>0,\quad\forall\,\eta<\eta^{*},\]
with \(\mathbb{X}(\eta)=\{\mathcal{X},\text{MECO}(\mathcal{X})>\eta\}\), where the infimum is taken over all possible estimators. One possible direction to obtain such a result is to follow the methods introduced by Drees 2001 for risk bounds of the extreme value index. An interesting consequence of such a result would be to determine whether our procedure is optimal in a minimax sense, i.e., whether the order of \(\eta^{*}\) and the one found in Theorem 4 are the same.
In practice, the dependence structure can be much more complicated and Condition \(\mathcal{B}\) may not hold. In a seminal work, Ryabko 2017 proposed a conditional independence test to determine whether an element \(j\) belongs to a cluster \(\hat{O}_{1}\) without considering pairwise dependence between all components of the cluster. Specifically, one asks whether
\[\mathbf{X}^{(\hat{O}_{1})}\perp\!\!\!\perp X^{(j)}|\left(\{1,\ldots,d\}\setminus (\hat{O}_{1}\cup\{j\})\right).\]
Defining a suitable notion of conditional independence for extreme modeling is involved, as noted in Papastathopoulos and Strokorb 2016, because the classical definition of conditional independence leads to trivial structures for max-stable distributions. To overcome this hindrance, conditional independence for extremes often involves the distribution of a random vector
\[(\mathbf{X},Y)=(X^{(1)},\ldots,X^{(d)},Y),\]
where the conditional limit of \(\mathbf{X}\) given that \(Y\) is large is considered (see, e.g., Heffernan and Tawn 2004; Heffernan and Resnick 2007, or Aghbalou et al. 2021 for a different approach designed for supervised learning). A notion of multivariate conditional independence for extremes, which is used for graph inference, is relatively new. Engelke and Hitz 2020 define such a notion for multivariate Pareto distributions and require absolute continuity of the exponent measure with respect to the Lebesgue measure, hence our Condition \(\mathcal{B}\). The graphical model introduced in Segers 2020 only requires the existence of a density on the vector tree \(\mathbf{X}\), and this model can exhibit asymptotic independence. However, when applied in practice (see e.g. Asenova, Mazo, and Segers 2021), a Husler-Reiss density is required, which implies Condition \(\mathcal{B}\). This leads to a trivial maximal element, \(\bar{O}=\{1,\ldots,d\}\). In Gissibl and Kluppelberg 2018, a causal max-linear Bayesian network is proposed, leading to graphical models with a discrete dependence structure on a directed acyclic graph. Considering our framework on directed acyclic graphs points to new research directions and possible future work. We may also drop Condition \(\mathcal{B}\) by taking advantage of the specific geometry of the angular measure in AI-block models, inferring clusters with methods from topological data analysis.
A more general version of the extremal conditional independence notion in Engelke and Hitz 2020 is given in Definition 3.1 in Engelke, Ivanov, and Strokorb 2022, which naturally translates to a forest of asymptotically independent trees in the context of graphs. Motivated by modern applications in which extremal graphs are constructed based on a pre-clustering method such as \(k\)-medoids (see, e.g., Hentschel, Engelke, and Segers 2022), we see our work as a dedicated tool for this purpose, as the target clusters of our model have an interpretation in terms of extremal graphs. Therefore, combining our work on threshold exceedances (specifically the result of Theorem 4) and extending Theorem 3 of Engelke and Volgushev 2022 to handle mixing observations can lead to a strongly consistent method for learning extremal forests based on the maxima of a multivariate weakly dependent random process.
## Acknowledgments
This work has been supported by the project ANR McLaren (ANR-20-CE23-0011). This work has been partially supported by the French government, through the 3IA Cote d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002. This work was also supported by the French national programme LEFE/INSU
|
2307.14147 | MorphoLander: Reinforcement Learning Based Landing of a Group of Drones
on the Adaptive Morphogenetic UAV | This paper focuses on a novel robotic system MorphoLander representing
heterogeneous swarm of drones for exploring rough terrain environments. The
morphogenetic leader drone is capable of landing on uneven terrain, traversing
it, and maintaining horizontal position to deploy smaller drones for extensive
area exploration. After completing their tasks, these drones return and land
back on the landing pads of MorphoGear. The reinforcement learning algorithm
was developed for a precise landing of drones on the leader robot that either
remains static during their mission or relocates to the new position. Several
experiments were conducted to evaluate the performance of the developed landing
algorithm under both even and uneven terrain conditions. The experiments
revealed that the proposed system results in high landing accuracy of 0.5 cm
when landing on the leader drone under even terrain conditions and 2.35 cm
under uneven terrain conditions. MorphoLander has the potential to
significantly enhance the efficiency of the industrial inspections, seismic
surveys, and rescue missions in highly cluttered and unstructured environments. | Sausar Karaf, Aleksey Fedoseev, Mikhail Martynov, Zhanibek Darush, Aleksei Shcherbak, Dzmitry Tsetserukou | 2023-07-26T12:22:23Z | http://arxiv.org/abs/2307.14147v2 | MorphoLander: Reinforcement Learning Based Landing of a Group of Drones on the Adaptive Morphogenetic UAV
###### Abstract
This paper focuses on a novel robotic system MorphoLander representing heterogeneous swarm of drones for exploring rough terrain environments. The morphogenetic leader drone is capable of landing on uneven terrain, traversing it, and maintaining horizontal position to deploy smaller drones for extensive area exploration. After completing their tasks, these drones return and land back on the landing pads of MorphoGear. The reinforcement learning algorithm was developed for a precise landing of drones on the leader robot that either remains static during their mission or relocates to the new position. Several experiments were conducted to evaluate the performance of the developed landing algorithm under both even and uneven terrain conditions. The experiments revealed that the proposed system results in high landing accuracy of 0.5 cm when landing on the leader drone under even terrain conditions and 2.35 cm under uneven terrain conditions. MorphoLander has the potential to significantly enhance the efficiency of the industrial inspections, seismic surveys, and rescue missions in highly cluttered and unstructured environments.
_Keywords -- Swarm of Drones, Precise UAV Landing, Morphogenetic UAV, Reinforcement Learning_
## I Introduction
The use of drones in monitoring and inspection tasks has increased significantly in recent years due to their high mobility and ability to access areas isolated from unmanned ground vehicles (UGVs). Teams of heterogeneous robots can collaborate to achieve tasks with high efficiency. For example, construction site monitoring, high-altitude operations, and exploration of rough terrain can be done more efficiently with the collaborative capabilities of multi-agent systems, where drones working alongside a leader robot can execute operations. However, continuous exploration by drones remains a challenging problem caused by the lack of a power supply and the inability of drones to land on uneven surfaces.
Intelligent unmanned aerial vehicles (UAVs) have shown superior capabilities for autonomous inspections, path planning, and data collection through adaptation to dynamic, uncertain environments and complex tasks [1]. While multi-agent aerial systems have been widely studied [2], heterogeneous formations, whose agents have complementary skills, have performed more efficiently in missions [3]. Several concepts of heterogeneous robotic teams were proposed in previous research, such as object classification, inspection, or digital art with UAVs carrying different tools [4, 5, 6, 7].
However, applications for multi-agent teams are still limited by their inability to take off and land on unstructured terrain. While there have been efforts to address this issue with different robots, e.g., the use of UAVs to deliver mini-UGVs for exploration in environments with sparse obstacles [10], challenges remain in the exploration of cluttered terrains, which require novel approaches to expand the scope of UAV applications.
In this paper, we propose a novel system MorphoLander (Fig. 1) that utilizes the capabilities of the hybrid drone as a landing platform for smaller drones. Two major contributions of this work are the development of a compensation system for surface inclination with a multi-terrain drone and the development of an algorithm for landing in unstructured environments.
## II Related Works
In UAV missions performed in unstructured environments, taking off and landing pose the highest challenge for the system due to the surface inclination and low controllability of the drone at these stages of a mission.
Fig. 1: Landing of the group of drones on the MorphoLander robot standing on an uneven surface.

Several researchers proposed utilizing human capabilities as a means to achieve a safe landing with mini- and nano-UAVs. For example, Tsykunov et al. [11] and Auda et al. [12] explored drone landing on human limbs with the help of visual and haptic cues. While showing sufficient precision and stability in landing with several drones, these scenarios require the substantial presence of a human at the remote site and are not feasible for autonomous systems.
To achieve precise docking without a human in the loop, more complex algorithms are required for both the drone and the platform carrying the landing pads. Several systems were proposed to achieve precise landing on a platform of limited size, e.g., Nguyen et al. [13] developed a precise drone landing system for charging, while Kooi et al. [14] proposed an RL-based algorithm for landing on an inclined platform. While showing high stability in static environments, the performance of both algorithms in environments with a changing surface layout requires further investigation. A landing approach on a moving wheeled platform was developed by Gupta et al. [15]; however, the proposed wheeled platform is not able to compensate for the inclination of the surface. Fedoseev et al. [16] proposed a system for drone docking in midair utilizing a robotic arm with a soft gripper. This approach is unaffected by dynamic unstructured terrain; however, it requires additional manipulators to be placed on the landing site for each drone. Finally, Jain et al. [17] suggested using a mothership drone for docking in midair. This approach achieved a precise landing through the mobility of both the platform and the single landing drone. However, its stability in dynamic conditions and in the presence of several landing agents requires further exploration due to the changing dynamics of the hovering platform.
## III System Overview
### _Design of the Landing Platform_
To achieve efficient and safe autonomous landing, we designed a platform that allows for drone docking both on the ground and in midair. Thus, landing on a morphogenetic UAV [18] provides a realistic and versatile test environment. MorphoLander's landing platform is the morphogenetic robot with four robotic legs. Each leg has three degrees of freedom (DoFs), with the shoulder oriented perpendicularly to the central axis of the robot. The legs are actuated by Dynamixel MX-106 and MX-28 servomotors in the hip joints and MX-64 servomotors in the knee joints. The landing gear is constructed from lightweight PLA material except for the base, which comprises two 3 mm thick carbon disks supporting the legs and hexacopter axes. An overview of the multi-terrain drone, including the locomotion algorithm, was described in our previous work [20]. In [19], we explored various leader-follower drone formations. In this work, we have enhanced the design of the morphogenetic leader drone by developing landing sites for the follower drones to land on. To accomplish this, we embedded two landing pads with a diameter of 20 cm each into the body of the landing platform (Fig. 3).
### _Adaptive Landing Gear_
To land the mothership robot on an uneven surface, a stabilization algorithm was developed based on the load currently applied to the servomotors, since no external force-torque sensors were used, in order to keep the weight of the drone low. Dynamixel servomotors are capable of measuring the load as an internal output, expressed as a percentage of the maximum motor torque. Consequently, the load on the drives depends on the force exerted on the robot's end-effector perpendicular to the surface. By selecting the rate of the current load proportionally to the weight of the robot and its limbs, we were able to filter out the noise present in statics from the values obtained in dynamics. When moving clockwise, the torque on the servo is within 20-50% of the stall torque, and when moving counterclockwise it is from -50% to -20%. If the limb does not touch the ground, the load on the servo is lower than 4% of the stall torque, and it reaches 4% when standing. When a single limb rests on a higher point than the others, the torque on it increases up to 15-20% of the stall torque. When receiving such values, the drone raises this limb higher to reduce the load and return the robot to a stable position.
The algorithm below shows the data filtering method. The idea of this stabilization algorithm is that the end-effector of each limb moves along a strictly vertical line, computed through inverse kinematics (Fig. 3).
For the developed algorithm, the maximum height of the raised limb equals 10 cm. This height is subdivided into 100 segments (percent). When subjected to a load, the limb ascends by 10%, while in the absence of any load it descends by 5%. The algorithm for filtering the values and moving the limb depending on the load in the shoulder joint is shown in Alg. 1.
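Alg. 1 itself is not reproduced in this text; the sketch below encodes the behaviour described above for a single limb. The 15% trigger level and the treatment of intermediate load values are assumptions inferred from the load ranges quoted in the previous paragraphs.

```python
MAX_HEIGHT_CM = 10.0    # maximum height of a raised limb
STEP_UP_PCT = 10        # limb ascends by 10% of the range when overloaded
STEP_DOWN_PCT = 5       # limb descends by 5% of the range when unloaded
LOAD_TRIGGER = 15.0     # % of stall torque treated as "limb on a higher point" (assumed)
NO_LOAD_LEVEL = 4.0     # % of stall torque measured when the limb barely touches the ground

def update_limb_height(height_pct, load_pct):
    """One stabilisation step for a single limb; height is expressed in % of MAX_HEIGHT_CM."""
    if load_pct >= LOAD_TRIGGER:                  # overloaded shoulder joint: raise the limb
        height_pct = min(100, height_pct + STEP_UP_PCT)
    elif load_pct < NO_LOAD_LEVEL:                # no ground contact: lower the limb back down
        height_pct = max(0, height_pct - STEP_DOWN_PCT)
    return height_pct, height_pct / 100.0 * MAX_HEIGHT_CM  # percentage and height in cm
```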
## IV Reinforcement Learning Methodology
Fig. 2: 3D CAD model of MorphoLander.

In order to effectively train the Reinforcement Learning (RL) agent, it is necessary to have a thorough understanding of the system that is being controlled. The focus of this study is on the landing maneuvers of Bitcraze Crazyflie 2.1 micro-UAVs, which are equipped with both low- and high-level controllers. The controller scheme is depicted in Fig. 4.
To utilize the capability of the pre-existing controllers, the reinforcement learning agent that we train outputs an optimal velocity vector, represented as follows:
\[\mathbf{u}=\left[v_{x}\ v_{y}\ v_{z}\right]^{T}, \tag{1}\]
where \(v_{i}\) is the velocity along the \(i\)-th axis. This velocity vector will then be fed into a cascade of proportional-integral-derivative (PID) controllers, which are implemented on the Crazyflie drone.
To enable the RL agent to learn an optimal control policy, it must be able to interact with the environment and receive feedback in the form of rewards. It is widely accepted that the use of simulated environments for training the agent can ensure a safe and efficient learning process. However, the accuracy of the simulation is crucial for the success of the learned policy in real-world scenarios. Hence, we designed a simulation environment that closely resembles the real-world environment to ensure that the learned policy is transferable to the physical system.
### _Drone Landing Simulator_
A simulated environment was developed with the Unity real-time development platform to train the RL agent (Fig. 5). The Unity game engine was chosen for its ability to simulate physics and for the machine learning tools of the ML-Agents package [21] used for RL agent training. Additionally, communication between ROS and the Unity scripts was established via the RosBridge library, allowing low-delay control of the drone. In the simulated environment, the drone is controlled by thrust forces generated by four motors, as shown in Fig. 4. To match the real-world scenario, three PID regulators were implemented to control the attitude rate, attitude, and velocity of the drone.
The accuracy of the simulated environment plays a critical role in the effectiveness of the RL agent. Therefore, the simulation was designed to closely resemble real-world conditions to ensure that the agent could operate effectively not only in the simulated environment but also in real-world scenarios.
### _Training the Reinforcement Learning Controller_
The primary objective of the RL algorithm is to discover a stochastic policy \(\pi_{\phi}(a|s)\) that maximizes the discounted return, denoted as follows:

\[\eta(\pi_{\phi})=E_{\tau}[\sum_{t=0}^{T}\gamma^{t}r(s_{t},a_{t})], \tag{2}\]

where \(\tau\) represents the trajectory followed by policy \(\pi_{\phi}\), \(r(s_{t},a_{t})\) is the reward function, and \(\gamma\) is the discount factor.
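As a small illustration of (2), the discounted return of a single sampled trajectory can be computed as follows (a generic sketch, not code from the paper):

```python
def discounted_return(rewards, gamma):
    """Discounted return of one trajectory, as in Eq. (2)."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))
```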
Fig. 4: Crazyflie controller scheme. The state of the drone is the observation of the RL agent. The output of the agent, which is the target velocity, is sent to the onboard controller. The onboard controller regulates the RPM of the propellers to create thrust.
Fig. 3: The currently applied load (generated torque in % of stall torque) acting on the landing gear during the MorphoLander platform adaptation to the uneven surface.
In this study, we define the state of the drone as the linear kinematic values and denote it as:
\[s=[x,y,z,v_{x},v_{y},v_{z},a_{x},a_{y},a_{z}]^{T}. \tag{3}\]
We define the state of the landing platform \(s_{p}\) up to third order as:
\[s_{p}=[x_{p},y_{p},z_{p},v_{x}^{p},v_{y}^{p},v_{z}^{p},a_{x}^{p},a_{y}^{p},a_{z }^{p}]^{T}. \tag{4}\]
Finally, the observation of drone is derived as the difference between the two states:
\[\Delta s=s_{p}-s. \tag{5}\]
The reward function is defined as:
\[r=-e_{d}-\alpha\cdot e_{v}-\beta\cdot e_{a}-\gamma\cdot e_{u}+\xi, \tag{6}\]
where \(e_{d}\), \(e_{v}\), and \(e_{a}\) denote the Euclidean distance error of the position, velocity, and acceleration, respectively. The value \(e_{u}\) is the magnitude of the control velocity generated by the RL agent, and \(\xi\) is the additional reward if the distance between the drone and the platform is less than the threshold value.
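A minimal sketch of the observation (5) and reward (6) computations is given below. The weights \(\alpha\), \(\beta\), \(\gamma\) (renamed gamma_u in the code to avoid a clash with the discount factor), the distance threshold, and the bonus \(\xi\) are hyperparameters whose values are not stated in this text, so the numbers below are placeholders.

```python
import numpy as np

def observation(platform_state, drone_state):
    """Relative observation (5): difference of the 9-D states [position, velocity, acceleration]."""
    return np.asarray(platform_state) - np.asarray(drone_state)

def reward(delta_s, action, alpha=0.1, beta=0.01, gamma_u=0.05, xi=1.0, threshold=0.05):
    """Reward (6): penalise position/velocity/acceleration errors and control effort."""
    e_d = np.linalg.norm(delta_s[0:3])        # position error
    e_v = np.linalg.norm(delta_s[3:6])        # velocity error
    e_a = np.linalg.norm(delta_s[6:9])        # acceleration error
    e_u = np.linalg.norm(action)              # magnitude of the commanded velocity
    bonus = xi if e_d < threshold else 0.0    # extra reward when close to the landing pad
    return -e_d - alpha * e_v - beta * e_a - gamma_u * e_u + bonus
```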
To train the RL agent, we employed the Proximal Policy Optimization (PPO) algorithm [22] due to its high performance and sample efficiency. Additionally, we employed curriculum learning to speed up the learning process. The agent training was conducted in two stages: position hold, to teach the drone to maintain its position after reaching the desired location, and position set, to teach the drone to reach a specific position and land on the platform.
## V Experimental Evaluation
To assess the efficacy of the developed MorphoLander system, we performed a series of three experiments. In each experiment, a swarm consisting of two Bitcraze Crazyflie drones was tasked with landing on the MorphoLander platform. The positions of the drones were estimated using the Vicon V5 indoor localization system, and the control commands were generated by the RL agent in the Unity environment and transmitted to the drones through the Robot Operating System (ROS). Each experiment was conducted in a series of 16 trials. In the subsequent sections, we will provide a detailed account of each experiment and evaluate the controller's performance.
### _Landing on a Static Platform and Even Terrain_
#### V-A1 Experiment Procedure
During the experiment, two Crazyflie drones took off from the floor and landed on the MorphoLander robot placed on an even terrain. The distance between the drones was 75 cm, and the starting distance between the drones and the landing platform was 1.5 m. The drones were required to land on opposite platforms, with the right drone landing on the left platform and vice versa.
#### V-A2 Experimental Results
Fig. 6 shows the results of the experiments with the robot standing on even terrain.
The landing shift of a drone was measured and analyzed over six experiments with a mean value of 0.55 cm, indicating that the RL agent was successful in landing the swarm of two drones on the platform with high precision.
### _Landing on a Static Platform and Uneven Terrain_
#### V-B1 Experiment Procedure
This experiment evaluated the performance of the RL landing controller on the MorphoLander leader robot in adaptive stabilization mode on unstructured terrain. Two Crazyflie drones took off from the ground and landed on the platform.
#### V-B2 Experiment Results
The results of the experiment are presented in Fig. 7.
The mean drone shift was 2.35 cm. This value was considered sufficient, as it falls within the precise landing error of 6.8 cm achieved in previous experiments in this field [23]. However, the drone landing shift was higher than in the first experiment due to the added complexity of the terrain. The performance of the RL agent was also affected by the low tolerance of the landing gear.
### _Landing on a Relocated Platform and Uneven Terrain_
#### V-C1 Experiment Procedure
This experiment aimed to replicate a real-world scenario in which the landing controller is expected to operate during long-term missions. The experiment involved transporting the Crazyflie drones on the landing gear. The drones then took off and hovered in place while the platform relocated to a different position. Subsequently, the platform came to a halt and awaited the landing of the drones.
#### V-C2 Experiment Results
The detailed results of the experiment are presented in Fig. 8.
In this experiment, the displacement of the landing gear to certain positions obstructed the landing trajectory of the drones, resulting in a maximum mean landing shift of 3.5 cm.
Fig. 5: The simulated environment in Unity 3D. Blue drones are being controlled by the RL agent. Orange drones show the goal landing positions.
## VI Conclusion and Future Work
This paper presented a novel robotic system MorphoLander consisting of a morphogenetic leader drone and follower micro-drones capable of docking on the leader via an RL-based algorithm. The system leverages the ability of the leader drone to land on uneven terrain and deploy smaller drones for extensive area exploration. The RL-based controller was developed to achieve precise drone landing on a static and relocated leader robot under even and uneven terrain conditions. Experimental evaluations demonstrated a high level of landing accuracy, with a mean drone shift of 2.35 cm from the landing pad center under uneven terrain conditions. The lowest mean landing shift of 0.55 cm was achieved while landing on even terrain.
The proposed technology can potentially enhance industrial inspections, delivery, rescue missions with supply preservation, and even provide on-board recharging of the micro-drones. In the future, we will increase the robustness of the system by teaching the agent to directly change the PWM signals that control the individual motor thrusts. This will allow the RL agent to learn how to control drones in the presence of ground effect, downwash from other drones, wind, etc. Directly controlling the thrust of each motor will additionally increase the stability of the drone, allowing for more reliable and agile control in various external conditions.
|
2304.12066 | Response to the paper: Theoretical understanding of evolutionary
dynamics on inhomogeneous networks | As a co-author of the paper Theoretical understanding of evolutionary
dynamics on inhomogeneous networks, I would like to express my disagreement
with the conclusion of the paper. In this response, I present a thorough
examination of the assertions and methods in the paper. Although I may disagree
with several practices in the research group, I will confine my discussion to
the academic analysis of the paper in this response. | Christopher Li | 2023-04-21T00:39:20Z | http://arxiv.org/abs/2304.12066v1 | Response to the paper: Theoretical understanding of evolutionary dynamics on inhomogeneous networks
###### Abstract
As a co-author of the paper "Theoretical understanding of evolutionary dynamics on inhomogeneous networks" (the paper), I would like to express my disagreement with the conclusion of the paper. In this response, I present a thorough examination of the paper's assertions and methods. Although I may disagree with several practices in the research group, I will confine my discussion to the academic analysis of the paper in this response.
## 1 Introduction
Evolutionary dynamics on graphs is a relatively new area of research that has attracted considerable interest. The fixation probability and fixation time of birth-death processes on complete bipartite graphs, which encompass star graphs, have been well-established through rigorous investigation [1][2][3][4][5][6].
The work in "Theoretical understanding of evolutionary dynamics on inhomogeneous networks" (the paper) [7] primarily consists of a repetition of Ref. [1]. In addition, it presents an approximate model and a novel interpretation, both of which I believe to be meaningless.
Side note: The "B" in author name "Christopher B Li" that appears in the paper was a result of a typographical error. I do not have a middle name. The "B" may not appear in future editions.
## 2 Detailed Analysis of the Paper's Methods and Claims
In the supporting information of the paper (Fig. S1), the first-passage method was used to calculate the transition probabilities. However, this method was unnecessary and complicated a straightforward problem. To see this, I use an example to demonstrate.
When the central node is mutated and there are (n-1) mutations in the "leaves", the rate at which the system evolves to the (n+1) state (mutated central node plus n mutated cells in the leaves) is simply r x 1 (the rate at which the central node produces offspring) multiplied by (N-n)/(N-1) (the probability that a normal cell in the leaves is chosen to die).
Similarly, all other transition rates can be obtained without utilizing the so-called first passage method.
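To make this explicit, the quoted rate can be written down directly; the sketch below encodes only the single transition described above (here n counts all mutants, i.e., the mutated centre plus n-1 mutated leaves) and is not a full model of the dynamics.

```python
def rate_up_with_mutated_centre(n, N, r):
    """Rate of moving from n to n + 1 mutants when the central node is already mutated:
    the centre reproduces at rate r and its offspring replaces a uniformly chosen leaf,
    which is a normal cell with probability (N - n) / (N - 1)."""
    return r * (N - n) / (N - 1)
```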
While the transition rates presented in the paper are factually correct, the use of the first-passage method is absolutely unnecessary and can cause confusion among readers.
I raised this concern with the group, and it was also brought up by one of the referees of the paper. Despite these concerns, the final version of the paper still presented this confusing argument.
### The Approximate Model
In the paper, an approximate model was introduced. However, as I mentioned, the fixation probability can be easily found by Monk's martingale method in Ref. [2].
Given that there is an exact solution to the fixation probability [2][1], the approximate model lacks meaningful or relevant insights.
One may argue that the approximate model gives an important time scale; however, this is not the case. The time scale of the discrete-time birth-death process has been given in Ref. [6] and Ref. [5] (be careful with the definition of "fixation time" when reading the papers).
One may argue that previous research obtained the N ln(N) time scale using simulations, which may be treated as "inferior". However, since the approximate model underestimates the fixation time, it should be considered even more inferior than the simulation result.
Another important point worth noting is that in the continuous time model presented in the paper, if the parameter r is sufficiently large, the fixation time approaches zero. In contrast, for the discrete time model, the fixation time approaches N, the population size. In my view, this subtlety is significant and should have been explicitly mentioned in the paper. While it may or may not challenge the claims made in the paper, it is an important aspect to consider.
According to the paper, the significance of the approximate model and the use of a continuous time model lies in their ability to facilitate new interpretations. However, in the following section, I will present arguments to counter this claim.
### 2.3 Claims in the main text
In the conclusion of the paper, it says "The presented theoretical method allowed us to better understand the microscopic origin of fixation amplification that is accompanied by significant increase in fixation times".
In my opinion, this claim is false. The idea of "pathways" described in the paper is merely an illusion. While the concept might be helpful for those who struggle with mathematics, it is presented in a hand-waving manner and should not be included as a formal conclusion in the paper.
Moreover, the same "pathway" argument can be easily constructed using discrete-time models as well.
### 2.4 Unreliability of the Monte Carlo Simulation Codes
The Monte Carlo simulation codes employed by the research group were found (by me) to contain several bugs, making the simulation results occasionally unreliable. Upon joining the group, I corrected the codes. However, it is important to note that I cannot personally guarantee the validity of all simulations.
## 3 Conclusion
As one of the co-authors of the paper, I hold the opinion that the results presented in the paper lack publishable quality. A significant portion of the paper merely reiterates what others have already accomplished. Moreover, the conclusion drawn is not scientifically sound and is based on hand-waving arguments.
While it is my opinion that this pattern of repeating existing work and presenting meaningless interpretations has been observed in other publications from this research group, for the purposes of this discussion, I will focus solely on this particular paper.
I grant permission to all other authors of the paper to republish the paper without including my name.
I grant permission to Anatoly Kolomeisky's research group to use my previous work on the so-called "L-star model" (usually called complete bipartite graphs) in their future publications without including my name.
I do not give my consent to have my name included in any of their future publications, whether as a co-author or in the acknowledgements.
## 4 Acknowledgements
I express my appreciation to all the individuals in the scientific community who are committed to pursuing truth and conducting serious research.
|
2306.13501 | Knowledge-Infused Self Attention Transformers | Transformer-based language models have achieved impressive success in various
natural language processing tasks due to their ability to capture complex
dependencies and contextual information using self-attention mechanisms.
However, they are not without limitations. These limitations include
hallucinations, where they produce incorrect outputs with high confidence, and
alignment issues, where they generate unhelpful and unsafe outputs for human
users. These limitations stem from the absence of implicit and missing context
in the data alone. To address this, researchers have explored augmenting these
models with external knowledge from knowledge graphs to provide the necessary
additional context. However, the ad-hoc nature of existing methods makes it
difficult to properly analyze the effects of knowledge infusion on the many
moving parts or components of a transformer. This paper introduces a systematic
method for infusing knowledge into different components of a transformer-based
model. A modular framework is proposed to identify specific components within
the transformer architecture, such as the self-attention mechanism, encoder
layers, or the input embedding layer, where knowledge infusion can be applied.
Additionally, extensive experiments are conducted on the General Language
Understanding Evaluation (GLUE) benchmark tasks, and the findings are reported.
This systematic approach aims to facilitate more principled approaches to
incorporating knowledge into language model architectures. | Kaushik Roy, Yuxin Zi, Vignesh Narayanan, Manas Gaur, Amit Sheth | 2023-06-23T13:55:01Z | http://arxiv.org/abs/2306.13501v1 | # Knowledge-Infused Self Attention Transformers
###### Abstract.
Transformer-based language models have achieved impressive success in various natural language processing tasks due to their ability to capture complex dependencies and contextual information using self-attention mechanisms. However, they are not without limitations. These limitations include hallucinations, where they produce incorrect outputs with high confidence, and alignment issues, where they generate unhelpful and unsafe outputs for human users. These limitations stem from the absence of implicit and missing context in the data alone. To address this, researchers have explored augmenting these models with external knowledge from knowledge graphs to provide the necessary additional context. However, the ad-hoc nature of existing methods makes it difficult to properly analyze the effects of knowledge infusion on the many moving parts or components of a transformer. This paper introduces a systematic method for infusing knowledge into different components of a transformer-based model. A modular framework is proposed to identify specific components within the transformer architecture, such as the self-attention mechanism, encoder layers, or the input embedding layer, where knowledge infusion can be applied. Additionally, extensive experiments are conducted on the General Language Understanding Evaluation (GLUE) benchmark tasks, and the findings are reported. This systematic approach aims to facilitate more principled approaches to incorporating knowledge into language model architectures.
knowledge graphs, language models, knowledge-infusion
2305.09009 | Convex Geometric Trajectory Tracking using Lie Algebraic MPC for
Autonomous Marine Vehicles | Controlling marine vehicles in challenging environments is a complex task due
to the presence of nonlinear hydrodynamics and uncertain external disturbances.
Despite nonlinear model predictive control (MPC) showing potential in
addressing these issues, its practical implementation is often constrained by
computational limitations. In this paper, we propose an efficient controller
for trajectory tracking of marine vehicles by employing a convex error-state
MPC on the Lie group. By leveraging the inherent geometric properties of the
Lie group, we can construct globally valid error dynamics and formulate a
quadratic programming-based optimization problem. Our proposed MPC demonstrates
effectiveness in trajectory tracking through extensive-numerical simulations,
including scenarios involving ocean currents. Notably, our method substantially
reduces computation time compared to nonlinear MPC, making it well-suited for
real-time control applications with long prediction horizons or involving small
marine vehicles. | Junwoo Jang, Sangli Teng, Maani Ghaffari | 2023-05-15T20:46:32Z | http://arxiv.org/abs/2305.09009v2 | # Convex Geometric Trajectory Tracking using Lie Algebraic MPC for Autonomous Marine Vehicles
###### Abstract
Controlling marine vehicles in challenging environments is a complex task due to the presence of nonlinear hydrodynamics and uncertain external disturbances. Despite nonlinear model predictive control (MPC) showing potential in addressing these issues, its practical implementation is often constrained by computational limitations. In this paper, we propose an efficient controller for trajectory tracking of marine vehicles by employing a convex error-state MPC on the Lie group. By leveraging the inherent geometric properties of the Lie group, we can construct globally valid error dynamics and formulate a quadratic programming-based optimization problem. Our proposed MPC demonstrates effectiveness in trajectory tracking through extensive numerical simulations, including scenarios involving ocean currents. Notably, our method substantially reduces computation time compared to nonlinear MPC, making it well-suited for real-time control applications with long prediction horizons or involving small marine vehicles.
Autonomous marine vehicles, Trajectory tracking, Model predictive control, Geometric control, Lie groups
## I Introduction
Marine vehicles have become increasingly important due to their diverse applications, such as underwater exploration [1], the oil and gas industry [2], transportation and environmental monitoring [3]. Advancements in automation technology have led to the development of more advanced marine vehicles capable of performing complex tasks in harsh and challenging environments. However, controlling these vehicles is still arduous due to their highly nonlinear dynamics from complex hydrodynamic interactions and uncertain external disturbances. Additionally, characteristics such as low controllability, low motion frequency, and long control signal response time can lead to unstable behavior or overshoot, posing potential risks in situations where precise control is crucial, like collision avoidance, station keeping, and docking [4].
While classical control methods like proportional-integral-derivative (PID) controllers have been widely used for controlling marine vehicles [5], they struggle to operate effectively in narrow waterways or heavy traffic circumstances [6]. As a result, modern control techniques such as model predictive control (MPC) have gained popularity in recent years [7, 8]. MPC is a powerful control approach that is capable of handling constraints, nonlinear dynamics, and disturbances. However, nonlinear MPC for marine vehicles requires solving complex optimization problems, which can be computationally demanding and difficult to implement in real-time applications.
With recent advancements in computational capabilities, direct nonlinear optimization using nonlinear MPC (NMPC) has been employed in real marine vehicle experiments [9, 10, 11, 12]. However, real-time NMPC requires certain approximations. For instance, [9] employs simplified hydrodynamics with first-order and diagonal damping force terms, and [10, 11, 12] limits the vehicle maneuverability by lowering control frequency and speed of the vehicle. Moreover, these studies focus only on 3D motion, which limits their applicability for general marine vehicles control, such as station-keeping with heave motion or underwater vehicle control. In the case of small marine vehicles that cannot accommodate high-end computing devices, there is a need to reduce computational demands significantly.
Computationally-efficient control algorithms capable of handling highly nonlinear hydrodynamics are crucial for marine vehicle control. Several promising approaches involve improved system representation using reasonable approximation or efficient control optimization to address this. Adaptive MPC [13] utilizes multiple approximated linear models to reduce the computational burden. MPC with projection neural network [14] efficiently solves constrained optimization problems with parallel computational capability. Distributed optimization [15] decomposes the original optimization problems into small subproblems by leveraging the dynamic properties of marine vehicle motion and then solving them with a significant reduction in computational complexity.
Another promising approach to achieving computational efficiency is geometric control, based on the Lie group framework, to exploit existing symmetry in the problem [16, 17]. Unlike methods that rely on approximations of the hydrodynamics model, geometric control leverages the intrinsic
Fig. 1: The proposed geometric trajectory tracking algorithm framework. The algorithm incorporates tracking error and hydrodynamics, which are defined on a Lie group and linearized to construct a convex MPC algorithm. The proposed MPC is applied to a marine vehicle within a simulation environment.
geometry of the system and represents the dynamics in an invariant and symmetric manner. Since the configuration space of the vehicle is a nonlinear manifold rather than a linear space, trajectory tracking algorithms for mobile robots and surface vehicles on \(\mathrm{SE}(2)\) are presented in [18, 19]. A recent study introduces error-state MPC on \(\mathrm{SE}(3)\) for controlling legged robots [20], providing an accurate estimation of error dynamics. Geometric control guarantees that the error dynamics are globally valid and evolve independently of the system trajectory, enabling efficient quadratic programming (QP)-based control optimization.
Motivated by the work of [20], we develop an error-state MPC on the Lie group for marine vehicle control, as illustrated in Fig. 1, constructing an efficient and accurate trajectory tracking controller. The marine domain imposes more challenging (and perhaps interesting) scenarios as the higher water density leads to significant environmental forces and state-dependent vehicle models. Our key contributions are summarized as follows.
1. We establish a nonlinear hydrodynamics model on the Lie group to ensure that error dynamics are globally valid and evolve independently of the system.
2. We develop a convex error-state MPC by employing first-order approximations of dynamics and error dynamics on the Lie group.
3. We demonstrate the effectiveness of the proposed algorithm in controlling surface vehicles for trajectory tracking in the presence of external disturbances using the Marine Systems Simulator [21].
4. Implementation of the proposed MPC is available at [https://github.com/UMich-CURLY/Lie-MPC-AMVs](https://github.com/UMich-CURLY/Lie-MPC-AMVs).
The remainder of this paper is organized as follows. The dynamics of marine vehicles and its expression on the Lie group are presented in Section II. The convex error-state MPC is derived from the linearization of dynamics and error dynamics on the Lie group in Section III. In Section IV, we present numerical simulations to evaluate the performance of our method in scenarios involving ocean currents and discuss potential directions for future research. Finally, we conclude the paper in Section V.
## II Dynamics of Marine Vehicles
In this section, we present a background on the general hydrodynamics model of marine vehicles and Lie groups. Subsequently, we define the vehicle model within the framework of Lie groups.
### _Hydrodynamcis modeling_
Extensive research has been conducted to comprehend the hydrodynamics of marine vehicles due to the complex nature of fluid interactions. While it is difficult to represent a vehicle model compactly in an analytical form, researchers have developed various methods to approximate the main forces acting on a vehicle. We follow Fossen's analytical approach [22] to select the most dominant forces and model a marine vehicle.
In general, surface vehicles are modeled with 3 degrees of freedom (DOF). However, for modeling and controlling a general marine vehicle, such as an autonomous underwater vehicle (AUV), we describe 6-DOF motion equations. The 6-DOF model can be easily simplified to a 3-DOF model by neglecting certain axis motions.
Let the rotation matrix \(R\in\mathrm{SO}(3)\) and identity matrix \(I_{3}\in\mathbb{R}^{3\times 3}\), where
\[\mathrm{SO}(3)=\{R|R\in\mathbb{R}^{3\times 3},RR^{\mathsf{T}}=R^{\mathsf{T}}R=I_ {3},\det(R)=1\}. \tag{1}\]
We use the notation \((\cdot)^{\wedge}\) to represent the cross-product operation, where \(\lambda\times a:=\lambda^{\wedge}a\). \(\lambda^{\wedge}\) is a skew-symmetric matrix,
\[\lambda^{\wedge}=\begin{bmatrix}0&-\lambda_{z}&\lambda_{y}\\ \lambda_{z}&0&-\lambda_{x}\\ -\lambda_{y}&\lambda_{x}&0,\end{bmatrix}. \tag{2}\]
The kinematic transformation from the body-fixed frame to the spatial frame is given by:
\[\dot{\eta}=J_{\Theta}(\eta)\nu, \tag{3}\]
where \(J_{\Theta}\) is the Euler angle transformation matrix for 6-DOF kinematic equations, the vector \(\nu\) includes body-frame velocity and angular velocity, \(\nu=[u,v,w,p,q,r]^{\mathsf{T}}\) and the vector \(\eta\) is the generalized position and orientation in North-East-Down (NED) frame, \(\eta=[x,y,z,\phi,\theta,\psi]^{\mathsf{T}}\). For a marine vehicle, the six different motion components are conveniently defined as shown in Fig. 2.
In Fossen's model, the equations of motion for a marine vehicle in the body frame are expressed as a set of equations in the following form:
\[\begin{split} M_{RB}\dot{\nu}+C_{RB}(\nu)\nu+M_{AM}\dot{\nu}_{r} +C_{AM}(\nu_{r})\nu_{r}\\ +D(\nu_{r})\nu_{r}+g(\eta)=\tau_{c}+\tau_{wind}+\tau_{waves}, \end{split} \tag{4}\]
where \(M\) is the mass matrix, \(C(v)\) is the Coriolis matrix, \(D(v)\) is the drag (damping) matrix, and \(\tau\) is the vector of forces and moments. The subscripts \(RB\) and \(AM\) associated with the mass and Coriolis matrices denote the rigid body and additional mass, respectively. The added mass is assumed to be proportional only to the relative velocity \(\nu_{r}\), which is commonly used for marine vehicle models. The force \(\tau_{c}\) is generated by the propulsion system, which is the control input, and \(\tau_{wind}\) and \(\tau_{wave}\) are the external disturbances caused by the wind and waves. The vector \(\nu_{r}\) is the relative velocity vector to the ocean current velocity \(\nu_{c}\) (i.e., \(\nu_{r}=\nu-\nu_{c}\)). Note that terms relating to \(\nu_{r}\) represent hydrodynamic forces, and \(g(\eta)\) is the hydrostatic force.
Fig. 2: The 6 DOF velocities in the body-fixed reference frame following Fossen’s convention [22].
The rigid body mass matrix is symmetric and represented as:
\[\begin{split} M_{RB}&=\begin{bmatrix}m&0&0&0&mz_{g}&-my_{g}\\ 0&m&0&-mz_{g}&0&mx_{g}\\ 0&0&m&my_{g}&-mx_{g}&0\\ 0&-mz_{g}&my_{g}&I_{xx}&-I_{xy}&-I_{xz}\\ mz_{g}&0&-mx_{g}&-I_{yx}&I_{yy}&-I_{yz}\\ -my_{g}&mx_{g}&0&-I_{zx}&-I_{zy}&I_{zz}\\ \end{bmatrix}\\ &=\begin{bmatrix}M_{11}&M_{12}\\ M_{21}&M_{22}\end{bmatrix},\end{split} \tag{5}\]
where \(x_{g},y_{g}\), and \(z_{g}\) are the distances to the center of gravity from the origin coordinate, and \(I\) is the inertia dyadic about the origin coordinate. In an ideal fluid, the hydrodynamic system's added inertia matrix at infinite frequency is represented as positive definite and constant.
The Coriolis matrix captures the inertial forces in the dynamics of marine vehicles and is skew-symmetric. The Coriolis matrix is derived from the mass matrix and the relative velocity vector,
\[\begin{split} C(\nu)=\\ &\begin{bmatrix}0&-(M_{11}\nu_{1}+M_{12}\nu_{2})^{\wedge}\\ -(M_{11}\nu_{1}+M_{12}\nu_{2})^{\wedge}&-(M_{21}\nu_{1}+M_{22}\nu_{2})^{ \wedge}\end{bmatrix},\end{split} \tag{6}\]
where \(\nu_{1}=[u,v,w]^{\mathsf{T}}\) and \(\nu_{2}=[p,q,r]^{\mathsf{T}}\).
The hydrodynamic damping matrix, \(D(\nu_{r})\), accounts for the forces due to various hydrodynamic effects, including potential damping, skin friction, wave drift damping, vortex shedding, and lifting forces. It is expressed as a sum of linear and quadratic terms, where the linear damping matrix \(D_{l}\) is constant, and the quadratic damping matrix \(D_{n}\) is proportional to the absolute value of the relative velocity \(|\nu_{r}|\). The dominance of linear or nonlinear damping depends on the surge velocity of the vehicle.
In the case of noncoupled motion, where the motion in one axis does not affect the other axes, a diagonal damping structure can be assumed. This simplifies the hydrodynamic damping matrix, which can be expressed as:
\[\begin{split} D(\nu_{r})=&-\mathrm{diag}([X_{u},Y_{v},Z_{w},K_{p},M_{q},N_{r}]^{\mathsf{T}})\\ &-\mathrm{diag}([X_{|u|u}|u_{r}|,Y_{|v|v}|v_{r}|,Z_{|w|w}|w_{r}|,\\ &\qquad\qquad K_{|p|p}|p_{r}|,M_{|q|q}|q_{r}|,N_{|r|r}|r_{r}|]^{\mathsf{T}}).\end{split} \tag{7}\]
where \(\mathrm{diag}(x)\) denotes a diagonal matrix whose diagonal elements are the elements of the vector \(x\).
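A minimal sketch of the diagonal damping model in (7) is shown below; the coefficient values are arbitrary placeholders rather than identified vehicle parameters, and the helper simply follows the sign convention of (7).

```python
# Sketch of the diagonal damping matrix in (7); coefficient values are placeholders.
import numpy as np

def damping_matrix(nu_r, lin, quad):
    """D(nu_r) = -diag(lin) - diag(quad * |nu_r|).

    nu_r : relative body velocity [u, v, w, p, q, r]
    lin  : [X_u, Y_v, Z_w, K_p, M_q, N_r]
    quad : [X_{|u|u}, Y_{|v|v}, Z_{|w|w}, K_{|p|p}, M_{|q|q}, N_{|r|r}]
    """
    nu_r = np.asarray(nu_r, dtype=float)
    return -np.diag(lin) - np.diag(np.asarray(quad) * np.abs(nu_r))

nu_r = [1.0, 0.1, 0.0, 0.0, 0.0, 0.2]
lin = [-2.0, -3.0, -5.0, -0.5, -0.8, -1.0]    # placeholder linear coefficients
quad = [-1.0, -2.0, -3.0, -0.1, -0.1, -0.5]   # placeholder quadratic coefficients
print(damping_matrix(nu_r, lin, quad))
```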
If the ocean currents are assumed to be constant and irrotational, the equations of motion can be simplified by using only the relative velocity, \(\nu_{r}\). Furthermore, we assume that external disturbances of wind and waves are either unknown or negligible in their impact on the dynamics. Therefore, these disturbances can be treated as noise or modeling errors.
\[M\dot{\nu_{r}}+C(\nu_{r})\nu_{r}+D(\nu_{r})\nu_{r}+g(\eta)=\tau_{c}, \tag{8}\]
where \(M=M_{RB}+M_{AM}\) and \(C=C_{RB}+C_{AD}\).
Additionally, for surface vehicles or neutrally buoyant vehicles, the hydrostatic force \(g(\eta)\) can be neglected. The resulting simplified model is given by:
\[M\dot{\nu}+C(\nu)\nu+D(\nu)\nu=\tau_{c}. \tag{9}\]
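To make the simplified model (9) concrete, the sketch below performs one explicit-Euler integration step; it reuses the hat3 and damping_matrix helpers from the earlier sketches, builds \(C(\nu)\) from the mass-matrix blocks as in (6), and is an illustration only, not the simulator used in the paper.

```python
# One explicit-Euler step of (9): M*nu_dot + C(nu)*nu + D(nu)*nu = tau.
# Reuses hat3 and damping_matrix from the sketches above; M is a 6x6 mass plus
# added-mass matrix in the nu = [linear velocity; angular velocity] ordering of (5)-(7).
import numpy as np

def coriolis_matrix(M, nu):
    """C(nu) built from the mass-matrix blocks as in (6)."""
    nu1, nu2 = nu[:3], nu[3:]
    M11, M12, M21, M22 = M[:3, :3], M[:3, 3:], M[3:, :3], M[3:, 3:]
    a = hat3(M11 @ nu1 + M12 @ nu2)
    b = hat3(M21 @ nu1 + M22 @ nu2)
    C = np.zeros((6, 6))
    C[:3, 3:] = -a
    C[3:, :3] = -a
    C[3:, 3:] = -b
    return C

def euler_step(M, nu, tau, lin, quad, dt):
    """Advance the body velocity nu by one time step dt under the simplified model (9)."""
    D = damping_matrix(nu, lin, quad)
    nu_dot = np.linalg.solve(M, tau - coriolis_matrix(M, nu) @ nu - D @ nu)
    return nu + dt * nu_dot
```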
It is important to note that while our controller uses this simplified model and does not explicitly account for the external forces, in the simulation, we will incorporate an external force to assess the ability of the controller to handle significant disturbances. We will present experimental results to demonstrate the controller's performance under realistic conditions.
### _Dynamics on a Lie group_
This section briefly introduces the dynamics model on a Lie group \(\mathrm{SE}(3)\) and the commonly used notation. For a more comprehensive understanding of Lie groups, refer to [23, 24, 25, 26]. A group is a set, \(\mathcal{G}\), with a composition operation for its elements that satisfies the axioms of closure, identity, inverse, and associativity. In a Lie group, the manifold is symmetric and looks the same at every point, making all tangent spaces isomorphic. The tangent space at the identity (since all groups have the identity element), \(T_{e}\mathcal{G}=\mathfrak{g}\), is defined as the Lie algebra of \(\mathcal{G}\). The exponential map, \(\exp:\mathfrak{g}\rightarrow\mathcal{G}\), maps elements of the Lie algebra to elements of the Lie group. The inverse operation of the exponential map is the \(\log\) map.
The Lie algebra is a vector space whose elements can be associated with vectors \(\xi\in\mathbb{R}^{n}\), where \(\dim\mathfrak{g}=n\). The conversion between \(\mathfrak{g}\) and \(\mathbb{R}^{n}\) is facilitated by the following isomorphism, commonly known as the hat and vee operators:
\[(\xi)^{\wedge}:\mathbb{R}^{n}\rightarrow\mathfrak{g},\;\;(\xi)^{\vee}: \mathfrak{g}\rightarrow\mathbb{R}^{n}. \tag{10}\]
In matrix Lie groups, the \(\exp\) map naturally arises by exactly integrating the group reconstruction equation,
\[\dot{X}=X\xi^{\wedge}. \tag{11}\]
The vehicle state in \(\mathrm{SE}(3)\) can be represented by a rotation matrix \(R\in\mathrm{SO}(3)\) and position \(p\in\mathbb{R}^{3}\). The homogeneous representation of an \(\mathrm{SE}(3)\) element is given by:
\[X=\begin{bmatrix}R&p\\ 0&1\end{bmatrix}. \tag{12}\]
We define the twist as the concatenation of the angular velocity \(\omega\) and the linear velocity \(v\) in the body frame, denoted as \(\xi=[\nu_{2}^{\mathsf{T}},\nu_{1}^{\mathsf{T}}]^{\mathsf{T}}=[\omega^{\mathsf{T }},v^{\mathsf{T}}]^{\mathsf{T}}\in\mathbb{R}^{6}\). The hat operator is then used to obtain the corresponding \(T_{e}\mathrm{SE}(3)=\mathfrak{se}(3)\) element:
\[\xi^{\wedge}=\begin{bmatrix}\omega^{\wedge}&v\\ 0&0\end{bmatrix}. \tag{13}\]
For an \(X\in\mathrm{SE}(3)\), it can be shown that both \(X^{-1}\dot{X}\) and \(\dot{X}X^{-1}\) belong to \(\mathfrak{se}(3)\). The former is the body velocity in the body-fixed frame, while the latter is the spatial velocity in the spatial frame. The relationship between these two velocities is given by the adjoint map, \(\mathrm{Ad}_{X}:\mathfrak{g}\rightarrow\mathfrak{g}\), that enables change of
frame for velocities defined in the Lie algebra via the following matrix similarity.
\[\mathrm{Ad}_{X}\xi=X\xi^{\wedge}X^{-1}. \tag{14}\]
The adjoint map describes how elements of a Lie group act on elements of the Lie algebra; it is a linear transformation that maps an element \(\xi\in\mathfrak{g}\) to \(\mathrm{Ad}_{X}(\xi)\). Its derivative at the identity, denoted \(\mathrm{ad}_{\xi}:\mathfrak{g}\rightarrow\mathfrak{g}\), maps an element \(\eta\in\mathfrak{g}\) to \(\mathrm{ad}_{\xi}(\eta)\). This (little) adjoint describes how the Lie bracket acts on an element of the Lie algebra, where the Lie bracket is a bilinear operation on the Lie algebra that measures the failure of two elements to commute (the Lie derivative).
The adjoint map and adjoint in the Lie algebra in \(\mathrm{SE}(3)\) can be represented by matrices as follows:
\[\mathrm{Ad}_{X}=\begin{bmatrix}R&0\\ p^{\wedge}R&R\end{bmatrix},\ \ \mathrm{ad}_{\xi}=\begin{bmatrix}\omega^{\wedge}&0\\ v^{\wedge}&\omega^{\wedge}\end{bmatrix}. \tag{15}\]
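The operators in (12)-(15) translate directly into code. The sketch below, which reuses hat3 from the earlier sketch and uses helper names of our choosing, constructs the \(\mathfrak{se}(3)\) hat of a twist \(\xi=[\omega^{\mathsf{T}},v^{\mathsf{T}}]^{\mathsf{T}}\) and the matrix forms of \(\mathrm{Ad}_{X}\) and \(\mathrm{ad}_{\xi}\).

```python
# Matrix forms of the SE(3) operators in (12)-(15); reuses hat3 from above.
import numpy as np

def se3_hat(xi):
    """xi = [omega, v] -> 4x4 element of se(3), as in (13)."""
    out = np.zeros((4, 4))
    out[:3, :3] = hat3(xi[:3])
    out[:3, 3] = xi[3:]
    return out

def Adjoint(X):
    """6x6 adjoint matrix of X = [[R, p], [0, 1]], as in (15)."""
    R, p = X[:3, :3], X[:3, 3]
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, :3] = hat3(p) @ R
    Ad[3:, 3:] = R
    return Ad

def ad(xi):
    """6x6 little adjoint of xi = [omega, v], as in (15)."""
    a = np.zeros((6, 6))
    a[:3, :3] = hat3(xi[:3])
    a[3:, :3] = hat3(xi[3:])
    a[3:, 3:] = hat3(xi[:3])
    return a
```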
The Euler-Poincare equations [27] are aligned with the hydrodynamic equation (9) as:
\[M\dot{\xi} =\mathrm{ad}_{\xi}^{\mathsf{T}}M\xi+f \tag{16}\] \[=-C(\xi)\xi-D(\xi)\xi+\tau_{c},\] \[\begin{bmatrix}\dot{R}&\dot{p}\\ 0&0\end{bmatrix}=\begin{bmatrix}R&p\\ 0&1\end{bmatrix}\begin{bmatrix}\omega^{\wedge}&v\\ 0&0\end{bmatrix}. \tag{17}\]
where \(f\in\mathfrak{g}^{*}\)1 is the external force applied to the body fixed principal axes, including damping force and control force, i.e., \(f:=-D(\xi)\xi+\tau_{c}\).
Footnote 1: Technically, quantities that depend on mass and inertial belong to the co-tangent space \(\mathfrak{g}^{*}\)[28].
Note that the variables \(\nu\) and \(\xi\) are equivalent, but they differ in the order of their elements. Consequently, the matrices \(M\), \(C\), and \(D\) have different orderings.
## III Geometric convex error-state MPC
We consider a desired trajectory in the Lie group \(\mathcal{G}\) as a function of time \(t\), denoted \(X_{d,t}\in\mathcal{G}\). We define the left-invariant error [29]\(\Psi\) and its dynamics as:
\[\Psi=X_{d,t}^{-1}X_{t}\in\mathcal{G}, \tag{18}\] \[\frac{d}{dt}\Psi=\frac{d}{dt}(X_{d,t}^{-1})X_{t}+X_{d,t}^{-1}\frac {d}{dt}X_{t}=\Psi_{t}\xi_{t}^{\wedge}-\xi_{d,t}^{\wedge}\Psi_{t}. \tag{19}\]
where \(\xi_{t}\) and \(\xi_{d,t}\) are the velocity vectors corresponding to \(X_{t}\) and \(X_{d,t}\), respectively.
To compare velocities from different reference frames, we use the transport adjoint map \(\mathrm{Ad}_{\Psi}\) and obtain the error dynamics as:
\[\dot{\Psi}=\Psi_{t}(\xi^{\wedge}-\Psi_{t}^{-1}\xi_{d,t}^{\wedge}\Psi_{t})= \Psi(\xi_{t}-\mathrm{Ad}_{\Psi_{t}^{-1}}\xi_{d,t})^{\wedge}. \tag{20}\]
We linearize the error and motion dynamics to establish a convex MPC framework. Given the first-order approximation of the exponential map, we define the error in the Lie algebra corresponding to \(\Psi_{t}\) as:
\[\Psi_{t}=\exp(\psi_{t}^{\wedge})\approx I+\psi_{t}^{\wedge}. \tag{21}\]
We then obtain the linearized error dynamics in the Lie algebra as:
\[\dot{\Psi}_{t}\approx\dot{\psi}_{t}^{\wedge}\approx(I+\psi_{t}^{\wedge})(\xi_ {t}-\mathrm{Ad}_{I-\psi_{t}^{\wedge}}\xi_{d,t})^{\wedge}. \tag{22}\]
Here, \(\psi_{t}\) is the corresponding error in the Lie algebra for \(\Psi_{t}\), and we use the property \(\mathrm{Ad}_{\Psi}=\exp(\mathrm{ad}_{\psi})\). Given a first-order approximation, \(\mathrm{Ad}_{I+\psi^{\wedge}}=I+\mathrm{ad}_{\psi}\). Finally, we obtain the linearized velocity error in the Lie algebra as:
\[\dot{\psi}_{t}=-\mathrm{ad}_{\xi_{d,t}}\psi_{t}+\xi_{t}-\xi_{d,t}. \tag{23}\]
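Written in matrix form, the linearized error dynamics (23) act on the stacked vector \([\psi_{t};\xi_{t}]\); a small sketch, reusing the ad helper from the earlier code, is:

```python
# Linearized error dynamics (23): psi_dot = -ad(xi_d) @ psi + xi - xi_d,
# expressed as psi_dot = A_err @ [psi; xi] + c_err. Reuses ad() from the sketch above.
import numpy as np

def error_dynamics(xi_d):
    xi_d = np.asarray(xi_d, dtype=float)
    A_err = np.hstack([-ad(xi_d), np.eye(6)])  # multiplies the stacked vector [psi; xi]
    c_err = -xi_d
    return A_err, c_err
```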
Since we now have a linear model for the error dynamics, we proceed with the linearization of the hydrodynamics described in (16). The linearization is performed around the operating point \(\bar{\xi}\):
\[M\dot{\xi} \approx-C(\bar{\xi})\bar{\xi}-D(\bar{\xi})\bar{\xi}\] \[\quad-\frac{\partial C(\xi)\xi}{\partial\xi}|_{\xi}(\xi-\bar{\xi} )-\frac{\partial D(\xi)\xi}{\partial\xi}|_{\xi}(\xi-\bar{\xi})+\tau_{c}\] \[=(-C(\bar{\xi})-D(\bar{\xi})-\frac{\partial C(\xi)\bar{\xi}}{ \partial\xi}-\frac{\partial D(\xi)\bar{\xi}}{\partial\xi})\xi \tag{24}\] \[\quad+(\frac{\partial C(\xi)\bar{\xi}}{\partial\xi}+\frac{ \partial D(\xi)\bar{\xi}}{\partial\xi})\bar{\xi}+\tau_{c}\] \[=H_{t}\xi+b_{t}.\]
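The analytic Jacobians derived next in (25) and (26) can be cross-checked numerically: a central finite-difference Jacobian of \(g(\xi)=-C(\xi)\xi-D(\xi)\xi\) around \(\bar{\xi}\) yields \(H_{t}\), and \(b_{t}=g(\bar{\xi})-H_{t}\bar{\xi}\) follows from the first-order expansion. The generic sketch below is ours and is not part of the paper's implementation.

```python
# Generic first-order model g(xi) ≈ H @ xi + b around xi_bar via central differences;
# with g(xi) = -C(xi) @ xi - D(xi) @ xi this reproduces H_t and b_t of (24).
import numpy as np

def linearize(g, xi_bar, eps=1e-6):
    xi_bar = np.asarray(xi_bar, dtype=float)
    n = xi_bar.size
    H = np.zeros((n, n))
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        H[:, i] = (g(xi_bar + d) - g(xi_bar - d)) / (2.0 * eps)
    b = g(xi_bar) - H @ xi_bar
    return H, b
```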
As shown in (6) and (7), \(C(\xi)\) and \(D(\xi)\) are generally represented as first-order functions of \(\xi\), so partial differentiation will yield a constant. In such cases,
\[\frac{\partial C(\xi)\bar{\xi}}{\partial\xi}\] \[=-\frac{\partial}{\partial\xi}\left(\begin{bmatrix}(M_{21}v+M_{22}\omega)^{\wedge}&(M_{11}v+M_{12}\omega)^{\wedge}\\ (M_{11}v+M_{12}\omega)^{\wedge}&0\end{bmatrix}\begin{bmatrix}\bar{\omega}\\ \bar{v}\end{bmatrix}\right)\] \[=\frac{\partial}{\partial\xi}\left(\begin{bmatrix}\bar{\omega}^{\wedge}M_{22}+\bar{v}^{\wedge}M_{12}&\bar{\omega}^{\wedge}M_{21}+\bar{v}^{\wedge}M_{11}\\ \bar{\omega}^{\wedge}M_{12}&\bar{\omega}^{\wedge}M_{11}\end{bmatrix}\xi\right)\] \[=\begin{bmatrix}\bar{\omega}^{\wedge}M_{22}+\bar{v}^{\wedge}M_{12}&\bar{\omega}^{\wedge}M_{21}+\bar{v}^{\wedge}M_{11}\\ \bar{\omega}^{\wedge}M_{12}&\bar{\omega}^{\wedge}M_{11}\end{bmatrix}, \tag{25}\]
\[\frac{\partial D(\xi)\bar{\xi}}{\partial\xi}=-\mathrm{diag}([X_{|u|u}|\bar{u}|,Y_{|v|v}|\bar{v}|,Z_{|w|w}|\bar{w}|,K_{|p|p}|\bar{p}|,M_{|q|q}|\bar{q}|,N_{|r|r}|\bar{r}|]^{\mathsf{T}}). \tag{26}\]
\[G_{t}=\begin{bmatrix}I&0\\ -\mathrm{ad}_{\xi_{d,t}}&0\end{bmatrix},d_{t}=\begin{bmatrix}0\\ \xi_{d,t}\end{bmatrix}, \tag{30}\]
the cost function is formulated as follows.
\[J=y_{t_{f}}^{\mathsf{T}}Py_{t_{f}}+\int_{t=0}^{t_{f}}(y_{t}^{\mathsf{T}}Qy_{t}+\tau_{t}^{\mathsf{T}}R\tau_{t})dt, \tag{31}\]
where \(t_{f}\) is the length of the prediction horizon time in MPC, and \(P\), \(Q\), and \(R\) are semi-positive definite cost matrices. After discretizing the system given the time step, we can construct a QP problem that can be solved efficiently using a QP solver such as OSQP [30].
**Problem 1**: _(Proposed MPC) Find \(u_{k}\in\mathfrak{g}^{*}\) such that_
\[\min_{u_{k}} y_{N}^{\mathsf{T}}Py_{N}+\sum_{k=1}^{N-1}y_{k}^{\mathsf{T}}Qy_{k}+u_{k}^{\mathsf{T}}Ru_{k}\] \[s.t. x_{k+1}=A_{k}x_{k}+B_{k}u_{k}+h_{k}\] \[x(0)=x_{0},\;\;k=0,1,...,N-1.\]
_where \(\mathfrak{g}^{*}\) is the cotangent space, \(A_{k}=I+A_{t}\Delta t\), \(B_{k}=B_{t}\Delta t\), and \(h_{k}=h_{t}\Delta t\)._
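A compact way to prototype Problem 1 is shown below. For brevity the sketch uses cvxpy, which can call OSQP as its backend, instead of interfacing with OSQP directly as in the paper; A, B, h are lists of the discretized matrices \(A_{k},B_{k},h_{k}\), and the weights and horizon are assumptions of the example.

```python
# Minimal receding-horizon sketch of Problem 1 using cvxpy (OSQP backend);
# this is an illustration, not the paper's implementation.
import cvxpy as cp
import numpy as np

def solve_error_state_mpc(A, B, h, Q, R, P, x0, N):
    n, m = B[0].shape
    x = cp.Variable((N + 1, n))
    u = cp.Variable((N, m))
    cost = cp.quad_form(x[N], P)          # terminal cost
    constraints = [x[0] == x0]
    for k in range(N):
        if k >= 1:
            cost += cp.quad_form(x[k], Q)  # running state cost
        cost += cp.quad_form(u[k], R)      # control effort cost
        constraints.append(x[k + 1] == A[k] @ x[k] + B[k] @ u[k] + h[k])
    prob = cp.Problem(cp.Minimize(cost), constraints)
    prob.solve(solver=cp.OSQP)
    return u.value[0]  # apply only the first input in receding-horizon fashion
```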
## IV Numerical Simulations
We evaluate the performance of our controller by applying it to a marine vehicle simulator. To thoroughly evaluate its robustness, we compare our controller with NMPC methods while considering the presence of external disturbances.
### _Surface vehicle dynamics model_
We validate the control performance of the proposed algorithm using Marine Systems Simulator [21]. The simulator has hydrodynamic models of various types of real-world vehicles, including the autonomous surface vehicle Otter. The Otter is a 2 m catamaran equipped with two propellers on the starboard and port sides, allowing it to achieve a maximum speed of 5.5 knots.
The Otter model is unique because it is defined in 6-DOF and accounts for hydrostatic forces, making it a suitable demonstration vehicle. In the hydrodynamics model of the Otter, the Coriolis matrices \(C_{RB}\) and \(C_{AM}\) are set differently. Specifically, the elements in \(C_{AM}\) related to the yaw angle and horizontal velocity are neglected. To handle this, we separately calculate the partial differentiations (25) of the Coriolis matrices. The damping matrix of the Otter model is set to be diagonal with an additional nonlinear term only for the yaw motion.
The Otter is an under-actuated system that requires controlling its position and orientation using only two control inputs, the rotational speeds of the two motors. The propulsion force generated by each motor is modeled as proportional to the square of the rotational speed, with the coefficient changing for reverse motion. Although this relationship is nonlinear, it is bijective, allowing us to use the propulsion forces as a control input.
\[\tau=Tu=\begin{bmatrix}0&0&l&1&0&0\\ 0&0&-l&1&0&0\end{bmatrix}^{\mathsf{T}}\begin{bmatrix}u_{1}\\ u_{2}\end{bmatrix}, \tag{32}\]
where \(u_{1}\) and \(u_{2}\) represent the port and starboard thrust forces, respectively, and \(l\) denotes the distance from the thrusters to the center of gravity along the y-axis.
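The thrust configuration in (32) and a simple least-squares allocation can be sketched as follows; the nonlinear map from thrust force to motor speed is omitted here, and the helper names are ours.

```python
# Thrust configuration matrix of (32) (columns: port, starboard), in the
# [omega; v] ordering used for xi, plus a least-squares allocation of a
# desired generalized force. Names are ours; this is an illustration only.
import numpy as np

def thrust_matrix(l):
    T = np.zeros((6, 2))
    T[2, 0], T[2, 1] = l, -l      # yaw moment from the lateral thruster offset
    T[3, 0], T[3, 1] = 1.0, 1.0   # surge force
    return T

def allocate(tau_desired, l):
    T = thrust_matrix(l)
    u, *_ = np.linalg.lstsq(T, tau_desired, rcond=None)
    return u  # [u1, u2]: port and starboard thrust forces
```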
### _Nonlinear MPC_
Due to the intricate hydrodynamics model of the vehicle, a controller generally requires nonlinear optimization, which can be time-consuming. In contrast, the proposed geometric MPC achieves computational efficiency by formulating the problem as a convex QP. However, the linearization employed in the proposed algorithm can degrade control performance. Therefore, we consider NMPC as the baseline and regard its tracking performance as the benchmark for comparison with our method.
The NMPC algorithm was implemented using CasADi [31], which is an open-source software tool for nonlinear optimization and algorithmic differentiation. We distinguish between two forms of NMPC: the original form, referred to as _NMPC_, which considers hydrostatic forces as per (8); and _NMPC-simple_, which utilizes a simplified model (9) equivalent to the model employed in the proposed method. We assume that no information pertaining to external disturbances, such as ocean current speed, is available. Therefore, we use the model with \(\nu_{c}=0\) to optimize the control inputs.
Let the error variables be denoted as \(z_{t}=([\eta_{d,t}^{\mathsf{T}},\xi_{d,t}^{\mathsf{T}}]-[\eta_{t}^{\mathsf{T} },\xi_{t}^{\mathsf{T}}])^{\mathsf{T}}\), then _NMPC_ is defined as follows:
**Problem 2**: _(NMPC) Find \(u_{k}\in\mathbb{R}\) such that_
\[\min_{u_{k}} z_{N}^{\mathsf{T}}Pz_{N}+\sum_{k=1}^{N-1}z_{k}^{\mathsf{T}}Qz_{k}+u_{k}^{\mathsf{T}}Ru_{k}\] \[s.t. \xi_{k+1}=\xi_{k}+\Delta t\,M^{-1}(-C(\xi_{k})\xi_{k}-D(\xi_{k})\xi_{k}-g(\eta_{k})+Tu_{k})\] \[\eta_{k+1}=\eta_{k}+\Delta t\,J_{\Theta}(\eta_{k})\xi_{k}\] \[\eta(0)=\eta_{0},\;\;\xi(0)=\xi_{0},\;\;k=0,1,...,N-1.\]
_NMPC-simple_ is an equivalent optimization problem, but it does not take into account the hydrostatic term \(g(\eta)\).
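For completeness, a rough structure of the NMPC baseline using CasADi's Opti stack is sketched below. The dynamics callbacks f_xi and f_eta (returning \(\dot{\xi}\) and \(\dot{\eta}\)), the explicit-Euler rollout, and the weight matrices are assumptions of this sketch; the paper's actual implementation may differ.

```python
# Rough sketch of Problem 2 with CasADi's Opti stack; structure only, not the
# paper's implementation. f_xi(eta, xi, u) and f_eta(eta, xi) are user-supplied
# CasADi-compatible callbacks returning xi_dot and eta_dot.
import casadi as ca

def build_nmpc(f_xi, f_eta, N, dt, Q, R, P):
    opti = ca.Opti()
    eta = opti.variable(6, N + 1)      # pose trajectory
    xi = opti.variable(6, N + 1)       # body-velocity trajectory
    u = opti.variable(2, N)            # thruster forces
    eta0, xi0 = opti.parameter(6), opti.parameter(6)
    eta_ref, xi_ref = opti.parameter(6, N + 1), opti.parameter(6, N + 1)

    cost = 0
    for k in range(N):
        z = ca.vertcat(eta_ref[:, k] - eta[:, k], xi_ref[:, k] - xi[:, k])
        cost += ca.mtimes([z.T, Q, z]) + ca.mtimes([u[:, k].T, R, u[:, k]])
        # explicit-Euler rollout of the hydrodynamics and kinematics
        opti.subject_to(xi[:, k + 1] == xi[:, k] + dt * f_xi(eta[:, k], xi[:, k], u[:, k]))
        opti.subject_to(eta[:, k + 1] == eta[:, k] + dt * f_eta(eta[:, k], xi[:, k]))
    zN = ca.vertcat(eta_ref[:, N] - eta[:, N], xi_ref[:, N] - xi[:, N])
    cost += ca.mtimes([zN.T, P, zN])

    opti.subject_to(eta[:, 0] == eta0)
    opti.subject_to(xi[:, 0] == xi0)
    opti.minimize(cost)
    opti.solver("ipopt")
    return opti, (eta0, xi0, eta_ref, xi_ref), u
```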
### _Simulation setup_
The control frequency of the MPC and its sampling time were set to 20 Hz and 0.05 seconds, respectively, while the simulation frequency was set to 80 Hz to capture the nonlinear dynamics of the vehicle accurately. The MPC horizon length is chosen as 100 steps (i.e., \(N=100\)), which corresponds to predicting the trajectory 5 seconds into the future. Each simulation episode has a duration of 60 seconds. The MPC cost weights \(P,Q,R\) are carefully adjusted to facilitate smooth tracking of the desired trajectory with minimal steady-state error. We empirically found that while position error plays a significant role in achieving accurate tracking, considering angle and velocity errors helps prevent divergence and fluctuations.
To assess the control performance of the vehicle, we evaluated the tracking accuracy during zigzag and constant turning maneuvers. The initial position and orientation of the vehicle are randomly placed within a 5 m radius. The desired trajectory has a surge speed of 0.5 m/s. For turning motion, a desired yaw velocity is 0.1 rad/s, and for zigzag motion, a
desired yaw velocity is designed using a sinusoidal function, \(0.1\sin(t/5)\) rad/s.
All simulations were conducted with 10 random Monte Carlo tests and were performed using a Geforce GTX 1050 Ti, Intel(R) Core i7-8850H CPU @ 2.60 GHz \(\times\) 12, a memory of 64 GB, and Ubuntu 22.04.
### _Trajectory tracking in a current-free environment_
In a current-free environment, the known dynamics model used in MPC is equivalent to the simulation model except for the difference in sampling time. Figure 3 illustrates the results of trajectory tracking control, aligned with the desired zigzag and turning trajectory. Direct optimization using precise nonlinear dynamics, such as _NMPC_ and _NMPC-simple_, yields accurate optimization and rapid convergence. The proposed algorithm, which employs first-order approximations in the dynamics and error dynamics models, follows a roundabout path. Once convergence is achieved, the algorithm successfully tracks the predetermined reference path, similar to the NMPC methods.
Fig. 4 shows the positional errors over time for 10 episodes in each maneuver. Both _NMPC_ and _NMPC-simple_ achieve fast convergence, and their performance is nearly indistinguishable. The proposed algorithm performs similarly to NMPC methods in cases with small initial errors. However, for larger initial errors, the algorithm exhibits some overshoot, resulting in longer stabilization times. Nevertheless, the algorithm converges within 30 seconds in all cases and successfully performs trajectory tracking. The _proposed MPC_ and _NMPC-simple_ exhibit a small position error bias since they do not use perfect dynamics, although the bias is marginal, less than 0.1 m.
### _Trajectory tracking in a current-carrying environment_
To validate the applicability of the proposed algorithm in the presence of an external disturbance, experiments were conducted in an environment with the ocean current. The current is assumed to have a constant direction with speeds ranging from 0 to 0.5 m/s in 0.1 m/s increments. Despite the controller's lack of knowledge about the ocean current information, the iterative feedback of MPC enables robust system control in response to the disturbance. Fig. 5 and 6 depict the trajectories generated by each controller and tracking errors of 10 episodes, respectively, at an ocean current speed of 0.5 m/s.
As observed in previous control results without the ocean current, NMPC methods exhibit aggressive turning compared to the _Proposed MPC_, enabling them to converge to the desired path quickly. However, this aggressive turning behavior can lead to fluctuations in the presence of modeling gaps. _NMPC-simple_, which utilizes a less accurate dynamics model, deviates more in terms of the vehicle's position and orientation from the desired path. When the orientation error becomes significant, it makes aggressive adjustments to the vehicle's position, resulting in a fluctuated trajectory. In contrast, although _Proposed MPC_ also employs an equivalent approximated model, it generates a smoother trajectory by utilizing linearized dynamics, which is desirable in practical applications.
Figure 7 illustrates the average final position error in the turning maneuver at each ocean current speed. The position
Fig. 4: Tracking errors of the controllers evaluated in a current-free environment using 10 randomly sampled initial states for (a) zigzag motion and (b) turning motion. Our controller may exhibit overshoot; however, the performance difference is negligible once it converges.
Fig. 3: Simulation results in a current-free environment for Otter USV trajectory tracking in (a) zigzag motion and (b) turning motion. The reference path is represented by the solid black line, which includes the desired orientation and velocity over time. The controlled trajectories according to each method are represented by the dashed lines. The proposed controller exhibits less sharp turning compared to NMPC controllers, where rapid turning motion requires reducing surge speed. However, after the tracking error converges, the vehicle accurately follows the reference path.
error at the final position increases with the current speed except for _NMPC-simple_. As _NMPC-simple_ produces a fluctuating vehicle trajectory, its final position error is not significantly affected by the ocean current. _Proposed MPC_ exhibits slightly higher errors compared to _NMPC_. Nevertheless, given the low maneuverability of the marine vehicle, the fact that the maximum position error does not exceed 0.4 m even at a high ocean current speed of 0.5 m/s indicates that the proposed algorithm achieves sufficiently high control performance.
While the tracking performance of the proposed MPC is comparable to that of NMPC, the proposed algorithm provides a significant computational efficiency advantage. Table I presents the average time taken for each optimization. The optimization time encompasses all the processes necessary for generating control inputs, including formulating a problem, constructing a solver, and performing optimization. In the case of the _Proposed MPC_ with OSQP, a solver needs to be built in each iteration, whereas NMPC methods utilizing CasADi build the problem once and update problem parameters to minimize the overall computation time. In NMPC methods, a warm start that utilizes the solution from the previous iteration as an initial guess is employed to expedite the optimization process. The proposed algorithm can run at 20 Hz, while NMPC methods can run at 1 to 2 Hz. _Proposed MPC_ requires approximately 10 times less computation time compared to NMPC methods, even when using simplified dynamics. This indicates that using NMPC is difficult for real-time control in such long-horizon prediction problems.
### _Discussions_
The proposed algorithm leverages the geometric properties of the Lie group to define error dynamics and construct a convex error-state MPC for rapid control optimization. Although it exhibits slightly inferior trajectory tracking performance compared to NMPC, it offers significant reductions in
Fig. 5: Simulation results under an ocean current speed of 0.5 m/s for (a) zigzag motion and (b) turning motion. _NMPC_ shows a smooth trajectory with low steady-state error. While both _proposed MPC_ and _NMPC-simple_ employ the same approximated dynamics model, _proposed MPC_ generates a smoother trajectory during turning maneuvers.
TABLE I: Average computation time (ms) for a single optimization.

| Current | Proposed MPC | NMPC | NMPC-simple |
| --- | --- | --- | --- |
| 0 m/s | **49** (±2) | 764 (±13) | 478 (±12) |
| 0.5 m/s | **50** (±1) | 953 (±125) | 460 (±11) |
Fig. 6: Tracking errors of the controllers under an ocean current speed of 0.5 m/s for (a) zigzag motion and (b) turning motion. Although the external disturbance is not accounted for in MPC algorithms, MPC can robustly respond to the disturbance and successfully track the desired trajectory, albeit with some steady-state errors.
Fig. 7: Average final position error in the turning motion when a constant ocean current occurs. As the speed of the ocean current increases, the final position error increases. The final error varies depending on the angle of encounter with the current; thus, these results are reported for the encounter angle that maximizes the final error. _Proposed MPC_ demonstrates the ability to control within an appropriate error range, like _NMPC_, even at relatively high ocean current speeds.
2306.08647 | Language to Rewards for Robotic Skill Synthesis | Large language models (LLMs) have demonstrated exciting progress in acquiring
diverse new capabilities through in-context learning, ranging from logical
reasoning to code-writing. Robotics researchers have also explored using LLMs
to advance the capabilities of robotic control. However, since low-level robot
actions are hardware-dependent and underrepresented in LLM training corpora,
existing efforts in applying LLMs to robotics have largely treated LLMs as
semantic planners or relied on human-engineered control primitives to interface
with the robot. On the other hand, reward functions are shown to be flexible
representations that can be optimized for control policies to achieve diverse
tasks, while their semantic richness makes them suitable to be specified by
LLMs. In this work, we introduce a new paradigm that harnesses this realization
by utilizing LLMs to define reward parameters that can be optimized and
accomplish a variety of robotic tasks. Using reward as the intermediate interface
generated by LLMs, we can effectively bridge the gap between high-level
language instructions or corrections to low-level robot actions. Meanwhile,
combining this with a real-time optimizer, MuJoCo MPC, empowers an interactive
behavior creation experience where users can immediately observe the results
and provide feedback to the system. To systematically evaluate the performance
of our proposed method, we designed a total of 17 tasks for a simulated
quadruped robot and a dexterous manipulator robot. We demonstrate that our
proposed method reliably tackles 90% of the designed tasks, while a baseline
using primitive skills as the interface with Code-as-policies achieves 50% of
the tasks. We further validated our method on a real robot arm where complex
manipulation skills such as non-prehensile pushing emerge through our
interactive system. | Wenhao Yu, Nimrod Gileadi, Chuyuan Fu, Sean Kirmani, Kuang-Huei Lee, Montse Gonzalez Arenas, Hao-Tien Lewis Chiang, Tom Erez, Leonard Hasenclever, Jan Humplik, Brian Ichter, Ted Xiao, Peng Xu, Andy Zeng, Tingnan Zhang, Nicolas Heess, Dorsa Sadigh, Jie Tan, Yuval Tassa, Fei Xia | 2023-06-14T17:27:10Z | http://arxiv.org/abs/2306.08647v2 | # Language to Rewards for Robotic Skill Synthesis
###### Abstract
Large language models (LLMs) have demonstrated exciting progress in acquiring diverse new capabilities through in-context learning, ranging from logical reasoning to code-writing. Robotics researchers have also explored using LLMs to advance the capabilities of robotic control. However, since low-level robot actions are hardware-dependent and underrepresented in LLM training corpora, existing efforts in applying LLMs to robotics have largely treated LLMs as semantic planners or relied on human-engineered control primitives to interface with the robot. On the other hand, reward functions are shown to be flexible representations that can be optimized for control policies to achieve diverse tasks, while their semantic richness makes them suitable to be specified by LLMs. In this work, we introduce a new paradigm that harnesses this realization by utilizing LLMs to define reward parameters that can be optimized and accomplish a variety of robotic tasks. Using reward as the intermediate interface generated by LLMs, we can effectively bridge the gap between high-level language instructions or corrections to low-level robot actions. Meanwhile, combining this with a real-time optimizer, MuJoCo MPC, empowers an interactive behavior creation experience where users can immediately observe the results and provide feedback to the system. To systematically evaluate the performance of our proposed method, we designed a total of 17 tasks for a simulated quadruped robot and a dexterous manipulator robot. We demonstrate that our proposed method reliably tackles \(90\%\) of the designed tasks, while a baseline using primitive skills as the interface with Code-as-policies achieves \(50\%\) of the tasks. We further validated our method on a real robot arm where complex manipulation skills such as non-prehensile pushing emerge through our interactive system.
Large language model (LLM), Legged locomotion, Dexterous manipulation
## 1 Introduction
The recent advancements in large language models (LLMs) pretrained on extensive internet data [1; 2] has revolutionized the ability to interpret and act on user inputs in natural language. These LLMs exhibit remarkable adaptability to new contexts (such as APIs [3], task descriptions [4], or textual feedback [5]), allowing for tasks ranging from logical reasoning [6; 7] to code generation [8] with minimal hand-crafted examples.
These diverse applications have extended to the field of robotics as well, where substantial progress has been made in using LLMs to drive robot behaviors [3; 5; 4; 9; 10; 11]: from step-by-step planning [4; 9; 12], goal-oriented dialogue [10; 11], to robot-code-writing agents [3; 13]. While these methods impart new modes of compositional generalization, they focus on using language to concatenate together new behaviors from an existing library of control primitives that are either manually-engineered or learned a priori. Despite having internal knowledge about robot motions, LLMs struggle with directly outputting low-level robot commands due to the limited availability of relevant training data (Fig. 1). As a result, the expressiveness of these methods is bottlenecked by the breadth of the available primitives, the design of which often requires extensive expert knowledge or massive data collection [14; 15; 16].
To tackle these challenges, we need to operate at a level of abstraction that allows harnessing the intuitive and interactive capabilities offered by LLMs. Our key insight is to leverage reward functions as an interface that
bridges the gap between language and low-level robot actions. This is motivated by the fact that language instructions from humans often tend to describe behavioral outcomes instead of low-level behavioral details (e.g. "robot standing up" versus "applying 15 Nm to hip motor"), and therefore we posit that it would be easier to connect instructions to rewards than low-level actions given the richness of semantics in rewards. In addition, reward terms are usually modular and compositional, which enables concise representations of complex behaviors, goals, and constraints. This modularity further creates an opportunity for the user to interactively steer the robot behavior. However, in many previous works in reinforcement learning (RL) or model predictive control (MPC), manual reward design requires extensive domain expertise [17; 18; 19]. While reward design can be automated, these techniques are sample-inefficient and still require manual specification of an objective indicator function for each task [20]. This points to a missing link between the reward structures and task specification, which is often expressed in natural language. As such, we propose to utilize LLMs to automatically generate rewards, and leverage online optimization techniques to solve them. Concretely, we explore the code-writing capabilities of LLMs to translate task semantics to reward functions, and use MuJoCo MPC, a real-time optimization tool, to synthesize robot behavior in real-time [21]. Thus, reward functions generated by LLMs can enable non-technical users to generate and steer novel and intricate robot behaviors without the need for vast amounts of data or the expertise to engineer low-level primitives.
The idea of grounding language to reward has been explored by prior work for extracting user preferences and task knowledge [22; 23; 24; 25; 26; 27]. Despite promising results, they usually require training data to learn the mapping between language to reward. With our proposed method, we enable a data efficient interactive system where the human engages in a dialogue with the LLM to guide the generation of rewards and, consequently, robot behaviors (Fig. 1).
Across a span of 17 control problems on a simulated quadruped and a dexterous manipulator robot, we show that this formulation delivers diverse and challenging locomotion and manipulation skills. Examples include getting a quadruped robot to stand up, asking it to do a moonwalk, or tasking a manipulator with dexterous hand to open a faucet. We perform a large-scale evaluation to measure the overall performance of our proposed method. We compare our method to a baseline that uses a fixed set of primitive skills and an alternative formulation of grounding language to reward. We show that our proposed formulation can solve \(40\%\) more skills than baselines and is more stable in solving individual skills. We further deploy our approach to a real robot manipulator and demonstrate complex manipulation skills through language instructions.
## 2 Related Work
Here we discuss relevant prior work that reason about language instructions to generate robot actions, code, or rewards to ground natural language to low-level robotic control. We then discuss work focused on responding to interactive human feedback such as language corrections.
**Language to Actions.** Directly predicting low-level control actions based on a language instruction has been studied using various robot learning frameworks. Early work in the language community studied mapping templated language to controllers with temporal logic [28] or learning a parser to motion prim
Figure 1: LLMs have some internal knowledge about robot motions, but cannot directly translate them into actions (left). Low-level action code can be executed on robots, but LLMs know little about them (mid). We attempt to bridge this gap by proposing a system (right) consisting of a Reward Translator that interprets the user input and transforms it into a reward specification. The reward specification is then consumed by a Motion Controller that interactively synthesizes a robot motion which optimizes the given reward.
itives [29], while more recent work utilizes end-to-end models that produce actions conditioned on natural language descriptions. One example is instruction-following methods in navigation [30]. However, they often assume low-dimensional actions navigating from one node of the graph to another [30; 31]. To extend the end-to-end approaches to manipulation, a common approach is to utilize latent embeddings of language commands as multitask input context, and train with behavioral cloning [14; 32; 16], offline reinforcement learning [33], goal-conditioned reinforcement learning [34], or in a shared autonomy paradigm [35]. While end-to-end trained policies can be performant, they require a significant amount of data in the form of offline datasets or online environment interaction. In contrast, we study a less data-hungry approach where low-level actions are not directly produced by an end-to-end policy but instead by an optimal controller.
**Language to Code.** Code generation models have been widely studied both in and outside robotics context [36; 8; 37]. The capabilities of those models range from solving coding competition questions [38] and benchmarks [39], to drawing simple figures [40], generating policies that solve 2D tasks [41], and complex instruction following tasks [3]. In this work, we study LLMs for generating code for reward functions, and show that the expression of the rewards can lead to expressive low-level policies.
**Language to Rewards.** The idea of translating natural language instructions to rewards has been explored by several prior works [26; 23; 25; 42; 22; 43; 27]. A common strategy in this direction is to train domain-specific reward models that map language instructions to reward values [23; 22; 42] or constraints [25]. Although these methods can achieve challenging language-conditioned robotic tasks such as object pushing [25] and drawer opening [42], they require considerable language-labeled data to train the reward model. Recent works investigated using LLMs directly as a reward function for inferring user intentions in negotiation games or collaborative human-AI interaction games [26; 27]. By leveraging LLMs to assign reward values during RL training, they demonstrate training agents that are aligned with user intentions and preferences without explicit reward modeling. However, these works receive reward values of rollouts when training RL policies, which requires a large number of queries to LLMs during training. In contrast, we leverage LLMs to produce a parameterized reward function that can then be optimized. A similar direction to this work is automated parameterization of reward functions, which has been explored in AutoRL [20]; however, it does not provide a language interface.
**Incorporating Iterative Human Feedback.** Correcting plans with iterative language feedback has also been explored in the past. Broad et al. enable efficient online corrections using distributed correspondence graphs to ground language [44]. However, this work relies on a semantic parser with pre-defined mappings to ground language corrections. More end-to-end approaches have also demonstrated learning a language-correction-conditioned policy, but they are similarly data hungry and thus fall back to shared autonomy to reduce complexity [45]. Later work explores mapping language corrections to composable cost functions, similar to our work, by training a prediction model from demonstrations and applying trajectory optimization to perform control [25]. Follow-up work further simplifies the system by integrating language corrections that directly modify the waypoints of a trajectory, using extensive datasets of paired corrections and demonstrations [46; 47]. In contrast to these prior works, we demonstrate a flexible and data-efficient approach that leverages LLMs to allow for multi-step correction of reward functions based on human feedback.
## 3 Grounding Language to Actions Using Rewards
### Background and Reward Interface
Our system takes user instruction in natural language and synthesizes corresponding robot motions by leveraging reward function as the interface to communicate with low-level controllers. We define the reward function in the context of Markov Decision Process (MDP), commonly used to formulate robot control problems: \((S,A,R,P,p_{0})\), where \(S\) is the state space, \(A\) is the action space, \(R\colon S\times A\mapsto\mathbb{R}\) is the reward function, \(P\colon S\times A\mapsto S\) is the dynamics equation, and \(p_{0}\) is the initial state distribution. Given a reward function \(R\), an optimal controller finds a sequence of actions \(\mathbf{a}_{1:H}=\{\mathbf{a}_{1},\ldots,\mathbf{a}_{H}\}\) that maximizes the expected accumulated reward: \(J(\mathbf{a}_{1:H})=\mathbb{E}_{\tau=(\mathbf{s}_{0},\mathbf{a}_{0},\ldots, \mathbf{s}_{H})}\sum_{t=0}^{H}R(\mathbf{s}_{t},\mathbf{a}_{t})\), where \(H\) is the rollout horizon.
In this work, we assume the reward takes a particular form, suitable for use with MJPC (see below). The reward is the sum of a set of individual terms:
\[R(\mathbf{s},\mathbf{a})=-\sum_{i=0}^{M}w_{i}\cdot\mathbf{n}_{i}\big{(}r_{i}( \mathbf{s},\mathbf{a},\psi_{i})\big{)}, \tag{1}\]
where \(w\in\mathbf{R}_{+}\) is a non-negative weight, \(\mathbf{n}(\cdot):\mathbf{R}\rightarrow\mathbf{R}_{+}\) is a twice-differentiable norm that takes its minimum at \(0\), \(r\in\mathbf{R}\) is a residual term that achieves optimality when \(r=0\), and \(\psi_{i}\) is the parameters of
the \(i\)-th residual term. For example, if we want to have the robot raise its body height \(h\) to a desired height, we may design a residual term \(r_{h}(h,\psi)\!=\!h\!-\!\psi\), where the reward parameter \(\psi\) denotes the desired height, and use the l2 norm to construct the final reward function: \(R_{h}=-w\|r_{h}\|_{2}\). In principle, one may design task-specific residual terms that can solve particular control tasks. However, designing these residuals requires domain expertise and may not generalize to novel tasks. In this work, we use a set of generic and simple residual terms, and leverage the power of LLMs to compose different terms to generate complex behaviors. The full set of residual terms used in this work can be found in Appendix A.6.
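To make the composition in Eq. (1) concrete, the following sketch assembles a total reward from simple residual terms in Python. The residuals, weights, and l2 norm below are illustrative placeholders rather than the exact terms used in this work (those are listed in Appendix A.6).

```python
import numpy as np

def l2_norm(r):
    """Twice-differentiable norm n(.) that takes its minimum at r = 0."""
    r = np.atleast_1d(r)
    return float(np.dot(r, r))

def height_residual(state, psi):
    """r_h(h, psi) = h - psi: zero when the body height equals the target psi."""
    return state["body_height"] - psi

def velocity_residual(state, psi):
    """Residual on forward velocity; zero at the desired speed psi."""
    return state["forward_velocity"] - psi

def total_reward(state, terms):
    """Eq. (1): R(s, a) = -sum_i w_i * n_i(r_i(s, a, psi_i))."""
    return -sum(w * l2_norm(residual(state, psi)) for residual, psi, w in terms)

# Hypothetical usage: raise the body to 0.3 m while standing still.
state = {"body_height": 0.22, "forward_velocity": 0.05}
terms = [(height_residual, 0.3, 1.0), (velocity_residual, 0.0, 0.5)]
print(total_reward(state, terms))
```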
Our proposed system consists of two key components (Fig. 1 right): i) a _Reward Translator_, built upon pre-trained Large Language Models (LLMs) [10], that interacts with and understands user intents and modulates all reward parameters \(\psi\) and weights \(w\), and ii) a _Motion Controller_, based on MuJoCo MPC [21], that takes the generated reward and interactively optimizes the action sequence \(\mathbf{a}_{1:H}\). Below we provide more details on the design of the Reward Translator and the Motion Controller.
### Reward Translator
Inspired by recent progress on Large Language Models (LLMs), we propose to build the Reward Translator based on LLMs to map user interactions to reward functions corresponding to the desired robot motion. As reward tuning is highly domain-specific and requires expert knowledge, it is unsurprising that LLMs trained on generic language datasets (e.g. [1]) cannot directly generate a reward for specific hardware. Instead, we explore the in-context learning ability of LLMs to achieve this goal, inspired by prior work that demonstrated a variety of in-context learning skills for LLMs [2; 48]. Furthermore, we decompose the problem of language to reward into two stages: motion description and reward coding, as illustrated in Fig. 2.
**Motion Description** In the first stage, we design a _Motion Descriptor_ LLM that interprets and expands the user input into a natural language description of the desired robot motion following a pre-defined template (see example in Fig. 2). Although it is possible for LLMs to directly generate reasonable reward functions for relatively simple tasks, this often fails for tasks that necessitate complex reasoning. On the other hand, as observed in Fig. 1 left, LLMs can describe complex motions in detailed natural language successfully.
Inspired by this observation, we design a template that describes common movements of a robot (see Fig. 2 top right for an example of the template and the prompt for the LLM) to effectively harness LLMs' internal knowledge about motions. The role of the _Motion Descriptor_ is to complete the provided template (e.g., replacing certain elements such as CHOICE and NUM in the example). This helps the _Motion Descriptor_ produce more structured and predictable outputs and improves the stability of the overall system. In addition, as we are describing the motion in natural language, we do not need to provide any specific examples in the prompt and can rely entirely on LLMs to generate the result.
**Reward Coding** In the second stage, we translate the generated motion description into the reward function using a second LLM. We formulate the problem of language to reward function as a code-writing
Figure 2: Detailed dataflow of the Reward Translator. A _Motion Descriptor_ LLM takes the user input and describes the user-specified motion in natural language, and a _Reward Coder_ translates the motion into the reward parameters.
task to benefit from the LLMs' knowledge of coding and code structure, thus we name the second LLM the _Reward Coder_. We design a prompt for instructing the LLM to generate reward-specifying code (see example in Fig. 2 bottom right). The prompt consists of three parts: i) a description of the reward APIs that the LLM can call to specify different parameters of the reward function, ii) an example response that we expect the _Reward Coder_ to produce, and iii) the constraints and rules that the _Reward Coder_ needs to follow. Note that the example is to demonstrate to the LLM what the response should look like, instead of teaching it how to perform a specific task. As such, the _Reward Coder_ needs to specify the reward parameters based on its own knowledge about the motion from the natural language description.
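A minimal sketch of the resulting two-stage pipeline is shown below. The function `call_llm` stands in for whatever LLM completion API is used, and the prompts are heavily abbreviated stand-ins for the full templates; none of these names are part of the actual implementation.

```python
def reward_translator(user_instruction, call_llm):
    """Two-stage Reward Translator: Motion Descriptor -> Reward Coder.

    `call_llm(prompt) -> str` is a placeholder for an LLM completion API.
    """
    # Stage 1: expand the instruction into a structured motion description
    # by filling a pre-defined template (CHOICE/NUM slots, etc.).
    descriptor_prompt = (
        "Describe the desired robot motion by completing the template below.\n"
        "[template with CHOICE and NUM placeholders]\n"
        f"Instruction: {user_instruction}\n"
    )
    motion_description = call_llm(descriptor_prompt)

    # Stage 2: translate the motion description into reward-specifying code,
    # given the reward API description, one example response, and the rules.
    coder_prompt = (
        "[description of the reward-setting API]\n"
        "[one example response]\n"
        "[constraints and rules]\n"
        f"Motion description:\n{motion_description}\n"
    )
    reward_code = call_llm(coder_prompt)
    return motion_description, reward_code
```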
### Motion Controller
The Motion Controller needs to map the reward function generated by the Reward Translator to low-level robot actions \(\mathbf{a}_{1:H}\) that maximize the accumulated reward \(J(\mathbf{a}_{1:H})\). There are a few possible ways to achieve this, including using reinforcement learning (RL), offline trajectory optimization, or, as in this work, receding horizon trajectory optimization, i.e., model predictive control (MPC). At each control step, MPC plans a sequence of optimized actions \(\mathbf{a}_{1:H}\) and sends it to the robot. The robot applies the action corresponding to its current timestamp, advances to the next step, and sends the updated robot states to the MJPC planner to initiate the next planning cycle. The frequent re-planning in MPC empowers its robustness to uncertainties in the system and, importantly, enables interactive motion synthesis and correction. Specifically, we use an open-source implementation based on the MuJoCo simulator [49], MJPC [21]. MJPC has demonstrated the interactive creation of diverse behaviors such as legged locomotion, grasping, and finger-gaiting while supporting multiple planning algorithms, such as iLQG and Predictive Sampling. Following the observation by Howell et al. [21] that second-order planners such as iLQG produce smoother and more accurate actions, while zeroth-order planners such as Predictive Sampling are better at exploring non-smooth optimization landscapes, we use iLQG for the legged locomotion tasks and Predictive Sampling for the manipulation tasks in this work.
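The receding-horizon loop described above can be sketched as follows; the `planner` and `robot` interfaces are hypothetical placeholders rather than the actual MJPC API, and the sketch only illustrates the plan-apply-replan cycle.

```python
def receding_horizon_control(robot, planner, reward_fn, horizon, n_steps):
    """Receding-horizon (MPC) control loop.

    Assumed (hypothetical) interfaces:
      planner.plan(state, reward_fn, horizon) -> action sequence a_{1:H}
      robot.step(action) -> next state
      robot.get_state() -> current state
    """
    state = robot.get_state()
    for _ in range(n_steps):
        # Plan an optimized action sequence for the current reward.
        actions = planner.plan(state, reward_fn, horizon)
        # Apply only the first action, then re-plan from the updated state.
        state = robot.step(actions[0])
    return state
```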
## 4 Experiments
We design experiments to answer the following questions:
1) Is our proposed method, by combining LLMs and MJPC, able to generate diverse and complex robot motions through natural language interface?
2) Does interfacing with the reward function result in a more expressive pipeline than interfacing directly with low-level or primitive actions, and is the Motion Descriptor necessary for achieving reliable performance?
3) Can our method be applied to real robot hardware?
### Experiment Setup
We evaluate our approach on two simulated robotic systems: a quadruped robot, and a dexterous robot manipulator (Fig. 3). Both robots are modeled and simulated in MuJoCo MPC [21]. In all experiments we use GPT-4 as the underlying LLM module [50]. Here we describe the key setups of each robot. More details regarding the full prompts and reward function can be found in Appendix A.5 and A.6.
Quadruped RobotIn this example, we demonstrate using our system to command a four legged robot (Fig. 3 (a)) to perform a variety of motor skills. The quadruped robot has \(12\) joints, \(3\) on each leg. Quadruped robots have been demonstrated to perform a large variety of skills including locomotion [40], hopping [18], biped standing [51; 52], parkour [53], etc. We apply our system to the quadruped robot to perform a similar suite of skills while only using natural language as input.
LLM generates a plan for the robot motion using a set of pre-defined robot primitive skills instead of reward functions. For the Code-as-Policies (CaP) baseline, we design the primitive skills based on common commands available to the robot. Due to limited space we put the full list of primitives in Appendix A.3.
### Tasks
We design nine tasks for the quadruped robot and eight tasks for the dexterous manipulator to evaluate the performance of our system. Fig. 3 shows samples of the tasks. The full list of tasks can be found in Appendix A.2. Videos of sampled tasks can also be found in supplementary video and project website 4.
Footnote 4: language-to-reward.github.io
For the quadruped robot, the tasks can be categorized into four types: _1) Heading direction control_, where the system needs to interpret indirect instructions about the robot's heading direction and to control the robot to face the right direction (e.g., identify the direction of sunrise or sunset). _2) Body pose control_, where we evaluate the ability of the system to understand and process commands to have the robot reach different body poses, inspired by common commands issued by humans to dogs such as sit and roll over. _3) Limb control_, where we task the system to lift a particular foot of the robot. Furthermore, we also test the ability of the system to take additional instructions to modify an existing skill, such as turning in place with lifted feet. _4) Locomotion styles_, where we evaluate our proposed system in generating different locomotion styles. In particular, we design a challenging task of having the quadruped stand up on its two back feet to walk in a bipedal mode.
For the dexterous manipulator, we design tasks to test its ability to achieve different types of interactions with objects such as lifting, moving, and re-orienting. We test the system on a diverse set of objects with significantly different shapes and sizes (Fig. 3) for each task. We further include two tasks that involve interacting with articulated objects of different joint types.
### Evaluation results
For each task and method considered, we generate 10 responses from the Reward Translator, each evaluated in MJPC \(50\) times, so that we measure the end-to-end stability of the full pipeline. Fig. 4 shows the results for both robots. Our proposed approach achieves significantly higher success rates for \(11/17\) task categories and comparable performance for the remaining tasks, showing the effectiveness and reliability of the proposed method.
Figure 3: The two robots used in our experiments and sampled tasks. (a) a Quadruped robot with 12 DoFs. (b) a dexterous manipulator robot with 27 DoFs. (c) example rollouts produced by our algorithm.
When compared to the CaP baseline, our method achieves better success rates in almost all tasks. This is because CaP can perform well on tasks that can be expressed by the given primitives (e.g. Touch object) or are very close to the given examples in the prompt (e.g. Sit down), but it fails to generalize to novel low-level skills. On the other hand, using the Reward Coder only can achieve success on some tasks but fails in ones that require more reasoning. For example, when asked to open a drawer, the Reward Coder-only baseline often forgets to task the robot hand with getting closer to the drawer handle and only designs the reward for encouraging the drawer to be open. Sampled responses from the different methods can be found in Appendix A.8.
To further understand the overall performance of different systems, we also show the pass rate in Fig. 4 right, which is a standard metric for analyzing code generation performance [8]. Each point in the plot represents the percentage of tasks the system can solve, given that it can generate N pieces of code for each task and pick the best performing one. As such, the pass rate curve measures the stability of the system (the flatter it is, the more stable the system is) as well as the task coverage of the system (the converged point represents how many tasks the system can solve given a sufficient number of trials). It is clear from the result that for both embodiments, using reward as the interface empowers LLMs to solve more tasks more reliably, and the use of Structured Motion Description further boosts the system performance significantly.
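For reference, the pass-rate curve could be computed as in the following sketch, which simply keeps the first N generated code samples per task rather than using an unbiased pass@k estimator; the data layout is an assumption for illustration.

```python
import numpy as np

def pass_rate_curve(success, max_n=10):
    """Fraction of tasks solved if N code samples are generated per task and
    the best one is kept. `success[t][i]` is True if the i-th generated reward
    code for task t succeeds (e.g., its best rollout meets the task criterion)."""
    curve = []
    for n in range(1, max_n + 1):
        solved = [any(samples[:n]) for samples in success]
        curve.append(float(np.mean(solved)))
    return curve

# Hypothetical results for three tasks with 10 generations each.
success = [[True] * 10, [False] * 9 + [True], [False] * 10]
print(pass_rate_curve(success))
```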
### Interactive Motion Synthesis Results
One benefit of using a real-time optimization tool like MJPC is that humans can observe the motion being synthesized in real time and provide feedback. We showcase two examples where we teach the robot to perform complex tasks through multiple rounds of interactions. In the first example, we task the quadruped robot to stand up and perform a moonwalk skill (Fig. 5a). We give four instructions to achieve the task, as shown in Fig. 5. Each instruction improves the behavior towards the desired outcome based on the interactively synthesized results. This showcases that users can interactively shape the behavior of the robot in natural language. In the second example, we showcase a different way of leveraging the interactivity of our system by sequentially commanding the dexterous manipulator robot to place an apple in a drawer, as seen in Fig. 5b. The interactive results are best viewed in the supplementary video, and the full code output from our method can be found in Appendix A.9.
### Real-robot experiments
We implement a version of our method on a mobile manipulator and test it on non-prehensile manipulation tasks in the real world. In simulation, we have access to the ground-truth state for objects in the scene. In the real world, we detect objects in image-space using an open-vocabulary detector: F-VLM [54]. We extract the associated points from the point cloud behind the mask and perform outlier rejection for points that might belong to the background. From a bird's-eye view, we fit a minimum volume rectangle and take the extremes to determine the extent in the z-axis. We use this 3D bounding box as the state estimate for the corresponding object in simulation. To detect the surface of the table with proper orientation, we use an AprilTag [55]. In addition, as seen in the supplementary video, MJPC can discover highly dexterous and dynamic maneuvers to accomplish the desired task. However, these movements are
Figure 4: Comparison of our method and alternative methods in terms of pass rate: if we generate N pieces of code for each task and pick the best performing one, what’s the percentage of tasks that the system can successfully tackle.
beyond the capabilities of current real hardware. To mitigate this issue, we design a regularization residual term specifically to encourage steady and stable robot movements when applying our system to the real robot (set_sim2real_regularization_reward() in Fig. 6, see Appendix A.6.3 for details on this term).
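The object state-estimation procedure described earlier in this subsection (masked point cloud, outlier rejection, minimum-volume rectangle from a bird's-eye view, and z-extent from the extremes) could be approximated with a sketch like the one below. The use of OpenCV's `minAreaRect` and the input format are assumptions for illustration; the detection step (F-VLM) and the outlier rejection are omitted.

```python
import numpy as np
import cv2  # OpenCV, used here only for the minimum-area rectangle fit

def estimate_object_box(masked_points):
    """Approximate 3D bounding box from an object's masked point cloud.

    `masked_points` is an (N, 3) array of points (in the table/world frame)
    extracted behind the detector mask, after outlier rejection.
    """
    pts = np.asarray(masked_points, dtype=np.float32)
    # Bird's-eye view: fit a minimum-area rectangle in the x-y plane.
    (cx, cy), (w, l), angle_deg = cv2.minAreaRect(pts[:, :2].reshape(-1, 1, 2))
    # Extent and centre in z from the extreme points.
    z_min, z_max = pts[:, 2].min(), pts[:, 2].max()
    return {
        "center": (float(cx), float(cy), float(0.5 * (z_min + z_max))),
        "size": (float(w), float(l), float(z_max - z_min)),
        "yaw_deg": float(angle_deg),
    }
```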
We demonstrate sim-to-real transfer on two tasks: object pushing and object grasping. Our system is able to generate relevant reward code and the Mujoco MPC is able to synthesize the pushing and grasping motion. For rollouts please refer to the supplementary video/website and Fig. 7.
## 5 Discussion and Conclusion
In this work, we investigate a new paradigm for interfacing an LLM with a robot through reward functions, powered by a low-level model predictive control tool, MuJoCo MPC. Using reward functions as the interface enables LLMs to work in a semantically rich space that plays to the strengths of LLMs, while ensuring the expressiveness of the resulting controller. To further improve the performance of the system, we propose to use a motion description template to better extract internal knowledge about robot motions from LLMs. We evaluate our proposed system on two simulated robotic platforms: a quadruped robot and a dexterous manipulator robot. We apply our approach to both robots to acquire a wide variety of skills. Compared to alternative methods that do not use reward as the interface, or do not use the motion description template, our method achieves significantly better performance in terms of stability and the number of tasks it can solve.
**Limitations and Future Work.** Though we show that our system can obtain a diverse set of skills through natural language interactions, there are a few limitations. First, we currently design templates of motion descriptions for each type of robot morphology, which requires manual work. An interesting future direction is to unify or automate the template design to make the system easily extendable to novel robot morphologies. Second, our method currently relies on language as the interaction interface with human users. As such, it can be challenging to design tasks that are not easily described in language (e.g., "walk gracefully"). One potential way to mitigate this issue is to extend the system to multi-modal inputs to allow richer forms of user interactions (e.g., by showing a video of the desirable behavior). Third, we currently use pre-defined reward terms whose weights and parameters are modulated by the LLMs. Constraining the reward design space helps improve the stability of the system while sacrificing some flexibility. For example, our current design does not support time-varying rewards and would require re-designing the prompt to support that. Enabling LLMs to reliably design reward functions from scratch is thus an important and fruitful research direction.
Figure 5: The two interactive examples using our proposed system.
Figure 6: Implementation and rollouts of the proposed system in the real world.
#### Acknowledgments
The authors would like to acknowledge Ken Caluwaerts, Kristian Hartikainen, Steven Bohez, Carolina Parada, Marc Toussaint, and the greater teams at Google DeepMind for their feedback and contributions.
|
2305.00993 | Haunted haloes: tracking the ghosts of subhaloes lost by halo finders | Dark matter subhaloes are key for the predictions of simulations of structure
formation, but their existence frequently ends prematurely due to two technical
issues, namely numerical disruption in N-body simulations and halo finders
failing to identify them. Here we focus on the second issue, using the
phase-space friends-of-friends halo finder ROCKSTAR as a benchmark (though we
expect our results to translate to comparable codes). We confirm that the most
prominent cause for losing track of subhaloes is tidal distortion rather than a
low number of particles. As a solution, we present a flexible post-processing
algorithm that tracks all subhalo particles over time, computes subhalo
positions and masses based on those particles, and progressively removes
stripped matter. If a subhalo is lost by the halo finder, this algorithm keeps
tracking its so-called ghost until it has almost no particles left or has truly
merged with its host. We apply this technique to a large suite of N-body
simulations and restore lost subhaloes to the halo catalogues, which has a
dramatic effect on key summary statistics of large-scale structure.
Specifically, the subhalo mass function increases by about 50% and the halo
correlation function increases by a factor of two at small scales. While these
quantitative results are somewhat specific to our algorithm, they demonstrate
that particle tracking is a promising way to reliably follow haloes and reduce
the need for orphan models. Our algorithm and augmented halo catalogues are
publicly available. | Benedikt Diemer, Peter Behroozi, Philip Mansfield | 2023-05-01T18:00:00Z | http://arxiv.org/abs/2305.00993v1 | # Haunted haloes: tracking the ghosts of subhaloes lost by halo finders
###### Abstract
Dark matter subhaloes are key for the predictions of simulations of structure formation, but their existence frequently ends prematurely due to two technical issues, namely numerical disruption in \(N\)-body simulations and halo finders failing to identify them. Here we focus on the second issue, using the phase-space friends-of-friends halo finder Rockstar as a benchmark (though we expect our results to translate to comparable codes). We confirm that the most prominent cause for losing track of subhaloes is tidal distortion rather than a low number of particles. As a solution, we present a flexible post-processing algorithm that tracks all subhalo particles over time, computes subhalo positions and masses based on those particles, and progressively removes stripped matter. If a subhalo is lost by the halo finder, this algorithm keeps tracking its so-called ghost until it has almost no particles left or has truly merged with its host. We apply this technique to a large suite of \(N\)-body simulations and restore lost subhaloes to the halo catalogues, which has a dramatic effect on key summary statistics of large-scale structure. Specifically, the subhalo mass function increases by about 50% and the halo correlation function increases by a factor of two at small scales. While these quantitative results are somewhat specific to our algorithm, they demonstrate that particle tracking is a promising way to reliably follow haloes and reduce the need for orphan models. Our algorithm and augmented halo catalogues are publicly available.
keywords: methods: numerical - dark matter - large-scale structure of Universe
## 1 Introduction
Structure in the Universe forms hierarchically, meaning that small dark matter haloes collapse first and merge to create larger haloes (e.g., Bond et al., 1991). This picture implies that haloes are filled with numerous smaller subhaloes, which has been confirmed in simulations (Moore et al., 1999; Klypin et al., 1999; Diemand et al., 2005; Springel et al., 2008). The smallest possible subhalo size is set by the initial power spectrum, which is likely cut off at some scale due to a finite initial temperature (warm dark matter), self-interactions, or quantum mechanics. As long as no such cutoff has been found observationally, however, we must attempt to simulate substructure down to at least the smallest sizes that are expected to form observable galaxies. At the low-mass end, about 30% of haloes are subhaloes (although this number strongly depends on the definition of the host halo radius, Diemer, 2021). This large fraction means that there is no hope of accurately predicting statistics such as the galaxy correlation function if subhaloes are not modelled correctly (e.g., Colin et al., 1999).
One important factor determining the abundance of subhaloes is how long they survive in their hosts after infall. The fundamental timescale is the dynamical time, which we define as a crossing time (about 5 Gyr at \(z=0\); see Section 2.2). Unlike the instantaneous merging that is implicit in Press-Schechter type models (Press & Schechter, 1974; Lacey & Cole, 1993), real subhaloes tend to survive for at least a few orbits. While they lose mass to tidal stripping during each orbit (e.g., Kravtsov et al., 2004; Zavala & Frenk, 2019), numerical research has repeatedly confirmed that they can lose most of their mass before entirely disrupting (Tormen et al., 1998; Diemand et al., 2006; Green et al., 2021; Errani & Navarro, 2021; Amorisco, 2021).1 One exception is large subhaloes, roughly speaking those with more than \(1/10\) of their host's mass. In this regime, strong dynamical friction can drag subhaloes to the host centre within a few orbits (Chandrasekhar, 1943; van den Bosch et al., 1999; Boylan-Kolchin et al., 2008; Adhikari et al., 2016; Naidu et al., 2021; Vasiliev et al., 2022; Banik & van den Bosch, 2022).
Footnote 1: Similarly, sharp potential gradients can “heat” dynamical systems (Ostriker et al., 1972; Gnedin & Ostriker, 1997, 1999; Gnedin et al., 1999) but do not necessarily disrupt them (van den Bosch et al., 2018; Green et al., 2022). Given that our dark matter-only simulations do not contain galactic discs, we do not carefully distinguish tidal stripping and tidal heating.
A similar picture applies to the satellite galaxies within subhaloes, whose tightly bound stellar distribution can often survive even stronger tidal forces than their dark matter. Understanding their orbits and evolution is critical because their eventual mergers play a major role in the evolution of the galaxy population (e.g., Toomre & Toomre, 1972; Barnes, 1988; Hernquist, 1992). Moreover, the distri
bution of satellites is thought to be one of the most sensitive tracers of the nature of dark matter (e.g., Cyr-Racine et al., 2016; Lovell, 2020; Nadler et al., 2021). The tightest constraints on a potential cutoff in the power spectrum will come from the smallest, ultra-faint satellites that we can observe from our own position near the centre of the Milky Way (Drlica-Wagner et al., 2020). These objects will inhabit subhaloes with \(\approx 10^{-4}\) the mass of the Milky Way (Nadler et al., 2020).
Reliably simulating such tiny, strongly stripped subhaloes near the centres of their hosts is arguably the greatest challenge to \(N\)-body simulations today. Two separate issues are currently preventing us from achieving this goal: numerical disruption in \(N\)-body simulations, and halo finders struggling to identify certain substructures. The first issue has long been known as the "over-merging problem," where unphysical effects such as two-body scattering "heat" the subhalo and possibly destroy it (White et al., 1987; Carlberg, 1994; van Kampen, 1995; Moore et al., 1996, 1999a; Klypin et al., 1999a). This problem is far from resolved even in modern simulations, including the ones discussed in this paper (van den Bosch, 2017; van den Bosch et al., 2018; van den Bosch & Ogiya, 2018). The resulting lack of substructure leads to poor fits to the observed clustering of galaxies (Campbell et al., 2018), necessitating ad-hoc solutions such as representing the orbits of disrupted subhaloes by those of their single most bound particle ("orphans," Summers et al., 1995; Wang et al., 2006; Guo et al., 2010; Moster et al., 2010), multiple particles ("cores," Heitmann et al., 2019, 2021; Rangel et al., 2020; Sultan et al., 2021; Korytov et al., 2023), or analytically integrating orbits forward in time (e.g., Taylor & Babul, 2001; Zentner et al., 2005; Behroozi et al., 2019). We do not tackle the over-merging issue in this paper, but our goal of creating more complete subhalo catalogues is an important step towards a full understanding of how and why numerical disruption occurs.
The second issue is that state-of-the-art halo finders can lose certain subhaloes even though they are composed of significant numbers of particles and are discernible by eye. Most halo finders agree reasonably well on the positions and properties of isolated haloes, but robustly finding subhaloes is much harder because they can blend into the varying background density of their larger host (Knebe et al., 2011). This issue can be fixed by using a phase-space friends-of-friends algorithm to group particles not only in position but also in velocity space (Davis et al., 1985; Diemand et al., 2006; Behroozi et al., 2013; Elahi et al., 2019), but another problem remains: the tidal streams coming off a subhalo can be orders of magnitude more massive than the subhalo itself. The halo finding algorithm must balance a sensitivity to small, bound particle groups against the risk of falsely identifying spurious, unphysical noise. For example, the Rockstar halo finder used in this work tends to fail when a subhalo's tidal tails are gravitationally unbound but dense, a state which naturally occurs during strong tidal stripping (Behroozi et al., 2013). We emphasise that this issue is not solved by increasing the number of particles: the subhalo mass functions produced by both Rockstar and SubFind (Springel et al., 2001) appear to be independent of resolution (e.g. Springel et al., 2008; Nadler et al., 2023), but they are missing heavily stripped haloes at all resolutions.
The root cause of these issues is that haloes are most commonly identified in individual simulation snapshots and then stitched together into merger trees. This means that the same (sub-)haloes need to be identified over and over again, including in challenging situations such as during strong tidal stripping. A better solution would be to identify each halo once when it first emerges and to track its particles forward in time, using the knowledge that the given set of particles was a bound structure in the past. Such an algorithm can be computationally more demanding (e.g., because particle IDs need to be stored), but it automatically creates time-connected merger trees. Some schemes of this nature have been put forward in the literature (e.g., Gill et al., 2004; Poulton et al., 2020), most notably in the HBT+ halo finder (Han et al., 2012, 2018), which finds subhalo centres from tracked particles and determines membership based on gravitational binding. While the initial results based on these algorithms are encouraging, many questions remain. For example, Springel et al. (2021) implement the HBT+ algorithm in a combined SubFind-HBT halo finder, but this change seems to have a minimal effect on the abundance of subhaloes (figure 38 in Springel et al., 2021). Moreover, each particle-tracking algorithm has to make a number of choices whose impact is not yet clear. Which particles are initially included in the tracking, and can new particles be added? What are the criteria for removing stripped particles? When is a subhalo deemed to have physically merged?
In this paper, we present a post-processing algorithm to identify and track all particles in all subhaloes in simulations, allowing us to follow the "ghosts" of subhaloes that have been lost by the halo finder. Our algorithm is in principle independent of the halo finder used. It features user-adjustable parameters, avoids relying on gravitational boundness criteria, and outputs ghost data into a convenient merger tree format. Besides introducing this technique, the two main purposes of the paper are to understand where and when subhaloes are lost by Rockstar and to investigate to what extent ghosts improve the predictions of simulations. We show that ghosts extend the lives of subhaloes across a vast range of resolutions and that they make significant contributions to basic predictions such as the subhalo mass function and correlation function. Most importantly, however, this work is an exploratory study that opens numerous avenues for further research. For example, we apply our algorithm to dark matter-only simulations although baryons change the abundances of subhaloes significantly (e.g., Garrison-Kimmel et al., 2017; Richings et al., 2020).
The paper is organised as follows. We describe our simulations and algorithms in Section 2. In Section 3 we investigate how subhaloes lose mass and when they are lost by the halo finder. In Section 4 we show the impact of adding ghosts to basic simulation predictions. We further discuss these results in Section 5 and summarise our conclusions in Section 6. Our algorithm is implemented in the open-source framework Sparta (Diemer, 2017), and the ghost-augmented halo catalogues and merger trees are publicly available (Diemer, 2020).
## 2 Methods & Algorithms
We begin by briefly reviewing our simulations (Section 2.1) and halo catalogues (Section 2.2), largely referring the reader to Diemer (2020) for details. In Section 2.3 we introduce our algorithms to track subhaloes and ghosts via their constituent particles.
### \(N\)-body simulations
Our catalogues are based on the Erebos suite of dissipationless \(N\)-body simulations (Diemer & Kravtsov, 2014, 2015), and we have run our subhalo tracking algorithm on the entire suite (Diemer, 2020). Since variations in cosmology are not important for the purposes of this paper, we focus exclusively on seven simulations of a WMAP7 \(\Lambda\)CDM cosmology, which is the same as that of the _Bolshoi_ simulation (Komatsu et al., 2011; Klypin et al., 2011, \(\Omega_{\rm m}=0.27\),
\(\Omega_{\rm b}=0.0469\), \(h=0.7\), \(\sigma_{8}=0.82\), and \(n_{\rm s}=0.95\)). The initial conditions for the simulations were generated with 2LPTic (Crocce et al., 2006) from power spectra computed by Camb (Lewis et al., 2000). The simulations were run with Gadget2 (Springel et al., 2001; Springel, 2005).
The box sizes of the simulations increase by factors of two from \(31.25~{}h^{-1}\)Mpc to \(2000~{}h^{-1}\)Mpc (where the smallest box was run only to \(z=2\)). Each box contains \(1024^{3}\) particles, leading to mass resolutions between \(2.1\times 10^{6}\) and \(5.6\times 10^{11}~{}h^{-1}M_{\odot}\). For testing and visualisation, we use a smaller test simulation with \(256^{3}\) particles in a \(62.5~{}h^{-1}\)Mpc box (Diemer, 2017). For the purposes of this work, mass and force resolution are equally important. The force resolutions vary between \(0.25\) and \(65\) comoving \(h^{-1}\)kpc, and they were chosen such that the scale radius of a typical halo with \(1000\) particles is resolved with four force resolution lengths at \(z=0\). This equivalence means that the mass and force resolution are well matched in the sense that the lowest resolved halo mass roughly coincides for both criteria (though two-body relaxation complicates this picture as discussed in Appendix A of Diemer, 2022; see also Ludlow et al., 2019; Mansfield & Avestruz, 2021).
### Halo catalogues and definitions
We identify haloes and subhaloes using the Rockstar halo finder and Consistent-Trees merger tree tool (Behroozi et al., 2013, 2014). Rockstar uses a phase-space friends-of-friends algorithm that performs well at identifying substructure because it groups particles in velocity as well as position space (e.g., Onions et al., 2012). The resulting catalogues, as well as raw particle data, serve as input to the Sparta code, which analyses the orbits of individual particles to compute various halo properties, such as splashback radii (Diemer, 2017) or the density profiles of only orbiting particles (Diemer, 2022a,b). Eventually, the Moria extension recombines Sparta's output with the Rockstar catalogues to create enhanced catalogues and merger trees both in the original ascii format and in a new hdf5-based format (Diemer, 2020). The ghost subhaloes discussed in the following section were added to the Moria catalogues as if they were normal subhaloes identified by the halo finder. While our algorithm is independent of the halo finder, we emphasise that the results of this paper are specific to Rockstar and Consistent-Trees.
Throughout the paper, we discuss a number of spherical overdensity radius and mass definitions. Specifically, \(R_{\rm 200c}\) is the radius enclosing a mean density of \(200\) times the critical density, \(R_{\rm 200m}\) encloses \(200\) times the mean matter density, and \(R_{\rm vir}\) indicates a varying overdensity (Bryan & Norman, 1998). The corresponding enclosed masses are denoted \(M_{\rm 200c}\), \(M_{\rm 200m}\), and \(M_{\rm vir}\). We further distinguish masses and radii computed including all particles (denoted \(M_{\rm 200m,all}\), \(R_{\rm 200m,all}\), or simply \(R_{\rm 200m}\)) and those computed using only gravitationally bound particles according to Rockstar's unbinding algorithm (denoted \(M_{\rm 200m,bnd}\) and \(R_{\rm 200m,bnd}\)). Our catalogues contain all haloes that have reached at least \(200\) particles within \(R_{\rm 200m,all}\) at some point in their life.
In our Rockstar catalogues, subhaloes are defined to lie within \(R_{\rm 200m,bnd}\) of a host halo with larger maximum circular velocity, \(V_{\rm max}\). We use this assignment throughout unless otherwise mentioned. Moria calculates separate host-subhalo relations for numerous other definitions. We have verified that these relations agree exactly with Consistent-Trees if the same radius definition is used. Since we track ghosts regardless of whether they lie inside or outside the host radius, the subhalo definition does not have a large impact on our results.
We will generally quantify cosmic time in units of the dynamical time, which we define as a halo crossing time. Spherical overdensity radii allow for a convenient definition as the time to cross \(R_{\rm 200m}\),
\[t_{\rm dyn}\equiv\frac{2R_{\rm 200m}}{V_{\rm 200m}}=\frac{2R_{\rm 200m}}{\sqrt{GM_{\rm 200m}/R_{\rm 200m}}}=\frac{t_{\rm H}(z)\;\Omega_{\rm m}(z)^{-1/2}}{5}\,, \tag{1}\]
about \(5\) Gyr at \(z=0\) for our cosmology.
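For illustration, equation (1) can be evaluated numerically as in the sketch below, assuming a flat \(\Lambda\)CDM cosmology with the WMAP7 parameters quoted above and taking \(t_{\rm H}(z)=1/H(z)\) for the Hubble time; this is a stand-alone check rather than the Sparta implementation.

```python
import numpy as np

# Minimal sketch of Eq. (1): t_dyn = t_H(z) * Omega_m(z)^{-1/2} / 5.
KM_PER_MPC = 3.0857e19
SEC_PER_GYR = 3.156e16

def dynamical_time_gyr(z, h=0.7, omega_m0=0.27):
    e_z = np.sqrt(omega_m0 * (1.0 + z) ** 3 + (1.0 - omega_m0))
    h_z = 100.0 * h * e_z                            # H(z) in km/s/Mpc
    t_hubble_gyr = KM_PER_MPC / h_z / SEC_PER_GYR    # 1/H(z) in Gyr
    omega_m_z = omega_m0 * (1.0 + z) ** 3 / e_z ** 2
    return t_hubble_gyr / np.sqrt(omega_m_z) / 5.0

print(dynamical_time_gyr(0.0))   # roughly 5 Gyr, as quoted in the text
```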
### Tracking subhaloes and ghosts based on their particles
Inspecting the Rockstar merger trees for our \(\Lambda\)CDM simulations, we find that about half of all halo trajectories reach the end of the simulation and half merge into larger haloes at some previous time (the exact numbers vary depending on box size). Some of the mergers are physical, meaning that the "real" subhalo would have been disrupted or mixed with the phase space of its host to the point where it is not a distinct, bound entity any longer; some mergers are due to numerical over-merging, as discussed in Section 1; and some subhalo trajectories end because the halo finder loses track of them.
To identify and fix instances of the latter, we introduce the concept of a ghost: a subhalo that is no longer detected by the halo finder but that can be identified in the particles that originally belonged to the subhalo at infall. Specifically, our algorithm identifies all particles that make up a subhalo at infall (Section 2.3.1 and Appendix A). It spawns a ghost whenever a subhalo merges according to the merger trees and computes its position and velocity based on the tracked particles (Section 2.3.2). It removes (but never adds) particles as the subhalo is being stripped (Section 2.3.3) and it ends the ghost once it has truly merged or contains too few particles to be tracked (Section 2.3.4). These algorithms are implemented in the Sparta/Moria framework (Diemer, 2017, 2020).
For visual guidance, Figs. 1 and 2 show examples of the lives of two ghost haloes. Fig. 3 shows a visualization of a merger tree in our test simulation with and without ghosts.
#### 2.3.1 Identifying subhalo member particles
The question of subhalo membership is fundamentally ill-posed. We wish to discern particles that physically belong to a subhalo from host particles that happen to be near it, but up to which point before infall can subhaloes accrete new particles? At what point does "infall" occur? Can subhaloes add particles once they are inside the host? Our approach is to attempt reasonable answers to these questions and to verify that they do not lead to sudden mass gains or losses when subhaloes cross the (arbitrary) infall boundary. In this section, we briefly summarise our algorithm, referring the reader to Appendix A for details. The new algorithm represents a significant improvement over the one of Diemer (2017), and it was used for all calculations in Diemer (2020).
When a halo becomes a subhalo by crossing a larger host halo's \(R_{\rm 200m,bnd}\), we consider all particles within \(2~{}R_{\rm 200m,sub}\) of the subhalo centre as possible members. We require at least one of two criteria to be fulfilled: that the particle entered the subhalo well outside the host's zone of influence, or that the particle is strongly gravitationally bound to the subhalo. Specifically, we consider a particle to belong to a subhalo if it first entered the subhalo's \(R_{\rm 200m}\) at a distance of at least \(2~{}R_{\rm 200m,bnd}\) from the host centre, which excludes host particles that happen to be currently co-located with the subhalo. However, some particles can become physically bound to the subhalo as it travels through the host's outskirts (e.g., Behroozi et al., 2014). Thus, we also consider a particle to belong to a subhalo if its
kinetic energy is less than the gravitational binding energy to all particles within \(0.5~{}R_{200\rm{m},\rm{sub}}\) of the subhalo centre. We further discuss this algorithm (and a possible third criterion) in Appendix A.
Having determined a set of subhalo particles, we impose the condition that the subhalo will not accrete any more particles as long as it is a subhalo according to the halo finder. This condition may not be strictly true, especially in major mergers (e.g., Han et al., 2012). However, the number of added particles should generally be small compared to the subhalo's initial mass, and deciding subhalo membership at each snapshot would be complicated and computationally expensive.
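The two membership criteria can be summarised in the following simplified sketch. The data structures, the softening length, and the choice of the subhalo frame for the kinetic energy are illustrative assumptions; the actual implementation and edge cases are described in Appendix A.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun (illustrative units)

def is_subhalo_member(p, sub, host, first_entry_host_dist):
    """Simplified version of the two membership criteria described above.

    p: dict with 'pos' (kpc) and 'vel' (km/s) of the candidate particle.
    sub: dict with 'pos', 'vel', 'R200m', and arrays 'part_pos', 'part_mass'.
    host: dict with 'R200m_bnd'.
    first_entry_host_dist: host-centric distance (kpc) at which the particle
        first crossed the subhalo's R200m. All names are illustrative.
    """
    # Criterion 1: the particle joined the subhalo well outside the host.
    if first_entry_host_dist >= 2.0 * host["R200m_bnd"]:
        return True

    # Criterion 2: kinetic energy (here taken relative to the subhalo bulk
    # velocity) less than the binding energy to all particles within
    # 0.5 R200m of the subhalo centre.
    d_centre = np.linalg.norm(sub["part_pos"] - sub["pos"], axis=1)
    inner = d_centre < 0.5 * sub["R200m"]
    d_part = np.linalg.norm(sub["part_pos"][inner] - p["pos"], axis=1)
    pot = G * np.sum(sub["part_mass"][inner] / np.clip(d_part, 1e-3, None))
    kin = 0.5 * np.sum((p["vel"] - sub["vel"]) ** 2)
    return kin < pot
```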
#### 2.3.2 Ghost creation, position, and velocity
When a subhalo disappears from the merger tree because Rockstar either deems it to have merged or otherwise loses track of it, we continue its life as a ghost. If the ghost's host merges into another halo itself, the ghost transfers to this new host. We cannot rely on the halo finder
Figure 1: Visual illustration of our subhalo tracking algorithm. Each panel shows a snapshot along the history of the same subhalo before and after it is lost by the halo finder. The scale is fixed in physical units. The view is shifted into the host (red) frame and rotated such that the subhalo (dark blue) position and velocity lie in the \(x\)-\(y\) plane. This orientation means that the azimuthal position of the subhalo is somewhat random, but it shows the correct distance and velocity with respect to the host centre. The velocity arrows of subhaloes and particles are scaled such that \(1/10\) of the width of the plot would correspond to 200 km/s and 1000 km/s, respectively, meaning that the subhalo arrows are five times longer. Gray dots show all particles inside the host radius, whereas coloured points show the tracked subhalo particles (with purple corresponding to strongly bound and yellow to weakly bound members; see colour scale in second panel). The dashed gray lines indicate the subhalo's previous trajectory. The dark blue circles and arrows show \(R_{200\rm{m}}\) and the relative velocity, calculated only from strongly bound, tracked particles. A smaller dark blue circle shows the uncertainty on the position of the ghost (visible only in the final panel). Before the subhalo disappears from the halo catalogue, its tracked position, velocity, and radius can be compared to the catalogue values (light blue circle and arrow); they agree well. The first panel shows the initial infall of the subhalo into its host, a fairly major merger. After about one orbit (between the 3rd and 4th panels), the host and sub fall into a yet larger host with high velocity. After about half an orbit, the halo finder loses the subhalo (between the 5th and 6th panels). The ghost undergoes more than one full orbit, with strong tidal disruptions at the apocentres (7th to 11th panels). Eventually, there are fewer than 10 particles within \(R_{200\rm{m}}\) of the ghost, and we consider it to have been disrupted (12th panel).
for the positions, velocities, and radii of the subhalo any longer, and thus compute them directly from the particle distribution as follows.
The mean particle position tends to be a poor estimate of the bound core's location since the distribution is often anisotropic and rapidly changing. We thus compute the gravitational self-binding energy of the tracked particles (colour scale in Figures 1 and 2) and consider only the most bound quartile of particles. If there are fewer than 10 particles in that set (or 40 tracked particles overall), we accept the position of the single most bound particle as the ghost centre. If there are between 10 and 20 (40 and 80 particles overall), we take their centre of mass to be the new ghost centre. If there are more than 20 (80 particles total), we still only consider the 20 most bound particles. Once the centre has been determined, we calculate \(R_{200\rm{m}}\) from the tracked particles and define the ghost velocity as the mean particle velocity of the tracked particles within \(R_{200\rm{m}}\) (as opposed to all tracked particles, which extend to \(2\,R_{200\rm{m}}\)). This definition avoids biases due to particles that have strayed far from the ghost centre.
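A simplified version of this centre-finding logic is sketched below; it follows the particle-number thresholds quoted above but is not the Sparta code itself.

```python
import numpy as np

def ghost_centre(pos, binding_energy):
    """Centre estimate for a ghost from its tracked particles.

    pos: (N, 3) particle positions; binding_energy: (N,) self-binding
    energies (more negative = more bound).
    """
    n = len(pos)
    order = np.argsort(binding_energy)          # most bound particles first
    quartile = order[: max(n // 4, 1)]          # most bound quartile
    if len(quartile) < 10:                      # fewer than ~40 tracked particles
        return pos[order[0]]                    # single most bound particle
    n_use = min(len(quartile), 20)              # cap at the 20 most bound
    return pos[order[:n_use]].mean(axis=0)      # centre of mass of that set
```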
We have compared our position estimate to Rockstar and find that it generally agrees well. For example, at \(z\approx 2\), the majority of subhalo centres are within 0.1 \(R_{200\rm{m}}\) of each other. The first panels of Figures 1 and 2 give a fairly typical example of the agreement between the Sparta and Rockstar positions and velocities, although the agreement tends to get worse for haloes with fewer particles. There are also cases of disrupting haloes where Sparta tracks a genuinely different set of particles than those in Rockstar's FOF group, leading to large offsets (e.g., Fig. 2).
Figure 2: Same as Fig. 1, but for a ghost that sinks to the host centre after experiencing strong dynamical friction. The subhalo falls in at \(z\approx 4\), tidally disrupts to some extent on its first orbit, and sinks close to the host centre (first three panels). For a few snapshots, Sparta's tracked particles and Rockstar's FOF group do not agree in terms of position and velocity (4th and 5th panels). Eventually, they fall into agreement again but Rockstar assigns the halo a larger \(R_{200\rm{m}}\) (6th panel). Nevertheless, in the next snapshot, the halo is no longer deemed a separate entity by Rockstar and becomes a ghost (7th panel). From that point onward, the particles orbit close to the halo centre and the ghost loses no particles (8th through 11th panels). Finally, Sparta's algorithm ends the ghost because its centre has overlapped the host centre for a significant fraction of a dynamical time (12th panel). In this particular case, it is easy to see why Rockstar merged the two haloes, given that their phase space structure is similar (as indicated by the small relative velocity).
#### 2.3.3 Mass and radius evolution
At each snapshot, we locate the tracked particles belonging to each subhalo or ghost and compute its spherical overdensity radii including only tracked particles. We will refer to the corresponding radii and enclosed masses as "tracer radii" and "tracer masses." The tracer definition has a number of advantages. First, it is guaranteed not to include any host material. Second, it avoids spikes and other rapid changes as the particle distribution is compressed or stretched by tidal forces (e.g., first six panels of Fig. 2). Third, not all tracked particles that count into the tracer mass need to be gravitationally bound, which avoids ill-defined gravitational boundness criteria that can lead to a noisy subhalo mass evolution (Appendix A, van den Bosch, 2017). Tracer radii are allowed to be larger than the distance to the farthest tracked particle, a case that arises for compact particle distributions (typically near pericentre). The calculation fails only if the subhalo density is too low to reach the threshold at its centre.
After each snapshot, we permanently remove particles that have drifted beyond a maximum radius of \(2R_{200\text{m}}\) (computed from bound particles in normal subhaloes and from tracer particles in ghosts). Tracer masses typically decrease with time as particles are stripped, but they can increase if particles that were outside a given overdensity radius return to smaller distances. Tracer radii also grow through pseudo-evolution as the overdensity threshold decreases with cosmic time (Diemer et al., 2013). For example, the radius in the first panel of Fig. 1 appears to be about the same as in the 10th panel, even though the particle number has decreased from 229 to 54.
We find that the initial subhalo particle tagging does matter for the ghost masses. Erroneously tagged particles should quickly drift away from the subhalo or ghost, but nonetheless the initial assignment can have an effect on tracer masses. For example, simply assigning all particles within \(R_{200\text{m},\text{sub}}\) at infall significantly inflates the average tracer masses until the ghosts merge into the host's centre. We conclude that the absolute tracer masses are, to some extent, dependent on our definition of subhalo membership. We compare tracer and bound masses in detail in Section 3.2.
#### 2.3.4 Ending a ghost
The final step in our algorithm is to check whether a ghost is still a meaningful unit, whether it has been disrupted (physically or numerically), or whether it has merged with its host. The case of disruption is easy to detect: we abandon a ghost if \(N_{200\text{m}}<10\), a limit the user can adjust (Fig. 1). However, a significant fraction of ghosts sink to the host centre and remain there indefinitely (see also Han et al., 2018). As all of their particles are on low-radius orbits, they can maintain \(N_{200\text{m}}>10\) even though their particles are not meaningfully distinct from their host's any more. We thus end a ghost when it has been within \(0.05~{}R_{200\text{m},\text{host}}\) of the host centre for 0.5 current dynamical times2 (Fig. 2; the user can change these parameters). When a ghost ends, we assign its host (as previously determined by Consistent-Trees) as the halo into which the ghost has merged. If the host itself merges into another halo, the ghost can live on in that new host and eventually merge with it (Fig. 1).
Footnote 2: The noise in the measurement of the ghost centre can exceed \(0.05~{}R_{200\text{m}}\). We avoid artificially keeping the ghost alive by subtracting the uncertainty in the ghost position from the distance. This uncertainty is estimated as \(\sigma_{\text{x}}=\sigma_{\text{r}}/\sqrt{N}\), where \(\sigma_{\text{r}}\) is the standard deviation in the radius of the \(N\) particles used to determine the centre. This estimate is approximate, particularly if we take the position of the most bound particle as the ghost centre. Further simplifying the expression, we find that \(\sigma_{\text{r}}\approx 0.5~{}R_{200\text{m}}\) for the vast majority of haloes, leading to the simple estimate of \(\sigma_{\text{x}}\approx 0.5~{}R_{200\text{m}}/\sqrt{N}\).
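A per-snapshot sketch of these termination checks, combining the two criteria above with the centre-uncertainty correction from the footnote; the function name, interface, and the book-keeping of the time spent near the host centre are ours.

```python
import numpy as np

def ghost_should_end(n200m, d_host, r200m_host, r200m_ghost, n_centre,
                     t_inside, t_dyn, n_min=10, f_radius=0.05, f_time=0.5):
    """Per-snapshot termination check for a ghost.

    n200m: tracked particles within R200m; d_host: distance between ghost
    and host centres; n_centre: particles used to determine the ghost
    centre; t_inside: time the ghost has already spent within
    f_radius * R200m of the host centre; t_dyn: current dynamical time.
    """
    if n200m < n_min:
        return True, "disrupted"
    # Subtract the centre-position uncertainty, sigma_x ~ 0.5 R200m / sqrt(N),
    # so that noise in the ghost centre does not artificially keep it alive.
    sigma_x = 0.5 * r200m_ghost / np.sqrt(max(n_centre, 1))
    if d_host - sigma_x < f_radius * r200m_host and t_inside >= f_time * t_dyn:
        return True, "merged with host"
    return False, ""
```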
## 3 Results I: subhalo evolution and loss
In this section we investigate the evolution and loss of subhaloes. We begin by quantifying when and where subhaloes become ghosts in Section 3.1. We compare their tracer masses to conventional bound masses in Section 3.2 and consider the evolution of these masses in Section 3.3. We investigate the impact of ghosts on various summary statistics in Section 4.
The following figures are based on combined subhalo trajectories from all redshifts and from seven simulations with different box sizes (Section 2.1). We have confirmed that results from the individual simulations broadly agree, but there are trends with halo mass, redshift, and resolution that we average over. We include all haloes that reach a mass \(M_{200\text{m}}\) of at least 200 particle masses at some point in their history. The resulting sample contains about 1.5 million subhalo trajectories, about 700,000 of which result in a ghost (while the rest of the subhaloes survive to the end of the respective simulation). A single halo can contribute multiple subhalo trajectories if it intermittently becomes a host again. Unless otherwise mentioned, we use \(R_{200\text{m},\text{host}}\) to separate subhaloes from hosts (Section 2.2).
### When, where, and why do subhaloes get lost?
Fig. 4 shows histograms of the properties of ghosts at their creation (at the epoch when subhaloes are lost, first four panels) and when they end (last two panels). To isolate resolution-dependent trends,
Figure 3: Visualization of the merger tree of the largest halo in our test simulation, with ghosts highlighted in magenta. The trajectories reflect the distance from the main halo in comoving units, but they are arbitrarily cut off on the left at a small radius. Each track represents a halo that becomes a subhalo of the main halo, merges into it, or merges into one of its subhaloes. The colour of the lines indicates epochs when those haloes are hosts (gray), subhaloes (light blue), and ghosts (purple). Including ghosts not only adds more subhaloes but also changes the times when mergers occur (gray dots). Ghosts merge into the main halo across a wide range of radii (highlighted with magenta squares), sometimes even outside of \(R_{200\text{m}}\).
we compare to highly resolved haloes (\(N_{\rm 200m}>2000\) at infall, purple) and minor mergers (\(\mu\equiv M_{\rm 200m,sub}/M_{\rm 200m,host}\leq 0.1\), green). The overall sample (blue) is dominated by major mergers with \(\mu>0.1\). The main conclusion is that most subhalo losses occur not due to low particle resolution but due to tidal deformations around (but not necessarily close to) the host centre. Rockstar sometimes drops subhaloes in such situations because more than half of their particles are gravitationally unbound. Simply not imposing such a limit is not a solution because it would lead to numerous tidal tails being spuriously classified as haloes (Behroozi et al., 2013a).
The top left panel of Fig. 4 shows the number of tracked particles in subhaloes when they disappear from the halo catalogues. The vast majority of ghosts are created with more than 25 particles, which is roughly the limit to which Rockstar can identify haloes (Behroozi et al., 2013a). Moreover, the distribution shifts to higher numbers when we require the subhalo to have had a higher initial particle count, demonstrating that there is no "magic" number of particles at which subhaloes are lost (see also Behroozi et al., 2019). Strikingly, there also does not seem to be a clear upper cutoff, with some subhaloes being lost when they still contain 500,000 particles. Some such events are major mergers that may have physically joined the host, but some are still a distinct, moving substructure when they are lost. We highlight one such example in Section 5.1. The distribution of mass loss rates (top centre panel of Fig. 4) leads to similar conclusions as the particle number, namely that there is no preferred fractional mass loss that leads to missed subhaloes. The distribution peaks around a remaining mass of 60% compared to infall (20% for minor mergers) but it is broad and spans from 1% to unity. The tail towards strongly stripped subhaloes is more prominent in the high-resolution sample because mass losses of 99% cannot be resolved in subhaloes that have fewer than 2500 particles (assuming a lower limit of 25 particles for haloes to be detected).
We can gain further insight into the causes of subhalo losses from the locations and times where and when they occur. The top right panel of Fig. 4 shows that losses peak near the host centre, but the distribution is broad and reaches all the way to the host's \(R_{\rm 200m}\). We expect that physical mergers, where the subhalo has truly joined the host's phase space, should be strongly peaked in the innermost bins. Notably, the distribution of small subhalo losses (green) is peaked around a quarter of the host radius. For all samples, we find that about half of subhalo losses occur at or near pericentre (within about 30% of the smallest distance we record). This finding is mirrored in the time distribution (bottom left panel of Fig. 4), which shows broad peaks at 1/2 and 3/2 crossing times (near pericentre) that extend towards 1 and 2 crossing times (near apocentre).
In summary, a picture arises where some large subhaloes physically merge at small radii and some are lost prematurely, while small subhaloes are lost across a wide range of radii that includes their
Figure 4.— Properties of subhaloes when they merge or are lost by Rockstar. The first four panels show the final properties of subhaloes (and thus the initial properties of ghosts), whereas the last two panels show the properties of ghosts when they are abandoned. We compare all ghosts in the WMAP7 sample (blue), only those where the subhalo contained \(N_{\rm 200m}\geq 2000\) particles at infall (purple), and only those with \(M_{\rm sub}/M_{\rm host}<0.1\) at infall (green). We combine all simulations and redshifts because the differences between them do not change the overall picture. _Top left_: Subhaloes are lost at a wide range of particle numbers. The distribution strongly depends on the initial resolution, demonstrating that a low particle number is not the main reason for subhalo loss. _Top centre_: The typical mass loss experienced prior to becoming a ghost does not depend on resolution and peaks at about 40%, which reaffirms that the sheer loss of particles cannot be the dominant factor. _Top right_: Subhaloes are much more likely to be lost near the host centre, but many of those cases are major mergers that have sunk quickly due to dynamical friction. Minor-merger subhaloes tend to get lost at intermediate radii. _Bottom left_: Peaks in the survival time at \(1/2\), \(3/2\), and \(5/2\) crossing times indicate losses near pericentre. However, the peaks significantly extend to later times (especially for minor mergers), indicating that losses can occur near apocentre following a pericentric passage. _Bottom centre_: The double-peaked distribution of final ghost particle counts corresponds to the two criteria for ending a ghost, namely that it has lost all but 10 particles (the majority of minor mergers) or that it has physically merged into the host centre (the typical outcome of major mergers). _Bottom right_: The double-peaked structure is also visible in the distribution of final ghost masses compared to subhalo mass at infall. The mass loss can reach arbitrarily small fractions, limited only by the resolution of the simulation.
apocentres. We speculate that losses tend to occur due to tidal forces that are strongest at pericentre. However, much of the mass loss and deformation in physical space can occur near apocentre (e.g., Fig. 1 and Section 3.3). Such situations may lead to a large fraction of formally unbound particles and the subhalo being dropped, spreading out the distribution of loss radii and times.
The bottom centre and bottom right panels of Fig. 4 show the final particle number and mass loss fractions of ghosts. These distributions reflect our criteria for when a ghost cannot be tracked any longer or has merged (Section 2.3.4). Most ghosts end at substantial particle numbers because they have merged, with a distribution that depends on the number of particles at infall (purple). The second group, namely those ended because their particle number decreases to 10 or below, contains the majority of minor mergers and exists independently of how many particles the initial subhalo contained. The bottom right panel demonstrates that our algorithm can track ghosts to arbitrarily small fractions of the initial subhalo mass.
### Tracer vs. bound masses
As described in Section 2.3.3, we measure tracer radii and masses that include only tracked particles, side-stepping the ill-defined question of gravitational boundness. Before we can meaningfully consider the evolution of these masses for ghosts, we need to check how similar they are to bound-only definitions from Rockstar (Section 2.2). Fig. 5 shows histograms of the mass ratio for subhaloes in the WMAP7 sample at \(z=0\), including only subhaloes that currently have at least 200 particles within \(R_{\rm 200m,halo}\). We focus on the overall sample, neglecting differences between subhaloes that have experienced strong stripping, populations at different radii, and so on. We split the sample into two coarse mass bins with \(1.4\times 10^{10}<M<3.2\times 10^{12}\)\(h^{-1}M_{\odot}\) (peak heights between 0.5 and 1, left) and \(3.4\times 10^{13}<M<1.4\times 10^{14}\)\(h^{-1}M_{\odot}\) (peak heights between 1.5 and 2, right). We further restrict the plot to only the low and high density thresholds of \(M_{\rm 200m}\) and \(M_{\rm 500c}\) because the intermediate \(M_{\rm vir}\) and \(M_{\rm 200c}\) definitions compare similarly. The corresponding ratios of spherical overdensity radii lie closer to unity because \(R\propto M^{1/3}\).
The 68% intervals (shaded areas in Fig. 5) demonstrate the close agreement between tracer and bound masses, especially for high-density definitions such as \(M_{\rm 500c}\). Here, the interval varies from 14% at low masses and \(z=0\) to about 40% at high masses and \(z\approx 2\). For \(M_{\rm 200m}\) the interval remains between 40% and 55%, with the same trend that the agreement gets slightly worse with redshift. Overall, 95% of subhaloes have mass ratios within a factor of two. The tails are somewhat asymmetric, with fewer ratios below unity than above.
As evidenced by the lack of strong tails to low ratios, it is rare for the subhalo tagging to miss a significant number of particles that are gravitationally bound. Differences could arise due to particles that are re-bound after previously having left the subhalo (e.g., Han et al., 2012). The halo finder would include such particles, but also host particles that become temporarily "bound" because they happen to roughly match the subhalo velocity. The more substantial tails towards higher ratios, especially in \(M_{\rm 200m}\) at low masses, reflect cases where particles are not technically bound but deemed to belong to the subhalo according to our tagging scheme. Overall, the good agreement gives us confidence that the tracer mass determination is sensible and roughly corresponds to bound mass.
### Mass loss of subhaloes and ghosts
Having convinced ourselves that tracer masses approximate bound mass, we can now compare the mass evolution of subhaloes and ghosts over time. Fig. 6 visualises the average evolution using two different metrics, namely the fraction of mass remaining and the mass loss rate. We consider epochs both before and after infall, which we define as the final snapshot before the halo becomes a subhalo (vertical gray line in Fig. 6). We rescale time by the dynamical time at infall, which allows us to combine trajectories from all redshifts. We consider all subhaloes in the WMAP7 simulations that live for at least 5 snapshots in total and whose trajectory includes at least half a dynamical time before and one dynamical time after infall. We emphasise that the results in this figure are significantly biased by subhaloes reaching the resolution limit of 10 particles and dropping out of the average trajectories. We have quantified the effects of this bias by restricting the figure to subhaloes with at least 2000 particles at infall. The general shape and relative amplitude of the curves remains roughly the same, but after a few dynamical times they asymptote to a higher remaining mass fraction (by about a factor of two at the final time plotted). In the following, we focus on qualitative interpretations that do not rely on the exact amplitude of the asymptotic mass loss.
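The stacked curves in Fig. 6 can be thought of as being built roughly as follows: rescale each trajectory to dynamical times since infall, normalise the mass to its infall value, interpolate onto a common grid, and take the median. A minimal sketch, assuming time-sorted arrays; the names are ours.

```python
import numpy as np

def stack_trajectories(trajectories, t_grid):
    """Median remaining mass fraction on a common grid of dynamical times.

    trajectories: list of (t, m, t_infall, t_dyn_infall) tuples, where t and
    m are time-sorted arrays for one subhalo/ghost; t_grid: grid of
    (t - t_infall) / t_dyn values. Trajectories that do not cover a given
    grid point are ignored there, which is also the source of the
    resolution bias discussed in the text.
    """
    stacked = np.full((len(trajectories), len(t_grid)), np.nan)
    for i, (t, m, t_inf, t_dyn) in enumerate(trajectories):
        x = (t - t_inf) / t_dyn               # dynamical times since infall
        y = m / np.interp(0.0, x, m)          # mass relative to infall
        inside = (t_grid >= x[0]) & (t_grid <= x[-1])
        stacked[i, inside] = np.interp(t_grid[inside], x, y)
    return np.nanmedian(stacked, axis=0)
```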
The top panel of Fig. 6 shows that tracer and bound masses experience overall similar evolutions, confirming the conclusions of Section 3.2. The bound mass decreases slightly more rapidly directly after infall. The difference in the average mass persists until about four dynamical times after infall, indicating that there is a population of particles that become formally unbound but do move with the subhalo. The tracer masses exhibit a slight spike directly after infall, which we suspect to correspond to a compression of subhaloes. Given that \(M\propto R^{3}\), the corresponding changes in radius are negligibly small. Ghosts follow roughly the same trajectory as normal subhaloes until about three dynamical times after infall, at which point they lose mass more rapidly and come to dominate the overall sample. After about 10 dynamical times, the median ghost mass decreases only very little. Here, the sample is dominated by the long-lived remnants of major mergers that have sunk to the centre of their hosts, with their lifetime determined by the criteria for ending ghosts (Section 2.3.4).
The bottom panel of Fig. 6 shows the logarithmic mass loss rate per dynamical time. The scatter in tracer mass losses is comparable to that in the bound mass but is omitted to avoid crowding. A key first impression is that the scatter is enormous, highlighting that mass loss
Figure 5: Ratio of tracer to bound masses for low-mass (left) and high-mass (right) subhaloes in the WMAP7 sample at \(z=0\). While the distribution has large tails, the 68% and 95% intervals (shaded areas) are relatively tight and overlap with unity. High-threshold definitions such as \(M_{\rm 500c}\) (red) lead to better agreement than low thresholds such as \(M_{\rm 200m}\) (blue) because they include mostly particles that are unambiguously bound. The peak of the \(M_{\rm 500c}\) histogram in the left panel is cut off, highlighting the striking agreement (1-\(\sigma\) range of just over 10%).
rates depend sensitively on the individual orbits and density structure of subhaloes. On average, the strongest mass loss occurs after one crossing time, which may seem surprising since the strongest tidal forces should occur at pericentre rather than at apocentre. However, the actual mass loss does not necessarily occur where the tides are strongest: subhaloes can pass the host centre without losing much mass before spreading out at apocentre (Fig. 1). The differences in the mass loss rates of ghosts and subhaloes are partially a selection effect, given that the orbits of subhaloes influence whether and when they become a ghost. After about six crossing times (three full orbits), the differences disappear. The bound mass behaves somewhat differently directly after infall, as discussed above.
The mass loss rate in the units of Fig. 6 has been described by a number of theoretical models. One popular assumption has been that the orbit-averaged mass loss rate is independent of the time since infall. For example, Jiang & van den Bosch (2016) describe it as
\[\frac{\mathrm{d}M_{\mathrm{sub}}}{\mathrm{d}t}\frac{t_{\mathrm{dyn}}(z)}{M_{\mathrm{sub}}}=-\mathcal{R}\frac{4}{\pi}\left(\frac{M_{\mathrm{sub}}}{M_{\mathrm{host}}}\right)^{\zeta} \tag{2}\]
with \(\mathcal{R}=0.86\) and \(\zeta=0.07\), meaning that it is only a weak function of the sub-to-host mass ratio \(\mu\equiv M_{\mathrm{sub}}/M_{\mathrm{host}}\) (see also Taylor & Babul, 2001; Zentner & Bullock, 2003; van den Bosch et al., 2005; Han et al., 2016). The factor of \(4/\pi\) arises due to a slightly different definition of the dynamical time. The model predicts mass loss rates between \(-0.7\) and \(-1.1\) for mass ratios between \(10^{-3}\) and unity, which roughly matches the strongest loss rates we find during the first few orbits. However, it is clear from Fig. 6 that the average loss rate varies strongly with time, a conclusion that does not change if we bin the subhaloes by the mass ratio \(\mu\). While the mass loss rate after many dynamical times is affected by the aforementioned bias due to small subhaloes being removed, the strongest differences in loss rate occur soon after infall.
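For reference, equation (2) is easy to evaluate numerically; the following snippet reproduces the quoted range of orbit-averaged loss rates.

```python
import numpy as np

def mass_loss_rate(mu, cal_r=0.86, zeta=0.07):
    """Orbit-averaged logarithmic mass loss rate per dynamical time, eq. (2)."""
    return -cal_r * (4.0 / np.pi) * mu**zeta

for mu in (1e-3, 1e-2, 1e-1, 1.0):
    print(f"mu = {mu:7.0e}:  dM/dt * t_dyn / M = {mass_loss_rate(mu):.2f}")
# mu = 1e-3 gives about -0.68 and mu = 1 gives about -1.09, matching the
# quoted range of -0.7 to -1.1.
```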
There are a number of possible reasons for the disagreement between the model of Jiang & van den Bosch (2016) and our data. Most notably, equation (2) was calibrated on the evolution of idealised NFW subhaloes on their first orbit. The model parameters were tuned to match simulated subhalo mass functions and thus implicitly account for the effects of artificial disruption (which removed subhaloes rather than reducing their mass, and is thus not included in Fig. 6). Moreover, the model only accounts for haloes inside of \(R_{\mathrm{vir}}\), whereas numerous "backsplash" haloes orbit outside of that radius (Diemer, 2021). The large scatter in the mass loss rate demonstrates that more accurate models need to take the orbital evolution into account. Such models have been proposed, for example by Green et al. (2021, see also Jiang et al., 2021), whose predictions were trained on high-resolution simulations of individual subhaloes (Ogiya et al., 2019). We cannot easily evaluate their model, however, because it relies on the density profiles of both hosts and subhaloes at all times.
In summary, we find that subhaloes and ghosts typically lose the majority of their mass within the first two orbits (four crossing times). Thereafter, the mass loss rate slowly decreases. We have also tried to quantify the mass loss rate as a function of radius rather than time, but this relation demands a careful reconstruction of the poorly time-resolved subhalo trajectories near pericentre.
## 4 Results II: Impact on Simulation Predictions
The loss of subhaloes affects key predictions of \(N\)-body simulations. In this section, we investigate to what extent the addition of ghosts can mitigate these errors, specifically subhalo mass functions (Section 4.1) and correlation functions (Section 4.2). For the sake of simplicity, we bin only by mass or particle number, glossing over the complex dependencies of subhalo populations on host properties such as concentration and formation time (e.g., Gao et al., 2004).
### Subhalo mass functions
Perhaps the most straightforward metric to quantify the abundance of subhaloes is their mass function (SHMF). In this section, we quantify how much the SHMF increases due to the addition of ghosts. We emphasise that the results do depend on our chosen halo finder, our criteria for which particles are subhalo members, how they are removed, and when ghosts end. Fig. 7 shows the cumulative SHMF as a function of the sub-to-host mass ratio \(\mu\) at \(z=0\). For
Figure 6: Mass evolution before and after infall (top) and logarithmic mass loss rate per dynamical time (bottom) of subhaloes and ghosts according to the bound-only and tracer definitions (using \(M_{\mathrm{200m}}\), but other definitions behave similarly). The light blue curves show the bound mass as computed by Rockstar, and the dark blue lines show the tracer mass for all haloes (solid), non-ghosts (dot-dashed), and ghosts (dashed). The shaded areas show the 68% contours around the bound mass and tracer mass for all subhaloes (the latter only in the top panel to avoid crowding the figure). The trajectories become more and more biased towards higher-mass haloes because haloes near the resolution limit successively fall out of the sample. After a few dynamical times, the sample becomes dominated by ghosts. _Top panel:_ Tracer and bound masses evolve similarly, with the tracer mass capturing slightly more particles on average. _Bottom panel:_ The mass loss rate exhibits extremely large scatter, highlighting that stripping sensitively depends on the individual orbits of subhaloes. The median mass loss rate varies strongly and is highest on average after one crossing time (at first apocentre).
hosts, we use the bound-only \(M_{\rm vir}\) and \(R_{\rm vir}\) from Rockstar. For subhaloes, we can measure mass functions either from bound or tracer masses, but the results are virtually identical (left panel of Fig. 7).
To obtain converged results in different simulations, we impose a minimum number of particles per subhalo (either 25 or 200 in Fig. 7). For a given bin in \(\mu\) and the particle mass \(m_{\rm p}\) of a given simulation, we compute a minimum host halo mass. We then construct the differential mass function, \(dN_{\rm sub}/d\log_{10}(\mu)/N_{\rm host}\), by counting the number of subhaloes in sufficiently resolved hosts. We obtain the cumulative mass function by adding all differential bins above the minimum resolved \(\mu\). The cumulative and differential mass functions end up looking similar because the lowest \(\mu\) bins dominate. We add the halo counts from the different simulations in each bin, which means that the contributions are automatically number-weighted. By overplotting mass functions from individual simulation boxes with different mass resolutions, we conclude that a limit of \(N\geq 200\) particles per subhalo is sufficient for converged results. The \(N\geq 25\) SHMFs in Fig. 7 are decidedly not converged. They appear lower because, in each simulation, subhaloes near the resolution limit drop out, which lowers the SHMF at a given mass ratio. We show the unconverged SHMFs to highlight the effect of ghosts near the resolution limit, which is slightly stronger than in the converged sample.
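The construction of the cumulative SHMF described above might look roughly as follows; the binning scheme and interface are ours, and only the minimum-particle logic follows the text.

```python
import numpy as np

def cumulative_shmf(mu_sub, host_mass_of_sub, host_masses, m_p, mu_bins,
                    n_min=200):
    """Cumulative number of subhaloes above mu, per host.

    mu_sub: sub-to-host mass ratios of the subhaloes; host_mass_of_sub:
    mass of each subhalo's host; host_masses: masses of all selected hosts;
    m_p: particle mass; mu_bins: bin edges in mu (increasing).
    """
    dn = np.zeros(len(mu_bins) - 1)
    for i, (lo, hi) in enumerate(zip(mu_bins[:-1], mu_bins[1:])):
        # Minimum host mass such that a subhalo at this mu contains at
        # least n_min particles.
        m_host_min = n_min * m_p / lo
        n_hosts = np.count_nonzero(host_masses >= m_host_min)
        if n_hosts == 0:
            continue
        sel = (mu_sub >= lo) & (mu_sub < hi) & (host_mass_of_sub >= m_host_min)
        dn[i] = np.count_nonzero(sel) / n_hosts
    # Cumulate from the highest-mu bin downward.
    return np.cumsum(dn[::-1])[::-1]
```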
We focus on a particular host mass bin with \(3.4\times 10^{13}<M_{\rm vir}<1.4\times 10^{14}h^{-1}M_{\odot}\) (corresponding to peak heights between 1.5 and 2). This bin allows for a large range of \(\mu\) to be explored with our simulations: too many subhaloes fall below the resolution cutoff for lower host masses, and the number of possible hosts sharply decreases at higher host masses. We additionally filter by the subhaloes' distances from the host centre in units of \(d_{\rm sub}/R_{\rm vir,host}\). We consider three radial ranges in Fig. 7, which highlight that the addition of ghosts leads to the strongest increase in the SHMF near the host centre. We do not count ghosts within the innermost 5% of \(R_{\rm vir}\) because their continued existence may depend on the criteria for ghost disruption (Section 2.3.4).
Overall, Fig. 7 demonstrates that ghosts increase the SHMF by significant factors. If we include subhaloes anywhere in the host (left panel), the converged and unconverged SHMFs increase by about 20-30% and about 40%, respectively. Within half the host radius, this increase goes up to 25-50% and a factor of almost two for the unconverged SHMF. Within 0.25 host radii, ghosts can dominate the sample depending on \(\mu\). We find similar trends at higher redshift and for different host masses, although more massive hosts have more subhaloes at fixed \(\mu\) (due to the shallower power spectrum at larger scales in \(\Lambda\)CDM). If we included ghosts near the very halo centre, the mass functions would increase even more. Interestingly, the increase in the number of subhaloes is comparable to the roughly 40% of satellites that are artificially added as orphans in the UniverseMachine model (Behroozi et al., 2019). This agreement is a hint that using ghost-augmented catalogues might drastically reduce the need for orphans.
### Correlation functions
The correlation function of haloes (and thus of galaxies) is one of the key summary statistics of large-scale structure (e.g., Zehavi et al., 2002). Subhalo losses have the pernicious effect of reducing the predicted clustering signal in a scale-dependent manner, with the strongest effect at small scales. We quantify the impact of adding ghosts in Fig. 8, which shows the auto-correlation function of haloes within certain mass ranges. We have selected haloes by the indicated ranges in their peak \(M_{\rm vir}\) ever attained (top panel) and computed the correlation function using the Corrfunc code (Sinha & Garrison, 2020). Once again, we have checked that the correlation functions from the individual WMAP7 simulations agree before combining them in each bin (weighted by the number of halo pairs).
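For reference, a correlation-function measurement of this kind could be scripted as below; we assume Corrfunc's theory.xi interface for periodic boxes (Sinha & Garrison, 2020), and the variable names are placeholders.

```python
import numpy as np
from Corrfunc.theory import xi

def halo_autocorrelation(x, y, z, boxsize, r_bins, nthreads=4):
    """Auto-correlation function of a halo sample in a periodic box.

    x, y, z: halo positions in the same units as boxsize and r_bins
    (bin edges). Assumes Corrfunc's theory.xi interface.
    """
    results = xi(boxsize, nthreads, r_bins, x, y, z)
    return np.array(results['xi'])

# Hypothetical usage for one mass bin, with and without ghosts:
# xi_with = halo_autocorrelation(x_all, y_all, z_all, boxsize, r_bins)
# xi_without = halo_autocorrelation(x_nog, y_nog, z_nog, boxsize, r_bins)
# fractional_increase = xi_with / xi_without - 1.0
```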
Fig. 8 demonstrates that ghosts increase the correlation functions more or less independently of halo mass, while the absolute level of clustering increases with mass. The increase asymptotes to a fixed,
Figure 7: Cumulative subhalo mass functions per host for the co-added simulations in the WMAP7 sample at \(z=0\). We select host haloes with \(3.4\times 10^{13}<M_{\rm vir}<1.4\times 10^{14}h^{-1}M_{\odot}\) and sufficient numbers of particles to resolve subhaloes of a given mass ratio \(\mu\) with at least 25 (purple) or 200 particles (blue). The latter requirement is sufficient for the mass functions to converge between simulations. Adding ghosts (solid lines) significantly increases the abundance of subhaloes. We exclude backsplash haloes by limiting the radius to \(R_{\rm vir}\) or fractions thereof, and we excise ghosts in the innermost 5% of \(R_{\rm vir}\) to avoid a dependence of the SHMF on the algorithm for ghost termination. The solid gray line in the left panel corresponds to the purple dashed line but using bound masses according to Rockstar instead of tracer masses; the differences are negligible.
relatively small level at large scales but ramps up to a factor of two at megaparsec scales. In practice, correlation functions are generally measured for galaxies selected by stellar mass. We convert peak mass to stellar mass using an approximate stellar mass-halo mass relation based on the UniverseMachine framework (Behroozi et al., 2019). We add a log-normal scatter of 0.2 dex to the stellar mass, although this uncertainty has little effect on the results. When selecting by stellar mass, the difference due to ghosts largely persists. The effect tops out at an increase of about 75% at small scales.
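The stellar-mass assignment can be sketched as follows; the stellar mass-halo mass relation itself is left as an input callable (in the paper it is based on UniverseMachine), and only the 0.2 dex log-normal scatter step follows the text.

```python
import numpy as np

rng = np.random.default_rng(42)

def stellar_mass_with_scatter(m_peak, smhm_relation, scatter_dex=0.2):
    """Assign stellar masses to haloes given their peak halo mass.

    smhm_relation: callable returning the median stellar mass for a given
    peak halo mass (in the paper this is based on UniverseMachine; here it
    is simply an input). A log-normal scatter of scatter_dex is added in
    log10(M*).
    """
    log_mstar = np.log10(smhm_relation(np.asarray(m_peak)))
    log_mstar = log_mstar + rng.normal(0.0, scatter_dex, size=log_mstar.shape)
    return 10.0**log_mstar
```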
These results highlight that, given the accuracy of modern galaxy surveys, subhalo loss is a roadblock on the way towards accurate predictions from simulations. We leave it to future work to investigate whether ghosts alone can reconcile simulations with observations (see, e.g., Campbell et al., 2018).
## 5 Discussion
We have analysed under what conditions phase-space halo finders lose subhaloes, presented an algorithm to keep following such subhaloes, and shown that adding the resulting ghosts has significant effects on basic predictions of \(N\)-body simulations. In Section 5.1 we further discuss the surprising finding that subhaloes can be lost even though they contain a large number of particles. In Section 5.2 we ponder the numerous limitations of this work and lay out paths towards possible solutions.
### Loss of subhaloes with many particles
When we investigated the particle numbers at which subhaloes are lost (and ghosts created), we found that the tails of the distribution reach seemingly arbitrary numbers (Fig. 4). While we argued that losses are related to the phase-space structure of subhaloes rather than to their particle number, it remains counter-intuitive that such large objects would not show up as an obviously bound entity.
To understand this question better, Fig. 9 visualises the evolution of the lost subhalo with the greatest particle number (18,000 at infall) in our test simulation. The format follows Figs. 1 and 2, but we omit the particles' velocity arrows and decrease the point size to avoid crowding. The particular merger shown in Fig. 9 occurs at \(z\approx 2\) with host and subhalo tracer masses of \(M_{\rm 200m}=8.3\times 10^{13}\) and \(1.7\times 10^{13}\)\(h^{-1}M_{\odot}\), respectively. The resulting mass ratio of 0.2 makes this a "major" merger (given our definition of \(\mu>0.1\)). Accordingly, strong dynamical friction completes the merger process within a few orbits. If we picked a fractionally smaller subhalo, it would typically spend more time orbiting before its disruption.
Major mergers are known to present challenges to halo finders (e.g., Behroozi et al., 2013, 2015). Already at infall, Rockstar and Sparta disagree slightly regarding the direction of the subhalo's velocity vector, but they agree that it has significant, mostly radial velocity. After only one snapshot, Rockstar abandons the subhalo, which is noticeably fractured (panel 2), but Sparta identifies a bound, coherently moving unit (dark blue colours in particle distribution). The ghost remains alive for about 12 dynamical times and thus orbits the host a number of times. During the first few orbits, it loses a significant amount of mass to tidal disruption (namely, between panels 3 and 4). The merger becomes particularly difficult to follow at this time because the particle distribution appears to split into multiple, somewhat bound centres. Thereafter, the mass remains more or less constant at \(\approx 10,000\) particles, and it slowly settles into the centre of the host via dynamical friction. However, even at \(z=1\) (panel 5), our algorithm still detects significant, coherent motion with respect to the host centre. At \(z=0.5\), Sparta stops tracking the ghost because its motions have become negligible.
The example of Fig. 9 demonstrates that the presumed main mechanism for subhalo loss (tidal distortion) can operate regardless of how many particles make up a subhalo. While distorted subhaloes pose a genuinely difficult, and possibly ambiguous, challenge to halo finders, particle tracking is a viable solution to follow such events.
### Limitations and future directions
Given the magnitude of the issues in simulating and detecting subhaloes, we see this work as an exploratory study. We have presented
Figure 8: Correlation function for halo samples binned by peak mass (top panel) and approximate stellar mass (third panel) at \(z=0\), computed without (dashed) and with ghosts (solid). The smaller panels show the fractional increase in \(\xi\) due to ghosts. The lowest mass bin (purple) is resolved only in the smaller simulation boxes. While the absolute level of correlation depends on halo mass, the increase due to ghosts is roughly mass-independent and reaches a factor of two at small radii. In the bottom panels, halo masses were approximately converted to stellar masses using the UniverseMachine model (note that \(M_{*}\) is given in units of \(M_{\odot}\) and halo masses in \(h^{-1}M_{\odot}\)). The trends with stellar mass are similar to those with halo mass.
a particular algorithm, evaluated it based on particular parameter choices, and shown that it significantly improves simulation predictions. However, it is beyond the scope of this paper to fully understand the (numerical or physical) disruption of subhaloes, to systematically test the full array of possible algorithmic choices, or to realistically model the resulting galaxy observables.
Throughout the paper, we have developed an understanding of why, when, and where subhaloes are typically lost by the halo finder. While tidal distortion seems to be the likely main culprit, we cannot exclude the existence of other pathological configurations that may pose particular challenges to particle-tracking algorithms. One way to make progress could be to connect the likelihood of loss to the details of a subhalo's orbit, such as its radial and tangential velocities and the host's density profile (e.g., Wetzel, 2011).
Perhaps the most important limitation of this work is that we had to, for practical purposes, focus on one particular algorithm to track subhalo particles. While we hope that our choices seem sensible, they are by no means unique. We have made the strong assumption that subhaloes can only lose particles, which may be a poor approximation during major mergers or subhalo-subhalo encounters (Behroozi et al., 2015; van den Bosch, 2017). Han et al. (2018) did allow the reintegration of subhalo particles that were initially tracked and found this effect to be somewhat significant. We leave a detailed comparison to their HBT+ code to future work. Similarly, it would be valuable to compare our ghosts to orphan models (e.g., those where a single particle is tracked) or the few-particle cores of Heitmann et al. (2019).
Even within our algorithm, it would be valuable to further study the impact of the free parameters on statistics such as the correlation function. While we have performed numerous test runs and visual inspection to optimise the parameter values, creating full merger trees for a large set of parameters would be a substantial computational effort. One major algorithmic uncertainty is when to end ghosts. We defined reasonable criteria in Section 2.3.4, but different parameters would lead to shorter or longer ghost lifetimes (although restricted to ghosts near the centres of hosts). In the context of pre
Figure 9: Same as Fig. 1, but for the largest subhalo lost in our test simulation. For clarity, we omit the ghost particles’ velocity vectors and reduce the point size. The binding energy (point colour) refers to an arbitrary scale between its minimum and maximum in each panel. The halo finder and Sparta roughly agree on the subhalo properties at infall (\(z=2\), panel 1), but one snapshot later, Rockstar considers the halo to have merged. The reason is the strong tidal disruption experienced by the subhalo, which manifests in an irregular particle distribution (panel 2). Thereafter, the ghost is orbiting with a fairly short period and loses significant material (panel 3). Even after a number of orbits, the ghost still contains about \(10^{4}\) particles and has appreciable velocity with respect to the host centre (panels 4 and 5). Only at \(z=0.5\) does Sparta decide that the ghost has truly merged because its relative velocity has become negligible.
dicting galaxy statistics, the model would have to take into account the baryonic physics of how long satellite galaxies survive relative to their haloes. One path forward could be to train machine learning algorithms on merger outcomes in baryonic simulations (e.g., Petulante et al., 2021).
Our algorithm has the advantage that it generalises to any halo finder, but subhalo identification should not have to rely on post-processing tools such as Sparta. In the long term, integrating particle tracking into a halo finder would allow for easier use and for more self-consistency. For example, one of the main benefits of tracking all particles in subhaloes (as opposed to orphans or cores) is that we can measure their tracer mass. However, this mass definition is currently applied only to subhaloes, which leads to an inconsistency across the infall boundary in time and space (Fig. 6). A comprehensive halo finder based on particle tracking would maintain a set of particles deemed to belong to a certain structure at all times, regardless of host-subhalo relationships. First steps in this direction have already been taken in a number of codes (Han et al., 2018; Springel et al., 2021).
## 6 Conclusions
We have studied the conditions under which friends-of-friends halo finders can lose subhaloes. We have proposed an algorithm to track all of a subhalo's particles and continue it as a "ghost" halo if it is dropped from the halo finder's output. We have shown that adding such ghosts to the halo catalogues has order-unity effects on key summary statistics of large-scale structure. While our study focuses on Rockstar, we expect it to generally extend to other halo finders. Our main conclusions are as follows.
1. Subhaloes can end because they merge into the phase space of their host, because they disrupt numerically or physically, or because the halo finder loses them. Large subhaloes (more than 10% of the host mass) experience a mixture of physical mergers and loss, whereas the majority of smaller subhaloes disrupt or are eventually lost across a wide range of radii. Such losses are not a numerical error in the underlying simulation and persist to arbitrary numbers of particles.
2. While tidal distortion near the host centre is likely a key driver, the loss of subhaloes does not necessarily occur at their orbital pericentre.
3. We have presented a post-processing algorithm to track all particles in subhaloes, which become so-called ghosts if the subhalo is lost.
4. We introduce tracer masses computed only from tracked particles as a robust mass definition and show that they take on values similar to gravitationally bound masses, although they can contain formally unbound particles.
5. Restoring lost subhaloes to halo catalogues significantly changes basic predictions of \(N\)-body simulations, including the subhalo mass function (by about 40%) and the halo correlation function (by up to a factor of two at small scales). The added ghosts should at least partially alleviate the need for artificially adding orphan satellites.
All code and data used in this project have been made publicly available in the hope that they will stimulate further research into the effects of subhalo loss and improved algorithms for halo finding.
## Acknowledgements
We are grateful to Andrew Hearin, Katrin Heitmann, Fangzhou Jiang, Alexie Leauthaud, Frank van den Bosch, and Risa Wechsler for productive discussions. This research was supported in part by the National Science Foundation under Grant numbers AAG 2206688 and PHY-1748958. Our computations were run on the Midway computing cluster provided by the University of Chicago Research Computing Center as well as the DeepThought2 cluster at the University of Maryland. This research made extensive use of the python packages NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020), Matplotlib (Hunter, 2007), and Colossus (Diemer, 2018).
## Data Availability
The Sparta code is publicly available in a BitBucket repository, bitbucket.org/bdiemer/sparta. An extensive online documentation can be found at bdiemer.bitbucket.io/sparta. The Moria catalogue files (which contain ghosts) are available in an hdf5 format at erebos.astro.umd.edu/erebos/moria. A Python module to read these files is included in the Sparta code. The full particle data for the Erebos \(N\)-body simulations are too large to be permanently hosted online, but they are available upon request.
|
2310.04943 | Effective Brauer-Siegel on some curves in $Y(1)^n$ | We establish an effective version of Siegel's lower bounds for class numbers
of imaginary quadratic fields in certain curves in $Y(1)^n$. Our proof goes
through the G-functions method of Yves Andr\'e. | Georgios Papas | 2023-10-07T23:24:22Z | http://arxiv.org/abs/2310.04943v2 | # Two Large Galois orbits conjectures in \(Y(1)^{n}\)
###### Abstract
We establish Large Galois orbits conjectures for points of unlikely intersections of curves in \(Y(1)^{n}\), upon assumptions on the intersection of such curves with the boundary \(X(1)^{n}\backslash Y(1)^{n}\), in both the Andre-Oort and the Zilber-Pink setting.
On the one hand, in the direction of Andre-Oort, our proof is effective for such curves, in contrast to previously known proofs that relied on Siegel's ineffective lower bounds for class numbers of imaginary quadratic fields. On the other hand, in the direction of Zilber-Pink, we obtain as a corollary, building on work of Habegger-Pila and Daw-Orr, new cases of the Zilber-Pink conjecture for curves in \(Y(1)^{n}\).
## 1 Introduction
The main objective of our exposition is to establish lower bounds for the size of Galois orbits of points in curves in the moduli space \(Y(1)^{n}\) coming from unlikely intersections of our curves with special subvarieties of \(Y(1)^{n}\). These results, known as "Large Galois orbits conjectures" in the general field of unlikely intersections, constitute the main difficulty in establishing the validity of unlikely intersections results using the Pila-Zannier method.
The main application of the results we obtain is some cases of the Zilber-Pink conjecture for curves in \(Y(1)^{n}\). The general strategy to establish the Zilber-Pink conjecture in this setting is due to Habegger and Pila, see [1], where the authors reduce the conjecture to a Large Galois orbits conjecture. Their main unconditional result is the following:
**Theorem 1.1** ([1], Theorem 1).: _Let \(C\subset Y(1)^{n}\) be an irreducible curve defined over \(\bar{\mathbb{Q}}\) that is asymmetric and not contained in a special subvariety of \(Y(1)^{n}\)._
_Then the Zilber-Pink conjecture holds for \(C\)._
In the process of establishing Theorem 1.1, Habegger and Pila also reduce the conjecture for any curve \(C\) as above, without the asymmetricity condition, to establishing finiteness of points of intersection of our curve with so called "strongly special" subvarieties of the moduli space \(Y(1)^{n}\). These will be subvarieties that are defined by equations of the form \(\Phi_{M}(x_{i_{1}},x_{i_{2}})=\Phi_{N}(x_{i_{3}},x_{i_{4}})=0\) where \(1\leq i_{j}\leq n\) are such that the sets \(\{i_{1},i_{2}\}\neq\{i_{3},i_{4}\}\) and \(i_{1}\neq i_{2}\), \(i_{3}\neq i_{4}\).
Using this circle of ideas, Daw and Orr establish the following:
**Theorem 1.2** ([1], Theorem 1.3).: _Let \(C\subset Y(1)^{n}\) be an irreducible curve defined over \(\bar{\mathbb{Q}}\) that is not contained in a special subvariety of \(Y(1)^{n}\) and is such that its compactification \(\bar{C}\) in \(X(1)^{n}\) intersects the point \((\infty,\ldots,\infty)\)._
_Then the Zilber-Pink conjecture holds for \(C\)._
Either of the conditions, i.e. the "asymmetricity condition" of Habegger-Pila or the condition about the type of the intersection of the curve with the boundary \(X(1)^{n}\backslash Y(1)^{n}\), is needed in order to establish the aforementioned "Large Galois orbits conjecture". In [10] this is achieved via a height bound due to Siegel and Neron, for which the asymmetricity condition is crucial. On the other hand, in [1], Daw and Orr employ Andre's G-functions method to arrive at certain height bounds at the points of interest. These in turn imply the lower bound on the size of the Galois orbits once coupled with the isogeny estimates of Masser-Wustholz, see [14].
It is this same method introduced by Andre that we use here to go beyond the condition of Daw and Orr about the intersections of our curve with the boundary \(X(1)^{n}\backslash Y(1)^{n}\). We note that the Zilber-Pink conjecture for curves in \(Y(1)^{n}\) has been reduced, thanks to the work of the aforementioned authors, to such height bounds of points of intersection of our curve with strongly special subvarieties as above.
To state our main result in the direction of Zilber-Pink we first introduce a bit of notation.
Let \(C\subset Y(1)^{n}\), where \(n\geq 2\), be a smooth irreducible curve defined over \(\bar{\mathbb{Q}}\) and let \(\bar{C}\) be its Zariski closure in \(X(1)^{n}\). We also let \(s_{0}\in\bar{C}(\bar{\mathbb{Q}})\backslash Y(1)^{n}\) be a fixed point in the boundary \(X(1)^{n}\backslash Y(1)^{n}\).
**Definition 1.3**.: _Let \(C\), \(s_{0}\) be as above and let \(\pi_{i}:X(1)^{n}\to X(1)\) denote the coordinate projections._
_The coordinate \(i\) will be called **smooth for \(C\)** if \(\pi_{i}(s_{0})\in Y(1)\). A smooth coordinate \(i\) for the curve \(C\) will be called a **CM coordinate** for \(C\) if in addition \(\pi_{i}(s_{0})\) is a CM point in \(Y(1)\). Finally, the coordinate \(i\) will be called **singular for \(C\)** if it is not smooth, i.e. if \(\pi_{i}(s_{0})=\infty\)._
**Theorem 1.4**.: _Let \(C\subset Y(1)^{n}\) be a smooth irreducible curve defined over \(\bar{\mathbb{Q}}\) that is not contained in any special subvariety of \(Y(1)^{n}\). Assume that \(C\) is such that all but at most one of its coordinates are singular and its one possibly smooth coordinate is CM._
_Then the Zilber-Pink conjecture holds for \(C\)._
For our most general Zilber-Pink-type statement see Theorem 7.4. In Section 7.2 we also derive as corollaries of Theorem 7.4 further unconditional cases of the Zilber-Pink conjecture for curves in \(Y(1)^{3}\).
We also pursue a new proof of the "Large Galois orbits conjecture" in the context of the Andre-Oort conjecture. Both the Andre-Oort Conjecture and the lower bounds for the size of Galois orbits in this setting are known to hold by work of Pila, see [11]. In particular, the Large Galois orbits conjecture here appears as Proposition 5.8 in [11]. The main tools employed by Pila in this statement are Siegel's lower bounds on class numbers, which are ineffective. The same lower bounds were used by Andre in [1] in establishing the Andre-Oort conjecture for \(\mathbb{A}_{\mathbb{C}}^{2}\). Effective proofs of this result of Andre were later given by Kuhne [12] and Bilu-Masser-Zannier [1], without using the ineffective lower bounds of Siegel.
In this direction we establish the following:
**Theorem 1.5** (Large Galois orbits for Andre-Oort).: _Let \(C\subset Y(1)^{n}\) be an irreducible curve defined over \(\bar{\mathbb{Q}}\) that is not contained in a proper special subvariety of \(Y(1)^{n}\). Assume that there exists at least one CM coordinate for \(C\) or that there exist at least two singular coordinates for \(C\) and let \(K\) be a number field of definition of \(C\)._
_Then there exist effectively computable positive constants \(c_{1}\) and \(c_{2}\), with only \(c_{1}\) depending on the curve \(C\), such that for all CM points \(s=(s_{1},\ldots,s_{n})\in C(\bar{\mathbb{Q}})\) we have_
\[c_{1}\max\{|\operatorname{disc}(\operatorname{End}(E_{s_{k}}))|:1\leq k\leq n \}^{c_{2}}\leq[K(s):\mathbb{Q}].\]
Also using Andre's G-functions method, Binyamini-Masser have announced in [1] effective results of Andre-Oort type in \(\mathcal{A}_{g}\).
### Summary
We start in Section 2 with some general background on Andre's G-functions method. The main result here, Theorem 2.5, encodes in a sense the interplay between G-functions and relative periods of the variation of Hodge structures
given by \(R^{1}f_{*}\mathbb{Q}\), where \(f:\mathcal{X}=\mathcal{E}_{1}\times\ldots\times\mathcal{E}_{n}\to S\) is some 1-parameter family of products of elliptic curves. The main technical parts are heavily based on recent work on the G-functions method, mainly the exposition of [1, 1, 1, 10, 11, 12, 13]. At the end of the day, given a 1-parameter family over a number field as above, we can associate to it a naturally defined family of G-functions which we denote by \(\mathcal{Y}\).
In Section 3, based on our previous work in [1], mainly §7 there, we practically give a description of the so called "trivial relations" among the G-functions in our family. This is achieved working as in [1] via a monodromy argument using the Theorem of the Fixed Part of Andre, see [1].
We continue in Section 4 and Section 5, which constitute the main technical part of our exposition. In these sections we construct relations among the archimedean values of our family of G-functions at, essentially, points \(s\in S(\bar{\mathbb{Q}})\) over which the fiber of the morphism \(f\) above reflects an unlikely intersection in the moduli space \(Y(1)^{n}\). We deal with the CM-case in Section 4, pertinent to Andre-Oort, and the case where we have two isogenies among the coordinates in Section 5, the case pertinent to the Zilber-Pink Conjecture.
We conclude the main technical part of this text in Section 6 by establishing the height bounds needed to deduce our Large Galois orbits statements. To do this it is crucial that we assume that the abelian scheme in question "degenerates", namely that there exists some curve \(S^{\prime}\) with \(S\subset S^{\prime}\) and some point \(s_{0}\in S^{\prime}(\bar{\mathbb{Q}})\) such that the fiber at \(s_{0}\) of the connected Neron model \(\mathcal{X}^{\prime}\) of \(\mathcal{X}\) over \(S^{\prime}\) has some \(\mathbb{G}_{m}\) component. The proof then is done by essentially appealing to the "Hasse Principle" of Andre-Bombieri for the values of G-functions. To do this we show that the relations constructed in the previous sections among the values of our G-functions at points of interest are both "non-trivial", i.e. they do not hold generically, and "global", i.e. they hold for all places with respect to which our point of interest \(s\) is "close" to the point of degeneration \(s_{0}\). This final step, i.e. the globality of our relations, is achieved by an analogue of the original argument of Andre in [1] making use of Gabber's lemma to show that the points we are considering cannot be "close" to \(s_{0}\) with respect to any finite place.
We finish our exposition in Section 7 by noting down the Large Galois orbits statement in the Andre-Oort and the Zilber-Pink setting. We also record some examples of Zilber-Pink type statements that follow readily from our height bounds coupled with the general exposition of [1] and [1].
**Acknowledgments:** The author thanks Yves Andre for answering some questions about his work on G-functions and for pointing him to the direction
of [11]. Throughout the work on this paper, the author was supported by Michael Temkin's ERC Consolidator Grant 770922 - BirNonArchGeom.
### Notation
We introduce some notation that we adopt throughout the text.
Given a number field \(L\) we write \(\Sigma_{L}\) for the places of \(L\), \(\Sigma_{L,\infty}\) for the set of its archimedean places, and respectively \(\Sigma_{L,f}\) for the set of its finite places. Then given a place \(v\in\Sigma_{L}\) we write \(\mathbb{C}_{v}\) for the complete, with respect to \(v\), algebraically closed field corresponding to the place \(v\). We will also write \(\iota_{v}:L\hookrightarrow\mathbb{C}_{v}\) for the embedding of \(L\) in \(\mathbb{C}_{v}\) that corresponds to \(v\).
Given a scheme \(U\) defined over \(L\), where \(L\) is either a number field or \(L=\bar{\mathbb{Q}}\), and \(\iota:L\hookrightarrow\mathbb{C}\) an embedding of \(L\) into \(\mathbb{C}\), we write \(U_{\iota}:=U\times_{L,\iota}\mathbb{C}\) for the base change of \(U\) over \(\mathbb{C}\).
Consider a power series \(y:=\sum_{i=0}^{\infty}y_{i}x^{i}\in L[[x]]\), with \(L\) a number field, and let \(\iota_{v}\) be as above the embedding that corresponds to some place \(v\in\Sigma_{L}\). We write \(\iota_{v}(y(x))\) for the power series \(\sum_{i=0}^{\infty}\iota_{v}(y_{i})x^{i}\in\mathbb{C}_{v}[[x]]\).
Finally, for a family of such power series \(y_{j}\in L[[x]]\) and an embedding \(\iota_{v}:L\hookrightarrow\mathbb{C}_{v}\), we define \(R_{v}(\{y_{1},\ldots,y_{N}\}):=\max\{R_{v}(\iota_{v}(y_{j}))\}\), where \(R_{v}(f)\) for a power series \(f\in\mathbb{C}_{v}[[x]]\) denotes the radius of convergence of \(f\).
## 2 Recollections on the G-functions method
The main object of study in this paper is essentially the transcendence properties of values of certain G-functions that appear either as relative periods of 1-parameter families of products of elliptic curves or are closely related to those in a manner that we soon make specific. In this first section we review this relation in this context.
### Our setting
Instead of working with a curve \(C\subset X(1)^{n}\), in the majority of our exposition we will deal with a slightly different setting modeled towards applying Andre's G-functions method. We dedicate this subsection to recalling this setup and the main conventions we make.
We consider \(S^{\prime}\) a smooth, not necessarily projective, geometrically irreducible curve defined over a number field \(K\), a point \(s_{0}\in S^{\prime}(K)\), and set \(S:=S^{\prime}\backslash\{s_{0}\}\). We also assume that we are given an abelian scheme of the form \(f:\mathcal{X}=\mathcal{E}_{1}\times\ldots\times\mathcal{E}_{n}\to S\), where for each \(1\leq i\leq n\) the morphism \(f_{i}:\mathcal{E}_{i}\to S\) defines an elliptic curve over \(S\), the morphism also being defined over \(K\).
For each \(1\leq i\leq n\) we write \(f^{\prime}_{i}:\mathcal{E}^{\prime}_{i}\to S^{\prime}\) for the connected Neron model of \(\mathcal{E}_{i}\) over \(S^{\prime}\) and denote their product by
\[f^{\prime}:\mathcal{X}^{\prime}:=\mathcal{E}^{\prime}_{1}\times\ldots\times \mathcal{E}^{\prime}_{n}\to S^{\prime}.\]
Note that \(\mathcal{X}^{\prime}\) will also be the connected Neron model of \(\mathcal{X}\) over \(S^{\prime}\) by standard properties of Neron models.
With Definition 1.3 in mind we introduce the following:
**Definition 2.1**.: _Let \(S^{\prime}\), \(s_{0}\), and \(f^{\prime}\) be as above. The coordinate \(i\) is said to be smooth for \(S^{\prime}\) if \((f^{\prime}_{i})^{-1}(s_{0})\) is an elliptic curve. A smooth coordinate \(i\) for the curve \(S^{\prime}\) will be called a CM coordinate for \(S^{\prime}\) if in addition \((f^{\prime}_{i})^{-1}(s_{0})\) is a CM elliptic curve. On the other hand, the coordinate \(i\) is said to be singular for \(S^{\prime}\) if it is not smooth, i.e. if \((f^{\prime}_{i})^{-1}(s_{0})\simeq\mathbb{G}_{m}\)._
**Assumption 2.2**.: _The local monodromy around \(s_{0}\) acts unipotently on the fibers of \(R^{1}(f_{k})_{*}\mathbb{Q}\) in some analytic neighborhood of \(s_{0}\), for all singular coordinates \(k\) for \(S^{\prime}\)._
#### 2.1.1 Relative periods
Let us now fix a place \(v\in\Sigma_{K,\infty}\) with corresponding embedding \(\iota_{v}:K\hookrightarrow\mathbb{C}\). We then get a canonical isomorphism
\[H^{1}_{DR}(\mathcal{X}/S)\otimes_{\mathcal{O}_{S}}\mathcal{O}_{S_{v}}\to R^{1 }(f_{v})_{*}(\mathbb{Q})\otimes_{\mathbb{Q}}\mathcal{O}_{S_{v}}. \tag{1}\]
In our particular situation, i.e. that of an \(n\)-tuple of elliptic curves over \(S\), we can write this in the following equivalent form
\[H^{1}_{DR}(\mathcal{E}_{1}/S)\oplus\ldots\oplus H^{1}_{DR}(\mathcal{E}_{n}/S) \rightarrow(R^{1}(f_{1,v})_{*}(\mathbb{Q})(1)\oplus\ldots\oplus R^{1}(f_{n,v })_{*}(\mathbb{Q})(1))^{\vee}\otimes_{\mathbb{Q}}\mathcal{O}_{S_{v}}, \tag{2}\]
where we think of \(R^{1}(f_{k,v})_{*}\mathbb{Q}(1)\) as the variation of Hodge structures whose fibers are the homology groups of the corresponding fibers of \(f_{k}\). We also note that the isomorphism (2) is compatible with the splittings.
Let us choose for each \(1\leq k\leq n\) a basis of sections \(\{\omega_{2k-1},\omega_{2k}\}\) of \(H^{1}_{DR}(\mathcal{E}_{k}/S)|_{U}\) over some affine open \(U\), a trivializing frame \(\Gamma_{k,v}=\{\gamma_{2k-1,v},\gamma_{2k,v}\}\) of \(R^{1}(f_{k,v})_{*}\mathbb{Q}|_{V}\) over some simply connected \(V\subset U_{v}\), and set \(\Gamma_{v}:=\Gamma_{1,v}\sqcup\ldots\sqcup\Gamma_{n,v}\), which will be a trivializing frame of the local system \((R^{1}(f_{1,v})_{*}(\mathbb{Q})(1)\oplus\ldots\oplus R^{1}(f_{n,v})_{*}(\mathbb{Q})(1))^{\vee}|_{V}\).
For each \(k\), associated to the above, we then get a matrix of relative periods of \(V\) which we denote by
\[\mathcal{P}_{\Gamma_{k,v}}:=\begin{pmatrix}\frac{1}{2\pi i}\int_{\gamma_{2k-1,v }}\omega_{2k-1}&\frac{1}{2\pi i}\int_{\gamma_{2k,v}}\omega_{2k-1}\\ \frac{1}{2\pi i}\int_{\gamma_{2k-1,v}}\omega_{2k}&\frac{1}{2\pi i}\int_{\gamma _{2k,v}}\omega_{2k}\end{pmatrix} \tag{3}\]
which encodes the canonical isomorphism \(H^{1}_{DR}(\mathcal{E}_{k}/S)\otimes\mathcal{O}_{S_{v}}\to(R^{1}(f_{k,v})_{*} \mathbb{Q}(1))^{\vee}\otimes\mathcal{O}_{S_{v}}\) restricted to the open analytic set \(V\).
Similarly, associated to the chosen basis \(\{\omega_{i}:1\leq i\leq 2n\}\) and the trivializing frame \(\Gamma_{v}\) as above, we get a matrix of relative periods encoding the isomorphism (2) which we will denote by \(\mathcal{P}_{\Gamma_{v}}\). We note that by construction of our trivializing frame and basis \(\{\omega_{i}\}\) this matrix will be block diagonal, since the isomorphism in question respects the splitting in de Rham and Betti cohomology given by \(\mathcal{X}=\mathcal{E}_{1}\times\ldots\times\mathcal{E}_{n}\), and the diagonal blocks will be the matrices \(\mathcal{P}_{\Gamma_{k,v}}\) above.
**Remark 2.3**.: _We have opted for a notation that does not mention either the choice of a basis or that of a simply connected \(V\) over which we get a trivializing frame. The reason is that throughout this text we will consider a fixed such basis \(\omega_{i}\), appropriately chosen, and care more about encoding the family of relative periods that comes out of (2) as one varies the chosen place \(v\in\Sigma_{K,\infty}\)._
### G-functions and relative periods
In this subsection we momentarily abandon the setting in Section 2.1 that we adopt almost throughout the text. Namely, we consider a fixed \(f:\mathcal{X}^{\prime}\to S^{\prime}\), this time defined over \(\bar{\mathbb{Q}}\), \(s_{0}\in S^{\prime}(\bar{\mathbb{Q}})\) with the same properties as in Section 2.1, and an embedding \(\iota:\bar{\mathbb{Q}}\to\mathbb{C}\). Throughout this text we also fix a local parameter \(x\) of \(S^{\prime}\) at \(s_{0}\). Later on, see Section 2.2.3, we will be more careful about this choice when we review what we call a "good cover of the curve \(S^{\prime}\)".
**Definition 2.4**.: _We call a matrix \(A\in M_{r_{1}\times r_{2}}(\bar{\mathbb{Q}}[[x]])\) a **G-matrix** if all of its entries are G-functions._
**Theorem 2.5**.: _There exists a basis of sections \(\{\omega_{i}:1\leq i\leq 2n\}\) of \(H^{1}_{DR}(\mathcal{X}/S)\) over \(U:=U^{\prime}\backslash\{s_{0}\}\), where \(U^{\prime}\) is some open affine neighborhood of \(s_{0}\), and an associated family of G-matrices \(Y_{G,k}=(y_{i,j,k})\in\operatorname{GL}_{2}(\bar{\mathbb{Q}}[[x]])\) such that, writing \(\mathcal{Y}:=\{y_{i,j,k}:1\leq i,j\leq 2,1\leq k\leq n\}\), for every \(s\in U(\bar{\mathbb{Q}})\) with \(|x(s)|_{\iota}<\min\{1,R_{\iota}(\mathcal{Y})\}\) we have that_
1. _if_ \(k\) _is a smooth coordinate for_ \(S^{\prime}\) _then there exists a symplectic trivializing frame_ \(\Gamma_{k,\iota}=\{\gamma_{2k-1,\iota},\gamma_{2k,\iota}\}\) _of_ \(R^{1}(f_{k,\iota})_{*}(\mathbb{C})|_{V}\) _over some small enough analytic neighborhood_ \(V\subset S_{\iota}\) _of_ \(s\) _such that_ \[\mathcal{P}_{\Gamma_{k,\iota}}(s)=\iota(Y_{G,k}(x(s)))\cdot\Pi_{k,\iota},\] (4) _where_ \(\Pi_{k,\iota}\in\mathrm{GL}_{2}(\mathbb{C})\) _is such that, if the coordinate_ \(k\) _is furthermore CM for_ \(S^{\prime}\)_, it is of the form_ \[\Pi_{k,\iota}=\begin{pmatrix}\frac{\varpi_{k,\iota}}{2\pi i}&0\\ 0&\varpi_{k,\iota}^{-1}.\end{pmatrix}\] (5)
2. _if_ \(k\) _is a singular coordinate for_ \(S^{\prime}\) _there exist_ \(d_{k}\)_,_ \(d^{\prime}_{k}\in\bar{\mathbb{Q}}\) _independent of the chosen embedding_ \(\iota\)_, and a symplectic trivializing frame_ \(\Gamma_{k,\iota}=\{\gamma_{2k-1,\iota},\gamma_{2k,\iota}\}\) _of_ \(R^{1}(f_{k,\iota})_{*}\mathbb{Q}|_{V}\) _over some small enough analytic neighborhood_ \(V\subset S_{\iota}\) _of_ \(s\) _such that_ \[\mathcal{P}_{\Gamma_{k,\iota}}(s)=\iota(Y_{G,k}(x(s)))\cdot\Pi_{k,\iota}\cdot \begin{pmatrix}1&N_{k}\log\iota(x(s))\\ 0&1\end{pmatrix},\] (6) _where_ \(N_{k}\in\mathbb{Q}\) _and_ \(\Pi_{k,\iota}\in\mathrm{GL}_{2}(\mathbb{C})\) _is such that its first column is_ \(\begin{pmatrix}\iota(d_{k})\\ \iota(d^{\prime}_{k})\end{pmatrix}\)_._
**Remarks 2.6**.: 1. _We stress that the choices of the bases and the various trivializations in the previous theorem are independent of the point \(s\in S(\bar{\mathbb{Q}})\) in question but depend on the "base" point \(s_{0}\). The various frames will also obviously depend on the choice of the chosen embedding \(\iota\). We return to this last dependence in the next subsection._
2. _From the previous theorem and the remarks in Section 2.1.1 we know that the relative period matrix \(\mathcal{P}_{\Gamma_{\iota}}\) associated to the morphism \(f:\mathcal{X}\to S\), the embedding \(\iota\), the basis \(\{\omega_{i}:1\leq i\leq 2n\}\), and the frame \(\Gamma_{\iota}=\Gamma_{1,\iota}\sqcup\ldots\sqcup\Gamma_{n,\iota}\) will be block diagonal with diagonal blocks the above \(\mathcal{P}_{\Gamma_{k,\iota}}\) which are described as in Theorem 2.5._
3. _We expect that this result is known to experts in the area. Indeed the ideas here appear already in_ _[_1_]_ _and_ _[_1_]_ _though the theorem itself is not expressly stated in this format._
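The following classical example, which we record only as an illustration and do not use later, may help the reader keep the two shapes (4) and (6) in mind. Consider the Legendre family \(y^{2}=x(x-1)(x-\lambda)\) over the \(\lambda\)-line, with \(s_{0}\) the point \(\lambda=0\), where the fiber degenerates to a nodal cubic, so that the coordinate is singular in the sense of Definition 2.1. The Picard-Fuchs equation at \(\lambda=0\) is the classical hypergeometric equation, one solution of which is the G-function \[F(\lambda)={}_{2}F_{1}\left(\tfrac{1}{2},\tfrac{1}{2};1;\lambda\right)=\sum_{i=0}^{\infty}\left(\binom{2i}{i}4^{-i}\right)^{2}\lambda^{i}\in\mathbb{Q}[[\lambda]],\] while a second solution has the form \(F(\lambda)\log\lambda+G(\lambda)\) with \(G\) again a G-function. Up to normalization the periods of the family are expressed in terms of these two solutions, which is exactly the shape appearing in (6), the factor \(N_{k}\log\iota(x(s))\) reflecting the unipotent local monodromy around \(\lambda=0\). For a coordinate that remains smooth at \(s_{0}\) no logarithm appears and one is in the situation of (4).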
We start with the following fundamental lemma about periods of CM elliptic curves that we will need in the proof of the above theorem.
**Lemma 2.7**.: _Let \(E/L\) be an elliptic curve defined over a number field \(L\) and assume that \(F:=\operatorname{End}_{\bar{\mathbb{Q}}}^{0}(E)=\operatorname{End}_{L}^{0}(E)\). We fix an embedding \(\iota_{v}:L\hookrightarrow\mathbb{C}\), corresponding to some \(v\in\Sigma_{L,\infty}\), let \(V_{dR}:=H^{1}_{DR}(E/L)\) and \(V_{\mathbb{Q}}:=H_{1}(E_{v},\mathbb{Q})\), and let \(\hat{F}\) be the Galois closure of \(F\)._
_Then there exist_
1. _a symplectic basis_ \(\omega_{1}\)_,_ \(\omega_{2}\) _of_ \(V_{dR}\otimes_{L}L\hat{F}\)_, and_
2. _a symplectic basis_ \(\gamma_{1}\)_,_ \(\gamma_{2}\) _of_ \(V_{\mathbb{Q}}\otimes L\hat{F}\)_,_
_such that the period matrix of \(E\) with respect to these choices is of the form_
\[\begin{pmatrix}\frac{\varpi_{v}}{2\pi i}&0\\ 0&\varpi_{v}^{-1},\end{pmatrix} \tag{7}\]
_for some \(\varpi_{v}\in\mathbb{C}\)._
Proof.: Via the action of \(F\) on \(V_{dR}\) and \(V_{\mathbb{Q}}\) we get splittings of \(V_{dR,L\hat{F}}\) and \(V_{\mathbb{Q},L\hat{F}}\) which are compatible via Grothendieck's comparison isomorphism
\[P:V_{dR}\otimes_{L}\mathbb{C}\to(V_{\mathbb{Q}})^{\vee}\otimes_{\mathbb{Q}} \mathbb{C}.\]
In more detail, on the one hand we have the splitting on the de Rham side:
\[V_{dR}\otimes_{L}L\hat{F}=W_{dR}^{\sigma_{1}}\oplus W_{dR}^{\sigma_{2}}, \tag{8}\]
and the splitting on the Betti side:
\[V_{\mathbb{Q}}\otimes L\hat{F}=W_{\sigma_{1}}\oplus W_{\sigma_{2}}, \tag{9}\]
where \(\sigma_{i}:F\hookrightarrow\mathbb{C}\) are the two embeddings of \(F\) in \(\mathbb{C}\). Here, following the notation in [1] Ch. X, we denote by \(W_{\sigma}\) and \(W_{dR}^{\sigma}\) the subspaces of the respective vector space where \(F\) acts via the embedding \(\sigma:F\to\mathbb{C}\).
By Lemma 8.2 of [11], also its "dual", we have that there exist the following:
1. a symplectic basis \(\omega_{1}\), \(\omega_{2}\) of \(V_{dR,L\hat{F}}\) for which we furthermore have that \(\omega_{i}\) spans \(W_{dR}^{\sigma_{i}}\), and
2. \(\gamma_{1}\), \(\gamma_{2}\) a symplectic basis of \(V_{\mathbb{Q},L\hat{F}}\) such that \(\gamma_{i}\) spans the subspace \(W_{\sigma_{i}}\).
Note that we have
\[P(\omega_{i})=(\frac{1}{2\pi i}\int_{\gamma_{1}}\omega_{i})\gamma_{1}^{\vee}+ (\frac{1}{2\pi i}\int_{\gamma_{2}}\omega_{i})\gamma_{2}^{\vee},\ i=1,2. \tag{10}\]
One then has from the compatibility of the action of \(F\) with this isomorphism, that for every \(\lambda\in F\):
\[P(\lambda\omega_{i})=(\frac{1}{2\pi i}\int_{\gamma_{1}}\omega_{i})\sigma_{1}( \lambda)\gamma_{1}^{\vee}+(\frac{1}{2\pi i}\int_{\gamma_{2}}\omega_{i})\sigma_{2 }(\lambda)\gamma_{2}^{\vee},\ i=1,2. \tag{11}\]
On the other hand we have from the definition of the \(\omega_{i}\) that
\[P(\lambda\omega_{i})=\sigma_{i}(\lambda)P(\omega_{i}). \tag{12}\]
Since all of the above holds for any \(\lambda\in F\), comparing coefficients in (11) and (12) we get
\[\frac{1}{2\pi i}\int_{\gamma_{1}}\omega_{2}=\frac{1}{2\pi i}\int_{\gamma_{2}} \omega_{1}=0. \tag{13}\]
Now set \(\varpi_{1}:=\frac{1}{2\pi i}\int_{\gamma_{1}}\omega_{1}\) and \(\varpi_{2}:=\frac{1}{2\pi i}\int_{\gamma_{2}}\omega_{2}\). Then the Legendre relation gives \(\varpi_{1}\cdot\varpi_{2}=\frac{1}{2\pi i}\). Setting \(\varpi_{v}:=\int_{\gamma_{1}}\omega_{1}=2\pi i\,\varpi_{1}\), we thus get \(\varpi_{2}=\varpi_{v}^{-1}\), so that the period matrix with respect to these choices of bases is of the form
\[\begin{pmatrix}\frac{\varpi_{v}}{2\pi i}&0\\ 0&\varpi_{v}^{-1},\end{pmatrix} \tag{14}\]
as we wanted.
**Remark 2.8**.: _We note that for the \(\varpi_{v}\in\mathbb{C}\) it is known that \(tr.d._{\bar{\mathbb{Q}}}(\varpi_{v},\pi)=2\). This follows from Grothendieck's period conjecture which is known here by work of Chudnovsky, see [1, 2]._
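The following well-known example illustrates Lemma 2.7 and Remark 2.8; it plays no role in the arguments that follow. Take \(E:y^{2}=x^{3}-x\) over \(L=\mathbb{Q}(i)\), so that \(F=\operatorname{End}_{L}^{0}(E)=\mathbb{Q}(i)\), acting through the order-\(4\) automorphism \((x,y)\mapsto(-x,iy)\), and \(L\hat{F}=\mathbb{Q}(i)\). The two embeddings \(\sigma_{1},\sigma_{2}:\mathbb{Q}(i)\hookrightarrow\mathbb{C}\) split \(V_{dR}\) and \(V_{\mathbb{Q}}\otimes\mathbb{Q}(i)\) into eigenspaces exactly as in (8) and (9), and choosing eigenvectors as in the lemma puts the period matrix in the diagonal form (7), with \(\varpi_{v}\) equal, up to a non-zero algebraic factor, to a period of the holomorphic differential on \(E\), i.e. to a lemniscatic period. The theorem of Chudnovsky recalled in Remark 2.8 then gives \(tr.d._{\bar{\mathbb{Q}}}(\varpi_{v},\pi)=2\) in this case as well.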
Proof of Theorem 2.5.: Part (2) is [1], Ch. IX, §4, Theorem 1 when \(g=1\). We note that the explicit description of the period matrix is inherent in the proof. See also the proof of Claim 3.7 in [10] where this explicit description appears. We note that Assumption 2.2 is needed here, see the proof of Theorem 3.1 of [10] for more details.
The matrix \(Y_{G,k}(x)\) will be the normalized uniform solution of the G-operator \(\vartheta-G_{k}\), where \(\vartheta:=x\frac{d}{dx}\) and \(G_{k}=(g_{i,j,k})\) is given by \(\nabla_{\vartheta}(\omega_{i,k})=\sum_{j=1}^{2}g_{i,j,k}\omega_{j,k}\), where \(\nabla\) denotes the Gauss-Manin connection in question. The fact that \(G_{k}\) is a G-operator follows from the proof of the Theorem in the appendix of Chapter V in [1], since in this case the operator corresponds to a geometric differential equation. That the entries of the matrix \(Y_{G,k}\) are G-functions now follows from the Corollary in Ch. V, §6.6 of loc. cit..
So for the singular coordinates we choose the basis \(\{\omega_{2k-1},\omega_{2k}\}\) and a symplectic trivializing frame \(\Gamma_{k,\iota}\) of \(H^{1}_{DR}(\mathcal{E}_{k}/S)|_{U}\) and \(R^{1}(f_{k,\iota})_{*}(\mathbb{Q})(1)|_{V}\) respectively as specified in Theorem 3.1 of [10].
Now we move on to the proof of (1) and the smooth coordinates for the curve \(S^{\prime}\). For the non-CM smooth coordinates our work is simpler. Namely we may choose any symplectic basis of \(\{\omega_{2k-1},\omega_{2k}\}\) of \(H^{1}_{DR}(\mathcal{E}_{k}^{\prime}/S^{\prime})\) over some neighborhood \(U^{\prime}\) of \(s_{0}\) and any symplectic frame of \(R^{1}(f_{k,\iota}^{\prime})_{*}(\mathbb{C})(1)|_{V}\) for some small enough analytic neighborhood \(V\) of \(s_{0}\).
To see this, first of all note that in this case the differential system \(\vartheta-G_{k}\) that arises as above is such that \(G_{k}(0)=0\). Indeed, in this case the morphisms \(f_{k}^{\prime}:\mathcal{E}_{k}^{\prime}\to S^{\prime}\) are in fact smooth and proper. Therefore, \(G_{k}(0)\) which coincides with the residue of the connection at the point \(s_{0}\) will be \(0\).
Now any solution of the system \(\vartheta-G_{k}\) will be of the form \(X_{k}=Y_{G,k}\cdot\Pi_{k,\iota}\) where \(\Pi_{k,\iota}\in\mathrm{GL}_{2}(\mathbb{C})\), see [1] Ch. III, §1. Since \(\mathcal{P}_{\Gamma_{k,\iota}}\) is such a solution for any choice of \(\Gamma_{k,\iota}\) we are done. We note that by construction we will also have
\[\mathcal{P}_{\Gamma_{k,\iota}}(0)=\Pi_{k,\iota} \tag{15}\]
where \(\Pi_{k,\iota}=\begin{pmatrix}\frac{1}{2\pi i}\int_{\gamma_{2k-1,\iota}}(\omega_{2k-1})_{s_{0}}&\frac{1}{2\pi i}\int_{\gamma_{2k,\iota}}(\omega_{2k-1})_{s_{0}}\\ \frac{1}{2\pi i}\int_{\gamma_{2k-1,\iota}}(\omega_{2k})_{s_{0}}&\frac{1}{2\pi i}\int_{\gamma_{2k,\iota}}(\omega_{2k})_{s_{0}}\end{pmatrix}\) will be the period matrix of the elliptic curve \((\mathcal{E}_{k}^{\prime})_{s_{0}}\).
Let us finally look at the CM coordinates. Using Lemma 2.7 we can then find a symplectic basis \(\{\omega_{2k-1},\omega_{2k}\}\) of \(H^{1}_{DR}(\mathcal{E}_{k}^{\prime}/S^{\prime})|_{U^{\prime}}\) and a symplectic trivializing frame of the local system \(R^{1}(f_{k,\iota}^{\prime})_{*}(\mathbb{C})(1)|_{V^{\prime}}\) in a small enough neighborhood \(V^{\prime}\) of \(s_{0}\) as above with the properties we wanted. Whence the description of \(\Pi_{k,\iota}\) when \(k\) is a CM coordinate follows.
Finally, in both cases, i.e. CM or non-CM smooth coordinate, the fact that the matrix \(Y_{G,k}\) is a G-matrix follows from the same exact argument as in the singular case above.
#### 2.2.1 Family of G-functions associated to \(s_{0}\)
Let \(f:\mathcal{X}^{\prime}\to S^{\prime}\) defined over \(\bar{\mathbb{Q}}\) and \(s_{0}\in S^{\prime}(\bar{\mathbb{Q}})\) be as above.
Our first order of business is to associate from now on a family of G-functions to the point \(s_{0}\). The "natural" choice, namely associating to \(s_{0}\) the entire family \(\mathcal{Y}\) as defined in Theorem 2.5, turns out to give various complications down the line. First of all, only the first column of the relative periods \(\mathcal{P}_{\Gamma_{k,\iota}}\) with \(k\) singular will play an actual role in what we need. Secondly, the so called "trivial relations" of the family \(\mathcal{Y}\) are messier to describe.
With these goals in mind, let us fix for now a singular coordinate \(k\). Then from Theorem 2.5 we know that locally near \(s_{0}\)
\[\mathcal{P}_{k,\iota}=\iota(Y_{G,k})\cdot\Pi_{k,\iota}\cdot\begin{pmatrix}1&N_{k}\log(\iota(x))\\ 0&1\end{pmatrix}. \tag{16}\]
In particular for our choice, in the proof of Theorem 2.5, of basis \(\omega_{2k-1}\), \(\omega_{2k}\) of \(H^{1}_{DR}(\mathcal{E}_{k}/S)|_{U}\) and trivialization \(\Gamma_{k,\iota}\) of the local system \(R^{1}(f_{k,\iota})_{*}\mathbb{Q}(1)\) the first column of the matrix \(\mathcal{P}_{k,\iota}\) will be of the form
\[\begin{pmatrix}\frac{1}{2\pi i}\int_{\gamma_{2k-1,\iota}}\omega_{2k-1}\\ \frac{1}{2\pi i}\int_{\gamma_{2k-1,\iota}}\omega_{2k}\end{pmatrix}=\begin{pmatrix} \iota(d_{k}y_{1,1,k}(x)+d^{\prime}_{k}y_{1,2,k}(x))\\ \iota(d_{k}y_{2,1,k}(x)+d^{\prime}_{k}y_{2,2,k}(x)).\end{pmatrix} \tag{17}\]
**Lemma 2.9**.: _Let \(f_{k}:\mathcal{E}_{k}\to S\) be a singular coordinate for some \(f:\mathcal{X}\to S\) as above. Then there exists a basis \(\omega^{\prime}_{2k-1}\), \(\omega^{\prime}_{2k}\) of \(H^{1}_{DR}(\mathcal{E}_{k}/S)|_{U}\), where \(U=U^{\prime}\backslash\{s_{0}\}\) for some possibly smaller affine neighborhood \(U^{\prime}\) of \(s_{0}\) as before, such that_
1. _with respect to the trivializing frame_ \(\Gamma_{k,\iota}\) _chosen in Theorem_ 2.5 _the entries of the first column of the relative period matrix_ \(\mathcal{P}_{k,\iota}\) _are G-functions, and_
2. _the matrix of the polarization on_ \(H^{1}_{DR}(\mathcal{E}_{k}/S)|_{U}\) _in terms of this basis is of the form_ \[e_{k}\cdot\begin{pmatrix}0&1\\ -1&0\end{pmatrix},\] (18) _with_ \(e_{k}\in\mathcal{O}_{S^{\prime}}(S)^{\times}\)_._
Proof.: We note that the basis \(\omega_{i}\) chosen in the proof of Theorem 2.5, is in fact the restriction on \(U:=U^{\prime}\backslash\{s_{0}\}\) of a basis, which we denote by the same notation, of the vector bundle \(\mathcal{E}|_{U^{\prime}}\), where \(\mathcal{E}:=H^{1}_{DR}(\mathcal{E}_{k}/S)^{can}\) is the canonical extension of the vector bundle \(H^{1}_{DR}(\mathcal{E}_{k}/S)\) to \(S^{\prime}\).
By the proof of Lemma 6.7 of [1] there exist sections \(\omega_{1}\), \(\eta_{1}\) of \(\mathcal{E}|_{U^{\prime}}\), upon possibly replacing the original \(U^{\prime}\) by a smaller affine open neighborhood of \(s_{0}\) in \(S^{\prime}\) and letting \(U=U^{\prime}\backslash\{s_{0}\}\) as before, such that \((\omega_{1})|_{U}\), \((\eta_{1})_{U}\) is a basis of \(H^{1}_{DR}(\mathcal{E}_{k}/S)|_{U}\), and \(2\) above holds.
Note that, by construction, there exists a matrix \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in M_{2\times 2}(\mathcal{O}(U^{\prime}))\) such that
\[\omega_{1}=a\omega_{2k-1}+b\omega_{2k}\text{, and }\eta_{1}=c\omega_{2k-1}+d \omega_{2k} \tag{19}\]
With respect to the basis \(\{\omega_{1},\eta_{1}\}\) and the frame \(\Gamma_{k,\iota}\) the first column of the relative period matrix is of the form
\[\begin{pmatrix}\iota(a)\iota(F_{1})(x)+\iota(b)\iota(F_{2})(x)\\ \iota(c)\iota(F_{1})(x)+\iota(d)\iota(F_{2})(x)\end{pmatrix}, \tag{20}\]
where \(F_{i}(x)\) are the entries of (17), which will be G-functions by the preceding discussion.
The Lemma on page 26 of [1] and the Proposition on page 27 of loc. cit. show that \(a\), \(b\), \(c\), \(d\) have power series expansions in \(x\) that are G-functions. From Theorem \(D\) in the introduction of loc. cit. the entries of (20) will then be G-functions. We thus set \(\omega_{2k-1}^{\prime}:=\omega_{1}\) and \(\omega_{2k}^{\prime}:=\eta_{1}\).
**Definition 2.10**.: _We denote by \(\mathcal{Y}_{s_{0}}\) the family of G-functions that consists of the following power series:_
1. _the entries of the G-matrices_ \(Y_{G,k}:=(y_{i,j,k}(x))\) _appearing in Theorem_ 2.5 _for all smooth coordinates_ \(k\) _of_ \(S^{\prime}\)_, and_
2. _the entries of the first column, which we denote by_ \(\begin{pmatrix}y_{1,1,k}(x)\\ y_{2,1,k}(x)\end{pmatrix}\)_, of the relative period matrices_ \(\mathcal{P}_{\Gamma_{k,\iota}}\) _with respect to the bases of Lemma_ 2.9_._
_We call this the **family of G-functions associated locally to the point \(s_{0}\)**._
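To fix ideas, if for instance \(n=3\) and the coordinates \(1\) and \(2\) are smooth for \(S^{\prime}\) while the coordinate \(3\) is singular, then \[\mathcal{Y}_{s_{0}}=\{y_{i,j,k}:1\leq i,j\leq 2,\ k=1,2\}\cup\{y_{1,1,3},\,y_{2,1,3}\},\] a family of ten G-functions. In general \(\mathcal{Y}_{s_{0}}\) consists of \(4a+2b\) power series, where \(a\) and \(b\) denote the number of smooth and singular coordinates of \(S^{\prime}\) respectively.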
#### 2.2.2 Independence from archimedean embedding
Let us return to our original notation with \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) defined over some number field \(K\), \(s_{0}\in S^{\prime}(K)\), as in Section 2.1 satisfying Assumption 2.2. Let us also fix for now a local parameter \(x\) of \(S^{\prime}\) at \(s_{0}\).
Let \(\{\omega_{i}:1\leq i\leq 2n\}\) be the basis of \(H^{1}_{DR}(\mathcal{X}/S)\) appearing in Theorem 2.5 with the \(\omega_{i}\) that correspond to singular coordinates replaced by the \(\omega_{i}^{\prime}\) of Lemma 2.9. From Theorem 2.5 and Lemma 2.9, we then know that, upon fixing an embedding \(\iota:\bar{\mathbb{Q}}\hookrightarrow\mathbb{C}\), G-functions appear in a specific way in the description of the relative periods of \(f_{\bar{\mathbb{Q}}}:\mathcal{X}_{\bar{\mathbb{Q}}}\to S_{\bar{\mathbb{Q}}}\) close to the point \(s_{0}\). Since these G-functions are solutions to various geometric differential equations the field generated by their coefficients over \(\mathbb{Q}\) is in fact a number field. Let us denote this field by \(K_{\mathcal{Y}}\).
We define the number field \(K_{f^{\prime}}\) to be the compositum of the following fields:
1. the field \(K\) over which our setup is defined,
2. the Galois closures \(\hat{F}_{k}\) of the CM-fields \(F_{k}\) associated to the CM coordinates of \(S^{\prime}\),
3. the number fields \(\mathbb{Q}(d_{k},d^{\prime}_{k})\) associated to the constants \(d_{k}\), \(d^{\prime}_{k}\in\bar{\mathbb{Q}}\) associated themselves to each singular coordinate of the curve \(S^{\prime}\), and
4. the number field \(K_{\mathcal{Y}}\).
Upon base changing the morphisms \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) by \(K_{f^{\prime}}\), in essence replacing \(K\) by \(K_{f^{\prime}}\), we may work, which we do from now on, under the following assumption:
**Assumption 2.11**.: _In the above setting we have \(K_{f^{\prime}}=K\) so that all the constants that appear in Theorem 2.5 associated to the relative periods of \(f\) near \(s_{0}\) are in fact in the base number field \(K\)._
For every archimedean embedding \(\iota_{v}:K\hookrightarrow\mathbb{C}\), associated to an archimedean place \(v\in\Sigma_{K,\infty}\), we may repeat the process of Theorem 2.5 and Lemma 2.9, keeping the basis \(\omega_{i}\) of \(H^{1}_{DR}(\mathcal{X}/S)|_{U}\) chosen for a fixed place \(v_{0}\in\Sigma_{K,\infty}\). It is easy to see that all the algebraic constants, i.e. the coefficients of the G-matrices and the \(d_{k}\), \(d^{\prime}_{k}\), depend only on the choice of that basis. One can then find trivializing frames of \(R^{1}(f_{k,v})_{*}(\mathbb{C})\) for the various coordinates \(k\), with \(v\in\Sigma_{K,\infty}\) and \(v\neq v_{0}\), such that the relative periods of the morphism \(f\) are of the form described in Theorem 2.5. The only non-trivial case, that of the singular coordinates, is dealt with by the Lemma in Ch. X, §3.1 of [1].
In other words we have the following
**Lemma 2.12**.: _Let \(s\in S(L)\) with \(L/K\) finite and let \(v\in\Sigma_{L,\infty}\) be such that \(|x(s)|_{v}<\min\{1,R_{v}(\mathcal{Y}_{s_{0}})\}\). Then there exists a choice of a trivializing frame \(\Gamma_{v}\) of \((R^{1}(f_{1,v})_{*}(\mathbb{Q})(1)\oplus\ldots\oplus R^{1}(f_{n,v})_{*}( \mathbb{Q})(1))^{\vee}|_{V}\) for some small enough analytic neighborhood \(V\) of \(s\) in \(S_{v}\) such that_
1. \(\mathcal{P}_{k,v}(s)=\iota_{v}(Y_{G,k}(x(s)))\cdot\Pi_{k,v}\) _for all smooth coordinates_ \(k\) _of_ \(S\)_, and_
2. _the first column of the relative period matrix_ \(\mathcal{P}_{k,v}(s)\) _is_ \[\begin{pmatrix}\iota_{v}(y_{1,1,k}(x(s)))\\ \iota_{v}(y_{2,1,k}(x(s)))\end{pmatrix},\] _for all singular coordinates_ \(k\) _of_ \(S^{\prime}\)_._
#### 2.2.3 Good covers
In the beginning of Section 2.2 we mentioned that we choose a local uniformizer \(x\) of \(S^{\prime}\) at \(s_{0}\). In applying the G-functions method one wants to make sure that this \(x\) does not vanish at any other point of \(S^{\prime}\), see the discussion on page 202 of [1]. A workaround devised by Daw and Orr in [1] is to instead consider a certain cover \(C_{4}\) of a smooth projective curve that contains \(S^{\prime}\) and work there instead to establish the height bounds. It is this circle of ideas and notation that we adopt and adapt here as well.
Let \(\bar{S}^{\prime}\) be a geometrically irreducible smooth projective curve that contains \(S^{\prime}\). At the end of the day, to our pair of a semiabelian scheme \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) defined over the number field \(K\) and point \(s_{0}\in S^{\prime}(K)\), we can associate a semiabelian scheme \(f^{\prime}_{C}:\mathcal{X}^{\prime}_{C}\to C^{\prime}\) and a collection of points \(\{\xi_{1},\ldots,\xi_{l}\}\subset C^{\prime}(\bar{\mathbb{Q}})\) of a smooth geometrically irreducible curve \(C^{\prime}\). The first property satisfied by this new semiabelian scheme is that over \(C:=C^{\prime}\backslash\{\xi_{1},\ldots,\xi_{l}\}\) we will have that \(f^{\prime}_{C}|_{C}\) defines an abelian scheme. Furthermore, for every such point \(\xi_{t}\) as above, letting \(C^{\prime}_{t}:=C^{\prime}\backslash\{\xi_{i}:i\neq t\}\), we get a family of pairs of a semiabelian scheme
\[f^{\prime}_{t}:\mathcal{X}^{\prime}_{t}\to C^{\prime}_{t}, \tag{21}\]
and points \(\xi_{t}\in C^{\prime}_{t}(\bar{\mathbb{Q}})\), for each \(1\leq t\leq l\), such that furthermore \(f^{\prime}_{t}|_{C}\) is an abelian scheme.
Here the points \(\xi_{t}\) and the curve \(C^{\prime}\) come from an appropriately chosen cover \(C_{4}\xrightarrow{c}\bar{S}^{\prime}\), namely as in Lemma 5.1 of [1]. The main properties of this cover that we will need are that
1. there exists a non-constant rational function \(x\in K(C_{4})\) whose zeroes are simple and are the above set of points \(\{\xi_{1},\ldots,\xi_{l}\}\), and
2. \(c(\xi_{t})=s_{0}\) for all \(t\).
In fact by construction of \(C_{4}\), since in our setup \(C_{1}=C\) in the notation of Lemma 5.1 of [1], one knows that the \(\xi_{t}\) are exactly the preimages of \(s_{0}\) via \(c\).
For each of these pairs \((f^{\prime}_{t}:\mathcal{X}^{\prime}_{t}\to C^{\prime}_{t},\xi_{t})\) we apply Theorem 2.5 and Lemma 2.9. We then end up with a family of G-functions \(\mathcal{Y}_{\xi_{t}}\) associated (locally) to each of the points \(\xi_{t}\in C^{\prime}\).
**Definition 2.13**.: _Let \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) be as above. We call the collection of G-functions \(\mathcal{Y}:=\mathcal{Y}_{\xi_{1}}\sqcup\ldots\sqcup\mathcal{Y}_{\xi_{l}}\) the **family of G-functions associated to the point \(s_{0}\)**._
**Remark 2.14**.: _We note here that to get the "good cover" \(C_{4}\) one might have to base change the original setup, i.e. the semiabelian scheme \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\)
by a finite extension \(K^{\prime}/K\) of \(K\) since the curve \(C_{4}\) is not necessarily defined over the field \(K\)._
_Thus, with Assumption 2.11 in mind, the field \(K_{f^{\prime}}\) by which we are base changing might have to be replaced by a finite extension._
From the discussion in §4.1.1 and §4.1.2 of [10], to which we point the interested reader for more details on our setup, and Lemma 7.3 of [10], one obtains:
**Lemma 2.15**.: _Let \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) be a semiabelian scheme as above. If \(k\) is a singular (resp. smooth, resp. CM) coordinate for \(S^{\prime}\) then the same is true for the coordinate \(k\) for all of the curves \(C^{\prime}_{t}\) associated with a good cover of \(S^{\prime}\)._
This allows us to not distinguish between singular/smooth coordinates for the original curve \(S^{\prime}\) versus singular/smooth coordinates for our various curves \(C^{\prime}_{t}\) associated to our original curve via the good cover \(C_{4}\) as above.
#### 2.2.4 An integral model
In order to deal with proximity of points of interest to the point \(s_{0}\) with respect to a finite place we will also need to fix an integral model \(\bar{C}_{4}\) over \(\operatorname{Spec}(\mathcal{O}_{K})\) of the curve \(C_{4}\). This can be done as in the discussion in §4.1.2 of [10].
We note that the main technical feature we will need from this integral model is the following assumption on our chosen family of G-functions \(\mathcal{Y}\), following the discussion in [1], Ch. \(X\), §3.1:
**Assumption 2.16**.: _Let \(s\in C(\bar{\mathbb{Q}})\) be such that \(|x(s)|_{v}<R_{v}(\mathcal{Y})\) for some finite place \(v\in\Sigma_{K(s),f}\). Then \(s_{0}\) and \(s\) have the same image in \(\bar{C}_{4}(\kappa(v))\), where \(\kappa(v)\) is the residue field of \(K(s)\) at \(v\)._
We finally record the following:
**Definition 2.17**.: _Let \(s\in C(L)\), with \(L/K\) finite, and let \(v\in\Sigma_{L}\)._
_We say that the point \(s\) is \(v\)-adically close to \(0\), or to \(s_{0}\), if \(|x(s)|_{v}<\min\{1,R_{v}(\mathcal{Y})\}\). We furthermore say that \(s\) is \(v\)-adically close to \(\xi_{t}\) if, in addition, \(s\) is contained in the connected component of the preimage \(x^{-1}(\Delta_{R_{v}(\mathcal{Y})})\subset C_{4}^{an}\) that contains \(\xi_{t}\), where \(\Delta_{R_{v}(\mathcal{Y})}\) is the open disc, either in the rigid analytic or complex analytic sense, of radius \(\min\{1,R_{v}(\mathcal{Y})\}\)._
## 3 Determining the trivial relations
Throughout this section we fix a semiabelian scheme \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) defined over \(\bar{\mathbb{Q}}\) and a fixed point \(s_{0}\in S^{\prime}(\bar{\mathbb{Q}})\) such that, letting \(S:=S^{\prime}\backslash\{s_{0}\}\) as usual, we have \(\mathcal{X}:=\mathcal{X}^{\prime}|_{S}=\mathcal{E}_{1}\times\ldots\times \mathcal{E}_{n}\) is a product of elliptic curves over \(S\). We work under the assumption that Assumption 2.2 holds for our semiabelian scheme and fix \(x\in K(S^{\prime})\) a local uniformizer at \(s_{0}\).
We furthermore fix the basis \(\{\omega_{1},\ldots,\omega_{2n}\}\) of \(H^{1}_{DR}(\mathcal{X}/S)\), where \(\omega_{2k-1}\), \(\omega_{2k}\) are given by Theorem 2.5 for the smooth coordinates \(k\) of \(S^{\prime}\) and by Lemma 2.9 for the singular coordinates \(k\) of \(S^{\prime}\) respectively.
Here we determine the so called "trivial relations" among the family of G-functions associated locally to the point \(s_{0}\in S^{\prime}(\bar{\mathbb{Q}})\), see Definition 2.10, under the following assumption that we adopt throughout this section:
**Assumption 3.1**.: _The image \(m(S)\) of \(S\) via the morphism \(m:S\to Y(1)^{n}\), which is induced from the scheme \(f:\mathcal{X}\to S\), is a Hodge generic curve._
### Notation-Background
We follow the general notation and ideas set out in SS7 of [10].
From now on let us fix an embedding \(\iota:\bar{\mathbb{Q}}\hookrightarrow\mathbb{C}\). Then the relative period matrix \(\mathcal{P}_{\Gamma_{\iota}}\) in a neighborhood close to \(s_{0}\) in \(S_{\iota}\) will be block diagonal with diagonal blocks given by Theorem 2.5 for the smooth coordinates and by Lemma 2.9 for the singular ones.
We let \(m_{k}=1\) if \(k\) is a singular coordinate for \(S^{\prime}\) and \(m_{k}=2\) if \(k\) is a smooth such coordinate. We set
\[\mathbb{B}:=\mathbb{A}_{\bar{\mathbb{Q}}}^{2m_{1}}\times\ldots\times\mathbb{A }_{\bar{\mathbb{Q}}}^{2m_{n}}.\]
We furthermore write \(\operatorname{Spec}(\bar{\mathbb{Q}}[X_{i,j,k}:1\leq i,j\leq 2])=\mathbb{A}_{ \bar{\mathbb{Q}}}^{2m_{k}}\) when \(k\) is a smooth coordinate and \(\operatorname{Spec}(\bar{\mathbb{Q}}[X_{i,1,k}:1\leq i\leq 2])=\mathbb{A}_{ \bar{\mathbb{Q}}}^{2m_{k}}\) when \(k\) is singular instead. In what follows, we alternate without mention between viewing points in these copies \(\mathbb{A}_{\bar{\mathbb{Q}}}^{2m_{k}}\) for smooth coordinates \(k\) as either \(2\times 2\) matrices or just points in affine space.
Similarly we consider \(\mathbb{B}_{0}:=\mathbb{A}_{\bar{\mathbb{Q}}}^{4}\times\ldots\times\mathbb{A }_{\bar{\mathbb{Q}}}^{4}\), \(n\) copies, which we think of alternatively as \(M_{2\times 2,\bar{\mathbb{Q}}}^{n}\). We let \(\operatorname{Spec}(\bar{\mathbb{Q}}[X_{i,j,k}:1\leq i,j\leq 2])=\mathbb{A}_{ \bar{\mathbb{Q}}}^{4}\) for each of the copies so we get a natural morphism \(\mathbb{B}_{0}\to\mathbb{B}\) which, on the level of points, is nothing but the morphism that sends \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\mapsto\begin{pmatrix}a\\ c\end{pmatrix}\) for the singular coordinates and coincides with the identity for the smooth ones.
We let \(P_{\iota}\) be the matrix gotten by deleting from \(\mathcal{P}_{\Gamma_{\iota}}\) all of the columns that correspond to the \(\gamma_{2k,\iota}\) with \(k\) a singular coordinate. This matrix will
naturally correspond to a point in \(\mathbb{B}(\mathcal{O}_{S_{\iota}}(V))\), where here \(V\) is a small enough analytic subset of \(S_{\iota}\) as in Section 2.1. Equivalently, we may and will consider \(P_{\iota}\) as a function \(P_{\iota}:V\to\mathbb{B}(\mathbb{C})\).
Similarly, writing
\[\mathcal{Y}_{s_{0}}:=\{y_{i,j,k}:1\leq i,j\leq 2,\ k\ \text{smooth for}\ S^{\prime}\}\cup\{y_{i,1,k}:i=1,2,\ \text{and}\ k\ \text{singular for}\ S^{\prime}\}\]
we get a corresponding point \(Y_{0}\in\mathbb{B}(\bar{\mathbb{Q}}[[x]])\). In this section our goal is to determine the equations defining the subvariety \(Y_{0}^{\bar{\mathbb{Q}}[x]-Zar}\) of \(\mathbb{B}_{\bar{\mathbb{Q}}[x]}\).
Alternatively, to this family \(\mathcal{Y}_{s_{0}}\) and the fixed embedding \(\iota\) we can also associate a function \(Y_{\iota}:V\to\mathbb{B}(\mathbb{C})\). Note that for each of the smooth coordinates we will then have, from Theorem 2.5, that for all \(s\in V\)
\[\pi_{k}(P_{\iota}(s))=\pi_{k}(Y_{\iota}(s))\cdot\Pi_{k,\iota}. \tag{22}\]
### The trivial relations
We start with the following lemma, which is an analogue of Corollary 5.9 of [10].
**Lemma 3.2**.: _The graph \(Z^{\prime}\) of the function \(P_{\iota}:V\to\mathbb{B}(\mathbb{C})\) is such that its \(\mathbb{C}\)-Zariski closure \(Z^{\prime\mathbb{C}-Zar}\subset S_{\mathbb{C}}\times\mathbb{B}_{\mathbb{C}}\) is equal to \(S_{\mathbb{C}}\times\Theta_{1,\mathbb{C}}\), where \(\Theta_{1,\mathbb{C}}\) is the subvariety of \(\mathbb{B}_{\mathbb{C}}\) cut out by the ideal_
\[I_{0}^{\prime}:=\langle X_{1,1,k}X_{2,2,k}-X_{1,2,k}X_{2,1,k}-\frac{1}{2\pi i}:k\ \text{is smooth for}\ S^{\prime}\rangle. \tag{23}\]
Proof.: We note that from the same proof as that of Lemma 6.11 of [10] one can describe explicitly the \(\mathbb{C}\)-Zariski closure of the graph \(Z\subset V\times\mathbb{B}_{0,\mathbb{C}}\) of the function \(\mathcal{P}_{\Gamma_{\iota}}:V\to\mathbb{B}_{0}(\mathbb{C})\). Indeed, one has that \(Z^{\mathbb{C}-Zar}\) is equal to \(S_{\mathbb{C}}\times\Theta_{0,\mathbb{C}}\), where \(\Theta_{0,\mathbb{C}}\) is the subvariety of \(\mathbb{B}_{0,\mathbb{C}}\) cut out by the ideal
\[I_{1}:=\langle X_{1,1,k}X_{2,2,k}-X_{1,2,k}X_{2,1,k}-\frac{e^{\prime}_{k}}{2 \pi i}:1\leq k\leq n\rangle, \tag{24}\]
where \(e^{\prime}_{k}=1\) for smooth coordinates and \(e^{\prime}_{k}=e_{k}\) as in part 2 of Lemma 2.9 for the singular coordinates.
The lemma follows via the same argument as in [10] used to deduce their Corollary 6.12 from their Lemma 6.11.
**Lemma 3.3**.: _Let \(Z_{G}\) be the graph of the function \(Y_{\iota}:V\to\mathbb{B}(\mathbb{C})\) and let \(Z_{G}^{\mathbb{C}-Zar}\) be its \(\mathbb{C}\)-Zariski closure in \(S_{\mathbb{C}}\times\mathbb{B}_{\mathbb{C}}\). Then \(Z_{G}^{\mathbb{C}-Zar}=S_{\mathbb{C}}\times\Theta_{\mathbb{C}}\) where \(\Theta_{\mathbb{C}}\) is the subvariety of \(\mathbb{B}_{\mathbb{C}}\) cut out by the ideal_
\[I_{0}:=\langle X_{1,1,k}X_{2,2,k}-X_{1,2,k}X_{2,1,k}-1:k\ \text{is smooth for}\ S^{\prime}\rangle. \tag{25}\]
Proof.: Consider the automorphism \(\theta:\mathbb{B}\to\mathbb{B}\) defined on the level of points \((A_{1},\ldots,A_{n})\) by multiplying on the right by \(\Pi_{k,\iota}^{-1}\) each \(A_{k}\) for which \(k\) corresponds to a smooth coordinate for our curve \(S^{\prime}\).
By construction, see (22), we then have that \(Y_{\iota}=\theta\circ P_{\iota}\). The result follows from Lemma 3.2.
**Theorem 3.4**.: _With the previous notation, under Assumption 3.1, \(Y_{0}^{\bar{\mathbb{Q}}[x]-Zar}\) is the subvariety of \(\mathbb{B}_{\bar{\mathbb{Q}}[x]}\) cut out by_
\[I_{0}:=\langle\det(X_{i,j,k})-1:1\leq k\leq n,\ k\ \text{is smooth for}\ S^{\prime}\rangle. \tag{26}\]
Proof.: The proof follows trivially from Lemma 3.3 since the generators of the ideal \(I_{0}\) are all defined over \(\bar{\mathbb{Q}}\).
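To unwind the statement in the simplest mixed case, purely as an illustration: suppose \(n=2\), with coordinate \(1\) smooth and coordinate \(2\) singular for \(S^{\prime}\). Then \(m_{1}=2\), \(m_{2}=1\), so \(\mathbb{B}_{\bar{\mathbb{Q}}[x]}=\mathbb{A}^{4}\times\mathbb{A}^{2}\), and Theorem 3.4 says that \(Y_{0}^{\bar{\mathbb{Q}}[x]-Zar}\) is the hypersurface cut out by the single determinantal relation \[X_{1,1,1}X_{2,2,1}-X_{1,2,1}X_{2,1,1}-1=0,\] so that, in particular, the two G-functions \(y_{1,1,2}\), \(y_{2,1,2}\) coming from the singular coordinate are not constrained by any trivial relation.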
## 4 Archimedean relations at CM-points
In this section we will consider a family of G-functions associated to a point \(s_{0}\in S^{\prime}(K)\), as in Definition 2.13, and construct archimedean relations among the values of this family at CM-points \(s\in C(L)\), where \(C\) here denotes the curve associated to \(S\) in the discussion in Section 2.2.3.
We begin with some notation for this section. We consider a fixed curve \(S^{\prime}\) and associated semiabelian scheme \(f^{\prime}:\mathcal{X}^{\prime}=\mathcal{E}^{\prime}_{1}\times\ldots\times \mathcal{E}^{\prime}_{n}\to S^{\prime}\) defined over a number field \(K\). As usual we also fix a point \(s_{0}\in S^{\prime}(K)\) which is a singular value for the morphism \(f^{\prime}\). We also fix from now on the pairs \((f^{\prime}_{t}:\mathcal{X}^{\prime}_{t}\to C^{\prime}_{t},\xi_{t})\) of semiabelian schemes and points \(\xi_{t}\in C_{4}(K)\), \(1\leq t\leq l\), associated to our original curve as in Section 2.2.3. In particular we assume from now on that Assumption 2.2, Assumption 2.11, and Assumption 3.1 hold for our curves \(C^{\prime}_{t}\).
**Definition 4.1**.: _We say that the semiabelian scheme \(\mathcal{X}^{\prime}\to S^{\prime}\) is \(G_{AO}\)-admissible if all of the above hold and furthermore either of the following holds:_
1. _there exists at least one CM coordinate for_ \(S^{\prime}\)_, or_
2. _there exist at least two singular coordinates for_ \(S^{\prime}\)_._
**Proposition 4.2**.: _Let \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) be a \(G_{AO}\)-admissible semiabelian scheme as above. Then for any \(s\in C(\bar{\mathbb{Q}})\) for which \(\mathcal{X}_{s}\) is CM, there exists a homogeneous polynomial \(R_{s,\infty}\in L_{s}[X^{(t)}_{i,j,k}:1\leq t\leq l,1\leq i,j\leq 2,1\leq k\leq n]\), where \(L_{s}/K(s)\) is a finite extension, such that the following hold:_
1. \(\iota_{v}(R_{s,\infty}(\mathcal{Y}(x(s))))=0\) _for all_ \(v\in\Sigma_{L_{s},\infty}\) _for which_ \(s\) _is_ \(v\)_-adically close to_ \(0\)_,_
2. \([L_{s}:\mathbb{Q}]\leq c_{1}(n)[K(s):\mathbb{Q}]\)_, where_ \(c_{1}(n)\) _is a constant depending only on_ \(n\)_,_
3. \(\deg(R_{s,\infty})\leq 2[L_{s}:\mathbb{Q}]\)_, and_
4. \(R_{s,\infty}(\mathcal{Y}(x))=0\) _does not hold generically, in other words the relation defined by the polynomial is "non-trivial"._
**Definition 4.3**.: _We call the field \(L_{s}\) associated to the point \(s\) the **field of coefficients of the point \(s\)**._
Proof.: We break the proof into parts. First we create what we call "local factors" \(R_{s,v}\), each one associated to a fixed place \(v\in\Sigma_{L_{s},\infty}\) for which \(s\) is \(v\)-adically close to \(s_{0}\). To do this we break the exposition into cases. First, we work under the assumption that the toric rank of the semiabelian variety \(\mathcal{X}^{\prime}_{s_{0}}\) is at least \(2\), or in other words that there are at least \(2\) singular coordinates for our curve \(S^{\prime}\). In the second case we will work under the assumption that there is at least one singular coordinate, i.e. toric rank at least \(1\), and one smooth coordinate which is CM. After this we define the polynomials \(R_{s,\infty}\) in question and establish their main properties outlined in the statement of Proposition 4.2.
Before that, we fix some notation that persists in both cases. Throughout this proof we fix a point \(s\in C(\bar{\mathbb{Q}})\) such that the fiber \(\mathcal{X}_{s}\) is CM and let \(K(s)\) be its field of definition. We let \(L_{s}\) be the compositum of the following fields
1. the finite extension \(\hat{K}(s)/K(s)\) such that \(\operatorname{End}_{\bar{\mathbb{Q}}}(\mathcal{X}_{s})=\operatorname{End}_{ \hat{K}(s)}(\mathcal{X}_{s})\),
2. the CM fields \(F_{k,s}:=\operatorname{End}^{0}_{\bar{\mathbb{Q}}}(\mathcal{E}_{k,s})\).
We note that by [15]
\[[\hat{K}(s):K(s)]\leq c_{0}(n) \tag{27}\]
where \(c_{0}(n)\) is a constant depending only on \(n\). From this, since each of the CM fields \(F_{k,s}\) is imaginary quadratic, we can conclude that
\[[L_{s}:\mathbb{Q}]\leq 2^{n}c_{0}(n)[K(s):\mathbb{Q}]. \tag{28}\]
Let us fix a place \(v\in\Sigma_{L_{s},\infty}\) and let \(\iota_{v}:L_{s}\hookrightarrow\mathbb{C}\) be the corresponding embedding. We assume from now on that \(s\) is \(v\)-adically close to \(\xi_{t}\) for some \(1\leq t\leq l\), see Section 2.2.3 for the notation here.
As in the proof of Lemma 2.7, for all \(1\leq k\leq n\) there exists a symplectic basis \(w_{2k-1,s}\), \(w_{2k,s}\) of \(H^{1}_{DR}(\mathcal{E}_{k,s}/K(s))\otimes L_{s}\) and a symplectic basis of \(\gamma^{\prime}_{2k-1,s}\), \(\gamma^{\prime}_{2k,s}\) of \(H_{1}(\mathcal{E}_{k,s,\iota_{v}},\mathbb{Q})\otimes L_{s}\) such that (8) and (9) hold.
We work with the semiabelian scheme \(f^{\prime}_{t}:\mathcal{X}^{\prime}_{t}\to C^{\prime}_{t}\) as above. Note that this pulls back to an abelian scheme \(f_{t}:\mathcal{X}_{t}\to C_{t}\), where \(C_{t}:=C^{\prime}_{t}\backslash\{\xi_{t}\}\). We can thus consider the fixed basis \(\{\omega_{i}:1\leq i\leq 2n\}\) of \(H^{1}_{DR}(\mathcal{X}_{t}/C_{t})|_{U}\) and the fixed frame \(\{\gamma_{j,\iota_{v}}:1\leq j\leq 2n\}\) of \((R^{1}(f_{t,\iota_{v}})_{*}\mathbb{C})^{\vee}\) chosen by the combination of Theorem 2.5 and Lemma 2.9.
We then obtain change of bases matrices \(B_{k,dR}:=\begin{pmatrix}a_{k,s}&b_{k,s}\\ c_{k,s}&d_{k,s}\end{pmatrix}\in\mathrm{SL}_{2}(L_{s})\) between the bases \(w_{i}\) and \(\omega_{i,s}\) of \(H^{1}_{DR}(\mathcal{X}_{t,s}/L_{s})\) and \(B_{k,b}:=\begin{pmatrix}\alpha_{k,s}&\beta_{k,s}\\ \gamma_{k,s}&\delta_{k,s}\end{pmatrix}\in\mathrm{SL}_{2}(L_{s})\) between the bases \(\gamma^{\prime}_{j,s}\) and \(\gamma_{j,s}\) of \(R^{1}(f_{t,\iota_{v}})_{*}(\mathbb{C})(1)\). Note that the fact that the entries of \(B_{k,b}\) are in \(L_{s}\) follows by construction. The fact that the matrices are in \(\mathrm{SL}_{2}\) follows from the fact that all bases in question are symplectic.
Let \(P_{w,\gamma^{\prime},k,s}\) be the full period matrix of \(\mathcal{E}_{k,s}\) with respect to the bases \(w_{i}\) and \(\gamma^{\prime}_{j}\). On the one hand we then have that
\[P_{w,\gamma^{\prime},k,s}=B_{k,dR}\cdot\mathcal{P}_{k}(s)\cdot B_{k,b}, \tag{29}\]
where \(\mathcal{P}_{k}(s)\) denotes the value at \(s\) of the relative period matrix associated to the semiabelian scheme \(f^{\prime}_{k,t}:\mathcal{E}^{\prime}_{k,t}\to C^{\prime}_{t}\), the basis \(\omega_{i}\), and the trivializing frame \(\gamma_{j}\) above. On the other hand, by the construction in Lemma 2.7 we know that
\[P_{w,\gamma^{\prime},k,s}=\begin{pmatrix}\frac{\varpi_{s,k}}{2\pi i}&0\\ 0&\varpi_{s,k}^{-1}\end{pmatrix}, \tag{30}\]
for some transcendental number \(\varpi_{s,k}\) that depends on the embedding \(\iota_{v}\) chosen.
**First step: Defining the local factors**
(1) Let us assume from now on that there exist at least two singular coordinates for \(S^{\prime}\) and without loss of generality we assume that these are the first two.
Let us write \(B_{k,dR}\cdot\mathcal{P}_{k}(s)=(p_{i,j,k})\) for convenience. We note that from our various conventions in Section 2 we know that the first column of \(\mathcal{P}_{k}(s)\) is actually of the form
\[\begin{pmatrix}\iota_{v}(y^{(t)}_{1,1,k}(x(s)))\\ \iota_{v}(y^{(t)}_{2,1,k}(x(s)))\end{pmatrix}, \tag{31}\]
where \(y^{(t)}_{i,j,k}\) are members of the subfamily \(\mathcal{Y}_{\xi_{t}}\) of the family of G-functions \(\mathcal{Y}\) associated to the point \(s_{0}\) as in Definition 2.13.
From (29) and (30) we get for \(k=1\), \(2\):
\[\begin{pmatrix}\frac{\varpi_{s,k}}{2\pi i}&0\\ 0&\varpi_{s,k}^{-1}\end{pmatrix}=\begin{pmatrix}\alpha_{k,s}p_{1,1,k}+\gamma_ {k,s}p_{1,2,k}&\beta_{k,s}p_{1,1,k}+\delta_{k,s}p_{1,2,k}\\ \alpha_{k,s}p_{2,1,k}+\gamma_{k,s}p_{2,2,k}&\beta_{k,s}p_{2,1,k}+\delta_{k,s} p_{2,2,k}\end{pmatrix} \tag{32}\]
Comparing the off-diagonal elements in the equality (32) we get that for \(k=1\), \(2\)
\[\beta_{k,s}p_{1,1,k}+\delta_{k,s}p_{1,2,k}=0\text{, and }\alpha_{k,s}p_{2,1,k}+ \gamma_{k,s}p_{2,2,k}=0. \tag{33}\]
If for either \(k=1\) or \(2\) we have that \(\gamma_{k,s}=0\) or \(\delta_{k,s}=0\) then, since the matrix \(B_{k,b}\) is invertible, we must have that \(p_{1,1,k}=0\) or \(p_{2,1,k}=0\).
But by definition we have \(p_{1,1,k}=\iota_{v}(a_{k,s}y^{(t)}_{1,1,k}(x(s))+b_{k,s}y^{(t)}_{2,1,k}(x(s)))\) and \(p_{2,1,k}=\iota_{v}(c_{k,s}y^{(t)}_{1,1,k}(x(s))+d_{k,s}y^{(t)}_{2,1,k}(x(s)))\). Therefore, if for either \(k=1\) or \(2\), \(\gamma_{k,s}=0\) or \(\delta_{k,s}=0\) holds we set
\[R_{s,v}:=a_{k,s}X^{(t)}_{1,1,k}+b_{k,s}X^{(t)}_{2,1,k}\text{, or respectively }c_{k,s}X^{(t)}_{1,1,k}+d_{k,s}X^{(t)}_{2,1,k}. \tag{34}\]
From now on let us assume that \(\gamma_{k,s}\), \(\delta_{k,s}\neq 0\) for \(k=1\), \(2\). Then (33) gives
\[p_{1,2,k}=-\frac{\beta_{k,s}}{\delta_{k,s}}p_{1,1,k}\text{ and }p_{2,2,k}=- \frac{\alpha_{k,s}}{\gamma_{k,s}}p_{2,1,k}. \tag{35}\]
Comparing the diagonal elements in (32) and using (35) we get
\[\frac{\alpha_{k,s}\delta_{k,s}-\beta_{k,s}\gamma_{k,s}}{\delta_{k,s}}p_{1,1,k }=\frac{\varpi_{s,k}}{2\pi i}\text{ and } \tag{36}\]
\[-\frac{\alpha_{k,s}\delta_{k,s}-\beta_{k,s}\gamma_{k,s}}{\gamma_{k,s}}p_{2,1, k}=\varpi_{s,k}^{-1} \tag{37}\]
From these, together with the fact that \(B_{k,b}\in\text{SL}_{2}(L_{s})\), we conclude that for \(k=1\), \(2\)
\[p_{1,1,k}\cdot p_{2,1,k}=-\gamma_{k,s}\delta_{k,s}\frac{1}{2\pi i}. \tag{38}\]
Finally, from (38) we can get rid of the \(2\pi i\) to conclude that \(\gamma_{2,s}\delta_{2,s}p_{1,1,1}\cdot p_{2,1,1}=\gamma_{1,s}\delta_{1,s}p_{1,1,2}\cdot p_{2,1,2}\). As we have seen above, we can then associate to the place \(v\) and the point \(s\) the polynomial
\[\begin{split} R_{s,v}&:=\gamma_{2,s}\delta_{2,s}(a _{1,s}X^{(t)}_{1,1,1}+b_{1,s}X^{(t)}_{2,1,1})(c_{1,s}X^{(t)}_{1,1,1}+d_{1,s}X^{ (t)}_{2,1,1})\\ &\qquad-\gamma_{1,s}\delta_{1,s}(a_{2,s}X^{(t)}_{1,1,2}+b_{2,s}X^ {(t)}_{2,1,2})(c_{2,s}X^{(t)}_{1,1,2}+d_{2,s}X^{(t)}_{2,1,2}).\end{split} \tag{39}\]
We note that in either case \(R_{s,v}\) is homogeneous of degree at most \(2\) and that \(\iota_{v}(R_{s,v}(\mathcal{Y}(x(s))))=0\).
(2) Let us now assume that there exists at least one smooth coordinate for \(S^{\prime}\) that is CM and without loss of generality assume that it is the first one.
Again combining (29) and (30) for \(k=1\), together with the description of \(\mathcal{P}_{k,v}(s)\) given by Theorem 2.5, we conclude that
\[\begin{pmatrix}\frac{\varpi_{s,1}}{2\pi i}&0\\ 0&\varpi_{s,1}^{-1}\end{pmatrix}=\iota_{v}(\begin{pmatrix}a_{1,s}&b_{1,s}\\ c_{1,s}&d_{1,s}\end{pmatrix}\cdot Y_{G,1}(x(s))\cdot\begin{pmatrix}\frac{\varpi_{0,1}}{2\pi i}&0\\ 0&\varpi_{0,1}^{-1}\end{pmatrix}\cdot\begin{pmatrix}\alpha_{1,s}&\beta_{1,s}\\ \gamma_{1,s}&\delta_{1,s}\end{pmatrix}), \tag{40}\]
noting that \(\varpi_{s,1}\) itself depends on the embedding \(\iota_{v}\).
As before for convenience let us write \((p_{i,j}):=B_{1,dR}\cdot Y_{G,1}(x(s))\). Rewriting (40) we get
\[\begin{pmatrix}\frac{\varpi_{s,1}}{2\pi i}&0\\ 0&\varpi_{s,1}^{-1}\end{pmatrix}=\iota_{v}(\begin{pmatrix}p_{1,1}\alpha_{1,s} \frac{\varpi_{0,1}}{2\pi i}+p_{1,2}\gamma_{1,s}\varpi_{0,1}^{-1}&p_{1,1}\beta _{1,s}\frac{\varpi_{0,1}}{2\pi i}+p_{1,2}\delta_{1,s}\varpi_{0,1}^{-1}\\ p_{2,1}\alpha_{1,s}\frac{\varpi_{0,1}}{2\pi i}+p_{2,2}\gamma_{1,s}\varpi_{0,1 }^{-1}&p_{2,1}\beta_{1,s}\frac{\varpi_{0,1}}{2\pi i}+p_{2,2}\delta_{1,s}\varpi _{0,1}^{-1}\end{pmatrix}). \tag{41}\]
Considering the equalities given from the off-diagonal entries in (40) we conclude that
\[A\frac{\varpi_{0,1}}{2\pi i}+B\varpi_{0,1}^{-1}=0\text{ and }C\frac{\varpi_{0,1}} {2\pi i}+D\varpi_{0,1}^{-1}=0, \tag{42}\]
where \(\begin{pmatrix}A&B\\ C&D\end{pmatrix}=\begin{pmatrix}p_{1,1}\beta_{1,s}&p_{1,2}\delta_{1,s}\\ p_{2,1}\alpha_{1,s}&p_{2,2}\gamma_{1,s}\end{pmatrix}\). From this we get that \(\det\begin{pmatrix}A&B\\ C&D\end{pmatrix}=0\). Using the fact that \(\det B_{1,dR}=\det B_{1,b}=1\) and replacing the \(p_{i,j}\) in the equation one gets from \(\det\begin{pmatrix}A&B\\ C&D\end{pmatrix}=0\), by the expression of the entries of this matrix in terms of the entries of \(B_{1,dR}\) and \(Y_{G,1}(x(s))\), the relation
\[\begin{split}\iota_{v}(a_{1,s}c_{1,s}y_{1,1,1}^{(t)}(x(s))y_{1,2,1}^{(t)}(x(s))+b_{1,s}d_{1,s}y_{2,1,1}^{(t)}(x(s))y_{2,2,1}^{(t)}(x(s))\\ +(2b_{1,s}c_{1,s}+1)y_{1,1,1}^{(t)}(x(s))y_{2,2,1}^{(t)}(x(s))\\ -(1+b_{1,s}c_{1,s}+\beta_{1,s}\gamma_{1,s})\det(y_{i,j,1}^{(t)}(x(s))))=0. \end{split} \tag{43}\]
This will naturally correspond to a polynomial \(R_{s,v}\in\bar{\mathbb{Q}}[X_{i,j,k}^{(t)}]\) as in the previous case. Note that by construction we will have that \(\iota_{v}(R_{s,v}(\mathcal{Y}(x(s))))=0\). Note also that \(R_{s,v}\) is homogeneous of degree \(2\). This last fact is easy to see once one writes \(R_{s,v}\) as a sum of monomials; at that point the fact that \(\det B_{1,dR}=\det B_{1,b}=1\) makes it impossible for all of the coefficients of the polynomial in question to vanish.
**Second step: Constructing the polynomial \(R_{s,\infty}\)**
Let us now consider the following polynomial
\[R_{s,\infty}(X^{(t)}_{i,j,k}):=\prod_{\begin{subarray}{c}v\in\Sigma_{L_{s}, \infty}\\ s\text{ is $v$-adically close to $0$}\end{subarray}}R_{s,v}(X^{(t)}_{i,j,k}), \tag{44}\]
where the \(R_{s,v}(X^{(t)}_{i,j,k})\) are the polynomials in (34) or (39), depending on the sub-cases outlined in the first case we examined, or the polynomials corresponding to (43).
We note that by construction we will have that \(\deg R_{s,\infty}\leq 2[L_{s}:\mathbb{Q}]\) and hence statement (3) of Proposition 4.2 follows. We also note that by construction of the local factors \(R_{s,v}\) statement (1) of the proposition holds as well.
**Final step: Non-triviality**
The only thing we are left with showing is statement (4) of Proposition 4.2. This would show the "non-triviality" of the relation among the values at \(x(s)\) of the G-functions of our family \(\mathcal{Y}\) in the notation of [1] Ch. \(VII\), §5.
By definition of \(R_{s,\infty}\) as a product of the local factors we have that if \(R_{s,\infty}(\mathcal{Y})=0\) holds generically then one of the local factors \(R_{s,v}\) must be such that \(R_{s,v}(\mathcal{Y})=0\) holds generically.
Note that the local factors are such that only the G-functions from a subfamily \(\mathcal{Y}_{\xi_{t}}\) of \(\mathcal{Y}\) appear in their construction, and hence only the \(X^{(t)}_{i,j,k}\) that correspond to these will appear in \(R_{s,v}\). Thus we might as well assume from now on, as we do, that \(\mathcal{Y}=\mathcal{Y}_{\xi_{t}}\) and replace \(X^{(t)}_{i,j,k}\) by \(X_{i,j,k}\) in our notation for the remainder of this proof. Under this notation we know that the trivial relations among the G-functions of our family \(\mathcal{Y}\) are given by the ideal \(I_{0}\) described in Theorem 3.4.
First let us assume that \(R_{s,v}\) is of the form (34). It is trivially seen that \(R_{s,v}\neq 0\) since \(B_{k,dR}\in\mathrm{SL}_{2}(L_{s})\). Assume without loss of generality that \(R_{s,v}=a_{1,s}X_{1,1,1}+b_{1,s}X_{2,1,1}\) with \(a_{1,s}\neq 0\). Then it is trivial to see that we cannot have \(R_{s,v}\in I_{0}\) since \(I_{0}\) is generated by the polynomials \(g_{k}:=\det(X_{i,j,k})-1\) where \(1\leq k\leq n\) runs through the smooth coordinates for \(S^{\prime}\), and in this case \(k=1\) is a singular coordinate.
Now let us assume that \(R_{s,v}\) is as in (39), without loss of generality assuming that the two singular coordinates are \(k=1\) and \(k=2\). Then we have \(\gamma_{k,s}\neq 0\) and \(\delta_{k,s}\neq 0\) for \(k=1\), \(2\) by assumption in this case and again the fact that \(B_{k,dR}\in\mathrm{SL}_{2}(L_{s})\) shows that \(R_{s,v}\neq 0\). It is easy to see once again by the above argument that we cannot have \(R_{s,v}\in I_{0}\).
Finally, let us assume that we are in the case where \(R_{s,v}\) is the polynomial that corresponds to (43), without loss of generality assuming that \(k=1\) is a CM coordinate for \(S^{\prime}\). Assume that \(R_{s,v}\in I_{0}=\langle g_{k}:k\) smooth for \(S^{\prime}\rangle\).
It is easy to see that this implies that \(R_{s,v}\in(g_{1})\leq L_{s}[X_{i,j,1}:1\leq i,j\leq 2]\). Since \((g_{1})\subset m_{1}:=\langle X_{1,1,1}-1,X_{1,2,1},X_{2,1,1},X_{2,2,1}-1\rangle\) we must have \(R_{s,v}\in m_{1}\) which is easily seen to imply \(R_{s,v}(\begin{pmatrix}1&0\\ 0&1\end{pmatrix})=2b_{1,s}c_{1,s}+1=0\).
On the other hand letting \(m_{N}:=\langle X_{1,1,1}-N,X_{1,2,1}-1,X_{2,1,1}+\frac{1}{2},X_{2,2,1}-\frac{1}{2N}\rangle\) for all \(N\in\mathbb{N}\), \(N\geq 2\), and noting that \((g_{1})\subset m_{N}\), we will have that \(R_{s,v}\in m_{N}\) for all \(N\geq 2\), \(N\in\mathbb{N}\). Keeping in mind that \(2b_{1,s}c_{1,s}+1=0\) we get that
\[4a_{1,s}c_{1,s}N^{2}-b_{1,s}d_{1,s}=0 \tag{45}\]
for all \(N\) as above. This gives \(a_{1,s}c_{1,s}=b_{1,s}d_{1,s}=0\) which, together with \(2b_{1,s}c_{1,s}+1=0\), is impossible since \(B_{1,dR}\in\mathrm{SL}_{2}(L_{s})\).
## 5 Isogenies and archimedean relations
Working with Zilber-Pink-type statements in mind, we aim to replicate the result of the previous section, this time for points \(s\in S(\bar{\mathbb{Q}})\) for which the fiber \(\mathcal{X}_{s}\) is such that there exist two isogenies between two distinct pairs of coordinates.
### Isogenies and periods
We work in the general setting described in the beginning of Section 4 which we consider fixed from now on. In particular, as we did in Section 4, we assume throughout that Assumption 2.2, Assumption 2.11, and Assumption 3.1 hold for our curves \(C^{\prime}_{t}\).
Before we proceed we record a definition that we adopt throughout the exposition here and in the next sections whenever working in the "Zilber-Pink context".
**Definition 5.1**.: _Any semiabelian scheme \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) as above, i.e. one that satisfies Assumption 2.2, Assumption 2.11, and Assumption 3.1, will be called \(G_{ZP}\)**-admissible**._
We also record here the following lemma, which appears practically as Proposition 4.4 of [10].
**Lemma 5.2**.: _Let \(E_{1}\) and \(E_{2}\) be elliptic curves defined over some number field \(L\). Assume that there exists a cyclic isogeny \(\phi:E_{1}\to E_{2}\) of degree \(\deg(\phi)=M\) which is also defined over \(L\)._
_Let \(P_{k}\) be the full period matrix of \(E_{k}\), \(k=1\), \(2\), with respect to some fixed archimedean embedding \(\iota:L\hookrightarrow\mathbb{C}\), some fixed bases \(\{\gamma_{k,1},\gamma_{k,2}\}\) of \(H_{1}(E_{k,\iota},\mathbb{Z})\), and some fixed symplectic bases \(\{\omega_{k,1},\omega_{k,2}\}\) of \(H_{DR}^{1}(E_{k}/L)\) for which \(\omega_{k,1}\in F^{1}H_{DR}^{1}(E_{k}/L)\) for \(k=1\), \(2\)._
_Then, there exist \(a\), \(b\), \(c\in L\) and \(p\), \(q\), \(r\), \(s\in\mathbb{Z}\) with \(\det\begin{pmatrix}a&0\\ b&c\end{pmatrix}=\det\begin{pmatrix}p&q\\ r&s\end{pmatrix}=M\) such that_
\[\begin{pmatrix}a&0\\ b&c\end{pmatrix}\cdot P_{1}=P_{2}\cdot\begin{pmatrix}p&q\\ r&s\end{pmatrix} \tag{46}\]
Proof.: Let \(\omega_{1}\) be a non-zero element of \(F^{1}H_{DR}^{1}(E_{1}/L)\) and \(\omega_{2}\in H_{DR}^{1}(E_{1}/L)\) another element so that the set \(\{\omega_{1},\omega_{2}\}\) is a symplectic basis with respect to the polarizing form. Similarly let \(\{\omega_{1}^{\prime},\omega_{2}^{\prime}\}\) be a basis of \(H_{DR}^{1}(E_{2}/L)\) with the same properties. Let also \(\{\gamma_{1},\gamma_{2}\}\) and \(\{\gamma_{1}^{\prime},\gamma_{2}^{\prime}\}\) be symplectic bases of \(H_{1}(E_{1},\mathbb{Z})\) and \(H_{1}(E_{2},\mathbb{Z})\) respectively.
We then have that there exists \(a\in L\) such that \(\phi^{*}(\omega_{1}^{\prime})=a\cdot\omega_{1}\) and there exist \(b\), \(c\in L\) such that \(\phi^{*}(\omega_{2}^{\prime})=b\cdot\omega_{1}+c\cdot\omega_{2}\). On the other hand for the homology we know that there exist \(p\), \(q\), \(r\), and \(s\in\mathbb{Z}\) such that \(\phi_{*}(\gamma_{1})=p\cdot\gamma_{1}^{\prime}+r\cdot\gamma_{2}^{\prime}\) and \(\phi_{*}(\gamma_{2})=q\cdot\gamma_{1}^{\prime}+s\cdot\gamma_{2}^{\prime}\).
Moreover, we have
\[\int_{\gamma_{j}}\phi^{*}(\omega_{i}^{\prime})=\int_{\phi_{*}(\gamma_{j})} \omega_{i}^{\prime}. \tag{47}\]
Combining this with the above we obtain for \(i=j=1\)
\[a\int_{\gamma_{1}}\omega_{1}=p\cdot\int_{\gamma_{1}^{\prime}}\omega_{1}^{ \prime}+r\cdot\int_{\gamma_{2}^{\prime}}\omega_{1}^{\prime},\]
and similar relations from the other pairs of indices. Their combination is just the above equality of matrices.
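For the reader's convenience, the four relations obtained this way (one for each pair of indices \(i\), \(j\)) assemble into the matrix identity (46), under the convention, assumed here, that the \((i,j)\)-entry of a period matrix is the integral of the \(i\)-th differential form over the \(j\)-th cycle:
\[\begin{pmatrix}a&0\\ b&c\end{pmatrix}\begin{pmatrix}\int_{\gamma_{1}}\omega_{1}&\int_{\gamma_{2}}\omega_{1}\\ \int_{\gamma_{1}}\omega_{2}&\int_{\gamma_{2}}\omega_{2}\end{pmatrix}=\begin{pmatrix}\int_{\gamma_{1}^{\prime}}\omega_{1}^{\prime}&\int_{\gamma_{2}^{\prime}}\omega_{1}^{\prime}\\ \int_{\gamma_{1}^{\prime}}\omega_{2}^{\prime}&\int_{\gamma_{2}^{\prime}}\omega_{2}^{\prime}\end{pmatrix}\begin{pmatrix}p&q\\ r&s\end{pmatrix}.\]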
### The toy case: \(n=3\)
The Zilber-Pink conjecture for curves starts to become meaningful for \(n\geq 3\). In this subsection we work with the minimal such dimension, i.e. \(n=3\).
As usual let us write \(s_{0}\in S^{\prime}(K)\) for the only singular value of the morphism \(f^{\prime}\). We think of the point \(s_{0}\) as reflecting some potential intersection of
the completion of the image of \(S\) in \(Y(1)^{3}\) with the boundary \(X(1)^{3}\backslash Y(1)^{3}\). We write \(\mathcal{X}_{0}\) for the connected fiber at \(s_{0}\) of the Neron model of \(\mathcal{X}\) over \(S^{\prime}\). There are three things that can potentially happen in this case:
1. \(\mathcal{X}_{0}=\mathbb{G}_{m}^{3}\); this case has been dealt with in [1],
2. \(\mathcal{X}_{0}=\mathbb{G}_{m}^{2}\times E\) with \(E\) some elliptic curve, or
3. \(\mathcal{X}_{0}=\mathbb{G}_{m}\times E\times E^{\prime}\) with \(E\) and \(E^{\prime}\) (not necessarily distinct) elliptic curves.
It is special cases of cases 2 and 3 above that we are interested in. In what follows we shall keep notation as above for the decomposition of the fiber \(\mathcal{X}_{0}\). Namely we shall assume, which we can do without loss of generality, that the potentially singular coordinates for \(S^{\prime}\) are the first two. We refer to each of the cases by the type of fiber that appears over \(s_{0}\).
Throughout this subsection we fix notation as in the beginning of the proof of Proposition 4.2. In particular, we fix a point \(s\in C_{t}(\bar{\mathbb{Q}})\), for some \(t\). We write \(E_{1}\times E_{2}\times E_{3}\) for the fiber \(\mathcal{X}_{C,s}\) at \(s\) of our family and assume that there exist \(\phi_{1}:E_{3}\to E_{1}\) and \(\phi_{2}:E_{3}\to E_{2}\) cyclic isogenies of degree \(\deg(\phi_{k})=M_{k}\). We also let \(L_{s}\) be the compositum of \(K(s)\) with the fields of definition of these isogenies. Finally, we assume that \(s\) is \(v\)-adically close to \(\xi_{t}\) with respect to some fixed archimedean place \(v\in\Sigma_{L_{s},\infty}\).
**Definition 5.3**.: \(1.\) _Any point \(s\in C_{t}(\bar{\mathbb{Q}})\) as above will be called a point with **unlikely isogenies** for the semiabelian scheme \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\)._
\(2.\) _We call the field \(L_{s}\) defined above the **field of coefficients of the point \(s\)**._
By Theorem 2.5 we have three matrices of G-functions, one for each coordinate; for ease of notation we write \(Y_{G,k}(x)\) for these rather than the more accurate "\(Y_{G,k}^{(t)}(x)\)". For convenience we also write \(Y_{G,k}(x(s))=\left(\tilde{h}_{i,j}^{(k)}\right)\) for the entries of these matrices, i.e. the values of the G-functions at \(\xi:=x(s)\).
Similarly to the notation used in the proof of Proposition 4.2, we also write \(\mathcal{P}_{k}(s)\) for the values at \(s\) of the respective relative period matrices of the families \(f^{\prime}_{t,k}:\mathcal{E}^{\prime}_{t,k}\to C^{\prime}_{t}\), constructed with respect to the bases and trivializations used in Section 2.2.3 to construct the family \(\mathcal{Y}\) associated to \(s_{0}\).
#### 5.2.1 The case \(\mathbb{G}_{m}^{2}\times E\)
From Lemma 5.2 we get that there exist \(a_{k}\), \(b_{k}\), \(c_{k}\in L\) and \(p_{k}\), \(q_{k}\), \(r_{k}\), \(s_{k}\in\mathbb{Z}\) such that
\[\begin{split}&\begin{pmatrix}a_{k}&0\\ b_{k}&c_{k}\end{pmatrix}\Pi_{3}\left(\tilde{h}_{i,j}^{(3)}\right)\begin{pmatrix}\varpi_{0,1}&\varpi_{0,2}\\ \varpi_{0,3}&\varpi_{0,4}\end{pmatrix}=\\ &\qquad=\Pi_{k}\left(\tilde{h}_{i,j}^{(k)}\right)\begin{pmatrix}d_{k}&e_{k}\\ d_{k}^{\prime}&e_{k}^{\prime}\end{pmatrix}\begin{pmatrix}1&N_{k}\log(\xi)\\ 0&1\end{pmatrix}\begin{pmatrix}p_{k}&q_{k}\\ r_{k}&s_{k}\end{pmatrix}.\end{split} \tag{48}\]
Here \(\Pi_{3}\) is the change of basis matrix from the basis \(\{\omega_{5,s},\omega_{6,s}\}\) of \(H^{1}_{DR}(\mathcal{E}_{3,s}/L)\) constructed in Theorem 2.5 to the basis used in Lemma 5.2 and \(\Pi_{k}:=\Pi_{k,1}\cdot\Pi_{k,2}\), for \(k=1\), \(2\), is the product of the change of basis matrices \(\Pi_{k,2}\), that passes from the basis of \(H^{1}_{DR}(\mathcal{E}_{k,s}/L)\) chosen in Theorem 2.5 to that given by Lemma 2.9, and \(\Pi_{k,1}\), which passes from the basis of \(H^{1}_{DR}(\mathcal{E}_{k,s}/L)\) chosen in Lemma 2.9 to that chosen in Lemma 5.2.
Note here that \(d_{k}\), \(d_{k}^{\prime}\in K\) by Assumption 2.11. To ease our notation a little we set \(e_{0,k}:=d_{k}N_{k}\log(\xi)+e_{k}\) and \(e_{0,k}^{\prime}:=d_{k}^{\prime}N_{k}\log(\xi)+e_{k}^{\prime}\). Also, writing \(\Pi_{k}\left(\tilde{h}_{i,j}^{(k)}\right)=\left(h_{i,j}^{(k)}\right)\), we may rewrite the above in the more useful form
\[\begin{pmatrix}a_{k}&0\\ b_{k}&c_{k}\end{pmatrix}\begin{pmatrix}h_{i,j}^{(3)}\end{pmatrix}\begin{pmatrix} \varpi_{0,1}&\varpi_{0,2}\\ \varpi_{0,3}&\varpi_{0,4}\end{pmatrix}=\begin{pmatrix}h_{i,j}^{(k)}\end{pmatrix} \begin{pmatrix}d_{k}&e_{0,k}\\ d_{k}^{\prime}&e_{0,k}^{\prime}\end{pmatrix}\begin{pmatrix}p_{k}&q_{k}\\ r_{k}&s_{k}\end{pmatrix}. \tag{49}\]
**Remark 5.4** (The CM case).: _As we will see, the case where \(E\) is CM is easier to handle. Perhaps it is even the only one we can handle in practical terms! The vanishing of the periods \(\varpi_{0,2}\) and \(\varpi_{0,3}\) turns out to make computations of relations feasible!_
_We record here for our convenience (49) under the assumption that \(E\) is CM:_
\[\begin{pmatrix}a_{k}&0\\ b_{k}&c_{k}\end{pmatrix}\begin{pmatrix}h_{i,j}^{(3)}\end{pmatrix}\begin{pmatrix} \frac{\varpi_{0,3}}{2\pi i}&0\\ 0&\varpi_{0,3}^{-1}\end{pmatrix}=\begin{pmatrix}h_{i,j}^{(k)}\end{pmatrix} \begin{pmatrix}d_{k}&e_{0,k}\\ d_{k}^{\prime}&e_{0,k}^{\prime}\end{pmatrix}\begin{pmatrix}p_{k}&q_{k}\\ r_{k}&s_{k}\end{pmatrix} \tag{50}\]
#### Towards relations
There are two potential ways to go from (49) to relations among the \(h_{i,j}^{(k)}\). They both use the same technique inspired from [1] Proposition 4.4. The first of these will end up only using the G-functions \(y_{i,1,k}^{(t)}\) corresponding to the first column of the matrices \(\begin{pmatrix}h_{i,j}^{(k)}\end{pmatrix}\begin{pmatrix}d_{k}&e_{0,k}\\ d_{k}^{\prime}&e_{0,k}^{\prime}\end{pmatrix}\) coming from the two singular coordinates.
Here we have chosen to work in the greatest possible generality for two reasons. First of all, these computations appear throughout all cases we will deal with in one way or another. Secondly, the computations themselves reveal the limitations of current methods at least to the knowledge of the author.
**First way:** Multiply both sides of (49) on the left by the vector
\[(g_{1}^{(k)},g_{2}^{(k)}):=(d_{k}h_{2,1}^{(k)}+d_{k}^{\prime}h_{2,2}^{(k)},-(d_ {k}h_{1,1}^{(k)}+d_{k}^{\prime}h_{1,2}^{(k)})), \tag{51}\]
to get the following
\[(a_{k}g_{1}^{(k)}+b_{k}g_{2}^{(k)},c_{k}g_{2}^{(k)})\left(h_{i,j}^{(3)}\right) \begin{pmatrix}\varpi_{0,1}&\varpi_{0,2}\\ \varpi_{0,3}&\varpi_{0,4}\end{pmatrix}=(0,\frac{D_{k}}{2\pi i})\begin{pmatrix} p_{k}&q_{k}\\ r_{k}&s_{k}\end{pmatrix}, \tag{52}\]
where \(D_{k}:=\det(\Pi_{k})\in L_{s}^{\times}\).
Here we are using the fact that \(\det\left(\left(\tilde{h}_{i,j}^{(k)}\right)\begin{pmatrix}d_{k}&e_{k}\\ d_{k}^{\prime}&e_{k}^{\prime}\end{pmatrix}\begin{pmatrix}1&N_{k}\log(\xi)\\ 0&1\end{pmatrix}\right)=\det\begin{pmatrix}d_{k}&e_{0,k}\\ d_{k}^{\prime}&e_{0,k}^{\prime}\end{pmatrix}=\frac{1}{2\pi i}\), from the Legendre relation, while \(\det\left(\tilde{h}_{i,j}^{(k)}\right)=1\) for all \(k\).
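Indeed, since \(e_{0,k}\) and \(e^{\prime}_{0,k}\) differ from \(e_{k}\) and \(e^{\prime}_{k}\) by the same multiple \(N_{k}\log(\xi)\) of \(d_{k}\) and \(d^{\prime}_{k}\), the determinant is unaffected:
\[d_{k}e^{\prime}_{0,k}-d^{\prime}_{k}e_{0,k}=d_{k}\left(d^{\prime}_{k}N_{k}\log(\xi)+e^{\prime}_{k}\right)-d^{\prime}_{k}\left(d_{k}N_{k}\log(\xi)+e_{k}\right)=d_{k}e^{\prime}_{k}-d^{\prime}_{k}e_{k}.\]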
Setting
\[(H_{1}^{(k)},H_{2}^{(k)})=D_{k}^{-1}((a_{k}g_{1}^{(k)}+b_{k}g_{2}^{(k)})h_{1,1} ^{(3)}+c_{k}g_{2}^{(k)}h_{2,1}^{(3)},(a_{k}g_{1}^{(k)}+b_{k}g_{2}^{(k)})h_{1,2 }^{(3)}+c_{k}g_{2}^{(k)}h_{2,2}^{(3)}), \tag{53}\]
one gets that
\[(H_{1}^{(k)},H_{2}^{(k)})\cdot\begin{pmatrix}\varpi_{0,1}&\varpi_{0,2}\\ \varpi_{0,3}&\varpi_{0,4}\end{pmatrix}=(\frac{r_{k}}{2\pi i},\frac{s_{k}}{2 \pi i}). \tag{54}\]
This finally translates to the pair of relations
\[H_{1}^{(k)}\varpi_{0,1}+H_{2}^{(k)}\varpi_{0,3}=\frac{r_{k}}{2\pi i}\text{ and }H_{1}^{(k)}\varpi_{0,2}+H_{2}^{(k)}\varpi_{0,4}=\frac{s_{k}}{2\pi i} \tag{55}\]
**Remark 5.5**.: _Note that the transcendence degree of the (possibly transcendental) periods \(\varpi_{0,i}\) and \(\pi\) over \(\bar{\mathbb{Q}}\) is \(\leq 4\) and conjecturally, under Grothendieck's period conjecture, will be equal to \(4\) when our elliptic curve is not CM. In spirit we do not have enough equations to "get rid of" all of them and create a relation among the values of the \(h_{i,j}^{(k)}\)._
**Second way:** Here we are using all of the G-functions from the singular coordinates.
Multiply both sides of (49) on the left by the vector
\[(h_{2,1}^{(k)},-h_{1,1}^{(k)}), \tag{56}\]
using the fact that \(\det\left(h_{i,j}^{(k)}\right)=D_{k}\) for \(k=1\), \(2\), to get the following
\[(a_{k}h_{2,1}^{(k)}-b_{k}h_{1,1}^{(k)},-c_{k}h_{1,1}^{(k)})\left(h_{i,j}^{(3)} \right)\begin{pmatrix}\varpi_{0,1}&\varpi_{0,2}\\ \varpi_{0,3}&\varpi_{0,4}\end{pmatrix}=(0,D_{k})\begin{pmatrix}d_{k}&e_{0,k} \\ d_{k}^{\prime}&e_{0,k}^{\prime}\end{pmatrix}\begin{pmatrix}p_{k}&q_{k}\\ r_{k}&s_{k}\end{pmatrix}. \tag{57}\]
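The determinant fact used here is immediate from \(\left(h_{i,j}^{(k)}\right)=\Pi_{k}\left(\tilde{h}_{i,j}^{(k)}\right)\) and \(\det\left(\tilde{h}_{i,j}^{(k)}\right)=1\):
\[\det\left(h_{i,j}^{(k)}\right)=\det(\Pi_{k})\cdot\det\left(\tilde{h}_{i,j}^{(k)}\right)=D_{k}.\]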
Setting
\[(g_{1}^{(k)},g_{2}^{(k)}):=(D_{k}^{-1}(a_{k}h_{2,1}^{(k)}-b_{k}h_{1,1}^{(k)}), -D_{k}^{-1}c_{k}h_{1,1}^{(k)}),\text{ and then} \tag{58}\]
\[(H_{1}^{(k)},H_{2}^{(k)})=(g_{1}^{(k)}h_{1,1}^{(3)}+g_{2}^{(k)}h_{2,1}^{(3)}, g_{1}^{(k)}h_{1,2}^{(3)}+g_{2}^{(k)}h_{2,2}^{(3)}), \tag{59}\]
one gets that
\[(H_{1}^{(k)},H_{2}^{(k)})\cdot\begin{pmatrix}\varpi_{0,1}&\varpi_{0,2}\\ \varpi_{0,3}&\varpi_{0,4}\end{pmatrix}=(d_{k}^{\prime},e_{0,k}^{\prime}) \begin{pmatrix}p_{k}&q_{k}\\ r_{k}&s_{k}\end{pmatrix}. \tag{60}\]
This finally translates to the pair of relations
\[H_{1}^{(k)}\varpi_{0,1}+H_{2}^{(k)}\varpi_{0,3}=d_{k}^{\prime}p_{k}+e_{0,k}^{ \prime}q_{k},\text{ and }H_{1}^{(k)}\varpi_{0,2}+H_{2}^{(k)}\varpi_{0,4}=d_{k}^{\prime}r_{k}+e_{0,k}^ {\prime}s_{k}. \tag{61}\]
Now repeat the above from the start by multiplying both sides of (49) on the left by the vector
\[(h_{2,2}^{(k)},-h_{1,2}^{(k)}), \tag{62}\]
to get
\[(a_{k}h_{2,2}^{(k)}-b_{k}h_{1,2}^{(k)},-c_{k}h_{1,2}^{(k)})\begin{pmatrix}h_{ i,j}^{(3)}\end{pmatrix}\begin{pmatrix}\varpi_{0,1}&\varpi_{0,2}\\ \varpi_{0,3}&\varpi_{0,4}\end{pmatrix}=(D_{k},0)\begin{pmatrix}d_{k}&e_{0,k} \\ d_{k}^{\prime}&e_{0,k}^{\prime}\end{pmatrix}\begin{pmatrix}p_{k}&q_{k}\\ r_{k}&s_{k}\end{pmatrix}. \tag{63}\]
Setting
\[(g_{3}^{(k)},g_{4}^{(k)}):=(D_{k}^{-1}(a_{k}h_{2,2}^{(k)}-b_{k}h_{1,2}^{(k)}), -D_{k}^{-1}c_{k}h_{1,2}^{(k)}),\text{ and then} \tag{64}\]
\[(H_{3}^{(k)},H_{4}^{(k)}):=(h_{1,1}^{(3)}g_{3}^{(k)}+h_{2,1}^{(3)}g_{4}^{(k)}, h_{1,2}^{(3)}g_{3}^{(k)}+h_{2,2}^{(3)}g_{4}^{(k)}) \tag{65}\]
one then keeps going as earlier to reach an analogue of (61), namely one gets:
\[(H_{3}^{(k)}\varpi_{0,1}+H_{4}^{(k)}\varpi_{0,3},H_{3}^{(k)}\varpi_{0,2}+H_{4 }^{(k)}\varpi_{0,4})=(d_{k}p_{k}+e_{0,k}q_{k},d_{k}r_{k}+e_{0,k}s_{k}). \tag{66}\]
**Remark 5.6**.: _The advantage over the previous computations is evident. We now have more potential relations with which to try to create some relation strictly among the \(H_{i}^{(k)}\) by eliminating the \(\varpi_{0,j}\). The drawback is that in this way we have introduced more transcendental numbers, namely the \(e_{0,k}\) and \(e^{\prime}_{0,k}\)._
_Nevertheless, this still seems not to be enough, at least to the author, to deal with the problem of creating archimedean relations among the \(h_{i,j}^{(k)}\), unless we make assumptions about the transcendental numbers that appear above._
#### The subcase where \(E\) has CM
From now on assume that \(E\), the third coordinate of the fiber \(\mathcal{X}_{0}\), has CM. Then we can use (50) instead. Using the exact same argument as the one employed in the "First way" of the previous paragraph, we get from (55) in this setting that
\[H_{1}^{(k)}\frac{\varpi_{0,3}}{2\pi i}=\frac{r_{k}}{2\pi i},\text{ and }H_{2}^{(k)}\varpi_{0,3}^{-1}=\frac{s_{k}}{2\pi i}. \tag{67}\]
Multiplying these together we get \(H_{1}^{(k)}\cdot H_{2}^{(k)}=\frac{r_{k}s_{k}}{2\pi i}\) for \(k=1,\,2\).
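In detail, (67) gives \(H_{1}^{(k)}=\frac{r_{k}}{\varpi_{0,3}}\) and \(H_{2}^{(k)}=\frac{s_{k}\varpi_{0,3}}{2\pi i}\), so that the period \(\varpi_{0,3}\) cancels in the product:
\[H_{1}^{(k)}\cdot H_{2}^{(k)}=\frac{r_{k}}{\varpi_{0,3}}\cdot\frac{s_{k}\varpi_{0,3}}{2\pi i}=\frac{r_{k}s_{k}}{2\pi i}.\]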
From this, one gets that either \(H_{j}^{(k)}=0\) for some \(k\) and \(j\), or alternatively, if all of the \(r_{k}\) and \(s_{k}\) are non-zero, that \(H_{1}^{(1)}\cdot H_{2}^{(1)}r_{2}s_{2}=H_{1}^{(2)}\cdot H_{2}^{(2)}r_{1}s_{1}\). We can thus conclude with the following:
**Lemma 5.7**.: _Let \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) be a \(G_{ZP}\)-admissible semiabelian scheme. Assume that \(\mathcal{X}_{0}\) is of \(\mathbb{G}_{m}^{2}\times E\)-type with \(E\) CM._
_Let \(s\in C_{t}(\bar{\mathbb{Q}})\) be some point with unlikely isogenies and let \(L_{s}\) be its associated field of coefficients. Then if \(s\) is \(v\)-adically close to \(\xi_{t}\) with respect to some archimedean place \(v\in\Sigma_{L_{s},\infty}\), there exists \(R_{s,v}\in L_{s}[X_{i,j,k}^{(t)}]\) such that the following hold_
1. \(\iota_{v}(R_{s,v}(\mathcal{Y}_{\xi_{t}}(x(s))))=0\)_,_
2. \(R_{s,v}\) _is homogeneous of degree_ \(\deg(R_{s,v})\leq 4\)_, and_
3. \(R_{s,v}\notin I_{0}\leq L_{s}[X_{i,j,k}^{(t)}]\)_, where_ \(I_{0}\) _is the ideal defined in Theorem_ 3.4_._
Proof.: From the above discussion we have that either \(H_{j}^{(k)}=0\) for some \(k\) and \(j\), or that \(H_{1}^{(1)}\cdot H_{2}^{(1)}r_{2}s_{2}=H_{1}^{(2)}\cdot H_{2}^{(2)}r_{1}s_{1}\).
We start with some remarks. Note that by the discussion preceding (49) we have that by definition the first column of the matrix \(\Pi_{k,2}\cdot(\tilde{h}_{i,j}^{(k)})\cdot\begin{pmatrix}d_{k}&e_{0,k}\\ d^{\prime}_{k}&e^{\prime}_{0,k}\end{pmatrix}\) is nothing but \(\begin{pmatrix}\iota_{v}(y_{1,1,k}^{(t)}(x(s)))\\ \iota_{v}(y_{2,1,k}^{(t)}(x(s)))\end{pmatrix}\). Writing \(\Pi_{k,1}=(a_{i,j,k})\in\mathrm{SL}_{2}(L_{s})\) we thus have that the intermediate vector (51) equals
\[(g_{1}^{(k)},g_{2}^{(k)})=\left(\iota_{v}\left(a_{2,1,k}y_{1,1,k}^{(t)}(x(s))+a_{2,2,k}y_{2,1,k}^{(t)}(x(s))\right),\,-\iota_{v}\left(a_{1,1,k}y_{1,1,k}^{(t)}(x(s))+a_{1,2,k}y_{2,1,k}^{(t)}(x(s))\right)\right).\]
Writing \(\Pi_{3}:=(a_{i,j,3})\) we thus get that \(h_{i,j}^{(3)}\) are linear combinations of the entries of the matrix \((\iota_{v}(y_{i,j,3}(x(s))))\), which are by construction the values of G-functions we are interested in.
Therefore the equations \(H_{j}^{(1)}=0\) and \(H_{1}^{(1)}\cdot H_{2}^{(1)}r_{2}s_{2}=H_{1}^{(2)}\cdot H_{2}^{(2)}r_{1}s_{1}\), will correspond to a polynomial \(R_{s,v}\) that by construction will satisfy all but the final conclusion of our lemma. The rest of this proof focuses on this final part of our statement, i.e. the non-triviality of the \(R_{s,v}\). As in the proof of Proposition 4.2 we drop from now on any reference to \(t\), i.e. the index referring to the root \(\xi_{t}\) of the "local parameter" \(x\) associated to the good cover of our curve.
**Case \(1\): \(H_{j}^{(k)}=0\)**
Let us assume without loss of generality that \(H_{1}^{(1)}=0\), i.e. that \(j=k=1\). Then \(R_{s,v}\) will be the following polynomial
\[\begin{split}R_{s,v}&=c_{1}(a_{2,1,3}X_{1,1,3}+a_{2,2,3}X_{2,1,3})(a_{1,1,1}X_{1,1,1}+a_{1,2,1}X_{2,1,1})\\ &\quad+(a_{1}a_{2,1,1}X_{1,1,1}+a_{1}a_{2,2,1}X_{2,1,1}+b_{1}a_{1,1,1}X_{1,1,1}+b_{1}a_{1,2,1}X_{2,1,1})\cdot\\ &\qquad\cdot(a_{1,1,3}X_{1,1,3}+a_{1,2,3}X_{2,1,3}).\end{split} \tag{69}\]
Since \(I_{0}\) is generated by the polynomial \(\det(X_{i,j,3})-1\) it is trivial to see that as long as one of the coefficients of the presentation of \(R_{s,v}\) as a sum of monomials is non-zero we will be done. From now on assume that this is not so.
Then looking at the coefficients of the monomials \(X_{1,1,1}X_{1,1,3}\) and \(X_{1,1,1}X_{2,1,3}\) we get that
\[(a_{1}a_{2,1,1}+b_{1}a_{1,1,1})a_{1,1,3}+c_{1}a_{1,1,1}a_{2,1,3}=0,\,\mbox{and} \tag{70}\]
\[(a_{1}a_{2,1,1}+b_{1}a_{1,1,1})a_{1,2,3}+c_{1}a_{1,1,1}a_{2,2,3}=0. \tag{71}\]
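In matrix form, (70) and (71) together say that
\[(a_{1}a_{2,1,1}+b_{1}a_{1,1,1},\;c_{1}a_{1,1,1})\cdot\Pi_{3}=(0,0).\]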
Since \(\det(\Pi_{3})\neq 0\), the above implies that \((a_{1}a_{2,1,1}+b_{1}a_{1,1,1},c_{1}a_{1,1,1})=(0,0)\). Note that \(c_{1}\neq 0\) by construction, thus \(a_{1,1,1}=0\). This in turn gives \(a_{1}a_{2,1,1}=0\) and since again \(a_{1}\neq 0\) we get \(a_{2,1,1}=0\), which would imply \(\det(\Pi_{1,1})=0\).
**Case \(2\):**\(H_{1}^{(1)}\cdot H_{2}^{(1)}r_{2}s_{2}=H_{1}^{(2)}\cdot H_{2}^{(2)}r_{1}s_{1}\)
In this case we will have that \(r_{k}\), \(s_{k}\neq 0\) for \(k=1\), \(2\) by construction. Let us write \(R_{H_{j},k}\) for the polynomial corresponding to \(H_{j}^{(k)}\), for example \(D_{1}\cdot R_{H_{1},1}\) is the polynomial described in (69).
Then \(R_{s,v}=r_{2}s_{2}R_{H_{1},1}R_{H_{2},1}-r_{1}s_{1}R_{H_{1},2}R_{H_{2},2}\). The same computations giving (69) give
\[\begin{split} D_{1}\cdot R_{H_{2},1}=c_{1}(a_{2,1,3}X_{1,2,3}+a_ {2,2,3}X_{2,2,3})(a_{1,1,1}X_{1,1,1}+a_{1,2,1}X_{2,1,1})\\ +(a_{1}a_{2,1,1}X_{1,1,1}+a_{1}a_{2,2,1}X_{2,1,1}+b_{1}a_{1,1,1}X_ {1,1,1}+b_{1}a_{1,2,1}X_{2,1,1})\cdot\\ \cdot(a_{1,1,3}X_{1,2,3}+a_{1,2,3}X_{2,2,3}).\end{split} \tag{72}\]
Writing
\[R_{H_{1},1}=C_{1}X_{1,1,1}X_{1,1,3}+C_{2}X_{1,1,1}X_{2,1,3}+C_{3}X_{2,1,1}X_{1, 1,3}+C_{4}X_{2,1,1}X_{2,1,3}\]
we notice that
\[R_{H_{2},1}=C_{1}X_{1,1,1}X_{1,2,3}+C_{2}X_{1,1,1}X_{2,2,3}+C_{3}X_{2,1,1}X_{1, 2,3}+C_{4}X_{2,1,1}X_{2,2,3},\]
i.e. the coefficients are the same with at least one of them being non-zero.
By symmetry one has
\[R_{H_{1},2}=C_{1}^{\prime}X_{1,1,2}X_{1,1,3}+C_{2}^{\prime}X_{1,1,2}X_{2,1,3}+ C_{3}^{\prime}X_{2,1,2}X_{1,1,3}+C_{4}^{\prime}X_{2,1,2}X_{2,1,3}\]
and
\[R_{H_{2},2}=C_{1}^{\prime}X_{1,1,2}X_{1,2,3}+C_{2}^{\prime}X_{1,1,2}X_{2,2,3}+ C_{3}^{\prime}X_{2,1,2}X_{1,2,3}+C_{4}^{\prime}X_{2,1,2}X_{2,2,3},\]
i.e. the coefficients are again the same and at least one of them is non-zero.
Now, if \(R_{s,v}\in I_{0}\) we would have \(R_{s,v}\in m_{1}:=\langle X_{1,1,3}-1,X_{2,1,3},X_{1,2,3},X_{2,2,3}-1\rangle\). This in turn implies that
\[\begin{split} r_{2}s_{2}(C_{1}X_{1,1,1}+C_{3}X_{2,1,1})(C_{2}X_{1,1,1}+C_{4}X_{2,1,1})\\ -r_{1}s_{1}(C_{1}^{\prime}X_{1,1,2}+C_{3}^{\prime}X_{2,1,2})(C_{2} ^{\prime}X_{1,1,2}+C_{4}^{\prime}X_{2,1,2})=0.\end{split} \tag{73}\]
The proof in the previous case shows that at least one of the \(C_{1}\) and \(C_{2}\), and similarly at least one of \(C_{3}\) and \(C_{4}\) are non-zero, and the same for the coefficients \(C_{j}^{\prime}\). If (73) were to hold we must then have that, without loss of generality, \(C_{2}=C_{4}=0\).
Then, noting that \(I_{0}\subset m_{2}:=\langle X_{1,1,3}-1,X_{2,1,3},X_{1,2,3}-1,X_{2,2,3}-1\rangle\), we get \(R_{s,v}\in m_{2}\) which implies
\[r_{2}s_{2}(C_{1}X_{1,1,1}+C_{3}X_{2,1,1})(C_{1}X_{1,1,1}+C_{3}X_{2,1,1})-F(X_{1,1,2},X_{2,1,2})=0. \tag{74}\]
This is clearly impossible: since \(r_{2}s_{2}C_{1}\neq 0\), the coefficient of \(X_{1,1,1}^{2}\) on the left-hand side of (74) is \(r_{2}s_{2}C_{1}^{2}\neq 0\).
#### 5.2.2 The \(\mathbb{G}_{m}\times E\times E^{\prime}\) case
The same issue as in Section 5.2.1 pops up. Namely, there are too many possibly transcendental numbers that appear in our equations. Nevertheless, there are special cases here where we can extract relations among the values of the G-functions of our family.
#### \(E^{\prime}\) is CM
Let us write \(\begin{pmatrix}\varpi_{0,1}&\varpi_{0,2}\\ \varpi_{0,3}&\varpi_{0,4}\end{pmatrix}\) for the periods of the elliptic curve \(E\) and \(\begin{pmatrix}\frac{\varpi}{2\pi i}&0\\ 0&\varpi^{-1}\end{pmatrix}\) for those of \(E^{\prime}\).
Working with the isogenous pair \(\phi_{2}^{\vee}:E_{2}\to E_{3}\) of the fiber at \(s\), we get the following; here, as before, we write \(\Pi_{k}\cdot Y_{G,k}(x(s))=(h_{i,j}^{(k)})\), noting that now \(\Pi_{k}\), for \(k=2\), \(3\), are defined in the same manner as \(\Pi_{3}\) in Section 5.2.1:
\[\begin{pmatrix}a_{3}&0\\ b_{3}&c_{3}\end{pmatrix}\begin{pmatrix}h_{i,j}^{(2)}\end{pmatrix}\begin{pmatrix}\varpi_{0,1}&\varpi_{0,2}\\ \varpi_{0,3}&\varpi_{0,4}\end{pmatrix}=\begin{pmatrix}h_{i,j}^{(3)}\end{pmatrix}\begin{pmatrix}\frac{\varpi}{2\pi i}&0\\ 0&\varpi^{-1}\end{pmatrix}\begin{pmatrix}p_{3}&q_{3}\\ r_{3}&s_{3}\end{pmatrix}. \tag{75}\]
From this, one gets working as in the "second way" above
\[(H_{1}^{(3)}\varpi_{0,1}+H_{2}^{(3)}\varpi_{0,3},H_{1}^{(3)}\varpi_{0,2}+H_{2 }^{(3)}\varpi_{0,4})=(\frac{r_{3}}{\varpi},\frac{s_{3}}{\varpi}),\text{ and} \tag{76}\]
\[(H_{3}^{(3)}\varpi_{0,1}+H_{4}^{(3)}\varpi_{0,3},H_{3}^{(3)}\varpi_{0,2}+H_{4 }^{(3)}\varpi_{0,4})=(\frac{p_{3}\varpi}{2\pi i},\frac{q_{3}\varpi}{2\pi i}). \tag{77}\]
Now we look at the pair of isogenous elliptic curves \(\phi:E_{3}\to E_{1}\). From the previous discussion, working as in the "second way" outlined in the previous section, we get:
\[\begin{pmatrix}a_{1}&0\\ b_{1}&c_{1}\end{pmatrix}\begin{pmatrix}h_{i,j}^{(3)}\end{pmatrix}\begin{pmatrix} \frac{\varpi}{2\pi i}&0\\ 0&\varpi^{-1}\end{pmatrix}=\begin{pmatrix}h_{i,j}^{(1)}\end{pmatrix}\begin{pmatrix} d_{1}&e_{0,1}\\ d_{1}^{\prime}&e_{0,1}^{\prime}\end{pmatrix}\begin{pmatrix}p_{1}&q_{1}\\ r_{1}&s_{1}\end{pmatrix}. \tag{78}\]
This will lead us to equations of the form \(H_{1}^{(1)}\varpi=r_{1}\) and \(H_{2}^{(1)}\frac{1}{\varpi}=\frac{s_{1}}{2\pi i}\). For the other pair of functions we get equations of the form
\[(\frac{\varpi}{2\pi i}H_{3}^{(1)},\frac{1}{\varpi}H_{4}^{(1)})=(d_{1}p_{1}+e_{0,1}r_{1},d_{1}q_{1}+e_{0,1}s_{1}). \tag{79}\]
**Remark 5.8**.: _These seem to not be sufficient for our purposes in dealing with the general case here, i.e. the case where the other elliptic curve \(E\) is generic. Once again, there are too many possibly transcendental numbers that appear in these equations._
#### \(E\) is also CM
Let us now assume that \(E\) is also CM. Then we can get relations in two different ways.
**First way:** Working as in Section 5.2.1, namely the constructions there under the assumption that \(E\) is CM, we get from the isogenous pair \((E_{1},E_{3})\) the relation
\[H_{1}^{(1)}H_{2}^{(1)}=\frac{r_{1}s_{1}}{2\pi i}, \tag{80}\]
and working with the pair \((E_{2},E_{3})\) we get the relation
\[H_{1}^{(2)}H_{2}^{(2)}=\frac{r_{2}s_{2}}{2\pi i}. \tag{81}\]
From these we can get rid of \(\pi\) and get a relation as before.
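Explicitly, whenever \(r_{1}s_{1}r_{2}s_{2}\neq 0\), cross-multiplying (80) and (81) eliminates \(2\pi i\) and yields
\[r_{2}s_{2}\,H_{1}^{(1)}H_{2}^{(1)}=r_{1}s_{1}\,H_{1}^{(2)}H_{2}^{(2)},\]
while if some \(r_{k}\) or \(s_{k}\) vanishes, one of the products \(H_{1}^{(k)}H_{2}^{(k)}\) is itself zero and we obtain a relation of the form \(H_{j}^{(k)}=0\) directly.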
**Second way:** The second way is to work only with the pair \((E_{2},E_{3})\). One then gets a simplified version of the equation in (75). Namely, one has:
\[\begin{pmatrix}a_{3}&0\\ b_{3}&c_{3}\end{pmatrix}\begin{pmatrix}h_{i,j}^{(2)}\end{pmatrix}\begin{pmatrix} \frac{\varpi^{\prime}}{2\pi i}&0\\ 0&\varpi^{\prime-1}\end{pmatrix}=\begin{pmatrix}h_{i,j}^{(3)}\end{pmatrix} \begin{pmatrix}\frac{\varpi}{2\pi i}&0\\ 0&\varpi^{-1}\end{pmatrix}\begin{pmatrix}p_{3}&q_{3}\\ r_{3}&s_{3}\end{pmatrix}, \tag{82}\]
where \(\begin{pmatrix}\frac{\varpi^{\prime}}{2\pi i}&0\\ 0&\varpi^{\prime-1}\end{pmatrix}\) is the period matrix of \(E\).
We work much as in the "second way" outlined in Section 5.2.1. Multiplying both sides of the above on the left by \((h_{2,2}^{(3)},-h_{1,2}^{(3)})\) we get:
\[(a_{3}h_{2,2}^{(3)}-b_{3}h_{1,2}^{(3)},-c_{3}h_{1,2}^{(3)})\begin{pmatrix}h_{i,j}^{(2)}\end{pmatrix}\begin{pmatrix}\frac{\varpi^{\prime}}{2\pi i}&0\\ 0&\varpi^{\prime-1}\end{pmatrix}=(1,0)\begin{pmatrix}\frac{\varpi}{2\pi i}&0 \\ 0&\varpi^{-1}\end{pmatrix}\begin{pmatrix}p_{3}&q_{3}\\ r_{3}&s_{3}\end{pmatrix}. \tag{83}\]
As usual setting \((g_{3},g_{4}):=(a_{3}h_{2,2}^{(3)}-b_{3}h_{1,2}^{(3)},-c_{3}h_{1,2}^{(3)})\) and \((H_{3},H_{4}):=(g_{3}h_{1,1}^{(2)}+g_{4}h_{2,1}^{(2)},g_{3}h_{1,2}^{(2)}+g_{4} h_{2,2}^{(2)})\), we get
\[(\frac{\varpi^{\prime}H_{3}}{2\pi i},\frac{H_{4}}{\varpi^{\prime}})=(\frac{ \varpi p_{3}}{2\pi i},\frac{\varpi q_{3}}{2\pi i}). \tag{84}\]
Multiplying (82) on the left on both sides by \((-h_{2,1}^{(3)},h_{1,1}^{(3)})\) and repeating the notation from earlier we end up with the relations:
\[(\frac{\varpi^{\prime}H_{1}}{2\pi i},\frac{H_{2}}{\varpi^{\prime}})=(\frac{r_{ 3}}{\varpi},\frac{s_{3}}{\varpi}). \tag{85}\]
Combining this with (84) gives
\[H_{1}H_{2}H_{3}H_{4}=p_{3}q_{3}r_{3}s_{3}. \tag{86}\]
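For the reader's convenience, one can check this directly by solving (84) and (85) for the \(H_{i}\):
\[H_{1}H_{2}=\frac{2\pi i\,r_{3}}{\varpi\varpi^{\prime}}\cdot\frac{\varpi^{\prime}s_{3}}{\varpi}=\frac{2\pi i\,r_{3}s_{3}}{\varpi^{2}},\qquad H_{3}H_{4}=\frac{\varpi p_{3}}{\varpi^{\prime}}\cdot\frac{\varpi\varpi^{\prime}q_{3}}{2\pi i}=\frac{\varpi^{2}p_{3}q_{3}}{2\pi i},\]
so that all the transcendental quantities cancel in the product (86).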
**Remark 5.9**.: _The \(H_{i}\) correspond to homogeneous degree \(2\) polynomials among the \(h_{i,j}^{(k)}\). To turn (86) into a relation coming from a homogeneous polynomial we can just multiply its right hand side by \(1=\det(y_{i,j,3}(x(s)))^{2}\det(y_{i,j,2}(x(s)))^{2}\)._
Once again we finish as in the previous case by recording the following lemmas that guarantee the existence of the "factors" \(R_{s,v}\).
**Lemma 5.10**.: _Let \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) be a \(G_{ZP}\)-admissible semiabelian scheme. Assume that \(\mathcal{X}_{0}\) is of \(\mathbb{G}_{m}\times E\times E^{\prime}\)-type with \(E\) and \(E^{\prime}\) CM._
_Let \(s\in C_{t}(\bar{\mathbb{Q}})\) be some point with unlikely isogenies and let \(L_{s}\) be its associated field of coefficients. Then if \(s\) is \(v\)-adically close to \(\xi_{t}\) with respect to some archimedean place \(v\in\Sigma_{L_{s},\infty}\), there exists \(R_{s,v}\in L_{s}[X_{i,j,k}^{(t)}]\) such that the following hold_
1. \(\iota_{v}(R_{s,v}(\mathcal{Y}_{\xi_{t}}(x(s))))=0\)_,_
2. \(R_{s,v}\) _is homogeneous of degree_ \(\deg(R_{s,v})\leq 4\)_, and_
3. \(R_{s,v}\notin I_{0}\leq L_{s}[X_{i,j,k}^{(t)}]\)_, where_ \(I_{0}\) _is the ideal defined in Theorem_ 3.4_._
Proof.: We proceed much in the same way as in the proof of Lemma 5.7. It is then straightforward to see that the relations among the \(h_{i,j}^{(k)}\) outlined in the "first way" above, with the same arguments as before, will correspond to polynomials \(R_{s,v}\). By construction these will satisfy the first two conclusions of our lemma. Again all that is left to check is that these \(R_{s,v}\) are not in the ideal \(I_{0}\). Once again, to simplify notation, we drop temporarily any mention of "\(t\)", the index of the root \(\xi_{t}\) of our parameter \(x\).
As we did in the proof of Lemma 5.7 we let \(\Pi_{1,1}=(a_{i,j,1})\) and \(\Pi_{k}=(a_{i,j,k})\) for \(k=2\) or \(3\).
**Case \(1\): \(H_{j}^{(k)}=0\)**
Again, if, without loss of generality, \(H_{1}^{(1)}=0\) we see as before that \(R_{s,v}\notin I_{0}\). Indeed there are monomials of the form \(X_{i,j,1}X_{i^{\prime},j^{\prime},3}\) that appear in its expression as a sum of monomials with non-zero coefficients. The ideal \(I_{0}\) is now generated by the two polynomials \(f_{2}:=\det(X_{i,j,2})-1\) and \(f_{3}:=\det(X_{i,j,3})-1\). In this case \(R_{s,v}\) is homogeneous of degree \(2\) and we are done since the monomials of \(f_{2}\) and \(f_{3}\) are not of the proper form.
**Case \(2\): \(H_{1}^{(1)}\cdot H_{2}^{(1)}r_{2}s_{2}=H_{1}^{(2)}\cdot H_{2}^{(2)}r_{1}s_{1}\)**
Again here \(r_{k}\), \(s_{k}\neq 0\) for \(k=1\), \(2\) by construction. Note that the polynomials \(R_{H_{1},1}\) and \(R_{H_{2},1}\) introduced in the proof of Lemma 5.7 will be the same here, while the polynomials \(R_{H_{j},2}\) can be described similarly, replacing \(X_{i,j,3}\) by \(X_{i,j,2}\) in the expression of \(R_{H_{j},1}\) as a sum of monomials that appears in the aforementioned proof. We write
\[R_{H_{j},2}=C_{1}^{\prime}X_{1,1,1}X_{1,j,2}+C_{2}^{\prime}X_{1,1,1}X_{2,j,2}+C _{3}^{\prime}X_{2,1,1}X_{1,j,2}+C_{4}^{\prime}X_{2,1,1}X_{2,j,2}\]
The polynomial in question can then be written as \(R_{s,v}=r_{2}s_{2}R_{H_{1},1}R_{H_{2},1}-r_{1}s_{1}R_{H_{1},2}R_{H_{2},2}\). Consider the following ideals
\[m_{1}:=\langle X_{1,1,k}-1,X_{1,2,k},X_{2,1,k},X_{2,2,k}-1:k=2,3\rangle,\] \[m_{2}:=\langle X_{1,1,k}-1,X_{1,2,1},X_{1,2,2}-1,X_{2,1,k},X_{2,2,k}-1:k=2,3\rangle,\] \[m_{3}:=\langle X_{1,1,k}-1,X_{2,1,1}-1,X_{1,2,k},X_{2,1,2},X_{2,2,k}-1:k=2,3\rangle,\] \[m_{4}:=\langle X_{1,1,k}-1,X_{1,2,1}-1,X_{1,2,2},X_{2,1,k},X_{2,2,k}-1:k=2,3\rangle.\]
Note that \(I_{0}\subset m_{j}\) hence we get \(R_{s,v}\in m_{j}\) for \(1\leq j\leq 4\).
Modding out \(R_{s,v}\) by \(m_{1}\) and looking at the coefficients of \(X_{1,1,1}^{2}\) and \(X_{2,1,1}^{2}\) we conclude that
\[r_{2}s_{2}C_{1}C_{2}=r_{1}s_{1}C_{1}^{\prime}C_{2}^{\prime}\text{ and }r_{2}s_{2}C_{3}C_{4}=r_{1}s_{1}C_{3}^{\prime}C_{4}^{\prime}. \tag{87}\]
On the other hand modding \(R_{s,v}\) by \(m_{2}\) and looking at the coefficients of the same terms as above we get
\[r_{2}s_{2}C_{1}C_{2}=r_{1}s_{1}(C_{1}^{\prime}C_{2}^{\prime}+(C_{1}^{\prime}) ^{2})\text{ and }r_{2}s_{2}C_{3}C_{4}=r_{1}s_{1}(C_{3}^{\prime}C_{4}^{\prime}+(C_{3}^{ \prime})^{2}). \tag{88}\]
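Since the left-hand sides of (87) and (88) coincide, subtracting the two gives
\[r_{1}s_{1}(C_{1}^{\prime})^{2}=0\quad\text{and}\quad r_{1}s_{1}(C_{3}^{\prime})^{2}=0,\]
with \(r_{1}s_{1}\neq 0\).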
Thus \(C_{1}^{\prime}=C_{3}^{\prime}=0\) and by the proof of Lemma 5.7 we know that this implies that \(C_{2}^{\prime}\), \(C_{4}^{\prime}\neq 0\).
The above also shows that one of \(C_{1}\) and \(C_{2}\) is \(0\), and likewise for the pair \(C_{3}\) and \(C_{4}\). Assume from now on without loss of generality that \(C_{1}\neq 0\). This forces \(C_{2}=0\).
Then modding \(R_{s,v}\) out by \(m_{3}\) we get
\[(C_{1}X_{1,1,1}+C_{3}X_{2,1,1}+C_{4}X_{2,1,1})(C_{4}X_{2,1,1})=0, \tag{89}\]
which forces \(C_{4}=0\); thus, again by the proof of Lemma 5.7, \(C_{3}\neq 0\).
We conclude that
\[R_{s,v}=r_{2}s_{2}X_{1,1,3}X_{1,2,3}(C_{1}X_{1,1,1}+C_{3}X_{2,1,1})^{2}-r_{1}s_{1}X_{2,1,2}X_{2,2,2}(C_{2}^{\prime}X_{1,1,1}+C_{4}^{\prime}X_{2,1,1})^{2}.\]
Finally, from \(R_{s,v}\in m_{4}\) we conclude that
\[r_{2}s_{2}(C_{1}X_{1,1,1}+C_{3}X_{2,1,1})^{2}=0, \tag{90}\]
which forces \(C_{1}=C_{3}=0\) which is a contradiction.
Finally, we close off this section by looking at the alternate relation described in the "second way" above. Namely, we consider the polynomial that corresponds to the relation among the entries of the values of the G-matrices \(Y_{G,k}(x(s))\) for \(k=2\), \(3\), that comes from the equation
\[H_{1}H_{2}H_{3}H_{4}=\iota_{v}(p_{3}q_{3}r_{3}s_{3}\det(y_{i,j,3}(x(s)))^{2}\det (y_{i,j,2}(x(s)))^{2}). \tag{91}\]
Using the computations in the "second way" above we reach the following:
**Lemma 5.11**.: _In the context of Lemma 5.10 there exists homogeneous \(R_{s,v}\in L_{s}[X_{i,j,k}^{(t)}:1\leq i,j\leq 2,k=2,3]\) such the following hold_
1. \(\iota_{v}(R_{s,v}(\mathcal{Y}_{\xi_{t}}(x(s))))=0\)_,_
2. \(R_{s,v}\) _is homogeneous of degree_ \(\deg(R_{s,v})\leq 8\)_, and_
3. \(R_{s,v}\notin I_{0}\leq L_{s}[X_{i,j,k}^{(t)}:1\leq i,j\leq 2,k=2,3]\)_, where_ \(I_{0}\) _is the ideal defined in Theorem_ 3.4_._
Proof.: The first two properties follow by construction. The construction gives us two possible cases: either one of the entries of \(\begin{pmatrix}p_{3}&q_{3}\\ r_{3}&s_{3}\end{pmatrix}\) is zero, in which case we get a polynomial from the relation \(H_{j}=0\), or they are all non-zero, in which case we look to (91) for the polynomial \(R_{s,v}\) we want.
As in earlier proofs we are reduced to showing non-triviality of the relations in question, i.e. that \(R_{s,v}\notin I_{0}\). As in previous proofs we drop any reference of the index "\(t\)" from now on for notational simplicity.
**Case \(1\): \(H_{j}=0\)**
Without loss of generality assume that \(H_{3}=0\), i.e. that \(p_{3}=0\). Then by construction we have
\[R_{s,v}=C_{1}X_{1,2,3}X_{1,1,2}+C_{2}X_{1,2,3}X_{2,1,2}+C_{3}X_{2,2,3}X_{1,1,2 }+C_{4}X_{2,2,3}X_{2,1,2},\text{ where } \tag{92}\]
\[C_{1}=(a_{3}a_{2,1,3}-b_{3}a_{1,1,3})a_{1,1,2}-c_{3}a_{1,1,3}a_{2,1,2},\]
\[C_{2}=(a_{3}a_{2,1,3}-b_{3}a_{1,1,3})a_{1,2,2}-c_{3}a_{1,1,3}a_{2,2,2},\]
\[C_{3}=(a_{3}a_{2,2,3}-b_{3}a_{1,2,3})a_{1,1,2}-c_{3}a_{1,2,3}a_{2,1,2},\text{ and }\]
\[C_{4}=(a_{3}a_{2,2,3}-b_{3}a_{1,2,3})a_{1,2,2}-c_{3}a_{1,2,3}a_{2,2,2}.\]
Now assume that \(R_{s,v}=0\), i.e. that \(C_{j}=0\) for all \(j\). Since \(\Pi_{2}=(a_{i,j,2})\) is invertible and by definition \(c_{3}\neq 0\), we will then have that \(a_{1,1,3}=a_{1,2,3}=0\) which clearly contradicts the fact that \(\Pi_{3}=(a_{i,j,3})\) is invertible. Therefore,
we get \(R_{s,v}\neq 0\) and the monomials in \(R_{s,v}\) do not appear in the presentations of the two generators \(\det(X_{i,j,2})-1\) and \(\det(X_{i,j,3})-1\) of the ideal \(I_{0}\), thus \(R_{s,v}\notin I_{0}\) in this case.
We note that furthermore, as in the proof of Lemma 5.7, we can see that at least one of \(C_{1}\) and \(C_{2}\) has to be non-zero and likewise for the pair \(C_{3}\), \(C_{4}\). Indeed, if say \(C_{1}=C_{2}=0\), since \(\det(\Pi_{2})\neq 0\) and \(a_{3}\), \(c_{3}\neq 0\) we must have that \(a_{1,1,3}=a_{2,1,3}=0\) but this once again contradicts the fact that \(\det(\Pi_{3})\neq 0\).
**Case \(2\):**\(H_{1}H_{2}H_{3}H_{4}=\iota_{v}(p_{3}q_{3}r_{3}s_{3}\det(y_{i,j,3}(x(s)))^{2}\det(y_{i,j,2}(x(s)))^{2})\)
Again here we will have that all entries of the matrix \(\begin{pmatrix}p_{3}&q_{3}\\ r_{3}&s_{3}\end{pmatrix}\) are non-zero. Assume from now on that \(R_{s,v}\in I_{0}\) and write \(f_{k}:=\det(X_{i,j,k})-1\) for its two generators.
Let us write \(R_{i}\) for the polynomial corresponding to each of the \(H_{i}\). In this sense we will have
\[R_{s,v}=R_{1}R_{2}R_{3}R_{4}-p_{3}q_{3}r_{3}s_{3}\det(X_{i,j,3})^{4}.\]
We have already seen that
\[R_{3}=C_{1}X_{1,2,3}X_{1,1,2}+C_{2}X_{1,2,3}X_{2,1,2}+C_{3}X_{2,2,3}X_{1,1,2}+C _{4}X_{2,2,3}X_{2,1,2}.\]
Computing \(R_{4}\) we see, as in the previous case, that we may write
\[R_{4}=C_{1}X_{1,2,3}X_{1,2,2}+C_{2}X_{1,2,3}X_{2,2,2}+C_{3}X_{2,2,3}X_{1,2,2}+ C_{4}X_{2,2,3}X_{2,2,2},\]
where \(C_{j}\) are the exact same coefficients as above.
Similar computations give
\[R_{1}=C_{1}^{\prime}X_{1,1,3}X_{1,1,2}+C_{2}^{\prime}X_{1,1,3}X_{2,1,2}+C_{3}^ {\prime}X_{2,1,3}X_{1,1,2}+C_{4}^{\prime}X_{2,1,3}X_{2,1,2}\ \ \text{and}\]
\[R_{2}=C_{1}^{\prime}X_{1,1,3}X_{1,2,2}+C_{2}^{\prime}X_{1,1,3}X_{2,2,2}+C_{3}^ {\prime}X_{2,1,3}X_{1,2,2}+C_{4}^{\prime}X_{2,1,3}X_{2,2,2},\]
again with the same coefficients.
Let us first consider the ideal
\[m_{1}:=\langle f_{2},X_{1,2,3},X_{2,1,3},X_{1,1,3}X_{2,2,3}-1\rangle,\]
noting that \(I_{0}\subset m_{1}\). Then from \(R_{s,v}\in I_{0}\) we conclude that the polynomial
\[\begin{split} Q_{1}:=(C_{1}^{\prime}X_{1,1,2}+C_{2}^{\prime}X_{2,1,2})(C_{1}^{\prime}X_{1,2,2}+C_{2}^{\prime}X_{2,2,2})\\ (C_{3}X_{1,1,2}+C_{4}X_{2,1,2})(C_{3}X_{1,2,2}+C_{4}X_{2,2,2})\end{split} \tag{93}\]
is such that \(Q_{1}\in(f_{2})\), where \((f_{2})\) here denotes the principal ideal of \(L_{s}(X_{1,1,3})[X_{i,j,2}:1\leq i,j\leq 2]\).
Noting that \((f_{2})\subset m_{2}:=\langle X_{2,2,2}-1,X_{1,1,2}-1,X_{1,2,2},X_{2,1,2}\rangle\), we can see, modding out \(Q_{1}\) by \(m_{2}\), that
\[C_{1}^{\prime}C_{2}^{\prime}C_{3}C_{4}=0. \tag{94}\]
Let us assume without loss of generality that \(C_{1}^{\prime}=0\). Then from the discussion in the first part, from the symmetry of the definition of the \(H_{j}\), we know that \(C_{2}^{\prime}\neq 0\).
On the other hand, modding out \(Q_{1}\) by the ideals \(m_{3,n}:=\langle X_{2,2,2}-1,X_{1,1,2}-1,X_{1,2,2},X_{2,1,2}-n\rangle\), for which \(I_{0}\subset m_{3,n}\) for all \(n\in\mathbb{N}\), we see that
\[(C_{2}^{\prime})^{2}(C_{3}+nC_{4})C_{4}=0 \tag{95}\]
holds for all \(n\in\mathbb{N}\). This clearly implies that \(C_{4}=0\) and hence \(C_{3}\neq 0\) by our remarks in the previous case of the proof.
Now the relations \(C_{1}^{\prime}=C_{4}=0\) imply
\[(C_{2}^{\prime}C_{3})^{-2}\cdot Q_{1}=X_{2,1,2}X_{2,2,2}X_{1,1,2}X_{1,2,2}\in (f_{2}),\]
the latter viewed as an ideal in the ring \(L_{s}(X_{1,1,3})[X_{i,j,2}:1\leq i,j\leq 2]\). Since \((f_{2})\) is prime this would imply that \(X_{i,j,2}\in(f_{2})\) for some pair \(i\), \(j\) which is clearly absurd.
**Remark 5.12**.: _The distinct advantage of Lemma 5.11 is that one only needs one isogeny to create the relations in question! The disadvantage, negligible for our arguments, is that the polynomial will potentially be of higher degree than the one constructed in Lemma 5.10._
### Archimedean relations at points with unlikely isogenies
Putting everything together from the previous subsection we can conclude with the following proposition describing archimedean relations among values of G-functions at points with unlikely isogenies for \(G_{ZP}\)-admissible semiabelian schemes and with \(n\) arbitrary this time.
**Proposition 5.13**.: _Let \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) be a \(G_{ZP}\)-admissible semiabelian scheme, as in the discussion in the beginning of Section 5.2 with \(n\) arbitrary. Let \(s\in C(\bar{\mathbb{Q}})\) be a point that has unlikely isogenies and assume that not all of the isogenous coordinates of \(\mathcal{X}_{s}\) are singular for \(S^{\prime}\), while all of these that are smooth coordinates for \(S^{\prime}\) are furthermore CM for \(S^{\prime}\)._
_Then, there exists a homogeneous polynomial \(R_{s,\infty}\in L_{s}[X_{i,j,k}:1\leq i,j\leq 2,1\leq k\leq n]\), where \(L_{s}/K(s)\) is a finite extension, such that the following hold:_
1. \(\iota_{v}(R_{s,\infty}(\mathcal{Y}(x(s))))=0\) _for all_ \(v\in\Sigma_{L_{s},\infty}\) _for which_ \(s\) _is_ \(v\)_-adically close to_ \(0\)_,_
2. \([L_{s}:\mathbb{Q}]\leq c_{1}(n)[K(s):\mathbb{Q}]\)_, with_ \(c_{1}(n)>0\) _a constant depending only on_ \(n\)_,_
3. \(\deg(R_{s,\infty})\leq 8[L_{s}:\mathbb{Q}]\)_, and_
4. \(R_{s,\infty}(\mathcal{Y}(x))=0\) _does not hold generically, in other words the relation defined by the polynomial is "non-trivial"._
**Remarks 5.14**.: _We note that the points with unlikely isogenies for which all of the isogenous coordinates are singular are, for all practical purposes, dealt with by the work of Daw and Orr in [10]._
Proof.: The proof is identical to that of Proposition 4.2. Let \(i_{1},\ldots,i_{4}\) be the four isogenous coordinates of \(\mathcal{X}_{s}\) and let us write \(\mathcal{E}_{i_{j},0}\) for the fibers of the various connected Neron models at \(s_{0}\). We assume without loss of generality that \(i_{1}<i_{2}\leq i_{3}<i_{4}\).
The assumption that not all isogenous coordinates of \(\mathcal{X}_{s}\) are singular for \(S^{\prime}\) and the definition of \(G_{ZP}\)-admissibility shows that we are in either of the following situations:
**Case 1:**\(i_{2}=i_{3}\) and \(\mathcal{E}_{i_{1},0}\times\mathcal{E}_{i_{2},0}\times\mathcal{E}_{i_{4},0} \simeq\mathbb{G}_{m}^{2}\times E\) with \(E\) CM.
The local factors \(R_{s,v}\) in this case will be those constructed in Lemma 5.7.
**Case 2:**\(i_{2}=i_{3}\) and \(\mathcal{E}_{i_{1},0}\times\mathcal{E}_{i_{2},0}\times\mathcal{E}_{i_{4},0} \simeq\mathbb{G}_{m}\times E\times E^{\prime}\) with \(E\), \(E^{\prime}\) both CM.
The local factors \(R_{s,v}\) are those constructed in Lemma 5.10.
**Case 3:**\(i_{2}\neq i_{3}\) and \(\mathcal{E}_{i_{1},0}\times\mathcal{E}_{i_{2},0}\times\mathcal{E}_{i_{3},0} \times\mathcal{E}_{i_{4},0}\simeq\mathbb{G}_{m}^{3}\times E\) with \(E\) CM.
The local factors \(R_{s,v}\) in this case will be those constructed in Lemma 5.15.
**Case 4:**\(i_{2}\neq i_{3}\) and \(\mathcal{E}_{i_{1},0}\times\mathcal{E}_{i_{2},0}\times\mathcal{E}_{i_{3},0} \times\mathcal{E}_{i_{4},0}\simeq\mathbb{G}_{m}^{2}\times E\times E^{\prime}\) with \(E\) and \(E^{\prime}\) both CM.
There are two subcases here. If two of the isogenous coordinates, say \(i_{3}\) and \(i_{4}\), are CM then the local factors are those defined by Lemma 5.11.
On the other hand, if none of the pairs of isogenous coordinates are both CM, we need to use the local factors \(R_{s,v}\) of Lemma 5.15.
**Case 5:**\(i_{2}\neq i_{3}\) and \(\mathcal{E}_{i_{1},0}\times\mathcal{E}_{i_{2},0}\times\mathcal{E}_{i_{3},0}\times \mathcal{E}_{i_{4},0}\simeq\mathbb{G}_{m}\times E\times E^{\prime}\times E^{ \prime\prime}\) with \(E\), \(E^{\prime}\), and \(E^{\prime\prime}\) all CM.
In this case at least one of the pairs of isogenous coordinates are both CM. Thus, we can use the local factors \(R_{s,v}\) of Lemma 5.11.
**Case 6:** all of the coordinates \(i_{j}\) are CM.
The local factors are those defined by Lemma 5.11.
The definition of \(R_{s,\infty}\) and the proof of its properties follow exactly as in the proof of Proposition 4.2.
**Lemma 5.15**.: _Let \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) be a \(G_{ZP}\)-admissible semiabelian scheme with \(n=4\). Let \(s\in C_{t}(\bar{\mathbb{Q}})\) be some point with unlikely isogenies and let \(L_{s}\) be its associated field of coefficients. Assume that \(s\) is \(v\)-adically close to \(\xi_{t}\) with respect to some archimedean place \(v\in\Sigma_{L_{s},\infty}\) and that the following hold_
1. _there are isogenies_ \(\phi_{1}:\mathcal{E}_{1,s}\to\mathcal{E}_{2,s}\) _and_ \(\phi_{2}:\mathcal{E}_{3,s}\to\mathcal{E}_{4,s}\)_, and_
2. _either of the following holds:_ (a) _\(1\) and \(3\) are singular coordinates and the rest are CM, or_ (b) _\(1\), \(2\), \(3\) are all singular coordinates while \(4\) is a CM coordinate for \(S^{\prime}\)._
_Then, there exists \(R_{s,v}\in L_{s}[X^{(t)}_{i,j,k}]\) such that the following hold_
1. \(\iota_{v}(R_{s,v}(\mathcal{Y}_{\xi_{t}}(x(s))))=0\)_,_
2. \(R_{s,v}\) _is homogeneous of degree_ \(\deg(R_{s,v})\leq 4\)_, and_
3. \(R_{s,v}\notin I_{0}\leq L_{s}[X^{(t)}_{i,j,k}]\)_, where_ \(I_{0}\) _is the ideal defined in Theorem_ 3.4_._
Proof.: Let us first assume that we are in (ii)(a).
We work as in Section 5.2 in the "first way" of creating relations among the isogenous pair \(\mathcal{E}_{1,s}\) and \(\mathcal{E}_{2,s}\). We then get, as before \(H^{(1)}_{1}\) and \(H^{(1)}_{2}\) such that (67) holds. In particular either \(H^{(1)}_{j}=0\) for some \(j\) or \(H^{(1)}_{1}\cdot H^{(1)}_{2}=\frac{r_{1}s_{1}}{2\pi i}\) with \(r_{1}s_{1}\neq 0\).
If \(H^{(1)}_{j}=0\) we are done as in the proofs of earlier similar results.
Now working with the pair \(\mathcal{E}_{3,s}\) and \(\mathcal{E}_{4,s}\) we again get as before \(H^{(2)}_{1}\) and \(H^{(2)}_{2}\) such that (67) holds with \(r_{2}\), \(s_{2}\in\mathbb{Z}\). Once again if \(r_{2}=0\) or \(s_{2}=0\) we are done as before. If on the other hand \(r_{2}s_{2}\neq 0\) we get \(H^{(2)}_{1}\cdot H^{(2)}_{2}=\frac{r_{2}s_{2}}{2\pi i}\).
Assume from now on that \(r_{1}s_{1}r_{2}s_{2}\neq 0\) so that we have that
\[r_{2}s_{2}H_{1}^{(1)}\cdot H_{2}^{(1)}=r_{1}s_{1}H_{1}^{(2)}\cdot H_{2}^{(2)}. \tag{96}\]
Then by similar arguments as in Lemma 5.7 we get a polynomial \(R_{s,v}\) that is homogeneous of degree \(4\) and satisfies all of the properties that we want.
Let us now assume that we are in (ii)(b). By working with the isogenous pair \(\mathcal{E}_{3,s}\) and \(\mathcal{E}_{4,s}\) we get on the one hand the same relations as in the previous case. Namely, reducing as above to the case \(r_{2}s_{2}\neq 0\), we have
\[H_{1}^{(2)}\cdot H_{2}^{(2)}=\frac{r_{2}s_{2}}{2\pi i}. \tag{97}\]
Let us now work with the isogenous pair \(\mathcal{E}_{1,s}\) and \(\mathcal{E}_{2,s}\). Working as in the beginning of Section 5.2.1 and with the same notation for the various matrices as used there, we get that
\[\begin{pmatrix}a_{1}&0\\ b_{1}&c_{1}\end{pmatrix}\begin{pmatrix}h_{i,j}^{(1)}\end{pmatrix}\begin{pmatrix} d_{1}&e_{0,1}\\ d_{1}^{\prime}&e_{0,1}^{\prime}\end{pmatrix}=\begin{pmatrix}h_{i,j}^{(2)} \end{pmatrix}\begin{pmatrix}d_{2}&e_{0,2}\\ d_{2}^{\prime}&e_{0,2}^{\prime}\end{pmatrix}\begin{pmatrix}p_{1}&q_{1}\\ r_{1}&s_{1}\end{pmatrix}. \tag{98}\]
Arguing as in the "first way" of extracting relations described in Section 5.2.1 one again ends up with equations of the form
\[(H_{1}^{(1)},H_{2}^{(1)})=(\frac{r_{1}}{2\pi i},\frac{s_{1}}{2\pi i}), \tag{99}\]
where \(H_{j}^{(1)}\) are polynomials in the \(h_{i,j}^{(k)}\) for \(k=1\), \(2\). These are nothing but a recreation of equation (12) in [1].
We can then associate to \(r_{2}s_{2}H_{1}^{(1)}\cdot H_{2}^{(1)}=r_{1}s_{1}H_{1}^{(2)}\cdot H_{2}^{(2)}\) a polynomial \(R_{s,v}\) that will satisfy the conditions we want. The fact that only the first columns of the period matrices \(\mathcal{P}_{k,t_{v}}\) for \(k=1\) and \(2\) will appear follows from the construction of the \(H_{j}^{(1)}\) as in the proof of Lemma 5.7.
**Remark 5.16**.: _We note that the above Lemma also shows that we can recreate the relations of Daw and Orr's Proposition 4.4 in [1] in our slightly altered setting and thus deal with points for which all of the isogenous coordinates are singular for the base curve in question. We do not pursue this further since, for our applications to the Zilber-Pink conjecture, the result of Daw and Orr suffices to treat such points with unlikely isogenies._
## 6 Proof of the height bounds
Having no access to \(p\)-adic relations among the values of our G-functions, we instead use arguments centered around Gabber's lemma, as in [1] and [12], to rule out \(p\)-adic proximity of the points we are interested in to the point \(s_{0}\). After this we finally come to the proof of the height bounds we want.
### \(p\)-adic proximity
**Lemma 6.1**.: _Let \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) be a \(G_{AO}\)-admissible semiabelian scheme. Let \(s\in C(\bar{\mathbb{Q}})\) be a CM point with field of coefficients \(L_{s}\) defined as in Proposition 4.2. Furthermore assume that there exists at least one singular coordinate for \(S^{\prime}\)._
_Then if \(v\in\Sigma_{L_{s},f}\) is some finite place of \(L_{s}\), the point \(s\) is not \(v\)-adically close to \(s_{0}\)._
Proof.: Using Assumption 2.16, the proof of Lemma 5.4 in [12] shows that if \(s\) were \(v\)-adically close to \(s_{0}\) then the special fiber of the connected Neron model of \(\mathcal{X}_{s}\times_{K(s)}L_{s,v}\) would be the same as that of \(\mathcal{X}_{0}\times_{K}L_{s,v}\).
Since each coordinate \(\mathcal{E}_{k,s}\) is CM, it will have potentially good reduction at \(v\), while for \(\mathcal{X}_{0}\) we know that at least one of the coordinates is isomorphic to \(\mathbb{G}_{m}\), which is a contradiction.
**Lemma 6.2**.: _Let \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) be a \(G_{ZP}\)-admissible semiabelian scheme. Let \(s\in C(\bar{\mathbb{Q}})\) be a point with unlikely isogenies and field of coefficients \(L_{s}\)._
_Assume that for one of the pairs of isogenous coordinates, say \(i_{1}\) and \(i_{2}\), of \(\mathcal{X}_{s}\) one of them is CM for \(S^{\prime}\) and the other one is singular for \(S^{\prime}\). Then if \(v\in\Sigma_{L_{s},f}\) is some finite place of \(L_{s}\), the point \(s\) is not \(v\)-adically close to \(s_{0}\)._
Proof.: By the same argument as above we know that the special fiber of the Neron model of \(\mathcal{X}_{s}\times_{K(s)}L_{s,v}\) would be the same as that of \(\mathcal{X}_{0}\times_{K}L_{s,v}\). Then, by Corollary 7.2 of [10] we also know that \(\mathcal{E}_{1,s}\times_{K(s)}L_{s,v}\) and \(\mathcal{E}_{2,s}\times_{K(s)}L_{s,v}\) will have the same type of reduction at \(v\). By assumption we then have a contradiction since one of these will be \(\mathbb{G}_{m,\kappa(v)}\), where \(\kappa(v)\) here is the respective residue field, while the other one will be an elliptic curve over \(\kappa(v)\).
### Proof of the heights bounds
We start with the Andre-Oort related height bounds.
**Theorem 6.3**.: _Let \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) be a \(G_{AO}\)-admissible semiabelian scheme with at least one singular coordinate for \(S^{\prime}\). Then there exist effectively computable constants \(c_{1}\) and \(c_{2}\) such that for all \(s\in S(\bar{\mathbb{Q}})\) for which the fiber \(\mathcal{X}_{s}\) is CM we have that_
\[h(s)\leq c_{1}[K(s):\mathbb{Q}]^{c_{2}}. \tag{100}\]
Proof.: We start by establishing the height bounds in the good cover \(C_{4}\) first.
In the construction of the bases \(\omega_{i}\) of \(H^{1}_{DR}(\mathcal{X}_{C_{t}}/C_{t})|_{U_{t}}\) we have, see Section 2.2.3 for our notation here, excluded a finite number of points, i.e. the points in \(C_{t}\backslash U_{t}\). Let \(M:=\max\{h(x(P)):P\in(C_{t}\backslash U_{t})(\bar{\mathbb{Q}}),1\leq t\leq l\}\).
Now fix a point \(s\) for which \(\mathcal{X}_{s}\) is CM and let
\[\Sigma(s):=\{v\in\Sigma_{L_{s},\infty}:s\text{ is }v\text{-adically close to }s_{0}\}.\]
If \(\Sigma(s)=\emptyset\) then as in the proof of Theorem 1.3 of [10], see SS 12 there, we know that
\[h(x(s))\leq\rho(\mathcal{Y}):=\max_{1\leq t\leq l}\rho(\mathcal{Y}_{\xi_{t}}).\]
On the other hand, if \(\Sigma(s)\neq\emptyset\) combining Proposition 4.2 with Lemma 6.1 we get non-trivial and global relations among the values of our G-functions at \(x(s)\), in the terminology of Ch. \(VII\), SS 5 of [1]. Thus, the "Hasse principle" of Andre-Bombieri, CH. \(VII\), Theorem 5.2 in [1], gives that
\[h(x(s))\leq c_{0,1}\deg(R_{s,\infty})^{c_{2}}. \tag{101}\]
We note that the constant \(c_{0,1}\) will only depend on the differential operator \(\Lambda\) associated via the Gauss-Manin connection with our choice of bases and the family of G-functions \(\mathcal{Y}\), while the constant \(c_{2}\) will only depend on \(n\).
We thus conclude that \(h(s)\leq c_{1}\deg(R_{s,\infty})^{c_{2}}\) in any case, where \(c_{1}\) depends on \(\Lambda\), \(\mathcal{Y}\), and the degree \(l\) of the cover \(C_{4}\to\bar{S}^{\prime}\), which can be bounded in terms of the genus of the projectivization \(\bar{S}^{\prime}\) of our original curve \(S^{\prime}\). Since \([L_{s}:\mathbb{Q}]\leq_{n}[K(s):\mathbb{Q}]\) the result follows.
**Theorem 6.4**.: _Let \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) be a \(G_{ZP}\)-admissible semiabelian scheme. Then there exist effectively computable constants \(c_{1}\) and \(c_{2}\) such that for all \(s\in S(\bar{\mathbb{Q}})\) that have unlikely isogenies and such that one of the pairs of isogenous coordinates of \(\mathcal{X}_{s}\) consists of a CM and a singular coordinate for the curve \(S^{\prime}\) we have that_
\[h(s)\leq c_{1}[K(s):\mathbb{Q}]^{c_{2}}. \tag{102}\]
Proof.: The proof is identical to that of Theorem 6.3, replacing the usage of Proposition 4.2 by Proposition 5.13 and Lemma 6.1 by Lemma 6.2 respectively.
## 7 Applications to Unlikely Intersections
Here we discuss applications of the height bounds of the previous section in the realm of unlikely intersections in \(Y(1)^{n}\).
### Effective Andre-Oort
We introduce a bit of notation following that of [111]. For an imaginary quadratic point \(\tau\in\mathbb{H}\), where \(\mathbb{H}\) is the upper half plane, we know that \(j(\tau)\) will be a singular modulus. We will write \(D(\tau)\) for the discriminant of the ring of endomorphisms \(\operatorname{End}(E_{\tau})\) of this CM elliptic curve.
**Corollary 7.1** (Large Galois Orbits for Andre-Oort).: _Let \(Z\subset Y(1)^{n}\) be an irreducible Hodge generic curve defined over \(\bar{\mathbb{Q}}\) and let \(K\) be a field of definition of \(Z\). Assume that \(\bar{Z}\) intersects the boundary \(X(1)^{n}\backslash Y(1)^{n}\) at a point \(z_{0}\) that has at least one CM coordinate._
_Then there exist effectively computable positive constants \(c_{3}\), \(c_{4}\) such that for every point \(s\in Z(\bar{\mathbb{Q}})\) all of whose coordinates are of the form \(s_{k}=j(\tau_{k})\) with \(\tau_{k}\) imaginary quadratic we have_
\[[K(s):K]\geq c_{3}\max\{|D(\tau_{k})|\}^{c_{4}}. \tag{103}\]
Proof.: This proof is pretty much verbatim that of Proposition 5.12 of [10]. Throughout let us fix a point \(s\) as in the statement.
Let us fix a compactification \(\bar{Z}\) of \(Z\) in \(X(1)^{n}\simeq(\mathbb{P}^{1})^{n}\). Then we can find a finite etale cover of \(\bar{Z}\), \(g:\bar{S}\to\bar{Z}\), such that after possibly base changing by a finite extension \(K^{\prime}/K\), we have that the semiabelian scheme \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\), where
1. \(S^{\prime}\) is an open subset of \(\bar{S}\) such that \(g(S^{\prime})\cap(X(1)^{n}\backslash Y(1)^{n})=\{z_{0}\}\) with preimage \(s_{0}\in S^{\prime}(K)\),
2. \(f:\mathcal{X}=\mathcal{E}_{1}\times\ldots\times\mathcal{E}_{n}\to S^{\prime} \backslash\{s_{0}\}\) is the pullback of the universal family, and
3. \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) is the connected Neron model of \(f\) over \(S^{\prime}\),
is such that it satisfies Assumption 2.2, Assumption 2.11, and Assumption 3.1.
We can then apply Theorem 6.3 for any \(\tilde{s}\in C_{4}(\bar{\mathbb{Q}})\) that is a preimage of \(s\) in the good cover \(C_{4}\) of \(S^{\prime}\). Then we know that
\[h(x(\tilde{s}))\leq c_{1}[K(s):\mathbb{Q}]^{c_{2}}, \tag{104}\]
with the constants that appear here being independent of the point \(s\).
Letting \(\rho_{i}\) be the compositions \(S\xrightarrow{g}Z\xrightarrow{\pi_{i}}Y(1)\simeq\mathbb{A}^{1}\), and applying [116] Proposition 2.1 we get that for all \(1\leq k\leq n\) we have
\[|h(\rho_{k}(\tilde{s}))-12h_{F}(\mathcal{E}_{k,s})|\leq c_{3}\log\max\{2,h(\rho_{k}(\tilde{s}))\}. \tag{105}\]
Note here that the constant \(c_{3}\) is just a constant independent of our setting.
On the other hand, we have from standard facts about Weil heights that
\[|h(x(\tilde{s}))-c_{5}h(\rho_{k}(\tilde{s}))|\leq c_{6}h(x(\tilde{s})), \tag{106}\]
where \(c_{5}\) and \(c_{6}\) will depend on our curve.
Furthermore, note that from [13] we know that for all \(1\leq k\leq n\) we have
\[|D(\tau_{k})|\leq c_{7}\max\{[K(s):\mathbb{Q}],h_{F}(\mathcal{E}_{k,s})\}^{c_{ 8}} \tag{107}\]
where \(c_{7}\) and \(c_{8}\) are positive constants that are also independent of our setting.
Combining (105) together with (106) and (107) we conclude that there exist constants \(c_{9}\), \(c_{10}\) independent of our chosen point \(s\) such that for all \(1\leq k\leq n\) we have
\[|D(\tau_{k})|\leq c_{9}\max\{[K(s):\mathbb{Q}],h(x(\tilde{s}))\}^{c_{10}}. \tag{108}\]
Pairing this last equation with (104) we have concluded the proof.
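Spelling out this last step (writing \(c_{11}\) and \(c_{12}\) for suitable positive constants, introduced only here, independent of \(s\)): substituting (104) into (108) gives, for every \(k\),
\[|D(\tau_{k})|\leq c_{9}\max\{[K(s):\mathbb{Q}],c_{1}[K(s):\mathbb{Q}]^{c_{2}}\}^{c_{10}}\leq c_{11}[K(s):\mathbb{Q}]^{c_{12}},\]
and since \([K(s):\mathbb{Q}]=[K(s):K]\cdot[K:\mathbb{Q}]\) with \([K:\mathbb{Q}]\) fixed, this rearranges to the bound (103) after adjusting the constants.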
**Remark 7.2**.: _The constants \(c_{1}\) and \(c_{2}\) of Theorem 6.3 depend only on \(n\), \(\rho(\mathcal{Y})\), \(\sigma(\mathcal{Y})\), \(|Sin\Lambda|\), i.e. the number of singularities of \(\Lambda\), and \(\sigma(\Lambda)\)._
_By the Theorem on page \(123\) of [1] one can replace the dependence on \(\sigma(\Lambda)\) by a dependence on \(\sigma(\mathcal{Y})\) and the quantity \(s\) defined on page \(120\) in [1] that depends on the degrees of the denominators and numerators of the entries of the matrix \(\Gamma\) associated to the bases \(\omega_{i}\) via the Gauss-Manin connection._
### Some cases of the Zilber-Pink Conjecture
The strategy to reduce the Zilber-Pink conjecture for curves in \(Y(1)^{n}\) to height bounds for isogenous points analogous to those that appear in Theorem 6.4 already appears in [10], based on work of Habegger and Pila in [13].
Using the same arguments as in Proposition 5.12 of [10] one can establish the following:
**Corollary 7.3** (Large Galois Orbits for Zilber-Pink).: _Let \(Z\subset Y(1)^{n}\) be an irreducible Hodge generic curve defined over \(\bar{\mathbb{Q}}\) and let \(K\) be a field of definition of \(Z\)._
_Then there exist positive constants \(c_{3}\), \(c_{4}\) such that for every point \(s\in Z(\bar{\mathbb{Q}})\) for which \(\exists\ \{i_{1},i_{2}\}\), \(\{i_{3},i_{4}\}\subset\{1,\ldots,n\}\) with \(i_{1}\neq i_{2}\), \(i_{3}\neq i_{4}\) and \(\{i_{1},i_{2}\}\neq\{i_{3},i_{4}\}\) that are such that_
1. \(\exists\ M\)_,_ \(N\) _with_ \(\Phi_{M}(s_{i_{1}},s_{i_{2}})=\Phi_{N}(s_{i_{3}},s_{i_{4}})=0\)_,_
2. \(s_{i_{1}}\)_,_ \(s_{i_{3}}\) _are not singular moduli, and_
3. _one of the two sets_ \(\{i_{1},i_{2}\}\)_,_ \(\{i_{3},i_{4}\}\) _contains one CM and one singular coordinate for_ \(Z\)_,_
_we have_
\[[K(s):K]>c_{3}\max\{M,N\}^{c_{4}}. \tag{109}\]
Proof.: We simply note here the differences needed to adjust the proof of Proposition 5.12 of [10] to our setting.
We adopt the notation of the proof of Corollary 7.1 finding a semiabelian scheme \(f^{\prime}:\mathcal{X}^{\prime}\to S^{\prime}\) that is \(G_{ZP}\)-admissible and such that \(S^{\prime}\) is a finite etale cover of \(Z\).
We can then apply Theorem 6.4 to find \(c_{1}\), \(c_{2}\) with
\[h(x(\tilde{s}))\leq c_{1}[K(s):\mathbb{Q}]^{c_{2}}\]
for all preimages \(\tilde{s}\) in \(S\) via \(g\) of any such point \(s\in Z(\bar{\mathbb{Q}})\).
Letting \(\rho_{i}\) be as in the previous proof, we recover the respective inequalities in the proof of Prop. 5.12 in [10], at which point we finish by using the isogeny estimates of Gaudron-Remond [1].
Given the above we can conclude from [12] the following Zilber-Pink-type statement.
**Theorem 7.4**.: _Let \(C\subset Y(1)^{n}\) be an irreducible Hodge generic curve defined over \(\bar{\mathbb{Q}}\). Let_
\[\begin{array}{c}J_{1}:=\{1\leq i\leq n:i\text{ is a singular coordinate for }C\}\text{ and }\\ J_{2}:=\{1\leq i\leq n:i\text{ is a CM coordinate for }C\}\text{, }\end{array}\]
_and set \(J_{C}:=(J_{1}\times J_{2})\cup(J_{2}\times J_{1})\subset\mathbb{N}^{2}\). Then the set_
\[\{s\in C(\mathbb{C}):\exists N,M\text{ such that }\Phi_{N}(s_{i_{1}},s_{i_{2}})= \Phi_{M}(s_{i_{3}},s_{i_{4}})=0,(i_{1},i_{2})\in J_{C}\}\]
_is finite._
Apart from implying Theorem 1.4, the above is enough to give us unconditional cases of the Zilber-Pink conjecture for curves in \(Y(1)^{3}\).
**Theorem 7.5**.: _Let \(C\subset Y(1)^{3}\) be an irreducible curve not contained in a special subvariety of \(Y(1)^{3}\). Assume that the curve intersects the boundary \(X(1)^{3}\backslash Y(1)^{3}\) in a point which up to permutation of coordinates is of the form \((\infty,\zeta_{1},\zeta_{2})\) or \((\infty,\infty,\zeta_{1})\) with \(\zeta_{1}\), \(\zeta_{2}\) singular moduli._
_Then the Zilber-Pink conjecture holds for \(C\)._ |
2306.07574 | Stability for hyperplane covers | An almost $k$-cover of the hypercube $Q^n = \{0,1\}^n$ is a collection of
hyperplanes that avoids the origin and covers every other vertex at least $k$
times. When $k$ is large with respect to the dimension $n$, Clifton and Huang
asymptotically determined the minimum possible size of an almost $k$-cover.
Central to their proof was an extension of the LYM inequality, concerning a
weighted count of hyperplanes.
In this paper we completely characterise the hyperplanes of maximum weight,
showing that there are $\binom{2n-1}{n}$ such planes. We further provide
stability, bounding the weight of all hyperplanes that are not of maximum
weight. These results allow us to effectively shrink the search space when
using integer linear programming to construct small covers, and as a result we
are able to determine the exact minimum size of an almost $k$-cover of $Q^6$
for most values of $k$. We further use the stability result to improve the
Clifton--Huang lower bound for infinitely many choices of $k$ in every
sufficiently large dimension $n$. | Shagnik Das, Valjakas Djaljapayan, Yen-chi Roger Lin, Wei-Hsuan Yu | 2023-06-13T06:51:54Z | http://arxiv.org/abs/2306.07574v1 | # Stability for hyperplane covers
###### Abstract
An almost \(k\)-cover of the hypercube \(Q^{n}=\{0,1\}^{n}\) is a collection of hyperplanes that avoids the origin and covers every other vertex at least \(k\) times. When \(k\) is large with respect to the dimension \(n\), Clifton and Huang asymptotically determined the minimum possible size of an almost \(k\)-cover. Central to their proof was an extension of the LYM inequality, concerning a weighted count of hyperplanes.
In this paper we completely characterise the hyperplanes of maximum weight, showing that there are \(\binom{2n-1}{n}\) such planes. We further provide stability, bounding the weight of all hyperplanes that are not of maximum weight. These results allow us to effectively shrink the search space when using integer linear programming to construct small covers, and as a result we are able to determine the exact minimum size of an almost \(k\)-cover of \(Q^{6}\) for most values of \(k\). We further use the stability result to improve the Clifton-Huang lower bound for infinitely many choices of \(k\) in every sufficiently large dimension \(n\).
Footnote †: Department of Mathematics, National Central University, Taiwan. E-mail: [email protected]
## 1 Introduction
While it is clear that the hypercube \(Q^{n}=\{0,1\}^{n}\) can be covered by two hyperplanes, a classic and surprising result shows that if we have to avoid the origin, then \(n\) planes are needed to cover the remaining points. Lying at the intersection of finite geometry and extremal combinatorics, this problem and its variations have been studied by several researchers over the decades. In the finite geometry setting, such a hyperplane cover is closely related to the notion of blocking sets, and research in this direction was pioneered in the late 1970s by Jamison [6]. Meanwhile in the extremal context, this problem was first studied by Alon and Furedi [1], who resolved a problem of Komjath [7] from Ramsey Theory. The Alon-Furedi Theorem was a precursor to the hugely influential Combinatorial Nullstellensatz, and indeed, this problem has proven a valuable test case in the development of the algebraic method. For a more thorough survey of the history of this problem, we refer the reader to [2].
### Covering with multiplicities
In recent years, renewed interest in this hyperplane covering problem was sparked by the work of Clifton and Huang [5]. In this paper, they studied the multiplicity version of the problem. Given
\(k\in\mathbb{N}\), we define an _almost \(k\)-cover_ of \(Q^{n}\) to be a set of hyperplanes that avoids the origin while covering all other points of \(Q^{n}\) at least \(k\) times. We are then interested in the minimum size of an almost \(k\)-cover, a quantity we denote by \(f(n,k)\). Note that the Alon-Furedi Theorem shows \(f(n,1)=n\).
Clifton and Huang showed that the extremal function \(f(n,k)\) exhibits different behaviour, depending on the relative sizes of the two parameters. When \(n\) is large compared to \(k\), they used algebraic methods to obtain lower bounds on \(f(n,k)\), showing that \(f(n,2)=n+1\), \(f(n,3)=n+3\) for all \(n\geq 2\), and \(f(n,k)\geq n+k+1\) whenever \(k\geq 4\) and \(n\geq 3\). In a subsequent paper, Sauermann and Wigderson [10] solved the algebraic version of this problem, which in particular improves the lower bound to \(f(n,k)\geq n+2k-3\) for any \(k\geq 2\) and \(n\geq 2k-3\). However, these lower bounds fall short of Clifton and Huang's upper bound of \(f(n,k)\leq n+\binom{k}{2}\), which they conjecture to be the truth for all \(k\in\mathbb{N}\) and \(n\) sufficiently large with respect to \(k\).
In this paper, however, we will be interested in the other regime, where \(k\) is large with respect to \(n\). In this range, Clifton and Huang [5] determined asymptotically the size of the smallest almost \(k\)-covers, showing \(f(n,k)=(H_{n}+o(1))\,k\), where \(H_{n}=1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n}\) is the \(n\)th Harmonic number. To prove this result, they solved the linear programming relaxation of the integer linear program that represents the hyperplane covering problem. In particular, to prove the lower bound, they first defined a weighting of the points of \(Q^{n}\).
**Definition 1.1**.: Let \(x\) be a nonzero point in \(Q^{n}\) with exactly \(t\) coordinates equal to \(1\). The _weight_ of \(x\) is defined as
\[w(x)=\frac{1}{t\binom{n}{t}}.\]
More generally, given any set \(S\subseteq\mathbb{R}^{n}\), we define the _weight_ of \(S\) to be the sum of the weights of all points in \(S\cap Q^{n}\); that is,
\[w(S)=\sum_{p\in S\cap Q^{n}}w(p). \tag{1}\]
Given this definition, the crucial step was the following theorem.
**Theorem 1.2** ([5], Theorem 1.3).: _If \(h\) is a hyperplane avoiding the origin, then \(w(h)\leq 1\)._
To see how this implies the lower bound, observe that the total weight of the hypercube is \(H_{n}\). Since every point is covered at least \(k\) times, it follows that the total weight of the hyperplanes in an almost \(k\)-cover must be at least \(H_{n}k\). Since, by Theorem 1.2, each hyperplane can have weight at most \(1\), we must have
\[f(n,k)\geq\lceil H_{n}k\rceil. \tag{2}\]
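For small \(n\), both the weighting and the resulting bound are easy to verify numerically. The following minimal sketch (an illustration of ours, not code from [5]) confirms that the weights of the nonzero points of \(Q^{n}\) sum to \(H_{n}\), here for \(n=6\).

```python
# Check Definition 1.1: the weights of all nonzero points of Q^n sum to H_n.
from fractions import Fraction
from itertools import product
from math import comb

def weight(x):
    t = sum(x)                                 # number of coordinates equal to 1
    return Fraction(1, t * comb(len(x), t))

n = 6
total = sum(weight(x) for x in product((0, 1), repeat=n) if any(x))
assert total == sum(Fraction(1, i) for i in range(1, n + 1)) == Fraction(49, 20)
```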
There are a couple of key remarks to be made concerning this theorem. First, Clifton and Huang observed that the bound is tight; any hyperplane \(h=\{x:\sum_{i=1}^{n}c_{i}x_{i}=1\}\) where all the coefficients \(c_{i}\) are all either \(0\) or \(1\) satisfies \(w(h)=1\). In particular, this gives exponentially many hyperplanes of maximum weight.
Next, Clifton and Huang observed that if the coefficients \(c_{i}\) are all positive, then the points covered by the hyperplane are the characteristic vectors of the sets in an antichain. The bound then follows immediately from the famous Lubell-Yamamoto-Meshalkin inequality [3, 8, 9, 11]. Thus, Theorem 1.2 can be viewed as a generalisation of the LYM inequality.
### Our results
In this paper, we prove stability for Theorem 1.2, characterising all hyperplanes of maximum weight, and improving the bound on the weight of all other hyperplanes.
**Theorem 1.3**.: _Let \(h=\{x:\sum_{i=1}^{n}c_{i}x_{i}=1\}\) be an affine hyperplane in \(\mathbb{R}^{n}\) that does not pass through the origin. If the coefficients satisfy_
1. \(c_{i}\leq 1\) _for all_ \(i\)_,_
2. \(\sum_{i=1}^{n}c_{i}\geq 1\)_, and_
3. \(c_{i}\in\mathbb{Z}\) _for all_ \(i\)_,_
_then \(w(h)=1\). Otherwise, \(w(h)\leq 1-\frac{1}{n}\)._
Note that the upper bound for the non-maximum-weight hyperplanes is best possible. Indeed, the hyperplane \(h_{\alpha}=\{x:\alpha x_{1}+\sum_{i=2}^{n}x_{i}=1\}\) has \(w(h_{\alpha})=1-\frac{1}{n}\) whenever \(\alpha\notin\{-(n-2),-(n-3),\ldots,-1,0,1\}\).
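Both parts of the theorem can be checked by brute force for small \(n\). The snippet below is our own illustration, with \(n=5\) and \(\alpha=7\) chosen arbitrarily; it computes \(w(h)\) directly from Definition 1.1.

```python
# Brute-force weight of a hyperplane {x : sum_i c_i x_i = 1} over Q^n.
from fractions import Fraction
from itertools import product
from math import comb

def hyperplane_weight(coeffs):
    n = len(coeffs)
    return sum(Fraction(1, sum(x) * comb(n, sum(x)))
               for x in product((0, 1), repeat=n)
               if any(x) and sum(c * xi for c, xi in zip(coeffs, x)) == 1)

n = 5
assert hyperplane_weight((-1,) + (1,) * (n - 1)) == 1                   # weight-1 plane
assert hyperplane_weight((7,) + (1,) * (n - 1)) == 1 - Fraction(1, n)   # h_alpha with alpha = 7
```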
Our characterisation allows us to enumerate the hyperplanes of maximum weight.
**Corollary 1.4**.: _The number of weight-\(1\) hyperplanes in \(\mathbb{R}^{n}\) is \(\binom{2n-1}{n}\)._
Proof.: We enumerate the weight-\(1\) hyperplanes by establishing a bijection from the set \(B_{n}\subseteq\{-1,+1\}^{2n-1}\) of all sequences \((b_{1},\ldots,b_{2n-1})\) with exactly \(n\) positive entries. For a sequence \((b_{1},\ldots,b_{2n-1})\) in \(B_{n}\), let \(i_{\ell}\) be the indices such that \(b_{i_{\ell}}=+1\) for \(\ell=1,2,\ldots,n\); we arrange these indices so that \(i_{1}<i_{2}<\cdots<i_{n}\), and we set \(i_{0}=0\) for later usage. Now we define a mapping \(\varphi\) from \(B_{n}\) to \(\mathbb{Z}^{n}\) as follows: for \(b=(b_{1},\ldots,b_{2n-1})\in B_{n}\), let \(\varphi(b)=(c_{1},\ldots,c_{n})\), where the coefficients \(c_{\ell}\) are given by
\[c_{\ell}=\sum_{j=i_{\ell-1}+1}^{i_{\ell}}b_{j},\qquad\ell=1,2,\ldots,n.\]
For instance, \(\varphi(+1,+1,-1,-1,+1,-1,+1,+1,-1)=(1,1,-1,0,1)\).
It is clear that the image of \(\varphi\) is exactly those vectors \((c_{1},\ldots,c_{n})\) that satisfy the conditions (i), (ii), and (iii) in Theorem 1.3, and the inverse mapping \(\varphi^{-1}\) can be easily defined: simply expand every coefficient \(c_{\ell}\) into a block of \(1-c_{\ell}\) terms equal to \(-1\) followed by a single \(+1\), and then append \(-1\) entries at the end so that the resulting sequence has length \(2n-1\). This shows that \(\varphi\) is a bijection, and the number of weight-\(1\) hyperplanes in \(\mathbb{R}^{n}\) is the cardinality of \(B_{n}\).
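For small \(n\) the bijection can also be checked mechanically. The sketch below is our own implementation of \(\varphi\) and \(\varphi^{-1}\); for \(n=4\) it confirms that \(\varphi\) is injective on \(B_{n}\) and that the image has size \(\binom{2n-1}{n}=35\).

```python
# phi maps a +/-1 sequence of length 2n-1 with exactly n entries equal to +1
# to the coefficient vector (c_1, ..., c_n); phi_inverse reverses this.
from itertools import combinations
from math import comb

def phi(b):
    ones = [j for j, v in enumerate(b) if v == 1]
    prev, coeffs = -1, []
    for i in ones:
        coeffs.append(sum(b[prev + 1:i + 1]))           # sum of the block ending at this +1
        prev = i
    return tuple(coeffs)

def phi_inverse(c):
    b = []
    for cl in c:
        b.extend([-1] * (1 - cl) + [1])                 # (1 - c_l) minus-ones, then a +1
    return tuple(b + [-1] * (2 * len(c) - 1 - len(b)))  # pad with -1 to length 2n-1

assert phi((1, 1, -1, -1, 1, -1, 1, 1, -1)) == (1, 1, -1, 0, 1)   # the example above

n = 4
B = [tuple(1 if j in pos else -1 for j in range(2 * n - 1))
     for pos in combinations(range(2 * n - 1), n)]
images = {phi(b) for b in B}
assert len(images) == len(B) == comb(2 * n - 1, n) == 35
assert all(phi(phi_inverse(c)) == c for c in images)
```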
The characterisation of maximum weight hyperplanes is also useful for determining precise values of the extremal function \(f(n,k)\). As an example of an application of our result, we will prove the following, extending results of Clifton and Huang (who determined \(f(5,k)\) for all \(k\geq 15\)).
**Theorem 1.5**.: _For \(k\geq 65\), we have_
\[f(6,k)=\left\lceil\frac{49k}{20}\right\rceil\]
_whenever \(k\not\equiv 2,11\pmod{20}\)._
**Organisation.** In Section 2, we will review Clifton and Huang's proof of Theorem 1.2, introducing terminology that we will need in our own proofs. In Section 3, we use this framework to prove Theorem 1.3, and prove some further stability. In Section 4, we combine our stability result with the linear programming approach to prove Theorem 1.5, and also show that the lower bound (2) is not tight infinitely often. Finally, in Section 5, we provide some concluding remarks and outline possible directions for further research.
## 2 The good, bad, and the redundant
In this section we will review the proof of Theorem 1.2, primarily with the aim of establishing some terminology and notation that we shall use in our own proofs in the next section.
Let \(h\) be the affine hyperplane in \(\mathbb{R}^{n}\) defined by the equation \(\sum_{i=1}^{n}c_{i}x_{i}=1\). We wish to show that \(w(h)\leq 1\); that is, the sum of the weights of the points of \(Q^{n}\) contained in \(h\) is at most \(1\). For this, it is useful to identify a point \(x\in Q^{n}\) with the subset \(S\subseteq[n]\) whose characteristic vector it is; that is, \(\{i:x_{i}=1\}\). The idea behind the proof is to associate each set \(S\) covered by the hyperplane \(h\) with a disjoint set of permutations in \(S_{n}\), whose size is proportional to the weight of \(x\). Since the total number of permutations is bounded by \(n!\), this will in turn bound the weight of the hyperplane \(h\).
**Definition 2.1**.: Given some set \(S\subseteq[n]\) such that \(\sum_{i\in S}c_{i}=1\), we say a permutation \(\pi\in S_{n}\)_yields_\(S\) provided that
1. \(\pi([|S|])=S\); that is, \(S\) is an initial segment of \(\pi\), and
2. \(\sum_{i=1}^{\ell}c_{\pi(i)}\begin{cases}<1&\text{ if }\ell<|S|,\\ =1&\text{ if }\ell=|S|,\end{cases}\) where \(1\leq\ell\leq|S|\).
From the definition, it is clear that each \(\pi\in S_{n}\) can yield at most one subset \(S\subseteq[n]\). We call a permutation \(\pi\in S_{n}\)_bad_ if it does not yield any set \(S\subseteq[n]\), and define \(\mathcal{B}\) to be the collection of all bad permutations in \(S_{n}\).
Clifton and Huang [5] proved that, for every set \(S\subseteq[n]\) such that \(\sum_{i\in S}c_{i}=1\), and for every cyclic permutation \(\sigma\) of \(S\), there is at least one starting point in \(S\) such that if we unfold \(\sigma\) to a linear permutation \(\pi_{\sigma}\) of \(S\), and extend it arbitrarily to any \(\pi\in S_{n}\) with \(\pi|_{[|S|]}=\pi_{\sigma}\), then \(\pi\) yields \(S\). In the case where \(\sigma\) admits two or more such starting points, we call \(\sigma\)_switchable_, and from the available options, we choose \(\pi_{\sigma}\) such that the initial entry \(\pi_{\sigma}(1)\) is the largest.
Now, consider a permutation \(\pi\in S_{n}\). If \(\pi\) is not bad, there is some set \(S\) that \(\pi\) yields. Let \(\sigma\) be the cyclic permutation of \(S\) induced by \(\pi|_{[|S|]}\). If \(\pi|_{[|S|]}=\pi_{\sigma}\), we say that \(\pi\) is _good_, and otherwise we say that \(\pi\) is _redundant_. We let \(\mathcal{G}\subseteq S_{n}\) be the set of good permutations, and \(\mathcal{R}\subseteq S_{n}\) the set of redundant permutations. Note that this gives a partition \(S_{n}=\mathcal{G}\cup\mathcal{B}\cup\mathcal{R}\) of the set of the permutations of \([n]\) into the subsets of good, bad, and redundant permutations, whence
\[n!=|\mathcal{G}|+|\mathcal{B}|+|\mathcal{R}|. \tag{3}\]
Given a set \(S\) covered by \(h\), note that there are \((|S|-1)!\) cyclic permutations \(\sigma\) of \(S\), each of which gives \((n-|S|)!\) good permutations (which must start with \(\pi_{\sigma}\)). Hence, abusing notation to write \(S\in h\) to indicate that \(S\) is covered by \(h\),
\[|\mathcal{G}|=\sum_{S\in h}(|S|-1)!(n-|S|)!,\]
or
\[\frac{|\mathcal{G}|}{n!}=\sum_{S\in h}\frac{(|S|-1)!(n-|S|)!}{n!}=\sum_{S\in h }\frac{1}{|S|\binom{n}{|S|}}=\sum_{x\in h}w(x)=w(h).\]
Dividing (3) by \(n!\) and rearranging then yields
\[w(h)=\frac{|\mathcal{G}|}{n!}=1-\frac{|\mathcal{B}|}{n!}-\frac{|\mathcal{R}| }{n!}\leq 1. \tag{4}\]
This proves Theorem 1.2. Moreover, it outlines how one can establish stability -- to prove that non-maximum-weight hyperplanes have weight bounded away from \(1\), we need to show that they admit many bad or redundant permutations.
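For small \(n\), this bookkeeping can be reproduced mechanically. The sketch below is our own illustration: the coefficients are an arbitrary choice, and the tie-breaking rule for \(\pi_{\sigma}\) (take the valid starting point with the largest entry) follows the description above; it verifies \(w(h)=|\mathcal{G}|/n!\).

```python
# Classify all permutations of [n] as good, bad or redundant for a test
# hyperplane, and check that w(h) = |G| / n!.
from fractions import Fraction
from itertools import permutations, product
from math import comb, factorial

n = 4
c = [Fraction(1), Fraction(1), Fraction(-1), Fraction(1, 2)]   # arbitrary test coefficients

def yielded_prefix(perm):
    """Length of the prefix yielded by perm, or None if perm is bad."""
    total = Fraction(0)
    for ell, i in enumerate(perm, start=1):
        total += c[i]
        if total == 1:
            return ell
        if total > 1:          # no prefix can be yielded any more
            return None
    return None

good = bad = redundant = 0
for perm in permutations(range(n)):
    t = yielded_prefix(perm)
    if t is None:
        bad += 1
        continue
    prefix = perm[:t]
    valid_starts = []          # starting points of the induced cyclic order that also yield
    for r in range(t):
        rot = prefix[r:] + prefix[:r]
        if all(sum(c[j] for j in rot[:ell]) < 1 for ell in range(1, t)):
            valid_starts.append(rot[0])
    if prefix[0] == max(valid_starts):   # pi_sigma starts at the largest valid start
        good += 1
    else:
        redundant += 1

w = sum(Fraction(1, sum(x) * comb(n, sum(x)))
        for x in product((0, 1), repeat=n)
        if any(x) and sum(ci * xi for ci, xi in zip(c, x)) == 1)

assert good + bad + redundant == factorial(n)
assert w == Fraction(good, factorial(n))       # equation (4): w(h) = |G| / n!
```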
## 3 Weights of hyperplanes
We will now use the framework set up in the previous section to prove our main result, Theorem 1.3. Following that, we shall prove some further stability, providing a partial characterisation of all hyperplanes of large weight.
### The proof of Theorem 1.3
We first prove that if \(h=\{x:\sum_{i=1}^{n}c_{i}x_{i}=1\}\), where the coefficients \(c_{i}\) satisfy the three conditions in Theorem 1.3, then \(w(h)=1\). Per the discussion in Section 2, this amounts to showing that every permutation in \(S_{n}\) is good, i.e., \(\mathcal{G}=S_{n}\).
For any permutation \(\pi\in S_{n}\), let \(t\) be the smallest index such that
\[\sum_{j=1}^{t}c_{\pi(j)}=1.\]
Note that the conditions (i), (ii) and (iii) guarantee that such an index exists, and moreover that \(\sum_{j=1}^{\ell}c_{\pi(j)}<1\) for all \(1\leq\ell\leq t-1\). Hence, \(\pi\) yields the subset \(S=\{\pi(1),\ldots,\pi(t)\}\).
Now consider the cyclic permutation \(\sigma=\langle\pi(1),\ldots,\pi(t)\rangle\) of \(S\). We argue that \(\pi(1)\) is the unique starting point for \(\sigma\), that is, \(\pi|_{[t]}=\pi_{\sigma}\). There is nothing to prove if \(t=1\), so we may assume \(t>1\) and \(\pi(s)\) is another starting point of \(\sigma\) with \(1<s\leq t\). Then \(\sum_{i=s}^{t}c_{\pi(i)}<1\). Since all the coefficients \(c_{j}\)'s are integers, this implies that \(\sum_{i=s}^{t}c_{\pi(i)}\leq 0\). But this leads to
\[\sum_{i=1}^{s-1}c_{\pi(i)}=\sum_{i=1}^{t}c_{\pi(i)}-\sum_{i=s}^{t}c_{\pi(i)} \geq 1-0=1,\]
which contradicts our choice of \(t\). Hence we have \(\pi|_{[t]}=\pi_{\sigma}\), and so \(\pi\) is good. As \(\pi\) was an arbitrary permutation, it follows that \(\mathcal{G}=S_{n}\), as required.
To complete the proof of Theorem 1.3, we need to show that if \(w(h)<1\), then \(w(h)\leq 1-\frac{1}{n}\). Note that by what we have just shown, we know at least one of the following three cases must occur:
1. \(c_{i}>1\) for some \(i\in[n]\),
2. \(\sum_{i=1}^{n}c_{i}<1\), or
3. \(c_{i}\notin\mathbb{Z}\) for some \(i\in[n]\).
We treat each of these cases in turn.
Case (1)\(c_{i}>1\) for some \(i\in[n]\).
Observe that any permutation \(\pi\) with \(\pi(1)=i\) is bad, since if a permutation yields a set \(S\subseteq[n]\), its first \(|S|\) partial sums are at most \(1\). Thus, \(|\mathcal{B}|\geq(n-1)!\), and so by (4) we have \(w(h)\leq 1-\frac{1}{n}\).
Case (2)\(\sum_{i=1}^{n}c_{i}<1\).
We will show that each cyclic permutation \(\sigma\) of \([n]\) admits a starting point for which the resulting permutation \(\pi\in S_{n}\) is bad. Then \(|\mathcal{B}|\geq(n-1)!\), which once again implies \(w(h)\leq 1-\frac{1}{n}\).
Now let \(\pi^{\prime}\) be the permutation we obtain from \(\sigma\) by starting at \(1\). For \(1\leq t\leq n\), define \(a_{t}=\sum_{i=1}^{t}c_{\pi^{\prime}(i)}\). Let \(t_{0}\in[n]\) be any index such that \(a_{t_{0}}\geq a_{t}\) for all \(t\in[n]\). In particular,
\[a_{t_{0}}\geq a_{n}=\sum_{i=1}^{n}c_{\pi^{\prime}(i)}=\sum_{i=1}^{n}c_{i}.\]
If \(t_{0}=n\), then \(a_{t}\leq a_{t_{0}}=a_{n}<1\) for each \(t\in[n]\), which means that \(\pi^{\prime}\) is already bad. Otherwise, take \(\pi\) to be the permutation obtained by starting \(\sigma\) at \(t_{0}+1\). We shall show that \(\pi\) is bad. If not, then it yields some set \(S\subseteq[n]\), and so there is some \(j\) with \(\sum_{i=1}^{j}c_{\pi(i)}=1\). If \(j\leq n-t_{0}\) then
\[\sum_{i=1}^{j}c_{\pi(i)}=\sum_{i=t_{0}+1}^{t_{0}+j}c_{\pi^{\prime}(i)}=a_{t_{ 0}+j}-a_{t_{0}}\leq 0,\]
by the choice of \(t_{0}.\) Otherwise, if \(n-t_{0}+1\leq j\leq n\), then
\[\sum_{i=1}^{j}c_{\pi(i)}=\sum_{i=t_{0}+1}^{n}c_{\pi^{\prime}(i)}+\sum_{i=1}^{j -n+t_{0}}c_{\pi^{\prime}(i)}=a_{n}-a_{t_{0}}+a_{j-n+t_{0}}\leq a_{n}<1\]
where the last inequality holds by \(a_{t_{0}}\geq a_{j-n+t_{0}}\) and the strict inequality follows by assumption. Hence, there is no such \(j,\) and \(\pi\) must be bad, completing this case.
Case (3)\(c_{i}\notin\mathbb{Z}\) for some \(i\in[n]\).
If \(c_{i}\) is the unique non-integral coefficient, then any permutation with \(\pi(1)=i\) is bad, as all its partial sums will be non-integral. Thus, as in Case (1), we will have \(w(h)\leq 1-\frac{1}{n}\). Hence we may assume there are \(r\geq 2\) non-integral coefficients, which we may assume (without loss of generality) to be \(c_{1},c_{2},\ldots,c_{r}\). In this case, we shall show that for every cyclic permutation of \([n]\) there is a starting point for which the corresponding permutation \(\pi\in S_{n}\) is either bad or redundant. Then we have \(|\mathcal{B}|+|\mathcal{R}|\geq(n-1)!\), and so it again follows from (4) that \(w(h)\leq 1-\frac{1}{n}\).
Let \(\sigma\) be a cyclic permutation of \([n]\), and partition it into the cyclic intervals \(I_{1},I_{2},\ldots,I_{r}\) starting \(1,2,\ldots,r\), respectively. If one of these intervals has a partial sum at least \(1\), then note that it is in fact strictly greater than \(1\); as \(c_{i}\) is the only non-integral coefficient in \(I_{i}\), all partial sums are non-integral. Thus, if we start \(\pi\) at \(i\), we obtain a bad permutation. Otherwise, for every interval \(I_{i}\), all the partial sums are strictly less than \(1\). In this case, consider the permutation \(\pi\) starting at \(1\). If \(\pi\) does not yield a set \(S\subseteq[n]\), then it is bad. So, we may assume \(\pi\) yields a set \(S\subseteq[n]\). Note that \(S\) must contain some \(c_{j}\) for \(2\leq j\leq r\), since there must be another non-integral coefficient to make the sum integral. Hence, \(S\) intersects more than one of the intervals \(I_{i}\) in \(\sigma\).
Now, for \(1\leq t\leq|S|\), define \(a_{t}=\sum_{i=t}^{|S|}c_{\pi(i)}\), and let \(t_{0}\) be the smallest index that minimizes \(a_{t}\). Note that we have \(2\leq t_{0}\leq|S|\), since \(\sum_{i=1}^{|S|}c_{\pi(i)}=\sum_{i\in S}c_{i}=1\), while if we choose \(t\) to be the index of the last non-integral coefficient in \(S\) then \(a_{t}\) is a partial sum of that interval, and hence \(a_{t}<1\). Let \(\pi^{\prime}\) be the permutation obtained by swapping the intervals \(\pi([1,t_{0}-1])\) and \(\pi([t_{0},|S|])\). Note that we still have \(\pi^{\prime}([|S|])=S\), and that \(\pi\) and \(\pi^{\prime}\) induce the same cyclic permutation of \(S\).
**Claim.**\(\pi^{\prime}\) also yields \(S\).
_Proof of Claim._ Since \(\pi^{\prime}([|S|])=S\), the only way that \(\pi^{\prime}\) could fail to yield \(S\) is if there is some index \(j\in[|S|-1]\) such that \(\sum_{i=1}^{j}c_{\pi^{\prime}(i)}\geq 1\). If \(j\leq|S|-t_{0}+1\) then \(\pi^{\prime}([j])=\pi([t_{0},t_{0}+j-1]).\) If this partial sum is positive, let alone greater than or equal to \(1\), then
\[a_{t_{0}+j}=\sum_{i=t_{0}+j}^{|S|}c_{\pi(i)}=\sum_{i=t_{0}}^{|S|}c_{\pi(i)}- \sum_{i=t_{0}}^{t_{0}+j-1}c_{\pi(i)}=a_{t_{0}}-\sum_{i=1}^{j}c_{\pi^{\prime}(i )}<a_{t_{0}},\]
which contradicts the choice of \(t_{0}\) (\(a_{t_{0}}\) is the minimum sum). Thus, \(j\geq|S|-t_{0}+2\), in which case \(\pi^{\prime}([j])=\pi([t_{0},|S|])\cup\pi([1,j-|S|+t_{0}-1])\) and
\[\sum_{i=1}^{j}c_{\pi^{\prime}(i)}=\sum_{i=1}^{j-|S|+t_{0}-1}c_{\pi(i)}+\sum_{i= t_{0}}^{|S|}c_{\pi(i)}\geq 1.\]
Recall that \(\pi\) yields \(S\), and so we have \(\sum_{i=1}^{|S|}c_{\pi(i)}=1\). We can therefore deduce that \(\sum_{i=t_{0}+j-|S|}^{t_{0}-1}c_{\pi(i)}\leq 0\), and so
\[a_{t_{0}+j-|S|}=\sum_{i=t_{0}+j-|S|}^{|S|}c_{\pi(i)}=\sum_{i=t_{0}+j-|S|}^{t_{ 0}-1}c_{\pi(i)}+\sum_{i=t_{0}}^{|S|}c_{\pi(i)}=\sum_{i=t_{0}+j-|S|}^{t_{0}-1}c _{\pi(i)}+a_{t_{0}}\leq a_{t_{0}},\]
which again contradicts our choice of \(t_{0}\) (recall that \(j\leq|S|-1\) which implies \(t_{0}+j-|S|<t_{0}\)). Hence, there is no such index \(j\), and \(\pi^{\prime}\) yields \(S\) as well.
Thus, we see that each of these cyclic permutations \(\sigma\) of \(S\) are switchable, admitting at least two starting points that result in permutations yielding \(S\). We defined \(\pi_{\sigma}\) to be the permutation with the largest starting entry, and hence the permutation \(\pi\) obtained from starting \(\sigma\) at \(1\) is redundant. Hence, we have shown that for each permutation of \([n]\), there is a starting point giving a bad or redundant permutation, which resolves Case (3).
This completes the proof of Theorem 1.3.
### Further stability
Theorem 1.3 characterises the hyperplanes of maximum possible weight. It is then natural to ask what we can say about hyperplanes of large weight, and indeed, our methods allow us to at least partially describe their coefficients.
**Theorem 3.1**.: _Let \(h=\{x:\sum_{i=1}^{n}c_{i}x_{i}=1\}\) be a hyperplane in \(\mathbb{R}^{n}\) of weight \(w(h)>1-\frac{r}{n}\). Then the following hold:_
1. \(|\{i:c_{i}>1\}|\leq r-1\)_, and_
2. \(|\{i:c_{i}\notin\mathbb{Z}\}|\leq 2r^{2}-1\)_._
Before we embark on the proof, we first note that the bound in part (a) is best possible. Indeed, if \(c_{1}=\ldots=c_{r-1}=2,c_{r}=\ldots=c_{n}=1\), then it is easy to see that a permutation \(\pi\) is bad if \(\pi(1)\in[r-1]\), and good otherwise. This implies \(w(h)=1-\frac{r-1}{n}>1-\frac{r}{n}\), and we have \(r-1\) coefficients larger than \(1\).
On the other hand, we do not expect the bound in part (b) to be tight, and rather expect that there can be at most \(O(r)\) non-integral coefficients. However, note that a bound of \(r\) is not true in general. For instance, if \(n=2r-1\), then we can take \(c_{i}=\frac{1}{2}\) for all \(i\); this has weight \(\frac{1}{2}\) but \(n=2r-1\) fractional coefficients. Perhaps, though, if one imposes some lower bound on \(n\), then it could be true that one must have fewer than \(r\) fractional coefficients.
Finally, we note that Theorem 3.1 only has two parts in its characterisation, as compared to the three parts in Theorem 1.3. One might also expect that, if the hyperplane has weight close to one, then the sum of the coefficients should not be too small. However, this is not true -- one could have \(c_{1}=c_{2}=\ldots=c_{n-1}=1\), with \(c_{n}\) tending to \(-\infty\). Then the sum of the coefficients is very small, but all permutations except those starting with \(n\) are good, meaning \(w(h)=1-\frac{1}{n}\). Hence, it appears to be somewhat complicated to formulate an appropriate condition on the sums of the coordinates.
Proof.: For (a), note that any permutation starting with a coefficient larger than \(1\) is bad. Thus, if there are at least \(r\) coefficients larger than \(1\), we have \(|\mathcal{B}|\geq r(n-1)!\), and then (4) implies \(w(h)\leq 1-\frac{r}{n}\).
For (b), we make the following claim.
**Claim**.: If \(h\) has \(s\) fractional coefficients, then for any circular permutation of \([n]\) and \(b\in\mathbb{N}\), there are either \(b\) starting points that give bad permutations, or \(\frac{s}{b}\) that give bad or switchable permutations.
Let us first see how the claim gives the result. Suppose for contradiction that \(h\) has \(s=2r^{2}\) fractional coefficients, and set \(b=r\). Then we know that the \((n-1)!\) circular permutations of \([n]\) are of two types -- the first give rise to at least \(r\) bad permutations, and the second give rise to \(2r\) permutations that are either bad or switchable. Suppose there are \(\alpha(n-1)!\) circular permutations of the first kind, and thus \((1-\alpha)(n-1)!\) circular permutations of the second kind. If we let \(\mathcal{S}\) denote the set of switchable permutations, we have \(|\mathcal{B}|+|\mathcal{S}|\geq r\alpha(n-1)!+2r(1-\alpha)(n-1)!\), with \(|\mathcal{B}|\geq r\alpha(n-1)!\).
Now notice that at least half of all switchable permutations are redundant, and so it follows from the above inequalities that \(|\mathcal{B}|+|\mathcal{R}|\geq r(n-1)!\), and so the weight of the hyperplane is at most \(1-\frac{r}{n}\).
To finish, we prove the claim.
Proof of Claim.: Fix a circular permutation \(\sigma\) of \([n]\), and let \(I_{1},I_{2},\ldots,I_{s}\) be the (cyclic) intervals that start with fractional coefficients, labelled in cyclic order. For \(1\leq i\leq s\), let \(\pi_{i}\) be the linear permutation of \([n]\) obtained from \(\sigma\) by starting at the \(i\)th fractional coefficient (so \(I_{i}\) is an initial segment of \(\pi_{i}\)).
Now since each \(I_{i}\) starts with a fractional coefficient, and has no other, all of its initial sums are fractional. Thus, if \(I_{i}\) has any initial sum that is at least \(1\), it is strictly larger than \(1\), and so \(\pi_{i}\) will be a bad permutation. Thus, if there are at least \(b\) such intervals, then we obtain \(b\) bad permutations, and we are done.
Suppose instead that \(I_{i}\) has an initial sum in the interval \((0,1)\). If the permutation \(\pi_{i}\) is good, then it yields a set \(S\). Thus, if we consider the tail interval of \(S\) with the smallest sum, this will be a proper subinterval of \(S\) (since all of \(S\) has sum \(1\), but we can drop the initial sum in \(I_{i}\) with positive sum to obtain a tail interval with smaller sum). Then, as in the proof of Theorem 1.3, we can rotate \(S\) to obtain another permutation that yields \(S\) with the same cyclic permutation of \(S\). Hence, \(\pi_{i}\) is switchable in this case.
In the remaining case, then, all initial sums of \(I_{i}\), including \(I_{i}\) itself, must be strictly negative. Suppose we have \(b\) such intervals appearing consecutively in \(\sigma\), and assume without loss of generality that these are \(I_{1},I_{2},\ldots,I_{b}\). If all of these intervals are good, let \(S_{i}\) be the set yielded by \(\pi_{i}\).
First observe that \(S_{i}\) and \(S_{i+1}\) cannot end in the same interval. Indeed, if this were the case, then \(S_{i}\triangle S_{i+1}\) would be \(I_{i}\cup J\), where \(J\) is the difference of two initial segments of the interval in which \(S_{i}\) and \(S_{i+1}\) end. However, the sum of \(I_{i}\) is fractional, while the sum of \(J\) must be an integer, and thus we cannot have both \(S_{i}\) and \(S_{i+1}\) having sum equal to \(1\).
Next, observe that \(S_{i}\) must end after \(S_{i+1}\). If not, then \(S_{i+1}\) contains \(S_{i}\setminus I_{i}\) as an initial segment. However, the sum of \(S_{i}\) is \(1\), while the sum of \(I_{i}\) is negative, so this means that \(S_{i+1}\) has an initial sum that is strictly larger than \(1\), which is a contradiction.
Hence, it follows that the sets \(S_{i}\), \(1\leq i\leq b\), all end in distinct intervals. If \(S_{i}\) ends in the interval \(I_{j}\), then notice that \(I_{j}\cap S_{i}\) is a tail interval of \(S_{i}\), and its sum must be positive. If this sum is in \((0,1)\), then as before, we will be able to rotate \(S_{i}\), which means that \(\pi_{i}\) is switchable. Otherwise, \(I_{j}\) contains an initial sum that is larger than \(1\), but we only have at most \(b-1\) such intervals. Thus, one of \(\pi_{1},\ldots,\pi_{b}\) must be switchable.
Hence, we have shown that every set of \(b\) consecutive intervals has at least one bad or switchable permutation. By averaging over the \(s\) intervals, it follows that there are at least \(\frac{s}{b}\) bad or switchable permutations, as required.
## 4 Almost \(k\)-covers
In this section, we show how our stability result can be used in the determination of \(f(n,k)\), the minimum number of hyperplanes needed for an almost \(k\)-cover of the \(n\)-dimensional hypercube \(Q^{n}\).
### Constructing small covers
Recall that \(f(n,k)\) is the solution to the integer linear program where we have a variable for every hyperplane, representing the multiplicity with which it appears in the cover, and a constraint for every nonzero point in \(\{0,1\}^{n}\), ensuring the point is covered at least \(k\) times.
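For concreteness, the minimal sketch below sets this program up with the open-source PuLP modeller and its default CBC solver; this tooling and the parameters \(n=3\), \(k=4\) are our own illustrative choices, not the setup used for the computations reported in this paper. Anticipating the discussion below, the candidate hyperplanes are already restricted to the weight-\(1\) family of Theorem 1.3.

```python
# ILP for a minimum almost k-cover of Q^n, with one integer variable per
# candidate hyperplane and one covering constraint per nonzero point.
from itertools import product
import pulp

n, k = 3, 4
points = [p for p in product((0, 1), repeat=n) if any(p)]

# Weight-1 hyperplanes (Theorem 1.3): integer c_i <= 1 with sum >= 1,
# which forces c_i >= 2 - n.  For n = 3 this gives 10 candidates.
planes = [c for c in product(range(2 - n, 2), repeat=n) if sum(c) >= 1]

def covers(c, p):
    return sum(ci * pi for ci, pi in zip(c, p)) == 1

prob = pulp.LpProblem("almost_k_cover", pulp.LpMinimize)
x = {c: pulp.LpVariable(f"x_{i}", lowBound=0, cat="Integer")
     for i, c in enumerate(planes)}
prob += pulp.lpSum(x.values())                       # minimise the number of planes used
for p in points:
    prob += pulp.lpSum(x[c] for c in planes if covers(c, p)) >= k

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(int(pulp.value(prob.objective)))               # compare with the bound ceil(H_n * k)
```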
Unfortunately, integer linear programming is notoriously difficult to solve, and this is especially true in this setting, as the number of variables involved grows incredibly quickly. Indeed, there are infinitely many hyperplanes in \(\mathbb{R}^{n}\) that could, in principle, be included in our cover. However, we can finitise our problem by observing that we need only consider hyperplanes that intersect the hypercube _maximally_ -- that is, if \(H_{1}\) and \(H_{2}\) are hyperplanes, both avoiding the origin, and \(H_{1}\cap\{0,1\}^{n}\subseteq H_{2}\cap\{0,1\}^{n}\), then we can replace any occurrence of \(H_{1}\) in a cover with a copy of \(H_{2}\).
Since any \(n\) points in \(\mathbb{R}^{n}\) determine a unique hyperplane, it follows that any maximally intersecting hyperplane must contain at least \(n\) points of the hypercube. In particular, this implies that there are at most \(\binom{2^{n}-1}{n}\) hyperplanes we need to consider, making the integer linear program finite.
Unfortunately, there is a considerable difference between _finite_ and _computationally feasible_, and this upper bound of \(\binom{2^{n}-1}{n}\) grows far too fast to leave us with any hope of computing exact answers even when \(n\) is as small as \(6\). While it should be pointed out that this is indeed just an upper bound, and some hyperplanes are significantly overcounted, brute-force enumeration of the maximally intersecting hyperplanes (which we could only carry out for \(n\leq 5\)) shows that the true number also exhibits rapid growth.
In order to be able to proceed computationally, then, it is necessary to restrict the search space. A natural place to start is by only considering hyperplanes of maximum weight. Indeed, the Clifton-Huang lower bound (2) shows that in any almost \(k\)-cover, the total weight of the hyperplanes must be at least \(H_{n}k\), where \(H_{n}\) is the \(n\)th Harmonic number. To minimise the size of the cover, then, we would want each hyperplane to have as much weight as possible.
| \(n\) | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Upper bound | 1 | 3 | 35 | 1365 | 169911 | 67945521 |
| Actual count | 1 | 3 | 11 | 95 | 2629 | ?? |
| Weight-1 planes | 1 | 3 | 10 | 35 | 126 | 462 |

Table 1: The number of maximally intersecting and weight-1 hyperplanes in \(\mathbb{R}^{n}\).

As we saw in Corollary 1.4, there are far fewer hyperplanes of weight \(1\); the number of these is the much more modest \(\binom{2n-1}{n}\). Moreover, Theorem 1.3 characterises these maximum-weight planes, so we are able to set up the corresponding integer linear program and efficiently search for small covers when \(n=6\). In many cases we are able to find covers matching the Clifton-Huang lower bound, as shown in Theorem 1.5, which we first restate.
**Theorem 1.5**.: _For \(k\geq 65\), we have_
\[f(6,k)=\left\lceil\frac{49k}{20}\right\rceil\]
_whenever \(k\not\equiv 2,11\pmod{20}\)._
Proof.: The Clifton-Huang lower bound (2) gives \(f(6,k)\geq\lceil H_{6}k\rceil=\left\lceil\frac{49k}{20}\right\rceil\), and so we need only prove the upper bound. To this end, we solved the integer linear program corresponding to \(f(6,k)\) for various values of \(k\), restricting ourselves to using the \(462\) hyperplanes of weight \(1\).
We noted that \(f(6,60)=147\), which comes from the general construction given by Clifton and Huang [5]. We next considered the case \(k=20\). The lower bound implies \(f(6,20)\geq 49\), with equality only possible if all hyperplanes involved have weight \(1\). Our integer linear program solver found such an almost-\(k\) cover, which we have provided in Appendix A. Note that this cover includes hyperplanes with coefficients other than \(0\) or \(1\); that is, it uses some of the new weight-\(1\) hyperplanes described in Theorem 1.3.
Having thus established that \(f(6,20)=49\), we turn to other values of \(k\). Now, since \(f(n,k+\ell)\leq f(n,k)+f(n,\ell)\), these together imply that \(f(6,k+20m)\leq f(6,k)+49m\) for all integers \(m\geq 1\). We then solved the integer linear program for all \(23\leq k\leq 42\), finding that \(f(n,k)=\left\lceil\frac{49k}{20}\right\rceil\) in all cases except \(k\in\{31,33,42\}\). While the first and last of these values are excluded from our result, we were able to resolve the case \(k\equiv 13\pmod{20}\) by finding \(f(6,k)=\left\lceil\frac{49k}{20}\right\rceil\) for \(k=53\).
Using this finite set of computational results, together with the recursive upper bound, it follows that \(f(6,k)\leq\left\lceil\frac{49k}{20}\right\rceil\) for all \(k\geq 65\) with \(k\not\equiv 2,11\pmod{20}\).
It is natural to ask what happens when \(k\equiv 2,11\pmod{20}\). Our computational results were restricted to hyperplanes of weight \(1\). However, by Theorem 1.3, we know that all other hyperplanes have weight at most \(\frac{5}{6}\). Thus, if we have a cover of \(m\) planes containing at least one plane that is not of weight \(1\), the total weight of the planes in the cover is at most \(m-\frac{1}{6}\).
For \(k\equiv 2,11\pmod{20}\), we have \(\frac{49k}{20}\geq\left\lceil\frac{49k}{20}\right\rceil-\frac{1}{10}\). Hence, in light of the previous remark, if there is a cover of size \(\left\lceil\frac{49k}{20}\right\rceil\), it must consist entirely of weight-\(1\) hyperplanes. Our solutions to the integer linear program show that for \(k\in\{11,22,31,42,51,62\}\), no such cover exists; in all these cases, we have \(f(6,k)=\left\lceil\frac{49k}{20}\right\rceil+1\).
However, it is not inconceivable that further along these sequences, the Clifton-Huang lower bound is realised; that is, \(f(6,2+20m_{0})=5+49m_{0}\) or \(f(6,11+20m_{0})=27+49m_{0}\) for some suitably large \(m_{0}\). If this does happen, then using \(f(6,20)=49\), it again follows that we would have equality for all \(m\geq m_{0}\).
Indeed, we also have \(\frac{49k}{20}>\left\lceil\frac{49k}{20}\right\rceil-\frac{1}{6}\) when \(k\equiv 13\pmod{20}\), which means that any cover of size \(\left\lceil\frac{49k}{20}\right\rceil\) can only contain weight-\(1\) hyperplanes. Our solutions to the integer linear program show that there are no such covers for \(k=13\) or \(k=33\), but it does exist for \(k=53\) (and, therefore, for all further terms in this sequence).
### Improving the lower bound
In some cases, though, one can show that the lower bound \(f(n,k)\geq\left\lceil H_{n}k\right\rceil\) is not tight for arbitrarily large \(k\). Using numerical data about the hyperplanes in \(\mathbb{R}^{5}\), Clifton and Huang were able to show that \(f(5,67+60m)=\left\lceil\frac{137k}{60}\right\rceil+1\) for all \(m\geq 1\). Using our stability result, we can extend this to all sufficiently large \(n\), showing that the lower bound is not tight infinitely often.
**Theorem 4.1**.: _Let \(k\geq 2\) and \(n\in\mathbb{N}\) be such that \(0<\lceil kH_{n}\rceil-kH_{n}<\frac{1}{n\binom{n-1}{\lfloor n/2\rfloor}}\). Then \(f(n,k)\geq\lceil kH_{n}\rceil+1\)._
_In particular, if \(n\) is sufficiently large, there are infinitely many choices of \(k\) for which \(f(n,k)\geq\lceil kH_{n}\rceil+1\)._
Proof.: Consider a smallest almost \(k\)-cover \(\mathcal{H}\) of \(Q^{n}\), and for each hyperplane \(h\), let \(x_{h}\) denote the number of copies of \(h\) in \(\mathcal{H}\). We then have
\[|\mathcal{H}|=\sum_{h}x_{h}=\sum_{h}\left(\sum_{S:S\in h}w(S)+1-\sum_{S:S\in h }w(S)\right)x_{h}.\]
We obtain
\[|\mathcal{H}|=\sum_{S}\left(\sum_{h:h\ni S}x_{h}\right)w(S)+\sum_{h}\left(1- \sum_{S:S\in h}w(S)\right)x_{h}.\]
Now, since \(\mathcal{H}\) is an almost \(k\)-cover, we have \(\sum_{h:h\ni S}x_{h}\geq k\) for all \(S\). Since
\[\sum_{S}w(S)=\sum_{s=1}^{n}\sum_{S:|S|=s}w(S)=\sum_{s=1}^{n}\frac{\binom{n}{s} }{s\binom{n}{s}}=\sum_{s=1}^{n}\frac{1}{s}=H_{n},\]
we can further rearrange this equation to obtain
\[|\mathcal{H}|-kH_{n}=\sum_{S}\left(\sum_{h:h\ni S}x_{h}-k\right)w(S)+\sum_{h} \left(1-\sum_{S:S\in h}w(S)\right)x_{h}. \tag{5}\]
Consider the terms on the right-hand side of (5). Since \(\mathcal{H}\) is an almost \(k\)-cover, every set is covered at least \(k\) times, so \(\sum_{h:h\ni S}x_{h}-k\) is always a non-negative integer. Furthermore, by Theorem 1.3, each hyperplane has weight either exactly \(1\) or at most \(1-\frac{1}{n}\), which means \(1-\sum_{S:S\in h}w(S)\) is non-negative, and is at least \(\frac{1}{n}\) when it is positive. Thus, each individual summand on the right-hand side is non-negative, and the positive summands are at least \(\min\left(\min_{S}w(S),\frac{1}{n}\right)=\frac{1}{n\binom{n-1}{\lfloor n/2\rfloor}}\).
Now we look at the left-hand side. Since it must be non-negative, we have \(|\mathcal{H}|\geq\lceil H_{n}k\rceil\). Suppose we had equality. By our assumption on \(k\) and \(n\), \(0<\lceil kH_{n}\rceil-kH_{n}<\frac{1}{n\binom{n-1}{\lfloor n/2\rfloor}}\), but we previously established that the right-hand side, if positive, must be at least \(\frac{1}{n\binom{n-1}{\lfloor n/2\rfloor}}\). Hence, we must have \(f(n,k)=|\mathcal{H}|\geq\lceil kH_{n}\rceil+1\).
For the second assertion, let \(H_{n}=\frac{c_{n}}{d_{n}}\), where \(c_{n}\) and \(d_{n}\) are coprime. We can then find some \(1\leq k_{0}\leq d_{n}\) such that \(k_{0}c_{n}\equiv-1\pmod{d_{n}}\). It then follows that for any \(k\equiv k_{0}\pmod{d_{n}}\), we have \(\lceil kH_{n}\rceil-kH_{n}=\frac{1}{d_{n}}\).
As shown by Boyd [4], \(d_{n}=e^{(1+o(1))n}\). On the other hand, \(n\binom{n-1}{\lfloor n/2\rfloor}\leq n2^{n-1}\). Hence, for \(n\) sufficiently large, we have \(\lceil H_{n}k\rceil-H_{n}k<\frac{1}{n\binom{n-1}{\lfloor n/2\rfloor}}\). Thus, for this infinite sequence of values of \(k\), \(f(n,k)\geq\lceil H_{n}k\rceil+1\).
**Remark**.: Note that in the proof of the second statement, it suffices to have \(k\equiv k_{0}\pmod{d_{n}}\) where \(k_{0}\) is such that \(c_{n}k_{0}\equiv-r\pmod{d_{n}}\) for some \(1\leq r<\frac{d_{n}}{n\binom{n-1}{\lfloor n/2\rfloor}}\). This shows that the linear programming lower bound is not tight for approximately \((e/2)^{(1+o(1))n}\) of the \(e^{(1+o(1))n}\) residue classes.
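The residue classes produced by this argument are easy to list exactly for small \(n\). The sketch below is our own illustration; for \(n=5\) it returns the single residue \(7\pmod{60}\), in line with the family \(k=67+60m\) mentioned at the start of this subsection, while for \(6\leq n\leq 8\) no residue satisfies the hypothesis, consistent with the theorem only guaranteeing such \(k\) once \(n\) is sufficiently large.

```python
# Residues k (mod d_n) with 0 < ceil(k*H_n) - k*H_n < 1/(n*C(n-1, floor(n/2))).
from fractions import Fraction
from math import comb

def improving_residues(n):
    H = sum(Fraction(1, i) for i in range(1, n + 1))
    d = H.denominator                                # d_n
    threshold = Fraction(1, n * comb(n - 1, n // 2))
    residues = []
    for k in range(1, d + 1):
        frac = (k * H) % 1                           # fractional part of k * H_n
        gap = 1 - frac if frac else Fraction(0)      # equals ceil(k*H_n) - k*H_n
        if 0 < gap < threshold:
            residues.append(k)
    return d, residues

for n in range(2, 9):
    print(n, improving_residues(n))   # only n = 5 yields a residue here, namely 7
```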
## 5 Concluding remarks
In this paper, we have studied the Clifton-Huang weighting of hyperplanes, characterising the hyperplanes of maximum weight, and proving some stability, showing that all other hyperplanes have weight bounded away from \(1\). We have then shown how this result can be used to determine further values of the extremal function \(f(n,k)\), which is the minimum number of hyperplanes needed to cover all nonzero points of \(Q^{n}\) at least \(k\) times, while avoiding the origin completely. Several open problems remain, and we shall highlight some possible directions for further research below.
**Maximal hyperplanes.** We have shown that the weight of any hyperplane is either \(1\) or at most \(1-\frac{1}{n}\). This latter bound is best possible, as evidenced, for example, by the plane \(\Pi_{c}\) defined by \(x_{1}+x_{2}+\ldots+x_{n-1}+cx_{n}=1\), where \(c\notin\{-(n-1),-(n-2),\ldots,1\}\). However, as explained in Section 4, when we are setting up the integer linear program for the covering problem, we need only consider hyperplanes that intersect \(\{0,1\}^{n}\) maximally. Since \(\Pi_{1}\) covers all the points that \(\Pi_{c}\) does, and more, the above hyperplane is not maximal.
We could hope for stronger applications, then, if we could show that the weight of a _maximal_ hyperplane is either \(1\) or much smaller. Unfortunately, there is not much we can gain here -- we can find maximal hyperplanes of weight \(1-\frac{1}{n-1}\). For example, it is known that the following hyperplanes, together with permutations of coefficients, are maximal hyperplanes of weight \(1-\frac{1}{n-1}\):
* \(x_{1}+\cdots+x_{n-2}-kx_{n-1}-(n-2-k)x_{n}=1\), \(k=1,\ldots,\lfloor\frac{n-2}{2}\rfloor\).
* \(x_{1}+\cdots+x_{n-2}+2x_{n-1}-kx_{n}=1\), \(k=1,\ldots,n-3\).
* \(x_{1}+\cdots+x_{n-3}+2x_{n-2}-kx_{n-1}-(n-2-k)x_{n}=1\), \(k=1,\ldots,\lfloor\frac{n-2}{2}\rfloor\).
Still, it would be interesting to prove this larger separation for maximal hyperplanes, which could be significant for smaller values of \(n\).
**Question 5.1**.: Let \(h\) be a hyperplane that intersects \(Q^{n}\) maximally. Is it true that either \(w(h)=1\) or \(w(h)\leq 1-\frac{1}{n-1}\)?
In our proof, with somewhat more involved arguments, we are able to establish the improved bound in some of our cases, including when we have a coefficient larger than \(1\), or when we have a positive fractional coefficient. However, it remains to resolve the problem in the other cases.
Another direction that could help with reducing the search space for a wider range of parameters would be to characterise those hyperplanes whose weight is close to \(1\). Initial steps in that direction were taken in Theorem 3.1, and it would be great to complete the picture.
**Question 5.2**.: Can we characterise all hyperplanes \(h\) with \(w(h)=1-O\left(\frac{1}{n}\right)\)?
**Improved lower bounds.** In this setting, our lower bounds come from the fractional relaxation of the integer linear program, whose value was proven to be \(H_{n}k\), where \(H_{n}\) is the \(n\)th Harmonic number, by Clifton and Huang [5]. This immediately gives \(f(n,k)\geq\lceil H_{n}k\rceil\), and Clifton and Huang further proved that this lower bound is asymptotically tight.
However, it is not always sharp. As we have shown in Theorem 4.1, for every sufficiently large \(n\) there are infinitely many choices of \(k\) for which we have \(f(n,k)\geq\lceil H_{n}k\rceil+1\). However, our methods do not allow us to prove any larger separation between the fractional and integer solutions, since an almost \(k\)-cover of size \(\lceil H_{n}k\rceil+2\) could contain a hyperplane of very small weight. That said, in all the computational results we have obtained, we always have \(f(n,k)\leq\lceil H_{n}k\rceil+1\). It would be
interesting to see how large \(f(n,k)-\lceil H_{n}k\rceil\) can be, and to develop methods for proving stronger lower bounds.
**Question 5.3**.: Given \(n\) and sufficiently large \(k\), can we have \(f(n,k)\geq\lceil H_{n}k\rceil+2\)?
It is worth reiterating a question of Clifton and Huang, who asked if the difference between the integer and fractional problems was bounded by an absolute constant.
**Question 5.4** (Clifton-Huang [5]).: Is there an absolute constant \(C>0\), such that for every \(n\) there are only finitely many \(k\) with \(f(n,k)\geq\lceil H_{n}k\rceil+C\)?
**Large dimensions.** Finally, we note that while this linear programming approach is very fruitful in the cases we have considered, where \(n\) is fixed and \(k\) is large, Clifton and Huang showed the problem behaves very differently when \(k\) is fixed and \(n\) is large. The polynomial method has been fruitful in establishing lower bounds in this range, but Sauermann and Wigderson [10] showed that the solution to the algebraic problem is smaller than Clifton and Huang's conjectured value of \(f(n,k)\).
**Conjecture 5.5** (Clifton-Huang [5]).: _For \(k\geq 2\) and \(n\) sufficiently large, we have \(f(n,k)=n+\binom{k}{2}\)._
## Acknowledgement
Shagnik Das is supported by Taiwan NSTC grant 111-2115-M-002-009-MY2. Wei-Hsuan Yu is supported by MOST under Grant No. 109-2628-M-008-002-MY4. Valjakas Djaljapayan and Yenchi Roger Lin are partially supported by MOST under Grant No. 111-2115-M-003-009.
|
2308.11862 | Empirical Analysis of Software Vulnerabilities Causing Timing Side
Channels | Timing attacks are considered one of the most damaging side-channel attacks.
These attacks exploit timing fluctuations caused by certain operations to
disclose confidential information to an attacker. For instance, in asymmetric
encryption, operations such as multiplication and division can cause
time-varying execution times that can be ill-treated to obtain an encryption
key. Whilst several efforts have been devoted to exploring the various aspects
of timing attacks, particularly in cryptography, little attention has been paid
to empirically studying the timing attack-related vulnerabilities in
non-cryptographic software. By inspecting these software vulnerabilities, this
study aims to gain an evidence-based understanding of weaknesses in
non-cryptographic software that may help timing attacks succeed. We used
qualitative and quantitative research approaches to systematically study the
timing attack-related vulnerabilities reported in the National Vulnerability
Database (NVD) from March 2003 to December 2022. Our analysis was focused on
the modifications made to the code for patching the identified vulnerabilities.
We found that a majority of the timing attack-related vulnerabilities were
introduced due to not following known secure coding practices. The findings of
this study are expected to help the software security community gain
evidence-based information about the nature and causes of the vulnerabilities
related to timing attacks. | M. Mehdi Kholoosi, M. Ali Babar, Cemal Yilmaz | 2023-08-23T01:38:03Z | http://arxiv.org/abs/2308.11862v1 | # Empirical Analysis of Software Vulnerabilities Causing Timing Side Channels
###### Abstract
Timing attacks are considered one of the most damaging side-channel attacks. These attacks exploit timing fluctuations caused by certain operations to disclose confidential information to an attacker. For instance, in asymmetric encryption, operations such as multiplication and division can cause time-varying execution times that can be ill-treated to obtain an encryption key. Whilst several efforts have been devoted to exploring the various aspects of timing attacks, particularly in cryptography, little attention has been paid to empirically studying the timing attack-related vulnerabilities in non-cryptographic software. By inspecting these software vulnerabilities, this study aims to gain an evidence-based understanding of weaknesses in non-cryptographic software that may help timing attacks succeed. We used qualitative and quantitative research approaches to systematically study the timing attack-related vulnerabilities reported in the National Vulnerability Database (NVD) from March 2003 to December 2022. Our analysis was focused on the modifications made to the code for patching the identified vulnerabilities. We found that a majority of the timing attack-related vulnerabilities were introduced due to not following known secure coding practices. The findings of this study are expected to help the software security community gain evidence-based information about the nature and causes of the vulnerabilities related to timing attacks.
secure coding, software vulnerability, timing attack, constant time
## I Introduction
Cyber threats are largely driven by Security Vulnerabilities (SVs) [1]. They are characterised as weaknesses in a computer system that attackers utilise to perform malicious actions [2]. Recently, a sharp increase has been seen in the number of recorded SVs. This is evident from the Common Vulnerabilities and Exposures (CVE) [3], a database that records publicly released information about security flaws. For instance, the number of registered SVs nearly quadrupled in 2022 compared to 2016 (from 6,454 to 25,226 vulnerabilities).
The protection of secret data (e.g., passwords, cryptographic keys) in software is a challenging task [4]. In this regard, cryptographic algorithms and protocols play a critical role in the security of a computer system [5]. However, any form of SVs in cryptographic implementations (e.g., cryptographic libraries) poses a significant threat to their effectiveness, as it is common for attackers to exploit weaknesses to evade security mechanisms [6]. One of the most viable ways to compromise the security measures of a system is through timing side-channel attacks. By exploiting variations in the time required to execute cryptographic operations (e.g., encryption, decryption), these attacks are able to uncover secrets in a non-invasive way [7]. Kocher et al. [8] pioneered the notion of timing attacks in 1996. They demonstrated that precise timing measurement of cryptographic operations could reveal the entire private key of various cryptography systems to an attacker. Since then, timing attacks have further evolved and broken numerous major cryptographic implementations [9, 10, 11, 12]. Also, there have been many countermeasures proposed against these attacks [11]. Timing attacks are particularly concerning because, unlike other side-channel attacks (e.g., electromagnetic attacks [13] and power consumption attacks [14]), restricting physical access to the target device is not sufficient to prevent them. Furthermore, these attacks can be launched remotely [9], which gives adversaries a wide range of attack options. Finally, timing attacks are known to be nearly untraceable and may only leave suspicious access logs on the targeted computer [15]. That is why the extent to which these attacks are launched in the real world is unknown.
Given the devastating consequences of successful timing attacks, several research efforts have been allocated to extensively analyse different aspects of these attacks. However, most of these efforts have focused on studying offense and defense techniques at various levels of cryptography, including algorithms, protocols, implementations, and hardware. We found that no systematic research had been conducted regarding the timing vulnerabilities in non-cryptographic software (i.e., applications that do not implement cryptographic algorithms and protocols). It is equally important to systematically explore and understand the application (i.e., non cryptographic) level vulnerabilities that may contribute to timing attacks since a timing vulnerability is more likely to leak exploitable information the more often it is executed [6].
To fill this research gap, we performed an empirical study aimed at identifying application-level timing SVs and examining the source code of these vulnerabilities and their fixes to determine the responsible coding mistakes. We selected and analysed vulnerabilities reported over approximately two decades on the National Vulnerability Database (NVD) [16]. We performed a thorough inspection of 67 software vulnerabilities that were detected in 56 unique non-cryptographic projects. Our empirical analysis has enabled us to identify and categorise application-level timing vulnerabilities and the
coding mistakes that can introduce such vulnerabilities. We assert that this study is the first of its kind, whose findings are expected to provide practitioners and researchers with valuable insights into the nature of timing vulnerabilities and how to reduce the chances of their introduction in software applications by avoiding certain coding mistakes.
The key contributions of this study are:
1. We have carried out a first of its kind empirical study to demonstrate the existence of timing vulnerabilities at the non-cryptographic level.
2. We have systematically gathered and analysed the relevant data from different sources to identify the secure coding practices that developers can use to avoid the introduction of timing side channels in applications (i.e., non-cryptographic).
3. We have released a fine-grained dataset of vulnerabilities related to timing attacks for further research [17].
## II Background and Related Work
Our research relates to prior works that have investigated the constant-time coding paradigm. Constant-time programming has become the dominant software-driven countermeasure against these attacks and has been adopted in many major cryptographic implementations. In this programming technique, the behaviour of the code is not dependent on the secret data [11]. To fulfill this, it is necessary to ensure that secrets do not influence the control flow (e.g., branch conditions) of the program and its addresses of memory accesses (e.g., array indexes) [15]. Moreover, secrets should not affect the inputs of variable-time machine operations, such as integer division [18]. In the absence of these protections, an attacker may be able to infer secret data through the analysis of the timing side channel.
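As a simple illustration of this discipline (our own example, not one drawn from the studied vulnerabilities), consider checking a user-supplied token against a stored secret. The first comparison below returns as soon as a mismatching character is found, so its running time depends on how much of the guess is correct; the second delegates to a library routine designed to take the same time regardless of where the inputs differ.

```python
import hmac

def insecure_equals(secret: str, guess: str) -> bool:
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:                  # early exit: running time leaks the matching prefix length
            return False
    return True

def constant_time_equals(secret: str, guess: str) -> bool:
    # hmac.compare_digest compares in time independent of the contents.
    return hmac.compare_digest(secret.encode(), guess.encode())
```

Branching on secret data, as in the first function, is precisely the kind of control-flow dependence that constant-time programming is meant to rule out.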
The process of writing constant-time code is inherently challenging [7]. Prior research indicated that even prominent cryptographic implementations that were deemed to be constant-time had timing leakages [19]. Several tools have been created for automatic constant-time verification [7]. Disselkoen et al. [18] presented Pitchfork, a constant-time verification tool that can analyse cryptographic code at both the primitive and protocol levels.
As opposed to previous studies, which focused on timing attacks and constant-time coding at cryptographic levels (i.e., primitives, protocols, libraries), our study examines timing attack vulnerabilities at the non-cryptography level (i.e., application level). Software at this level is written in high-level programming languages (e.g., Java), and several factors can influence its timing behaviour [11]. For example, these languages often provide abstractions and optimisation features to make the code more efficient. However, these features may result in timing discrepancies in the underlying code, thus compromising constant-time execution [20]. Another factor that impacts timing behaviour is human error, which is the primary focus of our study. As far as we know, our paper is the first to identify common coding mistakes made by developers in high-level programming languages that undermine constant-time programming.
## III Study Design
To understand the characteristics of relevant SVs to timing attacks, we address two Research Questions (RQs). Figure 1 illustrates the overall workflow used to conduct this study. We first introduce our research questions in Section III-A, then describe the process of collecting and preparing the data used to answer these questions in Section III-B, and finally present the analysis processes and our findings in Section IV.
### _Research Questions_
Our investigation is guided by the following RQs:
* **RQ1: How prevalent are timing attack-related security vulnerabilities in non-cryptographic software?** First, we aim to demonstrate the prevalence of timing attack-related SVs in non-cryptographic software. Additionally, the RQ1 findings would benefit the software security community by shedding light on how these SVs are associated with their affected products (i.e., products that have been considered vulnerable to timing attacks).
* **RQ2: (a) What coding mistakes make non-cryptographic software more vulnerable to timing attacks? (b) How do developers patch these software vulnerabilities in real-world projects?** RQ2 findings would give insights to practitioners about the common coding mistakes that increase the success rate of timing attacks. Furthermore, they will unveil what kinds of patches are available to address these vulnerabilities in the source code.
### _Data Collection_
In this study, we used NVD [16], a standards-based vulnerability management data repository. This database was selected specifically because it is considered trustworthy as it is maintained by cybersecurity experts from the National Institute of Standards and Technology (NIST) - a governmental agency of the United States Department of Commerce. It provides a wide range of information regarding known SVs, such as security-related flaws, misconfigurations, affected product names, and impact metrics. Following existing text analysis studies [21, 22], we developed an iterative data-gathering approach to gain a comprehensive overview of SVs related to timing attacks.

Fig. 1: The overall study design.
We started data collection by exact-matching the key phrase "timing attack" on NVD and then collected the _CVE-ID_ (i.e., vulnerability id) and _Description_ (i.e., vulnerability summary) fields of 89 SVs. Since other SVs may still exist in NVD that do not contain the phrase "timing attack" in their _Description_ fields, we decided to rerun the search using other keywords related to timing attacks in order to ensure we collected as many related SVs as possible. For this reason, we used the Natural Language Toolkit (NLTK) [23] in the next step to find the most common phrases within the _Description_ fields of the 89 previously collected SVs. Initially, we performed preprocessing tasks, which included removing stop words (e.g., 'a', 'the', 'is', 'and') and punctuation to clean the data. Furthermore, we captured the bigrams and trigrams of all the descriptions and calculated the total appearance frequency of each phrase. The authors of this paper then collectively selected the key phrases most relevant to timing attacks from the list of most popular phrases. Table I displays the selected key phrases with their frequency.
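A minimal sketch of this keyword-expansion step is given below; it is our reconstruction of the described procedure, and the variable names and input list are illustrative rather than the study's actual artefacts:

```
import string
from collections import Counter

import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

nltk.download("punkt")
nltk.download("stopwords")


def frequent_phrases(descriptions, top_n=20):
    """Return the most common bigrams and trigrams across CVE descriptions."""
    stop_words = set(stopwords.words("english"))
    counts = Counter()
    for text in descriptions:
        # Clean the text: drop punctuation tokens and stop words.
        tokens = [t.lower() for t in word_tokenize(text)
                  if t not in string.punctuation and t.lower() not in stop_words]
        counts.update(nltk.bigrams(tokens))
        counts.update(nltk.trigrams(tokens))
    return counts.most_common(top_n)


# Usage on the initially collected descriptions (placeholder list):
# print(frequent_phrases(initial_descriptions))
```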
After finalising the list of keywords, we searched NVD again using each of them and collected the _CVE-ID_ and _Description_ fields of retrieved SVs. After removing the duplicated SVs (i.e., same CVE-ID), another round of manual analysis was conducted in order to verify that all of the SVs collected were related to timing attacks. To this end, two authors of this paper individually read each vulnerability's _Description_ field and assessed whether the selected vulnerability was related to timing attacks. Each vulnerability was labelled as either related or unrelated. Following that, the annotators compared their labels, and if disagreements were found, they referred to the _References_ section of the vulnerability on NVD. External links are provided in this section to additional resources (e.g., research papers, vendor advisory) that give more information regarding the particular vulnerability of interest. This process was repeated until there was a consensus between the annotators.
## IV Analysis and Results
### _Distribution of Security Vulnerabilities_
As our study was focused on SVs in non-cryptographic software, we needed to know the distribution of SVs across their _affected product_ (i.e., list of products, platforms and/or hardware that are considered to be vulnerable) fields. This is because SVs can exist in a variety of products, including hardware and different kinds of software in a computer system. So, to figure out which SVs happened in non-cryptographic software, we decided to classify all of the collected SVs by the types (e.g., cryptographic library) of their affected products in RQ1.
_RQ1: How prevalent are timing attack-related security vulnerabilities in non-cryptographic software?_
We categorised the 243 SVs in our dataset based on their _Affected Product_ field. If an affected product was unfamiliar, we referred to its online repository for further information about its functionality. By following this procedure, we assigned each SV in our dataset to one of the following seven product categories, similar to the categories used by Lazar et al. [5].
* **Application.** Refers to non-cryptographic software (e.g., Jenkins).
* **Cryptographic Library.** Refers to the implementation of cryptographic protocols (e.g., OpenSSL).
* **Cryptographic Protocol.** Refers to methods which provide details about how cryptographic algorithms need to be utilised (e.g., TLS).
* **Cryptographic Primitive.** Refers to low-level algorithms in cryptography (e.g., AES).
* **OS.** Refers to the underlying Operating System (e.g., Microsoft Windows).
* **Firmware.** Refers to a single-purposed software that includes machine-level instructions for hardware (e.g., router firmware).
* **Hardware.** Refers to the physical layer of a computer system (e.g., Intel microprocessor).
Then, we counted the occurrence of SVs within each category. Table II displays the frequency distribution of SVs related to timing attacks within each product category.
As can be seen, the Application category contains over 51% of SVs. Developers in this category typically do not have expertise in cryptography [5]. Further details about these SVs are provided later in Section IV-B.
We observed that 30% of collected SVs belong to the Cryptographic Library category. Cryptographic libraries are popular with software developers since they provide varying degrees of security that can be integrated into applications for data transmission and storage. They are written by expert developers with profound cryptography knowledge [15], and any errors in their implementation adversely affect many applications that rely on these libraries. SVs in this category have been studied extensively by cryptography researchers [9, 12].
In the Cryptographic Protocol and Cryptographic Primitive categories, we have collected one timing attack-related vulnerability per category. The vulnerability in the Cryptographic Protocol category (CVE-2013-0169) is related to the Lucky Thirteen attack [12]. It is a cryptographic timing attack based on a detailed timing analysis of decryption processing in the Transport Layer Security (TLS) protocol. This vulnerability is an example of a dependency vulnerability that disseminates to other categories through dependent software products. The discovery of these vulnerabilities may threaten the security of dependent software. For instance, AlFardan et al. [12] demonstrated in their experiments that TLS implementations in cryptographic libraries such as GnuTLS, Network Security Services (NSS), CyaSSL, and BouncyCastle are vulnerable to the Lucky Thirteen attack. The aforementioned software products indeed belong to the Cryptographic Library category, which is a higher level category than the Cryptographic Protocol category. In addition, they reported that this vulnerability affected the Opera browser (CVE-2013-1618) - a software product belonging to the Application category.
The vulnerability in the Cryptographic Primitive category (CVE-2005-1797) is related to cache-timing attacks. These attacks represent a specific category of side-channel attacks that exploit the cache behaviour of contemporary computing systems to acquire knowledge about encryption keys. Bernstein [10] investigated the vulnerability of AES encryption to cache-timing attacks and demonstrated that encryption operations conducted with an AES key result in particular cache access patterns, which can be exploited to deduce information about the key.
There were 12 (4.94%) and 26 (10.7%) SVs in the OS and Firmware categories, respectively. However, we could not gather more information about these vulnerabilities because they were closed-source products or the information provided regarding the vulnerabilities was quite limited.
Among the SVs in our dataset, four belong to the Hardware category. The Meltdown [24] (CVE-2017-5754) and Spectre [25] (CVE-2017-5715, CVE-2017-5753) attacks exploited transient execution CPU vulnerabilities and impacted a wide range of modern processors from Intel, AMD, and the ARM family. These SVs have been attributed to design decisions made by hardware manufacturers during the implementation of the speculative execution and branch prediction mechanisms [24, 25]. The fourth vulnerability in this category is PortSmash [26] (CVE-2018-5407), which impacts processors that run on an SMT (Simultaneous Multithreading) architecture where multiple threads can be executed simultaneously on a single CPU core. As a proof of concept, Aldaya et al. [26] exploited Intel Hyper-Threading technology and performed their timing side-channel attack on Intel Skylake and Kaby Lake architectures.
Since our study aims to analyse the relevant coding mistakes in non-cryptographic software, in Section IV-B, we solely focus on the Application category, which indeed turns out to have the most number of timing attack-related SVs.
### _Source-Code level Analysis_
Source code level analysis involves systematically reviewing the source code of a software application in order to gain a deeper understanding of its structure, functions, and limitations. In this section, we sought to identify relevant code changes that patched the vulnerability and then utilise them to identify common coding mistakes and mitigation techniques.
_RQ2: (a) What coding mistakes make non-cryptographic software more vulnerable to timing attacks? (b) How do developers patch these software vulnerabilities in real-world projects?_
To perform this analysis, we had to have access to the source code of the products that were considered vulnerable (i.e., _Affected Product_). For this purpose, we began locating the corresponding repositories on GitHub or other software repositories for each of the vulnerabilities within the Application category of our dataset. We excluded vulnerabilities from our analysis whose affected products were closed-source (25 entries). As a result, we were left with 100 vulnerabilities in the Application category. The following steps were taken for each of the remaining vulnerabilities:
#### IV-B1 **Finding the vulnerable and patched versions of the affected product**
We first looked into the _Affected Product_ section of NVD for each vulnerability. Bao et al. [27] reported some inaccuracy regarding the information in this section of NVD. For example, a version can be marked as vulnerable even though it is not actually vulnerable. The findings of this study led us to consider multiple sources of information to increase the reliability of the data we extracted. Since the product's advisory page is the official source of information, we mainly relied on it to identify the patched and vulnerable versions.
#### IV-B2 **Finding the fixing commit of the affected product**
To find the exact code changes that patched the vulnerability, we first needed to determine which commit was the fixing commit(s). The reason for this is that most large OSS (Open Source Software) projects contain multiple commits within a single version. A link to the fixing commit(s) is usually included in the _References_ section of each vulnerability in NVD. If a direct link was not found, we manually looked for the fixing commit(s) in the corresponding repository we previously located. As part of this process, we searched the repository with _CVE-ID_ as a keyword because repository maintainers usually use it in various sections (e.g., conversations, commit descriptions, code comments) for future reference. If _CVE-ID_ failed to provide any results, we searched the repository
using keywords from Table I to identify the fixing commit(s). This approach enabled us to locate fixing commits for 67 out of 100 vulnerabilities. Due to insufficient available information, we could not pinpoint the relevant commits and code changes for the remaining 33 vulnerabilities. We marked these vulnerabilities as "insufficient info" and excluded them from the rest of the analysis.
#### IV-B3 **Analysis of code changes**
Following the identification of the fixing commit in the previous step, we examined all the information available for each vulnerability. We used a similar approach to Croft et al. [28] to manually analyse code changes related to a vulnerability. Initially, we focused on the sections _Description_ and _References_ of NVD. The links in the _References_ section typically point to valuable information from release notes and official advisory pages of the software product. Furthermore, we concentrated on the information available at the source-code level. Fixing commit description and code comments were taken into consideration, as developers often use these to describe the functionality of code and the context in which it was changed. Upon acquiring a comprehensive understanding of the vulnerability, we examined the changed lines, along with the entire fixing commit code, to determine the underlying cause of the change. We recorded our observations as coding mistakes and the nature of the change for each vulnerability. We followed this approach to gradually build a taxonomy of common mistakes and mitigations.
This three-step manual analysis consumed over 120 hours of effort and was carried out by the first author, who had three years of experience in software security. In the course of this process, weekly meetings were held with the other two authors to reduce bias and inaccuracies. In these meetings, we randomly selected several vulnerabilities in different programming languages and repeated the same manual process for them.
### _Results_
We encountered two categories of coding mistakes during our manual analysis. Table III displays the frequency of vulnerabilities in each category of coding mistakes.
#### IV-C1 **Unsafe Comparison**
The most common coding mistake that we found was the lack of constant-time comparison. Constant-time comparison is a security technique that aims to keep the processing time of a comparison operation consistent and independent of its operands; otherwise, the processing time of comparing two values can disclose sensitive information through side-channel attacks, such as timing attacks [7]. A constant-time comparison guarantees a fixed number of operations for each comparison, so that, regardless of the input values, an attacker cannot deduce any information about the secret from the execution time [29]. Based on our source code analysis, we observed that approximately 90% (60 out of 67) of all vulnerabilities in the Application category are associated with unsafe comparison operations. We describe our observations for the Java, PHP, and C programming languages in the following. Additionally, to illustrate the popularity level of each product, we display the number of _Stars_ and _Forks_ of their respective GitHub repositories. We provide further details about popularity level in Section VI.
Table IV presents a selection of vulnerabilities in Java-based applications that are susceptible to timing attacks due to unsafe comparison operations.
We tracked the code changes that patched the vulnerability in these products and observed that the Arrays.equals() and String.equals() methods are not considered safe against timing attacks. When the code reaches a point where it needs to compare a secret, MessageDigest.isEqual() is the preferred method in the patches. Listing 1 displays a code snippet (related to CVE-2021-38153) from one of the components of Kafka (a distributed data streaming platform) in which two keys need to be compared in order to authenticate the user. The vulnerability has been patched by using the MessageDigest.isEqual() method (line 8) instead of Arrays.equals(). Added and deleted lines are marked with '+' and '-', respectively.
```
1  import java.security.InvalidKeyException;
2 +import java.security.MessageDigest;
3  ...
4  byte[] expectedStoredKey = scramCredential.storedKey();
5  byte[] clientSignature = formatter.clientSignature(expectedStoredKey, clientFirstMessage, serverFirstMessage, clientFinalMessage);
6  byte[] computedStoredKey = formatter.storedKey(clientSignature, clientFinalMessage.proof());
7 -if (!Arrays.equals(computedStoredKey, expectedStoredKey))
8 +if (!MessageDigest.isEqual(computedStoredKey, expectedStoredKey))
9      throw new SaslException("Invalid client credentials");
10 } catch (InvalidKeyException e) {
11     throw new SaslException("Sasl client verification failed", e);
```
Listing 1: Example of unsafe comparison in Java adapted from Kafka repository [30].
In the Java Cryptography Architecture (JCA), the MessageDigest class generates secure hashes of data, also known as message digests. In this class, message digest algorithms, such as MD5 and SHA-256, are provided for applications. The MessageDigest.isEqual() method is a utility method used for comparing the results of two MessageDigest objects. It returns a boolean value of true if the hash values generated by the two MessageDigest objects are equal, and false otherwise. This means that, unlike the Arrays.equals() and String.equals() methods, MessageDigest.isEqual() does not return immediately at the first differing byte but continues the comparison byte by byte to the end. This behaviour makes the method secure but less efficient.
In Table V, we display a selection of vulnerabilities in PHP-based applications that are susceptible to timing attacks.
Based on the code changes that patched these vulnerabilities, we found that the native comparison operators in PHP (==, !=, ===, !==) are regarded as unsafe against timing attacks when used to compare sensitive data such as cryptographic keys and passwords. In the provided patches, we observed that developers used the hash_equals() function (available since PHP 5.6.0) to patch these vulnerabilities. This built-in function is suitable for comparing two strings in a constant-time manner. It accepts two string arguments and first evaluates their lengths. If the strings differ in length, the function immediately returns false. Otherwise, the function conducts a constant-time, byte-by-byte comparison of the two values and returns a boolean value of true if the values are identical or false if they are not.
Listing 2 displays a code snippet (related to CVE-2015-5730) from the WordPress (a web development platform) repository; the deleted and added lines of the patch are marked with '-' and '+', respectively. The WordPress release notes indicated that an attacker could have leveraged this vulnerability via a timing attack to lock a post and prevent it from being edited [31].
```
-if ( $this->get_instance_hash_key( $decoded ) !== $value['instance_hash_key'] ) {
+if ( ! hash_equals( $this->get_instance_hash_key( $decoded ), $value['instance_hash_key'] ) ) {
     return null;
 }
```
Listing 2: Example of unsafe comparison in PHP adapted from WordPress repository [32].
Table VI lists vulnerabilities in applications written in the C language that are susceptible to timing attacks as a result of unsafe comparison operations. As per the patches provided for these vulnerabilities, memcmp() and strcmp() are not secure against timing attacks since their comparisons stop once the first difference is encountered. In contrast to other languages, C and C++ developers implement their own functions to ensure safe comparisons. For instance, Listing 3 displays a code snippet (related to CVE-2013-2061) from the OpenVPN (a network software) repository; the deleted and added lines are again marked with '-' and '+'. The openvpn_decrypt function in crypto.c of OpenVPN 2.3.0 and earlier uses the memcmp() function (line 3) for comparing HMAC (hash-based message authentication code) tokens. The non-constant-time nature of this function may enable remote attackers to obtain sensitive information through a timing attack.
```
1  ...
2  /* Compare locally computed HMAC with packet HMAC */
3 -if (memcmp(local_hmac, BPTR(buf), hmac_len))
4 +if (memcmp_constant_time(local_hmac, BPTR(buf), hmac_len))
5      CRYPT_ERROR("packet HMAC authentication failed");
6
7  ASSERT(buf_advance(buf, hmac_len));
8  ...
```
Listing 3: Example of unsafe comparison in C adapted from OpenVPN repository [33].
To mitigate this vulnerability, the developers implemented a new function and included it in the patch to perform the comparison in constant time. The implementation of memcmp_constant_time() can be seen in Listing 4.
```
1  static int memcmp_constant_time(const void *a, const void *b, size_t size)
2  {
3      const uint8_t *a1 = a;
4      const uint8_t *b1 = b;
5      int ret = 0;
6      size_t i;
7
8      for (i = 0; i < size; i++) {
9          ret |= *a1++ ^ *b1++;
10     }
11
12     return ret;
13 }
```
Listing 4: Implementation of a constant-time comparison function in C adapted from OpenVPN repository [33].
This function is designed to compare two blocks of memory (a and b) in constant time. To achieve this, in line 9, the function uses the bitwise XOR operator (^) to compare each byte of the memory blocks. The XOR operation compares each bit of two input values and returns 1 in each bit position where the corresponding bits of the two operands differ and 0 where they are the same. The resulting bits from each byte comparison are then ORed together into a final result (ret), which indicates whether the two memory blocks are equal or not. This approach ensures that the execution time of the function is constant and independent of the values being compared, which prevents timing side channels from revealing information about the memory blocks. We discovered similar implementations of constant-time comparison functions in the patches provided for the remaining vulnerabilities listed in Table VI. Additionally, we observed that the same technique (i.e.,
bitwise XOR operation) was used in implementing constant-time comparison functions in C++ [34].
Table VII summarises our observations regarding safe and unsafe comparisons in 11 different programming languages. We have provided the complete list of vulnerabilities and their fixing commit links in our released dataset [17] of vulnerabilities related to timing attacks.
#### IV-C2 **User Enumeration Issues**
User enumeration attack is a type of enumeration attack that targets login pages in an attempt to identify valid usernames. This is accomplished by sending login requests with a list of potential usernames and analysing the error messages returned by the system. In certain circumstances, combining user enumeration with timing attacks may lead to more successful attacks. An attacker can use timing attacks to exploit differences in system response times to infer whether a username is valid. The information thus acquired can then be used to launch additional attacks.
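The sketch below illustrates the measurement behind such an attack and can equally be used by developers to check their own login endpoints; the URL, field names, and interpretation of the result are hypothetical and only serve to show why response-time differences between valid and invalid usernames are exploitable:

```
import statistics
import time

import requests  # third-party HTTP client, used here only for illustration

LOGIN_URL = "https://target.example/login"  # hypothetical endpoint


def median_response_time(username: str, attempts: int = 25) -> float:
    """Median time the server needs to reject a login attempt for `username`."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        requests.post(LOGIN_URL, data={"user": username, "password": "wrong-password"})
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)


# A noticeably larger median for a username suggests the server ran the (slow)
# password verification, i.e., the username most likely exists.
for candidate in ("alice", "bob", "no-such-user"):
    print(candidate, median_response_time(candidate))
```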
Table VIII displays vulnerabilities in products that are susceptible to a mix of timing and user enumeration attacks. These seven vulnerabilities originated from observable timing discrepancies on the login forms. According to the patches provided for these vulnerabilities, the applications took considerably longer to respond to login attempts with a valid username and an invalid password than with an invalid username and password. Thus, an attacker could leverage timing attacks to find valid usernames by attempting to log in and assessing the time it takes to evaluate each login attempt for valid and invalid usernames.
We observed that one of the common mitigation strategies is to avoid stopping the verification process abruptly. Even if the provided username is invalid, developers verified the invalid password against a synthetic password in order to prevent timing discrepancies [35]. For instance, Listing 5 displays a code snippet (patch for CVE-2022-34174) from the Jenkins repository; added and deleted lines are marked with '+' and '-', respectively. The code snippet is a Java method that implements user authentication by checking the provided username and password against an existing record in a data store (line 6). After applying the patch, when a provided username is invalid (line 7), the method uses a MultiPasswordEncoder object named PASSWORD_ENCODER (line 9) to encode and verify passwords, and employs a precomputed encoded value called ENCODED_INVALID_USER_PASSWORD (line 22) to intentionally waste time, in order to prevent timing attacks from distinguishing between existing and non-existing users. If the provided username exists, the method checks whether the associated password is correct (line 12); if not, it throws a BadCredentialsException (line 13). The generatePassword() method (line 24) generates a random password of length 20, which is used to initialise the value of ENCODED_INVALID_USER_PASSWORD in line 22.
```
1  import java.util.Random;
2  protected UserDetails authenticate2(String username, String password) throws AuthenticationException {
3 -    Details u = load(username);
4 +    Details u;
5 +    try {
6 +        u = load(username);
7 +    } catch (UsernameNotFoundException ex) {
8 +        // Waste time to prevent timing attacks distinguishing existing and non-existing user
9 +        PASSWORD_ENCODER.matches(password, ENCODED_INVALID_USER_PASSWORD);
10 +        throw ex;
11 +    }
12      if (!u.isPasswordCorrect(password)) {
13          throw new BadCredentialsException("Bad credentials");
14      }
15      ...
16  public static final MultiPasswordEncoder PASSWORD_ENCODER = new MultiPasswordEncoder();
17
18 +/**
19 + * This value is used to prevent timing discrepancies when trying to authenticate with an invalid username
20 + * compared to just a wrong password. If the user doesn't exist, compare the provided password with this value.
21 + */
22 +private static final String ENCODED_INVALID_USER_PASSWORD = PASSWORD_ENCODER.encode(generatePassword());
23 +
24 +private static String generatePassword() {
25 +    String password = new Random().ints(20, 33, 127).mapToObj(i -> (char) i)
26 +        .collect(StringBuilder::new, StringBuilder::appendCodePoint, StringBuilder::append).toString();
27 +    return password;
28 +}
```
Listing 5: Fixing commit for CVE-2022-34174 adapted from Jenkins repository [35].
Another observed technique for preventing timing oracles and user enumeration is the addition of a random timer during the authentication process [36].
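As a rough illustration of how these two mitigations can be combined (our sketch, not code from the analysed repositories; bcrypt is a third-party library and all names are hypothetical), a dummy verification plus random jitter keeps failed look-ups and failed password checks close in time:

```
import secrets
import time

import bcrypt  # third-party password-hashing library

# Precomputed hash of a throw-away password, used when the username is unknown.
DUMMY_HASH = bcrypt.hashpw(secrets.token_hex(16).encode(), bcrypt.gensalt())


def authenticate(username: str, password: str, user_store: dict) -> bool:
    record = user_store.get(username)
    if record is None:
        # Spend roughly the same time as a real check, then add random jitter,
        # which raises the number of measurements an attacker needs.
        bcrypt.checkpw(password.encode(), DUMMY_HASH)
        time.sleep(secrets.randbelow(50) / 1000)  # 0-49 ms
        return False
    ok = bcrypt.checkpw(password.encode(), record["password_hash"])
    if not ok:
        time.sleep(secrets.randbelow(50) / 1000)
    return ok
```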
## V Discussion
This study aimed at empirically investigating non-cryptographic software weaknesses that may assist timing attacks in succeeding. Through RQ1, we first gained a broad view of timing attack-related SVs across various categories of vulnerable products. Furthermore, through RQ2, we empirically identified the common coding mistakes that application (i.e., non-cryptographic) developers make regarding these attacks. It is important to clarify that in this study, we were not implying that timing vulnerabilities in non-cryptographic software are more significant than those in cryptographic software. As mentioned in previous sections, due to the complex nature of timing attacks, it is impossible to guarantee that a piece of code is not vulnerable to these attacks. To this end, we want to emphasise that timing leakages should be minimised at all levels of coding, including non-cryptographic code. We present the following observations from our empirical analysis of software vulnerabilities related to timing attacks:
* Application developers made repetitive coding mistakes (i.e., unsafe comparison operations), which indicates a lack of awareness of timing attacks.
* Whenever the code is required to deal with sensitive information (e.g., user authentication), constant-time comparison operations should be used to perform the verification.
* The verification process on login forms should not be skipped, even in the event of a major error (e.g., incorrect username). The identified mitigation techniques are verifying a synthetic password and adding a random timer.
### _Implications for Developers and Researchers_
We provide application developers with insight into the scope of timing attacks. Moreover, we suggest secure coding practices that can enhance the security level of their code against the growing threat of such attacks. Specifically, we identified constant-time comparison functions in 11 programming languages, which can be leveraged to improve code resistance to these attacks. Additionally, we outlined strategies for addressing user enumeration issues, which are a known attack vector for timing attacks. For researchers, we have identified some promising research directions. The results of this study may be used as a basis for a user study with application developers to assess their level of awareness of secure coding practices. An alternative direction would be to utilise the software vulnerability dataset generated in this research to develop a framework for detecting timing attack vulnerabilities at the non-cryptographic level to assist application developers.
## VI Threats to Validity
_Construct Validity:_ Our manual source code analysis may be biased or inaccurate. However, following the previously published studies [28, 37] has helped us minimise such threats and confirm the validity of our methodology.
_Internal Validity:_ The outcomes of our investigation may be influenced by the correctness of the analysed patches. Developer expertise is crucial to providing patches that address the intended vulnerability without introducing new threats. Gousios et al. [38] demonstrated that popular GitHub repositories have a rigorous peer review process for accepting code contributions. The majority of patches analysed in this study are from popular GitHub repositories whose maintainers are strongly familiar with the coding style of the project. To lessen the aforementioned threat, we reported the Stars and Forks counts of analysed repositories in the vulnerability tables throughout the paper.
_External Validity:_ Another possible concern is that the number of software vulnerabilities in each programming language may affect the completeness of derived conclusions. For instance, we examined only one software vulnerability in less popular programming languages such as Go, Perl, Rust, and Lua. More vulnerability information sources are required to draw a generic conclusion. We are also aware that there may be other relevant software vulnerabilities that could not be included in our dataset for various reasons, such as insufficient disclosed information or that the software was not open-source. However, we have systematically searched for related vulnerabilities in the NVD dataset, which is widely used and respected among cybersecurity professionals and contains more than 218k reports about known SVs in software and hardware.
## VII Conclusion
In this paper, we have conducted an empirical study of 243 security vulnerabilities related to timing attacks. We investigated the prevalence of these vulnerabilities in various categories of affected products. In particular, to improve the security of non-cryptographic software against timing attacks, we performed a thorough source code analysis of 67 software vulnerabilities in 11 programming languages. Through this
study, we have revealed that developers who are not cryptography experts tend to make two specific types of coding mistakes (i.e., unsafe comparisons and user enumeration issues) that make their code more vulnerable to these attacks. We found that adherence to a taxonomy of known secure coding practices could have prevented the majority of software vulnerabilities related to timing attacks.
## Acknowledgment
This work has been supported by the Cyber Security Cooperative Research Centre Limited whose activities are partially funded by the Australian Government's Cooperative Research Centre Programme.
|
2307.10321 | Terahertz Communications and Sensing for 6G and Beyond: A Comprehensive
Review | Next-generation cellular technologies, commonly referred to as the 6G, are
envisioned to support a higher system capacity, better performance, and network
sensing capabilities. The THz band is one potential enabler to this end due to
the large unused frequency bands and the high spatial resolution enabled by the
short signal wavelength and large bandwidth. Different from earlier surveys,
this paper presents a comprehensive treatment and technology survey on THz
communications and sensing in terms of advantages, applications, propagation
characterization, channel modeling, measurement campaigns, antennas,
transceiver devices, beamforming, networking, the integration of communications
and sensing, and experimental testbeds. Starting from the motivation and use
cases, we survey the development and historical perspective of THz
communications and sensing with the anticipated 6G requirements. We explore the
radio propagation, channel modeling, and measurement for the THz band. The
transceiver requirements, architectures, technological challenges, and
state-of-the-art approaches to compensate for the high propagation losses,
including appropriate antenna design and beamforming solutions. We overview
several related technologies that either are required by or are beneficial for
THz systems and networks. The synergistic design of sensing and communications
is explored in depth. Practical trials, demonstrations, and experiments are
also summarized. The paper gives a holistic view of the current state of the
art and highlights the open research challenges towards 6G and beyond. | Wei Jiang, Qiuheng Zhou, Jiguang He, Mohammad Asif Habibi, Sergiy Melnyk, Mohammed El Absi, Bin Han, Marco Di Renzo, Hans Dieter Schotten, Fa-Long Luo, Tarek S. El-Bawab, Markku Juntti, Merouane Debbah, Victor C. M. Leung | 2023-07-19T07:04:09Z | http://arxiv.org/abs/2307.10321v2 | # Terahertz Communications and Sensing for 6G and Beyond: A Comprehensive View
###### Abstract
The next-generation wireless technologies, commonly referred to as the sixth generation (6G), are envisioned to support extreme communications capacity and, in particular, disruptive network sensing capabilities. The terahertz (THz) band is one potential enabler for these due to the enormous unused frequency bands and the high spatial resolution enabled by the short wavelengths and large bandwidths. Different from earlier surveys, this paper presents a comprehensive treatment and technology survey on THz communications and sensing in terms of the advantages, applications, propagation characterization, channel modeling, measurement campaigns, antennas, transceiver devices, beamforming, networking, the integration of communications and sensing, and experimental testbeds. Starting from the motivation and use cases, we survey the development and historical perspective of THz communications and sensing with the anticipated 6G requirements. We explore the radio propagation, channel modeling, and measurements for the THz band. We discuss the transceiver requirements, architectures, technological challenges, and state-of-the-art approaches, together with means to compensate for the high propagation losses by appropriate antenna and beamforming solutions. We also survey several system technologies required by or beneficial for THz systems. The synergistic design of sensing and communications is explored in depth. Practical trials, demonstrations, and experiments are also summarized. The paper gives a holistic view of the current state of the art and highlights the issues and challenges that are open for further research towards 6G.
6G, Beamforming, Imaging, Integrated Communications and Sensing, ICAS, Localization, Positioning, Sensing, Terahertz, THz Communications, THz channels
## I Introduction
Today, the fifth generation (5G) mobile networks are still being deployed at scale worldwide [1], but both academia and industry have shifted their focus to the next-generation technologies, commonly referred to as the sixth generation (6G) [2]. A collection of research groups, standardization bodies, regulatory organizations, and government agencies [3] has initiated a variety of programs to discuss the 6G vision [4] and develop key technologies [5], as we will elaborate later in Sec. II-A. To support disruptive applications, such as virtual and augmented reality [6], the Internet of Things [7], Industry 4.0, connected and autonomous vehicles [8], and yet-to-be-conceived use cases like Metaverse, holographic-type telepresence [9], Tactile Internet [10], digital twin [11], full immersiveness [12], multi-sense experience, and blockchain [13], 6G requires significantly more stringent performance, e.g., hyper rates on the order of terabits per second (Tbps) [14], ultra-reliability, near-zero latency, and massive connectivity density, far beyond what its predecessor can offer [15].
It is envisioned that the major mission of mobile networks needs to be transformed from _connected people and things_ to _connected intelligence_[16]. In addition to enhanced communications capabilities including the evolution of enhanced mobile broadband (eMBB+), the evolution of ultra-reliable low-latency communications (URLLC+), and the evolution of massive machine-type communications (mMTC+) [17], as shown in Fig.1, wireless _sensing_ along with networked artificial intelligence (AI) are expected to play critical roles in 6G and beyond to meet the demands of information and communications technology after 2030 [18]. Driven by the continuous improvement in frequency bands, bandwidths, antennas, devices, and signal processing, 6G and beyond are able to integrate sensing and communications into a unified system [19]. This integration enables 6G and beyond to _see_ the physical world through electromagnetic (EM) waves [20]. It offers high-resolution sensing, localization, imaging, and environment reconstruction capabilities to improve communications performance. Also, it supports a wide range of
novel applications beyond communications such as object tracking, security screening, remote sensing, process monitoring, simultaneous localization and mapping (SLAM), gesture recognition, and activity detection [21].
### _Why do 6G and beyond need the THz band?_
The THz band has attracted a lot of interest in recent years and is recognized as a promising enabler for 6G [22]. Before stepping into the technical details, the authors of this article would like to first clarify a fundamental question that may still cause confusion or dispute in some prior literature. That is, _why do we need to exploit the THz band in 6G and beyond_? We address this question from the perspectives of both THz communications and THz sensing, as well as their synergy.
#### I-A1 THz Communications
At the World Radiocommunication Conference (WRC) held in 2019, a.k.a. WRC-19, the International Telecommunication Union - Radiocommunication (ITU-R) sector assigned a total of 13.5 GHz of spectrum, consisting of a group of high-frequency bands, for the deployment of 5G millimeter wave (mmWave) communications [1], see Sec. II-B. Despite the spectral abundance of the mmWave bands, it might not be sufficient to meet the growing need for bandwidth over the next decade. There are enormous spectral resources at higher frequencies that are already used for a wide variety of non-cellular applications such as remote sensing, radio astronomy, and radar [23], to name a few. With the advancement of antenna technology and radio-frequency components, these frequencies, previously considered unsuitable for mobile communications due to their unfavorable propagation characteristics, are becoming technologically usable [24].
Fig. 2 illustrates the whole EM spectrum, consisting of radio, microwave, infrared (IR), visible light, ultraviolet, X-rays, and Gamma rays, from the lower to higher frequencies. It is noted that the definition of the EM spectrum in the general case differs from the naming of frequency bands from the perspective of wireless communications, as shown in this figure. Based on these considerations, THz is considered a suitable candidate to realize Tbps communications under the current level of hardware and signal-processing technologies. The reasons are explained as follows:
* **Spectrum scarcity of the sub-6 GHz band**: Favourable propagation characteristics of sub-6 GHz frequencies facilitate the use of sophisticated transmission technologies such as massive multi-input multi-output (MMIMO) [25], non-orthogonal multiple access (NOMA) [26], and high-order modulation like 1024-ary quadrature amplitude modulation (1024QAM) to achieve high spectral efficiency. However, spectrum scarcity and non-contiguity pose a significant challenge to achieving higher rates. Even if a bandwidth of 1 GHz were ultimately allocated to International Mobile Telecommunications (IMT) services in the sub-6 GHz band, a Tbps link could only be realized with an extreme spectral efficiency of 1000 bps/Hz, as suggested by the Shannon capacity \(R=B\log_{2}(1+S/N)\) (see the illustrative calculation after this list). Unfortunately, such a level of this performance metric is infeasible in the foreseeable future, as the peak spectral efficiency specified for 5G, see ITU-R M.2410 [27], is 30 bps/Hz (under ideal conditions).
* **Insufficient mmWave bandwidth for Tbps**: So far, a total of 13.5 GHz of spectral resources is available within the mmWave band below 100 GHz. A data rate of 1 Tbps can only be achieved with transmission schemes whose spectral efficiency approaches 100 bps/Hz, which is currently infeasible for sub-6 GHz frequencies and even more challenging to implement for mmWave signals. Therefore, _the only possibility for Tbps communications relies on the massively abundant frequencies above 100 GHz_.
* **Constraints of the optical bands**: Despite the enormous available spectrum in optical bands at IR [28], visible-light [29], and ultraviolet frequencies [30], several issues limit the practicality of optical wireless communications (OWC). The restriction to low transmission power due to hardware constraints and eye-safety requirements, the effects of several types of atmospheric attenuation on the signal propagation (e.g., fog, rain, dust, or pollution), high diffuse reflection losses, and the impact of misalignment between transmitter and receiver lead to limited data rates and short transmission ranges [31], limiting its feasibility for large-scale use in mobile systems for 6G and beyond.
* **Adverse health effects of extremely high bands**: Ionizing radiation, including ultraviolet, X-rays, and Gamma rays, poses a significant risk to human health as it carries enough photon energy to dislodge electrons and generate free radicals that can lead to cancer. The adverse health effects of ionizing radiation are controllable if it is used with care, so it is employed in specific fields such as radiotherapy, photography, semiconductor manufacturing, and nuclear medicine, among others. However, it is still too dangerous for personal communications [22].
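As a back-of-the-envelope illustration of the bandwidth arguments above (our own calculation; the 100 GHz figure for the THz band is an assumption made purely for illustration), the Shannon bound \(R=B\log_{2}(1+S/N)\) yields the minimum spectral efficiency \(\eta=R/B\) required for a 1 Tbps link:

\[
\eta_{\text{sub-6}}=\frac{10^{12}\,\text{bps}}{1\,\text{GHz}}=1000~\text{bps/Hz},\qquad \eta_{\text{mmWave}}=\frac{10^{12}\,\text{bps}}{13.5\,\text{GHz}}\approx 74~\text{bps/Hz},\qquad \eta_{\text{THz}}=\frac{10^{12}\,\text{bps}}{100\,\text{GHz}}=10~\text{bps/Hz}.
\]

Reaching 1000 bps/Hz in a 1 GHz channel would require an SNR on the order of \(2^{1000}\), and even 74 bps/Hz is far beyond the 30 bps/Hz peak specified for 5G, whereas 10 bps/Hz over a wide THz allocation is within reach of current modulation and MIMO techniques.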
Unlike ionizing radiation, THz frequencies are non-ionizing because their photon energy (0.1 to 12.4 meV, which is over three orders of magnitude weaker than ionizing photon energy levels) is not sufficient to release an electron from an atom or a molecule, for which typically 12 eV is required. The THz band offers abundant spectral resources, ranging from tens of gigahertz to several terahertz, depending on the transmission distance. This makes the available bandwidth more than ten times greater than that of the mmWave bands, while the operating frequency is at least one order of magnitude below the optical bands. In addition, the technology required to make Tbps transmission over the THz band a reality is rapidly advancing, and the development of new transceiver architectures and antennas built upon novel materials with remarkable properties is finally overcoming one of the significant challenges [32].

Fig. 1: Envision of 6G usage scenarios, where sensing and AI will be critical pillars in addition to enhanced communications services [17].
#### I-A2 THz Sensing
As we know, the spatial resolution of a propagated signal becomes much finer with increasing frequencies, thereby enabling high-definition spatial differentiation [33]. In addition to THz communications, THz sensing (including positioning, imaging, and spectroscopy) exploit the tiny wavelength on the order of micrometers and the frequency-selective resonances of various materials over the measured environment to gain unique information based on the observed signal signature [34]. Compared with wireless sensing over other bands, THz sensing offers the following advantages:
* **High resolution and penetration capability**: Although low-frequency signals are able to sense, detect, and localize objects, as radar [23] and Global Navigation Satellite System (GNSS), THz sensing/positioning can improve the resolution due to tiny wavelengths, even for objects hidden from direct view. THz waves are able to penetrate a variety of non-conductive materials, e.g., plastics, fabrics, paper, ceramics, and dielectric substances. This allows THz sensing to detect hidden objects, structural defects, and layers beneath surfaces, making it useful in security screening, quality control, process monitoring, and material characterization [35].
* **Non-ionizing radiation**: Compared to X-rays and Gamma rays, THz waves have much lower photon energy, making them non-ionizing [36]. This means THz sensing is generally considered safe for biological samples and humans, allowing for non-destructive and non-invasive imaging and diagnosing.
* **Low environmental interference**: In contrast to visible or IR radiation [28], THz waves are less vulnerable to environmental factors such as ambient light, fog, or smoke. This allows THz sensing in outdoor or adverse conditions, expanding its usability in fields such as remote sensing, atmospheric monitoring, and outdoor security.
* **Spectroscopic analysis**: THz waves interact with molecules in a characteristic manner, leading to unique spectral fingerprints. THz spectroscopy provides valuable information about molecular vibrations and rotational transitions, enabling the identification and analysis of chemical substances, including explosives, drugs, and biomolecules [37]. It is particularly effective for identifying substances with distinct THz absorption or reflection properties.
#### I-A3 Synergy between THz communications and THz sensing
As discussed above, the THz band offers not only massive spectral resources for wireless communications but also unique advantages in sensing, positioning, imaging, and spectroscopy [33]. Hence, it has recently attracted a lot of interest as a key enabler for implementing integrated sensing and communications (ISAC) in 6G and beyond [38]. Beyond implementing THz communications and THz sensing in a unified system, such dual-functional wireless networks offer a great synergy through **sensing-aided communication**[39, 40] and **communication-aided sensing**[41], as elaborated in Sec. IX.
Using sensing information in communications may be one of the significant benefits of ISAC, which enables a more deterministic and predictable propagation channel. It facilitates the design of efficient communications algorithms and protocols, such as sensing-aided channel estimation [42], predictive beamforming served by sensing [43, 44], fast beam alignment and tracking [45], and link blockage mitigation [46]. On the other hand, mobile communications networks also provide significant opportunities and benefits for network sensing or sensing as a service [47]. Nodes share sensing results through the mobile network, where multiple network nodes (base stations, user equipment, etc.) can act as a collaborative sensing system [48]. This collaboration, achieved through sensing data fusion, reduces measurement uncertainty and provides a larger coverage area, as well as higher sensing accuracy and resolution.
### _Motivations and Contributions_
Recently, the wireless community has published many research articles and surveys on the topic. The existing survey papers tend to focus on specific aspects of THz communications, such as antenna fabrication [55], propagation characterization [58], measurement [63], channel modeling [64], beamforming [57], and hardware [53]. These surveys are most beneficial for researchers who focus on a particular aspect of THz communications and need an exhaustive collection of existing research outcomes. On the other hand, some magazine articles such as [51, 66] offer overviews that cover relatively wide ranges of aspects but are rather concise on many topics. Moreover, prior literature primarily concentrated on THz communications from the perspective of the conventional wireless community, while THz sensing has received much less attention. Last but not least, most of the literature does not take into account the particular applications, requirements, and scenarios of 6G.
Responding to the discussion above, our article presents a comprehensive treatment and technology survey about THz communications and sensing to clarify the advantages of THz over other bands for 6G and beyond, potential THz-based 6G applications, THz signal propagation characterization, THz channel modeling, THz measurement, THz antennas, photonic-electronic devices for THz transceivers, beamforming, beam alignment, THz networking, the synergy with other potential 6G technologies, THz-based integrated sensing and communications, and THz experimental test-beds. Table I compares this work with the existing ones in terms of the covered topics, aiming at justifying the novelty and contributions of this survey. From application and implementation perspectives, we hope that this work can provide researchers in THz communications, THz sensing, and 6G with a holistic view of the current state of the art and highlight the issues and challenges that are open for further research.

Fig. 2: The electromagnetic spectrum and the positions of mmWave, THz, and optical bands.
The major contributions of this survey include:
* First, this work aims to answer a fundamental question: _Why do 6G and beyond need to exploit the THz band?_ It clarifies the comparative advantages of THz over other frequency bands in both communications and sensing in the scenarios of 6G and beyond.

Fig. 3: Outline of the structure of this survey.
* We provide a state-of-the-art overview of related fields by summarizing the global 6G development, latest spectrum assignment for IMT, and early exploration efforts in THz technologies.
* This article envisions potential THz-based communications and sensing applications for 6G and beyond.
* This survey comprehensively characterizes THz signal propagation, covering the phenomena of _path loss, atmospheric absorption, weather effects, and blockage_.
* Up-to-date THz measurement campaigns by means of three measurement methods, i.e., _frequency-domain vector network analyzer (VNA), time-domain sliding correlation, and time-domain spectroscopy (TDS)_, are outlined.
* THz channel modeling, both deterministic and statistical, is surveyed.
* This survey provides readers with the necessary knowledge required to design and build transceivers for THz communications and sensing, including the recent advances in THz antennas, THz electronic devices, and THz photonic devices.
* We elaborate on how to compensate for the large propagation loss through beamforming over large-scale antenna arrays. The fundamentals of ultra-massive multi-input multi-output (UM-MIMO), lens antenna arrays, beam tracking, beam estimation, and beam alignment are introduced.
* From a systematic perspective, we explore the paradigms for THz networking, with an emphasis on the synergy of THz communications and sensing with other 6G-enabling technologies, covering MMIMO, UM-MIMO, NOMA, reconfigurable intelligent surfaces (RIS), non-terrestrial networks, digital twins, AI and machine learning (ML). Moreover, we discuss security, localization, integrated communications and sensing, multi-connectivity, and channel awareness for THz networks.
* This article discusses the building blocks, opportunities, and challenges for ISAC over the THz band, elaborating its unique advantages, use cases, key performance indicators (KPIs), joint waveform design, efficient algorithm design, and potential solutions.
* Last but not least, we list the latest advances in THz trials and experiments to give readers an insightful view of the practicality of THz communications and sensing.
### _The Structure of this Survey_
Overall, this survey aims to provide researchers with a holistic view of the current state of the art about all issues required to design and build THz-based wireless communications and sensing systems for 6G and beyond. Also, we hope this work can highlight the issues and challenges that are open for further research to speed up the research endeavors. To improve the readability, an outline of the survey is illustrated in Fig.3.
## II An Overview of THz-based 6G Systems
With the objective of facilitating an insightful view, this section summarizes the current state of the art in the related fields. First, Sec. II-A offers a global view of 6G development, followed by the status of spectrum usage for the IMT services worldwide in Sec. II-B. Then, the early THz exploration efforts are listed in Sec. II-C.
### _Global View of 6G Development_
At the beginning of 2019, South Korea's three network operators and U.S. Verizon were in a dispute with each other, vying for the title of being the world's first provider of 5G communications services. This event marked the arrival of the 5G era [1]. In the past few years, the term '5G' has remained one of the most prominent buzzwords in the media, drawing unprecedented attention from the whole society. Apart from continuously enhancing network capacity and improving system performance as previous generations had done, 5G expands mobile communications services from human-centric to human-and-things, and meanwhile from the consumer market to vertical industries [67]. This has substantially increased the potential scale of mobile subscriptions from billions (i.e., equivalent to the world's population) to almost countless interconnections among humans, machines, and things.
In 2020, the outbreak of the COVID-19 pandemic led to a significant loss of human life worldwide and imposed unprecedented challenges on societal and economic activities. However, this public health crisis has underscored the unique role of telecommunications networks and digital infrastructure in keeping society operational and families connected. This is particularly relevant for the values of 5G applications, such as remote health care, online education, mobile working, autonomous vehicles, unmanned delivery, and smart manufacturing [68]. In July 2018, the International Telecommunication Union - Telecommunication (ITU-T) standardization sector established a focus group called _Technologies for Network 2030_ with the aim of studying the capabilities of networks for 2030 and beyond [69].
In 2020, the European Commission (EC) initiated the beyond 5G program, under its Horizon 2020 calls -- _ICT-20 5G Long Term Evolution_ and _ICT-52 Smart Connectivity beyond 5G_ -- where a batch of pioneer research projects was sponsored. At the beginning of 2021, the EC launched its 6G flagship research project _Hexa-X_[70], followed by the second phase of European level 6G research _Hexa-X-II_ in early 2023 [71]. The EC has also announced its strategy to accelerate investment in '_Gigabit Connectivity_' including 5G and 6G to shape Europe's digital future [72]. In October 2020, the Next Generation Mobile Networks (NGMN) alliance announced its new '_6G Vision and Drivers_' project, intending to provide early and timely guidelines for global 6G activities. The first report for this project was published in April 2021 [73]. At its meeting in February 2020, the ITU-R sector decided to start studying future technology trends for the future evolution of International Mobile Telecommunications (IMT-2030) [74].
Motivated by the revolutionary force of 5G, the governments of many countries recognized the significance of mobile communications technologies for driving economic prosperity and sustainable growth. In the past years, many countries have set up research initiatives officially or announced ambitious plans for the development of 6G. The world's first 6G effort, '_6G-Enabled Wireless Smart Society and Ecosystem (6Genesis) Flagship Program_', was carried out by the University of Oulu in April 2018, as part of the Academy of Finland's flagship program [75]. This project focuses on groundbreaking 6G research, with four interrelated strategic areas including wireless connectivity, distributed computing, devices and circuit technology, and services and applications. In September 2019, the world's first 6G white paper '_key drivers and research challenges for 6G ubiquitous wireless intelligence_' was published as an outcome of the first 6G Wireless Summit [76]. Subsequently, a series of white papers have been published, covering twelve specific areas of interest, such as ML, edge intelligence, localization, sensing, and security.
In October 2020, the Alliance for Telecommunications Industry Solutions (ATIS) established the '_Next G Alliance_', an industry-led initiative aimed at advancing North American mobile technology leadership in 6G over the next decade [77]. Founding members of the initiative include leading companies such as AT&T, T-Mobile, Verizon, Qualcomm, Ericsson, Nokia, Apple, Google, Facebook, and Microsoft. Next G Alliance places a strong emphasis on technology commercialization and seeks to encompass the full lifecycle of 6G research, development, manufacturing, standardization, and market readiness. In addition to this, SpaceX, a U.S. company known for its revolutionary reusable rockets, announced the Starlink project in 2015 [78]. This project aims to deploy a very large-scale low Earth orbit (LEO) communications satellite constellation to offer ubiquitous internet access services across the whole planet. The Federal Communications Commission (FCC) approved its initial plan of launching 12,000 satellites, and an application for 30,000 additional satellites is currently under consideration. The Starlink service is currently available to the public in a few countries and regions. Although it is an exaggeration to claim that Starlink will replace 5G or be considered 6G, the impact of such a very large-scale LEO satellite constellation on 6G and beyond should be taken into account seriously in the mobile industry.
In November 2019, the Chinese Ministry of Science and Technology kicked off the research and development efforts for 6G technology, in collaboration with five other ministries or national institutions. The event also marked the establishment of a working group, named _IMT-2030(6G) Promotion Group_, responsible for managing and coordinating the program, and an expert group comprising 37 top researchers from academia, research institutes, and industry. In June 2021, IMT-2030(6G) Promotion Group released its white paper '_6G Vision and Candidate Technologies_', outlining the state-of-the-art research findings of the group [79]. It covers the 6G vision, the driving forces behind its development, potential use cases, ten candidate technologies, and additional insights.
In late 2017, the Japanese Ministry of Internal Affairs and Communications formed a working group to investigate next-generation wireless technologies. Their research findings indicated that 6G should offer transmission rates at least ten times faster than 5G, near-instant connectivity, and massive connection of up to ten million devices per square kilometer. In December 2020, Japan established the _Beyond 5G Promotion Consortium (B5GPC)_ with the objective of expediting the development of 6G while enhancing the country's international competitiveness through industry-academia-government collaboration. B5GPC published its inaugural white paper '_Beyond 5G white paper: Message to the 2030s_' in March 2022 [80], summarizing the requirements and expectations of each industry for 6G, the necessary capabilities, and technological trends. South Korea announced its ambition to set up the world's first 6G trial in 2026. In addition, it has unveiled the _K-Network 2030_ initiative, which aims to sponsor the development of key 6G technologies, e.g., developing cloud-native networks on South Korean-made AI chips, launching a low-orbit communications satellite by 2027, and creating an open radio access network (RAN) ecosystem for domestic firms.
Last but not least, the German Federal Ministry of Education and Research (BMBF) announced in February 2021 a new funding program called '6G Vision' as part of Germany's broader initiative to establish the country as a leader in 6G technology. In August 2021, under the umbrella organization and networking of a leading project named _the 6G platform_, four 6G research hubs, i.e., 6G-life, 6GEM, 6G RIC, and Open6GHub, were built [81]. A total budget of approximately 250 million euros was assigned, covering 160 research groups at overall 21 universities and 15 research institutes, as well as more than 40 small and medium enterprises. In the subsequent year, eighteen 6G industry projects, such as 6G-ANNA, 6G-TakeOff, 6G-Terafactory, and 6G-Next, and seven projects on resilience, e.g., HealthNet, AKITA, and ConnRAD, were established [82].
In France, the National Agency for Research (ANR) launched in 2021 a national acceleration strategy (the France 2030 plan, https://www.gouvernement.fr/france-2030) on future communication technologies, with a focus on the digital transition, telecommunications, and global innovation. On May 1st, 2023, under the France 2030 plan, the project PEPR 5G and Future Networks was launched, whose objective is to develop advanced technologies for 5G and future networks while integrating their environmental and societal impacts and the security of the transmitted data. The ANR PEPR 5G project comprises 10 interlinked projects, among which is the project titled NF-SYSTERA (Devices and SYStems for high-speed links in the sub-TERAhertz range), which aims to explore frequency bands beyond 90 GHz for future wireless communication systems in the sub-THz and THz range.
### _Up-to-date Spectrum Usage for IMT_
Over the past few decades, the evolution of mobile communications has followed certain key criteria:
* _the signal bandwidth becomes increasingly wide_;
* _the operating frequency band is increasingly high_; and
* _the spectral demand is increasingly large_.
We witnessed that each new-generation cellular system demanded more spectral resources and utilized a larger channel
bandwidth to support more system capacity and realize a higher data rate than its predecessor.
Let us have a brief review of the history of cellular systems. Initially, the signal bandwidth of a channel in the first generation (1G) system was 20 kHz to 30 kHz, which was already sufficient to carry the analog voice signal of a mobile user. The low-frequency band under 1 GHz, with favorable propagation characteristics, was the preferred choice for system designers at that time. For example, the most dominant 1G standard - the Advanced Mobile Phone System (AMPS) [83] - utilizes a pair of 25 MHz bands (for downlink and uplink, respectively) to carry at most 832 analog voice users in each cell cluster. The Global System for Mobile Communications (GSM) [84], first launched in 1991, supports 1,000 subscribers within a spectrum of \(2\times 25\,\mathrm{MHz}\), where eight digital-voice users are multiplexed over each 200 kHz channel using time division multiple access (TDMA).
For the third generation (3G) system, Wideband Code-Division Multiple Access (WCDMA) employs a much wider bandwidth of 5 MHz to carry tens of users per channel simultaneously by means of spread-spectrum modulation techniques [85]. Meanwhile, the applied frequency band migrated gradually from low frequencies to higher frequencies, crossing over 2 GHz for the first time due to the demand for more spectral resources. Further, Long-Term Evolution Advanced (LTE-Advanced), as the unified fourth generation (4G) standard [86], supports a maximal bandwidth of 100 MHz using carrier aggregation so as to realize the peak rate of 1 Gbps in the downlink. Accordingly, its frequency band spans over a wider range, from 450 MHz to 6 GHz, since hundreds of megahertz spectral resources are needed to satisfy the explosive traffic growth of mobile Internet. Until 4G, the cellular systems operated in low-frequency bands below 6 GHz, which are referred to as _the sub-6 GHz band_ when high-frequency bands are considered in 5G new radio (NR) [1]. A summary of previous mobile generations with an emphasis on bandwidth and operating frequency bands is given in Table II.
During the ITU-R World Radiocommunication Conference (WRC) held in 2015, also known as WRC-15, an agenda item was dedicated to identifying high-frequency bands above 24 GHz that could be used for IMT-2020 mobile services. After conducting follow-up studies, the ITU-R found that ultra-low-latency and high-data-rate applications would require larger, contiguous spectrum blocks. As a result, the WRC held in 2019, a.k.a. WRC-19, assigned a total of 13.5 GHz of spectrum, consisting of a group of high-frequency bands, for the deployment of 5G mmWave communications, namely
* 24.25-27.5 GHz
* 37-43.5 GHz
* 45.5-47 GHz
* 47.2-48.2 GHz
* 66-71 GHz
Meanwhile, the Third Generation Partnership Project (3GPP) specified the relevant spectrum for 5G NR, which was divided into two frequency ranges:
* the First Frequency Range, including the sub-6 GHz frequency band from 450 MHz to 6 GHz
* the Second Frequency Range, covering 24.25 GHz to 52.6 GHz.
Initial mmWave deployments are expected to operate at 28 GHz (3GPP NR bands n257 and n261) and 39 GHz (3GPP n260) in time-division duplexing (TDD) mode, followed by 26 GHz (3GPP n258), as specified in Table III.
### _Prior Exploration of THz_
The term _terahertz_ was initially used in the 1970s to describe the spectral line frequency coverage of a Michelson interferometer or the frequency coverage of point-contact diode detectors [87]. Before that, spectroscopists had coined this term for emission frequencies below the far IR range, which is the lowest-frequency part of IR radiation with a frequency range of about 300 GHz to 20 THz. Millimeter wave refers to the frequency band from 30 GHz to 300 GHz. Hence, the border between the far IR and THz, and the border between mmWave and THz, are still rather blurry. Typically, the THz band refers to EM waves with frequencies from 0.1 THz to 10 THz. However, other definitions, e.g., 300 GHz to 3 THz, are used in parallel. The difference in using frequency (THz) and wavelength (mmWave) for naming leaves an ambiguity over
the range from 100 GHz to 300 GHz, which is also referred to as _upper-mmWave_ or _sub-THz_ by some researchers. It is envisaged that 5G mainly focuses on the frequency bands below 100 GHz, while 6G and beyond will cross over this frequency point. The current trend seems to place more emphasis on using centimeter-wave and lower mmWave bands for enhancing communications in 6G, but the THz band remains very critical for sensing.
In order to avoid harmful interference to the Earth Exploration Satellite Service (EESS) and radio astronomy operating in the spectrum between 275 GHz and 1 THz, the ITU-R WRC-15 initiated the activity called _'Studies towards an identification for use by administrations for land-mobile and fixed services applications operating in the frequency range 275-450 GHz'_. At the WRC-19 conference, a new footnote was added to the radio regulations, allowing the opening of the spectrum between 275 GHz and 450 GHz to land-mobile and fixed services. Together with the already assigned spectrum below 275 GHz, a total of 160 GHz of spectrum, containing two large contiguous bands of 44 GHz (i.e., from 252 GHz to 296 GHz) and 94 GHz bandwidth, respectively, is available for THz communications without specific conditions necessary to protect EESS [88].
The _mmWave Coalition_, a group of innovative companies and universities united in the objective of removing regulatory barriers to technologies using frequencies ranging from 95 GHz to 275 GHz, submitted comments in January 2019 to the FCC and the National Telecommunications and Information Administration (NTIA) for developing a sustainable spectrum strategy and urged NTIA to facilitate the access to spectrum above 95 GHz. In March 2019, the FCC announced that it opens up the use of frequencies between 95 GHz and 3 THz in the United States, provided 21.2 GHz of spectrum for unlicensed use and permitted experimental licensing for 6G and beyond. In the year 2016, the Defense Advanced Research Projects Agency (DARPA), in collaboration with prominent entities from the semiconductor and defense industries such as Intel, Micron, and Analog Devices, established the Joint University Microelectronics Program (JUMP), comprising six research centers with the goal of addressing both current and emerging challenges in the realm of microelectronic technologies. One such center, the _Center for Converged TeraHertz Communications and Sensing (ComSecTer)_, is focused on the development of advanced technologies tailored to meet the requirements of a future cellular infrastructure.
The first attempt at building a wireless communications system at THz frequencies started in 2008 with the foundation of a Terahertz Interest Group (IGTHz) under the IEEE 802.15 umbrella. In May 2014, Task Group 3d was formed to standardize a switched point-to-point communications system operating in the frequencies from 60 GHz to the lower THz bands. During the meeting in March 2016, the supporting documents for IEEE 802.15.3d were approved, and the call for proposals was issued. Based on the proposal reviews and two sponsor recirculation ballots, IEEE 802.15.3d-2017 was ratified by the IEEE Standards Association (SA) Standards Board in September 2017 [89]. IEEE 802.15.3d-2017 specifies an alternative Physical (PHY) layer at the lower THz frequency band from 252 GHz to 325 GHz for switched point-to-point connections. This standard aims for a maximum speed of over 100 Gbps with eight bandwidth configurations from 2.16 GHz to 69.12 GHz and effective coverage from tens of centimeters to a few hundred meters [90].
## III Potential Applications of THz in 6G and Beyond
The massive amount of spectrum at THz frequencies offers opportunities for ultra-fast wireless applications [51]. It also introduces a new level of flexibility in mobile system design, as THz links can be utilized for wireless backhaul among base stations, which enables ultra-dense architecture, accelerates network deployment, and reduces costs associated with site acquisition, installation, and maintenance. Due to the tiny wavelengths of THz signals, the antenna dimension is very small, opening up possibilities for innovative applications such as nanoscale communications for nanoscale devices or nanomachines, on-chip communications, the Internet of Nano-Things, and intra-body networks [58]. Moreover, THz signals can also be used beyond communication, facilitating high-definition sensing, imaging, and positioning of the surrounding physical environment [91]. This offers the potential to efficiently implement integrated communications and sensing at the THz band. Table IV lists and compares the literature with this survey in terms of potential THz applications and use cases.
### _Terahertz Communications Applications_
_Terabit Cellular Hotspots:_ The proliferation of mobile and fixed users with high-throughput demand in densely populated urban areas or specific locations, such as industrial sites, necessitates the deployment of ultra-dense networks. The utilization of the THz band can offer an abundance of spectral resources and ultra-wide bandwidth for small cells, which possess a relatively short coverage distance and high likelihood of line-of-sight (LoS) paths, allowing for Terabit communications links. These small cells cater to both static and mobile users in both indoor and outdoor settings, providing specific applications such as ultra-high-definition video delivery, information shower, high-quality virtual reality, and holographic-type communications [9]. By incorporating conventional cellular networks operating in low-frequency bands, a heterogeneous network, consisting of a macro-base-station tier and a small-cell tier, can facilitate seamless connectivity and full transparency across a wide coverage area and global roaming, thus fulfilling the extreme performance requirements of 6G and beyond mobile networks [51].
_Terabit Campus/Private Networks:_ THz frequencies provide a means for implementing super-high-rate, ultra-reliable, and hyper-low-latency connectivity within a private or campus network for specific applications such as Industry 4.0 and the Tactile Internet [10]. This allows for seamless interconnection between ultra-high-speed optical networks and production devices with no discernible speed or delay difference between wireless and wired links. In addition, abundant bandwidths at THz frequencies also make massive connection density a reality [63]. These capabilities facilitate the deployment of
industrial networks, linking a vast number of sensors and actuators within a factory, and campus networks providing high data-throughput, low-latency, and high-reliability connections for equipment and machines such as automated guided vehicle (AGV) in a logistic center.
_Terabit Device-to-Device and Vehicle-to-Everything:_ THz communications represent a promising tool for providing direct Tbps links between devices in close proximity [92]. Indoor usage scenarios, such as homes or offices, can benefit from the formation of particular device-to-device (D2D) links among a set of personal or commercial devices [93]. Applications such as multimedia kiosks and ultra-high-speed data transfer between personal devices can be supported with Tbps links, enabling the transfer of the equivalent content of a Blu-ray disc to a high-definition large-size display in less than one second. THz communications could also have a significant impact on Brain-Computer Interface (BCI) applications, enabling the transfer of vast amounts of collected brain-wave data to the computer that processes the data. In computer vision, THz communications can facilitate the transfer of high-definition video data to platforms running machine-learning-based analytical software. Additionally, Tbps D2D links can be applied in outdoor settings for vehicle-to-everything scenarios [94], providing high-throughput, low-latency connectivity between vehicles or between vehicles and the surrounding infrastructure [95].
_Secure Wireless Communication:_ The use of large-scale antenna arrays can compensate for the path loss and atmospheric attenuation challenges of THz communications. These arrays can provide high gain and narrow beamforming, enabling long-distance communications links while minimizing interference and eavesdropping risks [109]. Additionally, the use of ultra-wide signal bandwidth and spread-spectrum techniques can enhance the security of THz communications. Spread-spectrum techniques spread the signal over a wide range of frequencies, making it harder for an eavesdropper to intercept the signal. However, it is worth noting that THz communications face unique security challenges that must be addressed. For example, THz waves can penetrate some materials, including clothing and certain types of packaging, which could potentially be exploited by malicious actors [112].
_Terabit Wireless Backhaul:_ The installation of fiber optical connections is typically time-consuming and costly, and it may not always be feasible to deploy public optical networks within certain buildings or areas due to property owner objections. However, the next-generation mobile network is expected to be highly heterogeneous, requiring high-throughput backhaul or fronthaul connectivity between network elements such as macro base stations, small cells, relays, and distributed antennas. Highly directive THz links can provide ultra-high-speed wireless backhaul or fronthaul [97], reducing the time and cost of installation and maintenance while enabling greater flexibility in network architecture and communications mechanisms [100]. In addition, mobile or fixed users in rural or remote areas currently suffer from poor coverage and low quality of service (QoS). If a cost-efficient and flexible solution cannot be guaranteed, the digital divide between rural areas and major cities will increase. As a wireless backhaul extension of the optical fiber [32], THz wireless links can work well as an essential building block to guarantee a universal telecommunications service with high-quality, ubiquitous connections everywhere.
_Terahertz Nano-Communications:_ The minimal size of an antenna used for the transmission of terahertz signals can be on the order of micrometers [103]. Intuitively, this enables wireless connections among nanoscale machines, or nanomachines, which use nanoscale antennas and perform particular tasks at the nanoscale, such as a biosensor injected into a human blood vessel. Each component of a nanomachine is up to a few hundred cubic nanometers in size, and the size of the entire device is on the order of a few cubic micrometers at most. Several specific use cases of THz nano-communications are provided by [31], i.e.,
* _Health Monitoring:_ Sodium, glucose, and other ions in the blood, cholesterol, cancer biomarkers, or the presence of different infectious agents can be detected utilizing nanoscale biosensors injected into the human body or embedded under the skin. A set of biosensors distributed within or around the body, comprising a body sensor network, could collect relevant physical or biochemical data related to a human's health [37].
* _Nuclear, Biological, and Chemical Defense:_ Chemical and biological nanosensors are able to detect harmful chemicals and biological threats in a distributed manner. One of the main benefits of using nanosensors rather than classical macroscale or microscale sensors is that a chemical compound can be detected at a concentration as low as one molecule, and much more quickly than with classical sensors [36].
* _Internet-of-Nano-Things:_ Using THz nano-communications to interconnect nanoscale machines, devices, and sensors with existing wireless networks [101] and the Internet makes a truly cyber-physical system that can be named as the Internet of Nano-Things (IoNT) [107]. The IoNT enables disruptive applications that will reshape the way humans work or live.
* _On-Chip Communication:_ THz communications can provide an efficient and scalable approach to inter-core connections in on-chip wireless networks using planar nano-antenna arrays to create ultra-high-speed links [111]. This novel approach will expectedly fulfill the stringent requirements of the area-constraint and communication-intensive on-chip scenario by its high bandwidth, low latency, and low overhead.
### _Terahertz Sensing Applications_
_Terahertz Sensing:_ At THz frequencies, the spatial resolution of a signal becomes much finer due to the tiny wavelengths, allowing for high-definition spatial differentiation [19]. THz sensing techniques take advantage of the frequency-selective resonances of various materials in the measured environment, as well as the small, sub-millimeter wavelengths [38]. This enables the extraction of unique information based on the observed
signal signature. THz signals can penetrate non-conducting materials like plastics, fabrics, paper, wood, and ceramics, but they face challenges when penetrating metal materials or when water heavily attenuates their radiation power. The specific strength and phase variations of THz signals caused by different thicknesses, densities, or chemical compositions of materials enable the accurate identification of physical objects [105].
_Terahertz Imaging:_ Using THz radiation to form images has many particular technical advantages over microwaves and visible light. THz imaging [106] exhibits higher spatial resolution than imaging at lower frequencies, owing to smaller wavelengths and ultra-wide bandwidths, while requiring only moderately sized hardware. Compared with infrared and visible light, THz waves have better penetration performance, making common materials relatively transparent to THz imaging equipment. There are many security screening applications, such as checking postal packages for concealed objects, with THz imaging through envelopes, packages, parcels, and small bags to identify potentially hazardous items [35]. The fact that THz radiation is non-ionizing, and therefore poses no known health risk to biological cells except for heating, has motivated its application to the human body, where ionizing radiation, i.e., ultraviolet, X-ray, and gamma ray, raises high health risks. Therefore, THz imaging is suitable for the stand-off detection of items such as firearms, bombs,
and explosive belts hidden beneath clothing in airports, train stations, and border crossings [60].
_Terahertz Positioning:_ It is envisioned that 6G and beyond will be required to offer highly accurate positioning and localization in both indoor and outdoor environments, in addition to communications services, which GNSS and conventional multi-cell-based localization techniques using low-frequency bands fail to provide. Devices incorporating THz sensing and THz imaging will likely also provide centimeter-level localization anywhere [114]. On the other hand, leveraging THz imaging for localization has unique benefits compared to other methods. THz imaging can localize users in non-line-of-sight (NLoS) areas, even if their travel paths to the base station experience more than one reflection (e.g., multiple bounces). High-frequency localization techniques are based on the concept of simultaneous localization and mapping (SLAM) [113], in which the accuracy is improved by collecting high-resolution images of the environment; the THz imaging mentioned above can provide such high-resolution images. SLAM-based techniques consist of three main steps: imaging the surrounding environment, estimating ranges to the user, and fusing the images with the estimated ranges. Since SLAM deals with relatively slow-moving objects, there is sufficient time to process high-resolution THz measurements. Such measurements can hold sensing information, resulting in complex state models comprising the fine-grained location, size, and orientation of target objects, as well as their electromagnetic properties and material types [91].
## IV THz Propagation and Channel Characterization
Although we have a wealth of experience with radio channels over lower frequency bands, we do not have the same for the THz band, which exhibits several distinct characteristics of its own [63, 115]. Like microwave and mmWave signals, THz signals suffer from free-space path loss (FSPL), the inherent attenuation when an electromagnetic wave is radiated from an isotropic antenna. Unfortunately, a receive antenna for the THz band has a weak ability to capture the radiated power due to the tiny wavelength. This leads to the propagation phenomenon that FSPL grows proportionally with the carrier frequency.
Since the wavelength of a THz wave falls into the same order of magnitude as the dimensions of molecules in the atmosphere and human tissue, strong molecular absorption and particle scattering, which are negligible over low-frequency bands, become significant [64]. To be specific, water vapor and oxygen molecules suspended in the atmosphere impose an incredible loss up to approximately \(20\,000\,\mathrm{dB}\) per kilometer in the worst case. In addition to this gaseous absorption from water molecules, liquid water droplets, in the form of suspended particles into clouds, rain-falling hydrometeors, snowflakes, and fogs, can attenuate the signal strength since their dimensions are comparable to the THz wavelength. Furthermore, surrounding physical objects become sufficiently large in size for scattering, and ordinary surfaces are also too rough to make specular reflections. As a result, a THz wave is susceptible to blockages like buildings, furniture, vehicles, foliage, and even humans.
Full knowledge of signal propagation characteristics and accurate channel modeling is mandatory for designing transmission algorithms, developing network protocols, evaluating system performance, and deploying commercial networks. Therefore, this section comprehensively characterizes THz signal propagation, including path loss, atmospheric absorption, weather effects, and blockage, aiming to provide readers with the prerequisite knowledge of THz channels, as the basis for designing THz communications and sensing in 6G and beyond.
### _High Free-Space Path Loss_
When an isotropic radiator feeds an EM wave into free space, the energy evenly spreads over the surface of an ever-increasing sphere, as illustrated in Fig. 4. The metric _effective isotropic radiated power (EIRP)_ indicates the maximal energy in a particular direction relative to a unity-gain isotropic antenna. Hence, it equals the product of the transmit power \(P_{t}\) and the transmit antenna gain \(G_{t}\) in the direction of a receive antenna. The _law of conservation of energy_ states that the total energy contained on the surface of a sphere of any radius \(d\) remains constant [116]. Power flux density, namely the power flow per unit area of the incident field at the antenna, is equivalent to the EIRP divided by the surface area of a sphere with radius \(d\), i.e., \(\frac{P_{t}G_{t}}{4\pi d^{2}}\). Since the received power captured by a receive antenna is proportional to its aperture \(A_{r}\), we have
\[P_{r}=\left(\frac{P_{t}G_{t}}{4\pi d^{2}}\right)A_{r}. \tag{1}\]
Meanwhile, the gain of a receive antenna \(G_{r}\) depends on its effective aperture area under the following relationship
\[A_{r}=G_{r}\left(\frac{\lambda^{2}}{4\pi}\right), \tag{2}\]
where \(\lambda\) stands for the wavelength of the transmit signal. Substituting (2) into (1) yields the well-known Friis transmission equation presented by Harald T. Friis in 1946 [117], i.e.,
\[P_{r}=P_{t}G_{t}G_{r}\left(\frac{\lambda}{4\pi d}\right)^{2}. \tag{3}\]
Fig. 4: Illustration of the free-space radiation of an electromagnetic wave, where the energy of a signal radiated from a transmit antenna spreads isotropically. At a propagation distance of \(d\), therefore, the energy evenly distributes over the surface of a sphere with a radius of \(d\).
Due to the large dynamic range across several orders of magnitude, we usually express the strength of signals and noise in decibels (dB). Free-space path loss is defined as the ratio between the transmit and receive power on a logarithmic scale:
\[\mathrm{PL}=10\lg\frac{P_{t}}{P_{r}}=20\lg\left(\frac{4\pi d}{\lambda}\right) -10\lg\left(G_{t}G_{r}\right), \tag{4}\]
which implies that the FSPL increases by 20 dB per decade (i.e., per tenfold increase) of the carrier frequency. For example, there is an extra loss of 20 dB when going from 30 GHz to 300 GHz under the same conditions of \(d\), \(G_{t}\), and \(G_{r}\). Note that the extremely high path loss at THz frequencies arises only from the small aperture area of the receive antenna, which, by (2), is proportional to the square of the wavelength at its operating frequency.
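To make Eq. (4) concrete, the following minimal Python sketch evaluates the FSPL at a few carrier frequencies; the chosen frequencies, distance, and unity antenna gains are illustrative assumptions rather than values taken from any measurement campaign discussed here.

```python
import math

def fspl_db(freq_hz: float, distance_m: float, gt_db: float = 0.0, gr_db: float = 0.0) -> float:
    """Free-space path loss of Eq. (4) in dB; antenna gains default to 0 dBi (isotropic)."""
    wavelength_m = 3e8 / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / wavelength_m) - gt_db - gr_db

# Illustrative comparison at an assumed distance of 100 m with unity-gain antennas:
for f_hz in (3e9, 30e9, 300e9):
    print(f"{f_hz / 1e9:5.0f} GHz: FSPL = {fspl_db(f_hz, 100.0):.1f} dB")
```

Each tenfold increase in carrier frequency adds 20 dB of free-space loss, consistent with the 20 dB-per-decade behavior noted above.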
FSPL does not properly reflect the realistic characteristics of wireless propagation because the physical environment of a terrestrial wireless communications system is distinct from free space. In addition to a LoS path, an electromagnetic wave is reflected, diffracted, and scattered by the surrounding objects in an urban or indoor scenario, forming NLoS paths between a pair of transmit and receive antennas. Due to the differences in energy attenuation, propagation delays, and phase rotations, the additional copies of an electromagnetic wave referred to as multi-path components cause an extra drop in the received signal power. Hence, extensive measurements and accurate modeling of path loss in different frequencies, distances, and propagation environments have to be carried out. The novel uses of the THz band in 6G and beyond such as kiosk downloading, nano-scale networks and wireless backhaul raise many peculiarities, which need to be well investigated.
Many research groups have conducted indoor and outdoor channel measurements to characterize the path-loss effects in THz wireless communications scenarios. In 2020, a research team at the University of Southern California employed a channel sounder and a frequency extender for THz channel measurements and built a platform for the exploration of THz communications within a short distance of up to 5.5 m [118]. Later, this team presented the first set of double-directional outdoor measurements over a 100 m distance in urban scenarios based on radio frequency over fiber (RFoF) extensions [119], and the results in an urban environment on a linear moving route for a distance up to 15 m [120]. In addition, their findings reveal that metallic-covered surfaces lead to a considerable enhancement of multi-path, indicating a critical impact of building materials [121] at THz frequencies. Shanghai Jiao Tong University, along with Huawei, has investigated the LoS and NLoS path loss by carrying out measurements in indoor scenarios at frequencies ranging from 130 GHz to 320 GHz with a frequency-domain VNA-based sounder [122, 123, 124].
The researchers at New York University (NYU) have also indicated that the factory scenario is a rich-scattering environment due to a massive number of metal structures and objects. In their recent paper [125], path-loss analyses in a factory building based on sub-THz channel measurements with a maximal distance of 40 m using directional horn antennas at 142 GHz were provided. Based on those measurement setups, NYU has also investigated the urban microcell (UMi) large-scale path loss at 28, 38, 73, and 142 GHz [99]. The outcomes indicate that the path-loss exponents are similar across all frequencies, implying that the inter-site separation of base stations does not need to change as frequencies shift towards THz, as antenna gains grow quadratically with frequency if the antenna aperture remains constant [126, 127].
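The observation that antenna gains grow quadratically with frequency for a fixed aperture follows directly from Eq. (2), since \(G=4\pi A/\lambda^{2}\). The short sketch below illustrates this; the 4 cm x 4 cm aperture and the two frequencies are assumptions chosen purely for illustration.

```python
import math

def aperture_gain_dbi(freq_hz: float, aperture_m2: float) -> float:
    """Gain of an antenna with a fixed effective aperture, G = 4*pi*A/lambda^2 (cf. Eq. (2))."""
    wavelength_m = 3e8 / freq_hz
    return 10 * math.log10(4 * math.pi * aperture_m2 / wavelength_m ** 2)

# An assumed fixed aperture of 4 cm x 4 cm, evaluated at two example frequencies:
for f_hz in (28e9, 142e9):
    print(f"{f_hz / 1e9:5.0f} GHz: gain = {aperture_gain_dbi(f_hz, 0.04 * 0.04):.1f} dBi")
```

The roughly 14 dB gain difference between 28 GHz and 142 GHz equals \(20\log_{10}(142/28)\), which is exactly what is needed to offset the additional free-space loss at the higher frequency.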
### _Atmospheric Absorption_
Although gaseous molecules absorb some energy of an EM wave, atmospheric absorption is negligible over the sub-6 GHz band, such that traditional cellular systems do not take it into account when calculating the link budget. However, this effect is substantially magnified for THz waves, and the absorption loss becomes extremely large at certain frequencies. Such attenuation arises from the interaction of an EM wave with a gaseous molecule [128]. When the THz wavelength approaches the size of molecules in the atmosphere, the incident wave causes rotational and vibrational transitions in polar molecules. These processes have a quantum nature where resonances take place at particular frequencies depending on the internal molecular structure, leading to large absorption peaks at certain frequencies [129].
As a main gaseous component of the atmosphere, oxygen plays a major role in atmospheric absorption under clear air conditions. In addition, water vapor suspended in the air strongly affects the propagation of an electromagnetic wave. The attenuation caused by water vapor dominates the THz band, except only a few specific spectral regions where the effect of oxygen is more evident. A more extensive study of atmospheric absorption is often carried out in radio astronomy and remote sensing. However, from the perspective of wireless communications, the absorption of some additional molecular species, e.g., oxygen isotopic species, oxygen vibrationally excited species, stratospheric ozone, ozone isotopic species, ozone vibrationally excited species, a variety of nitrogen, carbon, and sulfur oxides, is usually negligible compared with that of water vapor and oxygen [130].
Responding to the need to accurately estimate the gaseous absorption at any air pressure, temperature, and humidity, the ITU-R conducted a study item and recommended a mathematical procedure to model these attenuation characteristics. As a combination of the individual spectral lines from oxygen and water vapor, along with small additional factors for the non-resonant Debye spectrum of oxygen below 10 GHz, pressure-induced nitrogen absorption over 100 GHz, and a wet continuum to account for the excess absorption from water vapor, the ITU-R P676 model [131] is built to calculate the value of atmospheric attenuation at any frequency from 1 GHz to 1000 GHz. Alternatively, the high-resolution transmission molecular absorption (HITRAN) database [132], which is a compilation of spectroscopic parameters, can be used to
predict and analyze the transmission in the atmosphere. To compute the atmospheric absorption at THz frequencies, one needs to extract spectroscopic data from the HITRAN database and then apply radiative transfer theory.
The atmospheric attenuation from 1 GHz to 1000 GHz is illustrated in Fig. 5. Assuming the air is perfectly dry with a water-vapor density of 0 g/m\({}^{3}\), only the effect of oxygen molecules exists, as indicated by _Oxygen_ in the figure. Meanwhile, _Standard Air_ shows the usual atmospheric condition at sea level (with an air pressure of 1013.25 hPa, a temperature of 15 \({}^{\circ}\)C, and a water-vapor density of 7.5 g/m\({}^{3}\)). Except for two frequency windows centered on 60 GHz and 118.7 GHz, where many oxygen absorption lines merge, the atmospheric attenuation due to water vapor dominates the THz band. As we can see, this absorption loss reaches a peak of approximately 20 000 dB/km at 560 GHz. In other words, a short distance of only 1 m brings a loss of approximately 20 dB, which is prohibitive for efficient wireless communications. In contrast, the atmospheric attenuation at the sub-6 GHz band is on the order of 0.01 dB/km, which is negligible.
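For a quick sense of how such specific attenuations enter a link budget, the sketch below adds the absorption accumulated over the path to the free-space loss of Eq. (4); the dB/km values are rough readings from Fig. 5 and are assumptions for illustration, not outputs of the ITU-R P676 model or the HITRAN database.

```python
import math

def link_loss_db(freq_hz: float, distance_m: float, absorption_db_per_km: float) -> float:
    """Total loss: FSPL of Eq. (4) with unity gains plus gaseous absorption over the path."""
    wavelength_m = 3e8 / freq_hz
    fspl = 20 * math.log10(4 * math.pi * distance_m / wavelength_m)
    return fspl + absorption_db_per_km * distance_m / 1000.0

# Roughly assumed specific attenuations (read off Fig. 5, illustrative only):
print(link_loss_db(300e9, 100, 5))       # ~300 GHz window: absorption adds only ~0.5 dB over 100 m
print(link_loss_db(560e9, 100, 20000))   # 560 GHz peak: absorption adds ~2000 dB over 100 m
```

Inside the transmission windows the absorption term is almost negligible compared with the FSPL, whereas at an absorption peak it dominates the link budget after only a few meters.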
It should be noted that for some existing studies of channel characterization in the THz band, which focus on very short-range indoor coverage or nano-communications [133], the atmospheric effect is typically not a major factor. However, for macro-scale THz communications, especially in outdoor environments, atmospheric absorption should be taken into consideration. As early as the 1980s, some researchers presented studies for various atmospheric conditions and elevation angles. For instance, Ernest K. Smith calculated the absorption values due to atmospheric oxygen and water vapor with frequencies spanning from 1 GHz to 340 GHz [134]. Hans J. Liebe and Donald H. Layton [135] performed laboratory measurements of water vapor attenuation at 138 GHz and formed an empirical propagation model that utilizes a local line base to address frequencies up to 1 THz. Molecular absorption and power emission have also been speculated to cause molecular noise, but the phenomenon has not been confirmed by measurements. In [136], Kokkoniemi _et al._ studied the molecular absorption noise from different perspectives and gave their derivations and the general ideas behind the noise modeling.
Some works [137, 96, 138] revealed that oxygen molecules are the main cause of atmospheric absorption over the mmWave band, i.e., at 60 GHz and 118.7 GHz as predicted by [131], while THz signals primarily suffer from water vapor. In [139], it is highlighted that propagation losses at THz frequencies are more heavily affected by atmospheric absorption than those in the mmWave band. Recently, a joint EU-Japan research project under the framework of Horizon 2020, i.e., _ThoR: TeraHertz end-to-end wireless systems supporting ultra-high data Rate applications_, developed an automatic planning algorithm for backhaul links operating at 300 GHz. This algorithm has been tested and evaluated in a realistic scenario of an ultra-dense network in Hannover [140, 141], which includes both wireless and fiber backhaul, taking into account various atmospheric effects. Recent standardization efforts in IEEE 802.15.3d [142] also show that high atmospheric absorption can be alleviated by a proper choice of the carrier frequency, e.g., the band from 275 GHz to 325 GHz, while still offering drastically more bandwidth than 5G.
Fig. 5: Illustration of atmospheric absorption from 1 GHz to 1000 GHz, where the legend _Standard Air_ stands for a normal atmosphere condition with air pressure 1013.25hPa, temperature 15 \({}^{\circ}\)C, and water-vapor density 7.5 g/m\({}^{3}\), according to [131], while _Oxygen_ highlights the effect of oxygen absorption by setting water-vapor density to 0 g/m\({}^{3}\). Except for frequencies centered at 60 GHz and 118.7 GHz, the effect of water vapor dominates most of the whole high-frequency band.
### _Weather Effects_
Besides the gaseous absorption, an additional atmospheric impact in an outdoor environment is the weather [143]. Extensive studies of satellite communications channels since the 1970s provided many insights into the propagation characteristics of mmWave and THz signals under various weather conditions [144]. The outcomes revealed that, like water vapor in the atmosphere, liquid water droplets, in the form of suspended particles in clouds, fogs, snowflakes, or rain-falling hydrometeors, absorb or scatter the incident signals since their physical dimensions are of the same order as the THz wavelength. Such attenuation is not as strong as the path loss and atmospheric absorption but still needs to be taken into account for proper channel characterization [145].
A cloud is an aggregate of tiny water particles (with a dimension as small as 1 µm) or ice crystals (from 0.1 mm to 1 mm). Water droplets, in the form of raindrops, fogs, hailstones, and snowflakes, are oblate spheroids with radii up to a few tens of millimeters or generally perfect spheres with radii below 1 mm. The size of water droplets is comparable to the THz wavelength (0.1 mm to 1 mm). As a result, water droplets attenuate the power of THz waves through absorption and scattering. The ITU-R provided a power-law equation to model the rain attenuation as a function of distance, rainfall rate in millimeters per hour (mm/h), and the mean dimension of raindrops [146]. Fig. 6 shows the rain attenuation described by this ITU-R P838 model from 1 GHz to 1000 GHz and rain rates from light rain (1 mm/h) to heavy rain (200 mm/h).
Such attenuation can be treated as an additional loss that is simply added on top of the path loss and gaseous absorption. Besides the ITU-R model, there are other models, such as the simplified one given in [147], to describe the rain attenuation. Measurements at 28 GHz demonstrated that heavy rainfall with a rain rate of more than 25 mm/h brings attenuation of about 7 dB/km. Extreme attenuation of up to 50 dB/km occurs at a particular frequency of 120 GHz and an extreme rain rate of 100 mm/h to 150 mm/h. As a rule of thumb, rain introduces an excess attenuation of approximately 10 dB to 20 dB over a distance of 1 km at the THz band. Furthermore, the attenuation of clouds and fog can be calculated by the ITU-R P840 model [148] under the assumption that the signals go through a uniform fog or cloud environment. Until now, an equivalent ITU-R model to calculate snow attenuation does not exist, and snow particles are simply treated as raindrops.
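As an illustration of the power-law form used by the ITU-R P838 model, the sketch below computes the specific rain attenuation \(\gamma_{R}=kR^{\alpha}\) for a few rain rates; the coefficients \(k\) and \(\alpha\) depend on frequency and polarization and are tabulated in the recommendation, so the values used here are placeholder assumptions rather than the recommended coefficients.

```python
def rain_attenuation_db_per_km(rain_rate_mm_h: float, k: float, alpha: float) -> float:
    """Specific rain attenuation following the ITU-R P.838 power law, gamma_R = k * R^alpha."""
    return k * rain_rate_mm_h ** alpha

# k and alpha are frequency- and polarization-dependent (tabulated in ITU-R P.838);
# the values below are placeholder assumptions used only for illustration.
k_assumed, alpha_assumed = 1.2, 0.73
for rate_mm_h in (1, 25, 100):   # light, heavy, and extreme rain
    gamma = rain_attenuation_db_per_km(rate_mm_h, k_assumed, alpha_assumed)
    print(f"{rate_mm_h:3d} mm/h -> {gamma:.1f} dB/km")
```

The resulting excess loss over a link is obtained by multiplying the specific attenuation by the effective path length through the rain.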
In addition to prior studies of weather effects on the propagation of mmWave signals [145, 149, 143], many recent efforts of investigating THz wave propagation in the presence of rains and fogs have been undertaken through theoretical analysis and experiments such as [150, 151, 152, 153, 154, 155]. Eom-Bae Moon _et al._[150] measured THz pulse propagation through a 186 m distance under different weather conditions such as rain falling at 3.5 mm/h and snow falling at 2 cm/h, demonstrating the potential of LoS THz communications, THz sensing, and THz imaging through fog and smoke. J. Ma _et al._[151] measured the effects of rain attenuation on 0.1 THz to 1 THz frequencies through THz time-domain spectroscopy and a rain chamber, which was designed to generate controllable and reproducible rain conditions. The measurements agreed with theoretical values calculated using Mie scattering theory. In [152], the authors studied rain attenuation with different rainfall rates at mmWave (77 GHz) and low-THz (300 GHz) frequencies. It revealed that the measured results at 77 GHz best agree with the ITU-R P838 model [146] whereas the calculation based on Mie scattering and the Weibull distribution are best fit to the measured data at 300 GHz. Interestingly, the characteristics of rain attenuation have been investigated using raindrop size distribution collected in Indonesia [153]. It reported that attenuation is smaller than the ITU-R model for lower frequencies below 10 GHz but larger than the ITU-R model for higher frequencies over 100 GHz, indicating that the regional variation of rain attenuation should be considered. In [156], Juttula _et al._ analyzed the possible co-channel interference due to rain droplets, showing that the typical interference levels remain modest.
In addition to rain attenuation, some research efforts have been carried out to make clear the effects of foggy conditions on THz signal propagation. Y. Golovachev _et al._ theoretically and experimentally studied the effects of water droplets suspended in the atmosphere on the propagation of mmWave and THz waves, using a frequency-modulated continuous-wave high-resolution radar operating at 330 GHz [154]. Y. Yang _et al._ experimentally demonstrated the propagation of THz signals through 137 m of dense fog with approximate visibility of 7 m, and reported the observed THz attenuation in [155]. Besides, propagation loss during snowfall has also been studied through measurement campaigns. With the aim of investigating the effect of adverse weather conditions on THz waves and assessing their feasibility for outdoor applications, F. Norouzian _et al._ assessed the attenuation through various intensities of snowfall experimentally at 300 GHz in their work [157], as well as low-THz frequencies (100 GHz to 300 GHz) in [158]. Unlike other weather conditions, there is no theoretical basis for snow because of the challenging nature of defining snowflake shape and size distributions. Nevertheless,
results from the measurements indicate that snow attenuation at 300 GHz is less than 20 dB/km for snowfall rates below 20 mm/h [158]. Last but not least, work has also been done to investigate one of the most common types of contaminant in outdoor environments, i.e., dust or sand in the atmosphere. The authors of [159] quantified the attenuation of 150 GHz and 300 GHz THz waves in sand for the outdoor scenario of low-THz sensing.
Fig. 6: Rain attenuation measured in dB/km for terrestrial communications links as a function of rain rate and frequency, covering the range from 1 GHz to 1000 GHz. The rain rate is measured in millimeters per hour (mm/h), averaged over a period of time such as one hour. The values from light rain (1 mm/h) to heavy rain (200 mm/h) are illustrated. The peak of attenuation occurs in the frequency band from 100 GHz to 300 GHz since the wavelength in this band matches the size of raindrops.
### _Blockage Loss_
Due to the tiny wavelength at THz frequencies, the dimensions of surrounding physical objects are sufficiently large for scattering, while specular reflections on ordinary surfaces become difficult. On the other hand, THz systems rely heavily on pencil beams to extend the effective propagation distance. As a result, a direct path between the transmitter and receiver is desired. However, the LoS THz link is highly susceptible to being blocked by macro objects, such as buildings, furniture, vehicles, and foliage, and micro objects, e.g., humans, in comparison with the traditional sub-6 GHz band [160].
A single blockage might cause a signal loss of a few tens of dB. The extent of foliage loss is related to the depth of vegetation [147], where 17 dB, 22 dB, and 25 dB are observed at 28 GHz, 60 GHz, and 90 GHz, respectively. Blockage loss due to vehicles [161] is determined by the vehicle type and geometry, ranging from 20 dB at the windshield glass to 50 dB at the engine area. Human-body blockage imposes a more profound influence because of the dynamic movement of humans and the close interaction of THz devices with humans. The loss attributed to self-body blockage [162] is expected to reach approximately 40 dB at the THz band. Such blockage losses can dramatically reduce the signal power and may even lead to a complete outage. Hence, it is necessary to clarify the traits of blockage and find effective solutions to avoid being blocked or to quickly recover the connection when a link gets blocked.
Statistical models can be applied to estimate the value of blockage loss. For instance, self-body blockage loss is approximated as a Boolean model where a human is treated as a three-dimensional cylinder with centers forming a two-dimensional (2D) Poisson point process (PPP). A LoS blockage probability model assumes that a link of distance \(d\) will be LoS with probability \(p_{L}(d)\) and NLoS otherwise. The expressions of \(p_{L}(d)\) are usually obtained empirically for different settings. The blockage probability for a LoS link with a self-body blockage can be estimated by the method given in [163]. In [164], 3GPP specified an urban macro-cell scenario, where a calculation method for a LoS blockage probability is given. The same model applies to the urban micro-cell scenario, with a smaller distance range. There are some variations in the LoS probability expressions across different channel measurement campaigns and environments, e.g., the model developed by NYU [160].
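As a minimal sketch of such a LoS probability model, the function below implements one widely used 3GPP-style expression for an urban micro-cell street-canyon setting; the exact form and coefficients vary by scenario and should be taken from [164], so the 18 m and 36 m parameters here are assumptions used only for illustration.

```python
import math

def p_los_umi(d2d_m: float) -> float:
    """LoS probability p_L(d) in a 3GPP-style urban micro-cell form (illustrative parameters)."""
    if d2d_m <= 18.0:
        return 1.0
    return 18.0 / d2d_m + math.exp(-d2d_m / 36.0) * (1.0 - 18.0 / d2d_m)

for d_m in (10, 30, 100, 200):
    print(f"d = {d_m:4d} m -> P_LoS = {p_los_umi(d_m):.2f}")
```

The probability decays with distance, reflecting that longer links are increasingly likely to be obstructed and must rely on NLoS propagation or link re-establishment.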
Some prior works also revealed quantitative results to estimate the blockage loss at THz frequencies. For instance, a dynamic blockage causes an extra loss of around 15 dB to 40 dB [165], while the blockage duration is dependent on the density and speed of dynamic blockers that can last longer than 100 ms [162]. Some researchers noted that in the absence of an LoS path, relying on reflected signal paths may not provide adequate signal power. Reflection from rough surfaces like concrete or brick walls attenuates THz signals with a power drop ranging from 40 dB to 80 dB [153]. Another research work [95] shows that the blockage caused by vehicles leads to a loss of 25 dB to 60 dB over the frequency band of 300 GHz. Moreover, some works studied the impact of micro-mobility such as shakes and rotations of user equipment even when the user is in a stationary position [161]. Compared to blockages caused by human bodies, the study of micro-mobility blockages is still in its early stages. Some studies suggest that micro-mobility follows a Markov pattern where user behavior is not controlled [166], while others assume that the user behavior is more regulated [167].
## V THz Channel Measurement and Modeling for 6G and Beyond
Novel 6G usage scenarios such as kiosk downloading, nano-communications, wireless backhauling, and integrated communications and sensing pose many peculiarities in transmission distances, hardware capabilities, and propagation environments [63]. The unique propagation characteristics and particular requirements motivate the research community to rebuild its knowledge of and experience with wireless channels. Extensive measurement campaigns and channel modeling efforts are expected for the success of deploying 6G and beyond. On the other hand, many challenges, such as the lack of capable measurement equipment and of novel modeling methodologies, remain barriers ahead [101]. This section surveys the state-of-the-art measurement campaigns and channel modeling efforts for the THz band in the context of 6G and beyond.
### _THz Channel Measurement_
Profound knowledge of propagation characteristics and proper channel models are prerequisites for designing transmission algorithms, developing network protocols, and evaluating performance for THz communications and sensing. Channel measurement is the most critical method for a full understanding of THz signal propagation and subsequently accurate THz channel models. Due to unique characteristics, THz channel measurement requires cutting-edge sounding equipment. There are two major kinds of measurement devices for THz channels, i.e. vector network analyzer (VNA) [168] and channel sounder (CS) [169].
Both devices acquire channel information by transmitting a reference signal and processing the corresponding received signal at the receiver. But their measurements are implemented in distinct ways. The VNA operates in the _frequency domain_[168], which measures the channel transfer function (CTF) for a specific narrowband channel at each time and sequentially scans all frequency points within the band of interest. The CTF of each narrowband channel is modeled by a scalar, and the wideband CTF is obtained by aggregating a large number of narrowband CTFs. Channel impulse response (CIR) is the inverse discrete Fourier transform (IDFT) of the wideband
CTF. This method inherits the advantages of narrowband channel measurements, such as high precision due to individual calibration for each frequency point, and low measurement noise. However, it takes a long time and cannot capture dynamic channel effects.
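The post-processing step described above, turning a swept wideband CTF into a CIR via the IDFT, can be sketched in a few lines of Python; the two-path channel, sweep parameters, and 140 GHz start frequency below are synthetic assumptions used only to illustrate the transform, not settings of any sounder cited here.

```python
import numpy as np

n_points = 2048                      # number of swept frequency points
delta_f = 10e9 / n_points            # frequency step for an assumed 10 GHz sweep
freqs = 140e9 + np.arange(n_points) * delta_f

# Synthetic two-path channel (assumed delays and gains) observed as a wideband CTF:
delays_s = np.array([20e-9, 35e-9])
gains = np.array([1.0, 0.3])
ctf = (gains[None, :] * np.exp(-2j * np.pi * freqs[:, None] * delays_s[None, :])).sum(axis=1)

cir = np.fft.ifft(ctf)                         # CIR as the IDFT of the wideband CTF
delay_axis_s = np.arange(n_points) / (n_points * delta_f)
strongest = np.argsort(np.abs(cir))[-2:]       # indices of the two strongest taps
print(np.sort(delay_axis_s[strongest]) * 1e9)  # ~[20. 35.] ns, the assumed path delays
```

The strongest taps of the recovered CIR fall at the assumed path delays, with a delay resolution equal to the reciprocal of the swept bandwidth.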
A channel-sounding [169] approach operates in the _time domain_, taking advantage of the classical technique called direct-sequence spread spectrum (DSSS). The transmitter of a CS sends a maximal-length sequence (m-sequence) as a stimulus signal so as to achieve a Dirac-impulse-shaped autocorrelation function. Since a received signal is the convolution of a transmitted signal and a time-varying channel, cross-correlating the received signal with a delayed version of the m-sequence yields the CIR of a measured wideband channel. A time-domain CS works much faster than a VNA with frequency scanning and can capture dynamic channel variations. However, its precision is disturbed by strong thermal noise, which is proportional to the signal bandwidth. Measurement campaigns corresponding to these methods are summarized below.
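To illustrate the correlation step of such a sounder, the sketch below probes a synthetic channel with a repeated m-sequence and recovers the CIR by cross-correlation; the LFSR length, tap positions, path delays, gains, and noise level are all assumptions chosen for illustration rather than parameters of any sounder discussed in this survey.

```python
import numpy as np

def m_sequence(register_len: int = 7, taps=(7, 6)) -> np.ndarray:
    """Maximal-length sequence (+1/-1 chips) from a simple Fibonacci LFSR.
    Taps (7, 6) correspond to the primitive polynomial x^7 + x^6 + 1 (period 127)."""
    state = [1] * register_len
    chips = []
    for _ in range(2 ** register_len - 1):
        chips.append(state[-1])
        feedback = 0
        for t in taps:
            feedback ^= state[t - 1]
        state = [feedback] + state[:-1]
    return 1.0 - 2.0 * np.array(chips, dtype=float)

# Synthetic two-path channel (assumed tap positions and gains, in chips) plus noise:
rng = np.random.default_rng(0)
probe = np.tile(m_sequence(), 4)                           # repeated sounding waveform
channel = np.zeros(60); channel[5], channel[23] = 1.0, 0.4
received = np.convolve(probe, channel)[: probe.size] + 0.05 * rng.standard_normal(probe.size)

# Cross-correlating with one period of the m-sequence yields an estimate of the CIR:
period = m_sequence()
cir_estimate = np.correlate(received, period, mode="full")[period.size - 1:] / period.size
print(np.sort(np.argsort(cir_estimate[:60])[-2:]))         # strongest taps near chips 5 and 23
```

Because the m-sequence has a near-ideal periodic autocorrelation, the correlation output exhibits sharp peaks at the assumed tap delays even in the presence of noise.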
When deciding on a measurement technique for THz channels, several factors such as signal bandwidth, speed, distance, power consumption, cost, and complexity of the measurement system need to be taken into consideration. In practice, three methodologies are employed for measuring THz channels [63], i.e., _frequency-domain channel measurement using a VNA, time-domain channel measurement using sliding correlation, and time-domain channel measurement using time-domain spectroscopy (TDS)_. Based on these three measurement approaches, we provide an overview of the state-of-the-art measurement campaigns as follows.
#### V-A1 Frequency-Domain VNA Measurement
As introduced, the VNA is a measuring device utilized to assess the response of a component or network at one port to an incoming wave at another port. The frequency-domain channel measurements performed using a VNA are founded on the principles of linear signal systems. It is important to note that commercial VNAs typically have a frequency-range limitation of less than 67 GHz, requiring the use of up-conversion modules for measurements in the THz band. Most of the VNA-based measurement campaigns were focused on frequencies ranging from 140 GHz to 750 GHz and utilized directional horn antennas with gain values between 15 and 26 dBi. The separation between the transmitter and receiver (i.e., the Tx-Rx distance) varied from a minimum of 0.1 m to a maximum of 14 m, and in some cases was extended to a distance of 100 m with the use of the RFoF technique [119].
University of Surrey collaborated with the National Physical Laboratory of the United Kingdom to establish a channel measurement system covering the frequency range of 500-750 GHz [170]. This system utilized the Keysight PNA-X VNA, which was configured with VDI extender heads. The measurement was conducted for LoS scenarios within a distance range of 0.23 m. A research team from Koc University in Turkey also measured THz LoS scenarios with the Tx-Rx distance ranging from 0.01 m to 0.95 m over the frequency band of 260-400 GHz [171]. The measurement was conducted using a VNA in conjunction with a sub-harmonic mixer. The effect of linear and angular displacement between the transmitter and receiver was also investigated. Some researchers from the University of Southern California made advancements to their existing 140 GHz VNA-based measurement equipment, enabling it to support frequencies ranging from 140 GHz to 220 GHz. Indoor LoS measurement in an office setting was conducted, with the measurement distance ranging from 0.5 m to 5.5 m [118].
In addition to LoS, measurement campaigns for NLoS scenarios were also performed with the presence of a reflective surface [172]. Georgia Institute of Technology developed a measurement system for the frequency band of 280-320 GHz using an N5224A PNA VNA and VDI transceivers (Tx210/Rx148). The system was utilized to measure short-range scenarios, such as a desktop environment up to 0.7 m [173] and computer motherboards [174] at frequencies of 300-312 GHz, as well as a data center with a propagation distance of 0.4-2.1 m at 300-320 GHz [175]. They also evaluated the THz wave propagation in a realistic data-center environment. The measurements were taken at a Tx-Rx distance of 1.75 m and 2.28 m, within the frequency range of 300-312 GHz [176].
The previously mentioned studies examined how THz waves propagate and are affected by obstacles when the antennas are in fixed positions. However, for some applications, THz signals may arrive from multiple directions, which also needs to be investigated. A joint team from Shanghai Jiao Tong University (SJTU) and Huawei built VNA-based measurement equipment for 140 GHz. Indoor measurements were conducted in a typical meeting room and an office room at distances ranging from 1.8 m to 7.3 m and from 3.75 m to 20 m, respectively. This team analyzed the behavior of multipath THz signal components over time and direction, and studied the relationships among different channel parameters, as reported in [177, 123, 178]. Later, the equipment was enhanced to measure 220 GHz THz signals in the same scenarios, with distances ranging from 1.8 m to 7.3 m for the meeting room and 2 m to 30 m for the office room, respectively [122]. The receiver was mounted on a rotation unit driven by step motors. Moreover, Y. Wang _et al._ from SJTU [179] built a VNA-based measurement system covering the THz band from 260 GHz to 400 GHz. Indoor channel measurements in the frequency range of 306-321 GHz were performed in an L-shaped hallway and a long corridor on the campus, with distances ranging from 7.7-25 m and 5-31 m, respectively.
#### V-A2 Time-Domain Sliding Correlation Measurement
Some studies have measured the characteristics of THz waves using the sliding correlation (SC) method over frequencies up to 300 GHz. A team from NYU developed a measurement system that can switch between two modes: sliding correlation and real-time spread spectrum [180]. Using the SC mode, they focused on how THz waves at 140 GHz reflect and scatter [181]. They measured THz channels in indoor scenarios including offices, conference rooms, classrooms, long hallways, open-plan cubicles, elevators, and factory buildings; results are reported in [182, 183, 184, 125]. Besides, this team also conducted directionally resolved outdoor wideband measurement campaigns in an urban microcell environment [185, 126, 99].
A research team at Technische Universitat Braunschweig
in Germany has been dedicated to channel measurement, simulation, and antenna design for THz frequencies up to 300 GHz since 2007 [186, 187]. They built a channel sounder at 300 GHz that uses the sliding correlation method with m-sequences of order 12. The clock frequency is selected as 9.22 GHz and the bandwidth reaches 8 GHz, meaning most of the sequence power is focused within 8 GHz in the frequency domain [188]. A joint team from Technische Universitat Braunschweig and Beijing Jiao Tong University utilized this equipment to observe the propagation of mmWave and THz waves in railway scenarios [189]. They conducted channel measurements and modeling for a variety of specific scenarios such as train-to-train (T2T) [94], infrastructure-to-infrastructure (I2I) [94], train-to-infrastructure (T2I) [190], and intra-wagon [191, 192], covering the frequency range from 60 GHz to 300 GHz. These measurement activities were conducted in stationary environments, and the measurement of dynamic environments remains a challenge that has to be addressed later [193].
In cooperation with Technische Universitat Braunschweig, researchers at the University of Tampere in Finland investigated reflection and penetration losses at THz frequencies in vehicular communications [95, 194]. Petrov _et al._ measured signal transmission in automotive settings at 300 GHz, and Eckhardt _et al._ performed a thorough examination of 300 GHz signal transmission in both single-lane and multi-lane automotive scenarios. Eckhardt _et al._ also carried out measurements in an actual data center at 300 GHz with a channel sounder, dividing the environment into inter-rack and intra-rack components [195]. The study evaluated the path loss, power delay profile (PDP), and power angular spectrum (PAS), demonstrating the viability of wireless communications at 300 GHz in a data-center scenario.
#### V-A3 Time-Domain Spectroscopy Measurement
Time-domain spectroscopy is the most straightforward method for measuring impulse responses. It involves transmitting a train of extremely narrow pulses, where the period of the pulse train is greater than the maximum excess delay of the channel. The amplitude of each sample can be regarded as the amplitude of the channel impulse response at a delay equal to the time elapsed between the transmission of the excitation pulse and the sampling instant of the observation [63]. By sampling the received signal at a high speed in the time domain, the impulse response can be directly obtained. THz-TDS makes use of an extensive and scalable bandwidth in the THz frequency band. However, the large setup size and low output power limit its application scenarios [143]. To mitigate the power limitation, lenses are often used at both the transmitter and receiver to enhance the intensity of the pulse signal. The lens beam is highly concentrated, making it well-suited for measuring material properties such as reflection, scattering, and diffraction in the THz frequency range. Despite these measures, THz-TDS is primarily utilized for channel measurements over short distances, typically less than a few meters.
Many THz-TDS measurements utilized the equipment of Picometrix T-Ray 2000 THz-TDS, which has the capability to emit terahertz pulses with a bandwidth ranging from 0.1 THz to 3 THz [196, 143, 197]. Hossain _et al._ used the THz-TDS to assess the interference between THz devices in the 300 GHz frequency band and applied stochastic geometric techniques to model and analyze the interference [196]. For outdoor scenarios, Federici _et al._ measured the attenuation of THz signals due to weather conditions with the THz-TDS equipment, conducted theoretical analysis, and summarized the impact of various weather factors on THz communications links [197, 143]. Working in conjunction with Brown University, the team at Technische Universitat Braunschweig employed a THz-TDS to study the reflection coefficients [198, 199] and the scattering coefficients [200] of different indoor materials across the frequency band from 60 GHz to 1 THz.
THz-TDS is not well-suited for directional scanning due to its large size and narrow beam width. Moreover, the standardization of measurement, calibration, and data analysis using the TDS technology in the THz band remains to be established [201]. Among time-domain sounders, the real-time spread spectrum mode is faster than the sliding-correlation mode, but it sacrifices dynamic range. To address the diversity of communications scenarios and their varying requirements for speed, dynamic range, Tx-Rx distance, and others in THz channel measurement, it is worth exploring new channel measurement systems that combine both time-domain and frequency-domain methods.
### _THz Channel Modeling_
Developing a wireless communications system requires an accurate channel model that fully captures the major propagation characteristics for the operating carrier frequency. It allows wireless researchers and engineers to assess the performance of different transmission algorithms and medium-control protocols without having to conduct expensive and time-consuming real-world field measurements on their own. A large number of channel models, focusing on the sub-6 GHz frequency band for traditional cellular systems [164], have been built through curve fitting or analytical analysis based on field measurement data. These models reflect all propagation effects, both known and unknown, and therefore work well. Given the peculiarities of THz signal propagation, it is necessary to develop particular THz channel models for research, development, performance evaluation, and standardization [202] of THz communications and sensing in 6G and beyond.
Two widely used techniques for developing appropriate channel models are deterministic [203] and stochastic modeling [204]. The former utilizes the electromagnetic laws of wave propagation to determine the received signal strength at a particular location. The most popular deterministic modeling approach is known as ray tracing [205]. The parameters of each ray, such as attenuation, angle of departure, angle of arrival, propagation delay, and Doppler shift, can be computed by applying the geometric-optics rules of propagation, including the computation of path loss via the Friis transmission equation, the Fresnel equations for reflections, the Kirchhoff scattering theory, and the uniform theory of diffraction. Ray tracing is highly applicable for various static 6G applications at the THz band, e.g., indoor hot spots, wireless backhaul, and nano-networks.
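As a hedged illustration of how individual ray parameters are computed, the snippet below evaluates the Friis free-space path loss of a LoS ray and attenuates a reflected ray by a Fresnel (TE) reflection coefficient. The 300 GHz carrier, path lengths, incidence angle, and wall permittivity are assumed values, and molecular absorption is ignored.

```python
import numpy as np

c = 3e8
f = 300e9                        # assumed carrier frequency
lam = c / f

def fspl_db(d):
    """Free-space path loss (Friis) in dB at distance d (metres)."""
    return 20 * np.log10(4 * np.pi * d / lam)

def fresnel_gamma_te(theta_i, eps_r):
    """Fresnel reflection coefficient (TE polarisation), air onto a dielectric of permittivity eps_r."""
    cos_t = np.sqrt(eps_r - np.sin(theta_i) ** 2)
    return (np.cos(theta_i) - cos_t) / (np.cos(theta_i) + cos_t)

# Illustrative indoor rays: a 5 m LoS path and a wall-reflected path of 7 m total length.
los_loss = fspl_db(5.0)
gamma = fresnel_gamma_te(np.deg2rad(45), eps_r=4.0)   # eps_r = 4.0 is an assumed wall permittivity
nlos_loss = fspl_db(7.0) - 20 * np.log10(abs(gamma))

print("LoS path loss:       %.1f dB" % los_loss)
print("Reflected path loss: %.1f dB (|Gamma| = %.2f)" % (nlos_loss, abs(gamma)))
```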
However, the ray-tracing approach suffers from high computational complexity and long simulation time, where accurate information about the geometric environment, the exact knowledge of the boundary conditions, and the properties of different objects are required [205]. To further alleviate the complexity, stochastic modeling is applied to provide a statistical description of the propagation channel. These models are derived from empirical data and need much less computational complexity in comparison with the deterministic ones [206]. In this way, channel data can be generated easily without profound channel knowledge, allowing researchers and engineers to focus on their design and simulation works.
The state-of-the-art channel models in terms of different methodologies are surveyed in this part, as listed in Table V, where we divide these channel models into three categories: _deterministic, statistical, and hybrid_.
#### V-B1 Deterministic Channel Modelling
Generally, there exist mainly three representative methods for deterministic channel modeling, including ray tracing [205], finite-difference time-domain (FDTD) [213], and channel measurement-based method.
Let us first look at the most popular method namely ray tracing. Visibility tree [207] and ray launching [231] are two alternatives to achieve ray tracing. To date, ray tracing has been calibrated through field measurements, e.g., the work [190] reports indoor and T2I inside-station scenarios at 300 GHz THz frequencies. In [208], some researchers presented a ray-tracing algorithm based on the Beckmann-Kirchhoff model to simulate the diffuse scattering from rough surfaces at THz frequencies. The authors of [211] modeled the THz channel for a \(7\,\mathrm{m}\times 7\,\mathrm{m}\times 3\,\mathrm{m}\) office room, including both LoS and NLoS scenarios and three types of plaster with varying degrees of roughness. In another study [232], a comprehensive multipath channel model based on the ray tracing method was developed for the entire THz band, while assessing capacity and analyzing key parameters in the low THz band (0.1-1 THz). Furthermore, a 3D end-to-end channel model for the THz spectrum was developed, incorporating the elevation plane [233].
The calibration and validation for frequencies between 1.0-10 THz remain challenging due to the lack of material parameters. For single-antenna systems, conventional ray tracing models can analyze communications on a point-to-point basis between the transmitter and receiver. In contrast, when dealing with multiple-antenna systems, performing ray tracing for each Tx-Rx link can be prohibitively complex [234]. To reduce the computational complexity associated with multiple antennas, it is possible to perform a single ray tracing simulation that extracts not only the amplitudes and delays but also the directions of the paths. This information can be combined with the array characteristics to generate the transfer function between each transmit and receive antenna pair, which is independent of the antenna array size [235]. Another approach to alleviating the computational burden is to use simplified ray tracing models, such as map-based models. These are based on ray tracing and use a simplified 3D geometrical description of the environment [236], which can be much more accurate if laser scanning of the environment is employed [209], [210], [237].
FDTD is a numerical analysis technique that relies on solving Maxwell's equations directly. This technique is particularly suited for scenarios involving small and complex scatterers, where surface materials exhibit a higher degree of roughness at THz frequencies [213]. However, it demands substantial memory resources to track solutions at all locations, as well as considerable time and computational power to update the solution at successive time instants [212]. When applied to objects with large dimensions relative to the tiny wavelength of THz signals, FDTD suffers from high computational complexity. In order to apply it effectively, a database of the target environment with sufficiently high resolution is required. This database may be generated from laser scanning of a point cloud, as mentioned above [237]. For a small intra-device channel, a comparison between ray tracing and FDTD was presented in [214].
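The memory and time cost of FDTD follows directly from its update structure: every grid cell is revisited at every time step. The following minimal 1D Yee-scheme sketch (free space, soft Gaussian source, no absorbing boundaries, cell size set to roughly one twentieth of the 1 mm wavelength at 300 GHz) is only meant to illustrate that structure, not a realistic THz environment.

```python
import numpy as np

c = 3e8
f_max = 300e9                 # highest frequency of interest (assumed)
dx = c / f_max / 20           # ~20 cells per wavelength (1 mm / 20 = 50 um)
dt = dx / (2 * c)             # Courant-stable time step (Courant number 0.5)

nx, nt = 400, 800
Ez = np.zeros(nx)             # electric field samples
Hy = np.zeros(nx - 1)         # magnetic field samples on the staggered (Yee) grid

mu0, eps0 = 4e-7 * np.pi, 8.854e-12
for n in range(nt):
    Hy += dt / (mu0 * dx) * np.diff(Ez)        # update H from the spatial derivative of E
    Ez[1:-1] += dt / (eps0 * dx) * np.diff(Hy) # update E from the spatial derivative of H
    Ez[50] += np.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian pulse source at cell 50

print("Grid cells: %d, time steps: %d, peak |Ez| = %.2f" % (nx, nt, np.abs(Ez).max()))
```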
Last but not least, another modeling approach called the channel measurement-based method relies on real-world field measurement of the target environment and large-volume data analysis [215]. In recent years, the trend of open-source data has motivated many researchers to make their measurement results available online. Some standardization groups, such as the NextG Channel Model Alliance [238] under the National Institute of Standards and Technology (NIST), aim to make data exchange easier. The European ARIADNE project has provided the initial measurement results and created channel models for D band links in LoS and NLoS office environments [239]. In the context of THz channels, there are some challenges due to the volume of measured data, which is affected by both the large bandwidth and large-scale antenna arrays.
#### V-B2 Statistical Channel Modelling
Statistical approaches are used to capture the statistical behaviors of wireless channels in various scenarios [219]. One of its main strengths is low computational complexity, which enables fast construction of channel models based on key channel statistics and facilitates simulations of wireless communications. It is broadly classified into two categories, i.e. physical models and analytical models [240, 241, 242]. The former describes the statistics of the double-directional channel characteristics, such as power delay profile, arrival time, and angle distribution, which are independent of antenna properties. In contrast, the latter mathematically characterizes the impulse response of a channel and antenna characteristics.
_Physical Models:_ Early research work on statistical channel modeling for the mmWave or THz band focused on enhancing and adapting the well-known Saleh-Valenzuela (S-V) model through calibration [220], which is based on the observation that multipath components arrive in the form of clusters [216]. Some other research work maintained this concept of clustering while utilizing different distributions, instead of the Poisson process, to describe the time of arrival (ToA) in order to achieve greater conformity with measurement outcomes [218], [219], [221]. Another example referred to as the Zwick model [243] exploits multipath components rather than clusters and does not account for amplitude fading. In [217], the original Zwick model was enhanced to incorporate its applicability to multi-input multi-output (MIMO) systems.
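A minimal sketch of the clustered S-V idea is given below: cluster and ray arrival times are drawn from Poisson processes, and powers decay exponentially at both the cluster and ray level. The rates, decay constants, and cluster/ray counts are illustrative assumptions, not calibrated THz parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative S-V parameters: cluster/ray arrival rates (Poisson) and power-decay constants.
Lam, lam = 1 / 20e-9, 1 / 2e-9
Gam, gam = 15e-9, 3e-9

taus, amps = [], []
T = 0.0
for _ in range(5):                              # 5 clusters (assumed)
    T += rng.exponential(1 / Lam)               # cluster arrival time
    t = 0.0
    for _ in range(10):                         # 10 rays per cluster (assumed)
        t += rng.exponential(1 / lam)           # ray arrival time within the cluster
        power = np.exp(-T / Gam) * np.exp(-t / gam)   # double-exponential power decay
        phase = rng.uniform(0, 2 * np.pi)
        taus.append(T + t)
        amps.append(np.sqrt(power) * np.exp(1j * phase))

taus, amps = np.array(taus), np.array(amps)
print("First ray arrives at %.1f ns, strongest ray gain %.3f"
      % (taus.min() * 1e9, np.abs(amps).max()))
```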
_Analytical Models:_ Analytical models take into account the channel and antenna characteristics as a whole, thereby characterizing the impulse response from the transmitter (Tx) antenna element's connector to the receiver (Rx) antenna element's connector. These individual impulse responses are organized into a matrix, and the statistical properties of the matrix elements, including correlations, are captured. The Kronecker-based model [222] assumes that the correlation between the transmit and receive arrays is separable. However, as the number of antennas increases and single-reflection propagation dominates in the THz band, this assumption becomes less accurate. In response, some other models account for either MIMO or MMIMO channels from the perspective of beams or eigenspaces. For instance, an approach called the virtual channel representation (VCR) [206] characterizes physical propagation by sampling rays in a beam space. The aforementioned models can also be termed correlation-based stochastic models (CBSMs). Despite their limited spatial determinism capability, CBSMs are well-suited for evaluating the performance of MMIMO systems due to their low complexity. Unfortunately, it is challenging to properly describe MMIMO channels, especially UMMIMO channels over the THz band, due to the lack of consideration for the near-field effect and non-stationarity. To address this issue, an enhanced method referred to as the beam-domain channel model (BDCM) was proposed [223] by rethinking the far-field assumption. As a result, the BDCM is applicable to UMMIMO scenarios in the THz band.
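As a hedged example of an analytical model, the snippet below draws a MIMO channel matrix from the separable Kronecker correlation model, using assumed exponential correlation matrices at both ends; as noted above, this separability assumption becomes questionable for large THz arrays.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nr = 8, 8                                    # assumed array sizes

def exp_corr(n, rho):
    """Exponential correlation matrix, a common analytical model of array correlation."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

Rt, Rr = exp_corr(nt, 0.7), exp_corr(nr, 0.5)    # assumed Tx/Rx correlation levels
G = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# Kronecker model: separable correlation, H = Rr^(1/2) G Rt^(1/2)
H = np.linalg.cholesky(Rr) @ G @ np.linalg.cholesky(Rt).conj().T
print("Channel matrix condition number: %.1f" % np.linalg.cond(H))
```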
#### V-B3 Hybrid Channel Modelling
Since different methods have particular pros and cons, hybrid methods are developed to combine the benefits of two or more individual approaches for a particular scenario [7]. As the deterministic method shows high accuracy at the price of high time and resource consumption while the statistical method has low computational complexity, most existing approaches adopt the hybrid deterministic-statistical way, see [224, 225, 226, 227]. Moreover, there are other methods that combine two deterministic approaches, such as ray tracing and FDTD, called the hybrid deterministic approach. In this paradigm, FDTD works for studying regions close to complex discontinuities, while ray tracing is used to trace the rays outside the FDTD regions. A single interaction between ray tracing and FDTD was presented in [212], where the location of the receiver is restricted to the FDTD region. Later, works such as [224, 225] extended it for time efficiency and multiple interactions between ray tracing and FDTD.
Although statistical channel models are highly efficient, they struggle to accurately capture spatial consistency and the temporal evolution of cluster correlations. This motivated the development of hybrid models that incorporate both statistical and geometrical approaches. This hybrid approach enables the inclusion of channel features that are impossible to characterize through a purely stochastic model. In 2002, a quasi-deterministic (Q-D) channel model [244] was initially proposed for mmWave channels, which was adopted by the IEEE 802.11ad standardization group for indoor scenarios at 60 GHz [245]. Moreover, the Q-D channel model has been successfully applied to other wireless standards, such as the mmWave Evolution for Backhaul and Access (MiWEBA) [228] and the IEEE 802.11ay [246], which is an evolution of the IEEE 802.11ad standard.
Another hybrid method called the geometry-based stochastic channel model (GSCM) incorporates a geometrical component during the stochastic modeling process [219, 241, 242]. Although the placement of scatterers is stochastic, the simplified ray-tracing method employed in GSCM is deterministic. A new non-stationary GSCM, called the beyond 5G channel model (B5GCM), was recently introduced in [230], where the researchers derived the correlation functions based on a general 3D space-time-frequency model. GSCMs can be categorized into two types: regular-shaped GSCMs, which are primarily used for theoretical analysis, such as correlation functions, and irregular-shaped GSCMs, which can better replicate measured results. Notably, the COST259, COST273, and COST2100 models [247] take advantage of this hybrid benefit, where clusters of scatterers are placed to form clusters of multipath components with similar delays and directions. The well-known 3GPP Spatial Channel Model (SCM) [248] and WINNER II model [249] also followed this approach. Furthermore, Samimi _et al._ [229] utilized temporal clusters and spatial lobes to handle the temporal and spatial components, respectively.
## VI THz Transceiver: Antennas and Devices
This section aims to provide readers with the necessary knowledge required to design and build THz transceivers. We discuss the cutting-edge antenna technologies that are appropriate for transmitting and receiving THz signals, along with the fundamental and innovative aspects of photonic-electronic devices and components used in constructing THz transceivers.
### _THz Antennas_
The conventional concepts of electromagnetic antennas can also be applied in the THz regime, as mentioned in the references [250, 55]. However, due to the very high frequency involved, there are certain limitations and disruptive effects that need to be taken into consideration. The tiny wavelength associated with THz frequencies results in the need for extremely small structures, which raises concerns regarding manufacturing processes. On the other hand, this downsizing of structures enables the feasibility of employing novel manufacturing techniques like low-temperature co-fired ceramic (LTCC), antenna-on-chip design, substrate-integrated waveguide (SIW), and others.
Another significant issue that arises with increasing frequency is the skin effect of conductive materials. Since the skin depth, which refers to the depth of the current in a conductive material, decreases dramatically, the conductivity of metallic materials drops, leading to increased losses in the antenna system. To overcome this issue, researchers have explored the use of new materials like graphene. Graphene, with its exceptional electrical and thermal properties, holds promise for mitigating the skin effect and improving the performance of THz antennas. Its high electron mobility and low resistivity
make it an attractive material for overcoming the limitations posed by traditional conductive materials.
In the following, we provide a summary of the main antenna techniques and current research advances in this field.
#### VI-A1 Horn Antennas
Horn antennas are widely used in wireless applications due to their favorable characteristics. They belong to the class of high-directivity antennas, with achievable gains of up to 25 dBi. Similar to hollow waveguides, these antennas feature low power loss, making them appropriate for low-noise as well as high-power applications. Furthermore, they can be operated over a wide frequency range, which is advantageous for broadband signals. Due to the small wavelength of THz waves, the manufacturing of horn antennas is challenging. Nevertheless, various technologies can be utilized here. Among others, the authors of [251] introduced selective laser melting 3D printing technology to produce several conical horn antennas, covering the frequency bands E (60 GHz to 90 GHz), D (110 GHz to 170 GHz), and H (220 GHz to 325 GHz). Another appropriate technology is LTCC, as demonstrated by a 300 GHz LTCC horn antenna reported in [252].
#### VI-A2 Planar Antennas
Planar transmission line technology, commonly used for THz frequencies, offers various antenna designs. Among these designs, patch antennas based on microstrip line technology are particularly well-known. However, planar antennas generally exhibit inferior performance compared to horn antennas. This is primarily due to the EM wave partially propagating within a dielectric substrate, resulting in increased overall loss. Additionally, the short wavelength relative to the substrate thickness in THz planar structures can lead to substrate mode issues, as mentioned in [253]. In such cases, a portion of the wave energy becomes trapped in the substrate and cannot be effectively utilized. Therefore, it is necessary to implement appropriate measures to mitigate this problem.
Despite the drawbacks discussed above, planar antennas are very popular. The major reason is the production flexibility of planar structures. The planar shapes can be easily designed and realized using various technologies [254, 255, 256, 257]. Due to their small size, patch antennas can be efficiently combined to build an antenna array, which allows greater control over the antenna pattern [258, 259, 260]. Due to the short wavelength of THz waves, the distances between the transceiver components might become critical. Thus, placing the antenna on the chip becomes advantageous. In this case, planar antennas also play an important role, since they are compatible with on-chip design [261, 262, 263].
#### VI-A3 Substrate-Integrated Antennas
The very short wavelength causes some issues for planar antenna design. However, it can be turned into an advantage by bringing the low-cost manufacturing capability and flexibility of planar structures to the high-performance world of waveguide technology. The authors of [264] give a comprehensive overview of substrate-integrated circuit technology. In particular, the concept of SIW shows promising results. The idea of SIW is to connect the upper and lower conductive layers of a substrate by means of via walls. In this way, a rectangular waveguide is formed inside a substrate, where the distance between vias defines the upper cutoff frequency. This technology can also be extended to on-chip usage. There is a variety of antenna and antenna array designs reported in the literature, such as horn antennas [265, 266], slot antennas [267, 268], patch antennas [269, 270] and others [271, 272, 273]. Furthermore, a comprehensive review of SIW antenna technology can be found in [274].
#### VI-A4 Carbon-Based Antennas
Graphene and carbon nanotubes, which are rolled sheets of graphene, are two promising materials for use within the THz range. As mentioned before, due to the strong skin effect, metallic materials, especially copper, suffer reduced conductivity. Thus, the performance of antennas using metallic components is degraded. In contrast, graphene features high conductivity due to the propagation of plasmon modes in this frequency range. Furthermore, the conductivity can be tuned by chemical doping, but also by applying electric and magnetic fields, which allows the production of tunable antennas [275]. These and some other advantageous properties make graphene interesting for THz applications in general and for antenna design in particular. Most reported efforts aim to replace the metallic conductors with graphene or carbon nanotubes in, e.g., planar antenna structures [276, 277]. Besides, the approach of building a dipole antenna based on carbon nanotubes was reported [278, 279].
### _THz Components and Devices_
The THz band is generally defined as the frequency band between 0.1 THz and 10 THz, which is far higher than the sub-6 GHz band while being much lower than the optical bands. For a long time, it was referred to as the terahertz gap [52]. On the one hand, _photonic_ devices are not capable of producing such low frequencies. On the other hand, _electronic_ oscillators struggle to reach such high frequencies. In this regard, the THz band used to be unreachable for either _photonic_ or _electronic_ technologies. For this reason, attempts to generate THz radiation have been made from both sides of the spectrum.
Initially, due to the limited capability of the components, prior THz research was concentrated on applications such as imaging or spectrometry [35]. This can be attributed to two major reasons:
* These applications require comparably high output power of the signal but are not demanding on the receiver side, since the amplitude rather than the phase of a signal acts as the information carrier.
* The size of the system or the specific operational conditions are not the limiting factor. In this case, some technologies, especially photonic generation and detection are advantageous.
In contrast, THz communications and sensing require the capability of accurate phase recovery, especially the case for digital modulation when both in-phase and quadrature (IQ) branches are exploited. Furthermore, compactness and low energy consumption play critical roles. Thus, the utilization of semiconductor-based integrated circuits is advantageous for THz communications and sensing applications in 6G and beyond systems. In the remaining part of this section, we discuss the fundamentals and state-of-the-art advances of both
photonic and electronic components for implementing THz signal transmission and reception.
#### VI-B1 Electronic Devices
There exist two classes of electronic devices capable of generating THz radiation. On the one hand, there are high-power devices, mostly based on the principle of an electron tube. These can generate signal power from 10 W to 1 MW. Usually, such high transmission power is required for special-purpose applications such as satellite communications. On the other hand, there are semiconductor devices, based on various types of semiconductor materials. Even though these devices show some limitations such as low transmission power, their compactness and cost make them most suitable for conventional communications and sensing devices [280]. Common materials utilized are III-V semiconductors such as Gallium Arsenide (GaAs), Gallium Nitride (GaN), and Indium Phosphide (InP). Also, Silicon Germanium (SiGe) based devices show promising performance. Compared to the aforementioned technologies, the well-known complementary metal-oxide-semiconductor (CMOS) technology features some limitations in terms of transmit power, noise figure, and other parameters. However, recent research results argue that CMOS technology may also be suitable for THz communications and sensing applications [104].
Most semiconductor THz transmitters and receivers follow the established transceiver design for lower frequencies, such as the heterodyne transceiver approach, as shown in Figure 7. That means a modulated baseband signal is passed to a front-end circuitry. There, it is upconverted to the THz carrier frequency by mixing with the local oscillator (LO) signal. Finally, the signal is amplified and sent over an antenna. At the receiver, the same steps are applied in reverse order, and the downconverted signal is passed to the baseband circuitry for further processing.
The most critical part of the described transceivers is the LO generation, since a THz signal needs to be synthesized. In order to do so, a multiplier cascade is used to multiply, or upconvert, a high-frequency reference signal to the THz range. On the other hand, every multiplier stage increases the power of inter-modulation products. A higher number of multiplier stages also leads to an increased noise figure and distortion of the generated signal. Therefore, the number of multiplier stages is one of the limiting factors for THz signal generation. For this reason, higher-order modulation products may be utilized as a trade-off. Table VI gives an exemplary literature overview of the available transmitter and receiver front-end performance for different semiconductor technologies. Also, the results on some transceiver chains are discussed below.
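A small, idealized sketch of such an LO chain is shown below: the reference frequency and the multiplication factors are assumed values, and the phase-noise degradation uses the ideal 20 log10(N) rule for a noiseless multiplier cascade.

```python
import math

f_ref = 12.5e9                 # assumed reference oscillator frequency
stages = [2, 3, 2, 2]          # assumed multiplication factors of the cascade (overall x24)

f_lo, pn_penalty_db = f_ref, 0.0
for n in stages:
    f_lo *= n
    pn_penalty_db += 20 * math.log10(n)          # ideal phase-noise growth per xN stage

print("LO frequency: %.0f GHz" % (f_lo / 1e9))                       # 12.5 GHz x 24 = 300 GHz
print("Minimum phase-noise degradation: %.1f dB" % pn_penalty_db)    # 20*log10(24) ~ 27.6 dB
```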
Among III-V compound semiconductor materials, InP components show the best results in terms of output power and noise figure. Furthermore, InP transistors feature the maximum oscillating frequency \(f_{\max}\) of up to 1.5 THz [294], which is the highest number as compared to other semiconductors. In the literature, a transceiver design based on a high-electron-mobility transistor (HEMT) was reported. HEMT supports THz frequencies up to 850 GHz, but particular attention was given to the frequency band around 300 GHz [295, 281, 296]. In addition, most recent results show that a transceiver system achieved a maximum throughput of 120 Gbps [282].
SiGe and CMOS are both silicon-based technologies suitable for the THz frequency band. Due to the intrinsic characteristics of silicon, the maximum oscillation frequency is limited to below 1 THz. Studies have shown heterojunction bipolar transistors (HBTs) based on a 130 nm SiGe process reaching an \(f_{\max}\) of 620 GHz [297] as well as 720 GHz [298]. The components fabricated with SiGe HBTs exhibit advantages such as good linearity, high gain, and low noise. However, the power gain is limited, which prevents operation above 500 GHz. Among recent research work, particular interest has focused on components and devices for D-band communications and sensing. As a result, several transceiver concepts were presented, such as [285, 286, 299, 300]. The authors of [301] demonstrated a complete communications link that realizes a high throughput of 200 Gbps. Furthermore, some researchers present their transceiver designs operating around 300 GHz [289, 291, 302, 303, 304]. Here, a communications link of 100 Gbps was realised [305]. In addition, a detailed survey on SiGe transceivers for different purposes is provided in [306].
Last but not least, the performance of THz devices built with CMOS technology is lower compared to other semiconductor technologies. CMOS field-effect transistors (FETs) are able to reach an \(f_{\max}\) of around 450 GHz, and the power gain is also limited. Nevertheless, this technology is known for low production costs, which is its major advantage. Transceiver systems operating at 240 GHz [289, 307, 308] or 300 GHz [309, 310, 311, 312] were realized. A system consisting of a 105 Gbps transmitter and a corresponding 32 Gbps receiver was shown by the authors in [290, 291, 292]. The highest operating frequency reported is 390 GHz, realizing a data throughput of 28 Gbps. Further achievements of CMOS transceivers are summarised in [104, 313, 314].

Fig. 7: Architecture of superheterodyne (a) transmitter and (b) receiver design including IQ-modulator
#### VI-B2 Photonic Devices
In order to get from very high-frequency photonic radiation to a lower THz frequency, an optical frequency downconverter needs to be implemented. Here, the most common technique is known as photomixing. As shown in Figure 8, two laser signals with frequencies \(f_{1}\) and \(f_{2}\) are fed to a photomixing diode such as a uni-traveling-carrier photodiode (UTC-PD) [316]. The photodiode, similar to conventional high-frequency mixing, then generates the mixing products. In this way, a THz signal \(f_{\mathrm{THz}}=f_{1}-f_{2}\) is generated. Photomixing provides high tunability and modulation bandwidth. Also, a variety of complex modulation schemes may be implemented with moderate effort, as compared to electronic solutions. Figure 8 shows two exemplary modulation approaches: modulation of both laser signals, and modulation of a single laser signal only [315]. A further advantage is the ability to deliver an optical signal over a large distance by means of an optical fiber. Thus, optical signal generation and modulation may be performed separately from the THz signal generation, which may yield flexibility for the transmitter system [24].
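A back-of-the-envelope sketch of this tunability: with two lasers around 1550 nm (wavelengths assumed only for illustration), a separation of roughly 2.4 nm already yields a beat note near 300 GHz.

```python
c = 3e8
lam1, lam2 = 1550.0e-9, 1552.4e-9      # assumed laser wavelengths (metres)

f1, f2 = c / lam1, c / lam2
f_thz = abs(f1 - f2)                   # photomixing output: difference of the two optical frequencies
print("f1 = %.2f THz, f2 = %.2f THz, f_THz = %.0f GHz" % (f1 / 1e12, f2 / 1e12, f_thz / 1e9))
```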
Another way of photonic THz signal generation is the quantum cascade laser (QCL). Conventional lasers emit photons through the recombination of electron-hole pairs across the band gap. Thus, the lower bound for the frequency of the emitted photons is defined by the band gap, which limits the ability of lasers to work below the infrared band. In contrast, QCLs are unipolar [317]. The photons are emitted by quantum jumps of electrons between two different energy levels. These energy levels are defined by the structure of the quantum wells, which provides the possibility to tune the frequency of the laser. Due to the low energy gap between the levels, QCLs are well suited for mid-infrared applications. In recent decades, there has also been significant progress in QCL development for the far-infrared, or THz, range [318]. QCLs are able to cover the frequency band of 1 THz to 5 THz with a peak radiation power of 1 W [319, 320, 321]. However, QCLs operate at cryogenic temperatures, which limits their application area.
## VII Beam-Forming and Alignment for THz Communications and Sensing
The use of the THz band can alleviate the problem of spectrum scarcity and facilitate novel applications, such as nano-scale networks and in-device communications. However, its practical use is challenged by large propagation losses, which generally lead to very short distances of signal transmission [322]. The main causes of this problem include the high spreading loss that grows quadratically with carrier frequency, the gaseous absorption due to oxygen molecules and water vapor in the atmosphere, and the adverse effects of weather conditions, as discussed in the previous section. Such a propagation loss can reach hundreds of decibels per kilometer or even higher. Additionally, this problem is further aggravated by the following two factors [168]:
Fig. 8: Configuration of photomixing transmitters with (a) Double sub-carrier modulation. (b) Single sub-carrier modulation [315]
* _Strong thermal noise_: Noise power is proportional to signal bandwidth for a constant noise power spectral density. Therefore, the unique advantage of massive bandwidth at the THz band imposes a side effect of strong thermal noise.
* _Hardware constraint_: The transmit power at the THz band is quite constrained since the output power decreases with frequency and will remain at the level of a few milliwatts (i.e., a few dBm) in the foreseeable future. Hence, raising power to extend the communications distance is not feasible [100].
To extend the signal transmission distance beyond a few meters, high-gain directional antennas are necessary to compensate for such a high propagation loss in THz communications and sensing. Thanks to tiny wavelengths, massive numbers of elements can be tightly packed in a small area to generate high beamforming gains [323, 324, 325]. In this section, we will discuss the cutting-edge antenna forms, novel beamforming techniques, and necessary beam alignment schemes at the THz band [57].
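The following illustrative link budget (all values assumed) makes the argument quantitative: at 300 GHz over 10 m with 10 GHz of bandwidth, the noise floor alone is around -74 dBm, so roughly 25 dBi of gain at each end is needed to keep the SNR comfortably positive with milliwatt-level transmit power.

```python
import numpy as np

# All values below are assumptions chosen only to illustrate the orders of magnitude.
f, d, bw = 300e9, 10.0, 10e9          # carrier (Hz), distance (m), bandwidth (Hz)
p_tx_dbm = 0.0                        # ~1 mW transmit power
g_tx = g_rx = 25.0                    # antenna / beamforming gain at each end (dBi)
k_abs = 10.0                          # molecular absorption in dB/km (assumed)

fspl = 20 * np.log10(4 * np.pi * d * f / 3e8)          # Friis spreading loss
absorption = k_abs * d / 1000.0
noise_dbm = -174 + 10 * np.log10(bw)                    # kTB at 290 K, 0 dB noise figure

p_rx_dbm = p_tx_dbm + g_tx + g_rx - fspl - absorption
print("Path loss: %.1f dB, noise floor: %.1f dBm" % (fspl + absorption, noise_dbm))
print("Received power: %.1f dBm, SNR: %.1f dB" % (p_rx_dbm, p_rx_dbm - noise_dbm))
```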
### _Ultra-Massive MIMO_
Since the length of a resonant antenna is typically in the order of the wavelength at the resonance frequency, the dimension of an array with tens of elements is a few square meters and a few square centimeters at the sub-6G and mmWave bands [102], respectively. Moving to the THz band, the antenna length further reduces. Hundreds of elements can be compacted in an array within a few centimeters using conventional metallic materials. However, this number is not sufficient to overcome the huge propagation loss suffered by THz signals [322].
Taking advantage of surface plasmon polariton (SPP) waves [326], the inter-element separation of an array can be further reduced to the SPP wavelength, which is much smaller than the EM wavelength. Consequently, nanomaterials that support the propagation of SPP waves, such as graphene and metamaterials, are employed to further improve the hardware compactness. Graphene, a one-atom-thick carbon nanomaterial with unprecedented mechanical, electrical, and optical properties, is employed to fabricate plasmonic nano-antennas almost two orders of magnitude smaller than metallic THz antennas [103]. In particular, thousands of graphene-based nano-antennas can be embedded in a few square millimeters at 1 THz. The emergence of nano-antennas paves the way for building very large-scale arrays for THz communications. In 2016, Akyildiz and Jornet [327] presented the concept of UMMIMO communications, and demonstrated a \(1024\times 1024\) system where both the transmitter and receiver are equipped with an array of \(1024\) nano-antennas.
A massive number of elements imposes challenges such as prohibitive power consumption and high hardware complexity. It is worth rethinking the array architecture and beamforming schemes in UMMIMO systems at the THz band. Fully digital beamforming can generate a desired beam, but it leads to unaffordable energy consumption and hardware cost since each antenna in a large-scale array needs its dedicated radio frequency (RF) chain. This motivates the study of analog beamforming with low complexity. By employing analog phase shifters, only a single RF chain is needed, substantially lowering hardware and power costs. Nevertheless, the analog architecture supports only a single stream, limiting data rates and the number of users. As a compromise between these two forms, the hybrid digital-analog architecture is the best choice for THz from the perspective of the performance-complexity trade-off. Combining an analog phase-shifter network with a few RF chains, hybrid beamforming can significantly reduce hardware cost and energy consumption, while achieving performance comparable to digital beamforming.
Although hybrid beamforming has been extensively studied for the sub-6GHz and mmWave bands [328, 329, 330], the peculiarities of the THz band, such as channel sparsity [331] and beam squint [332], impose many difficulties for designing an UMMIMO system. Currently, many new forms of hybrid beamforming are discussed in the literature, including array of subarrays (AoSA) to balance the power consumption and data rate, widely-spaced multi-subarray to overcome the low spatial multiplexing gain due to channel sparsity, and true-time-delay-based hybrid beamforming to address the problem of beam squint [333].
#### VII-A1 Array of Subarrays
In a hybrid architecture, the connection between elements and RF chains has two basic forms: fully-connected (FC) and AoSA [334]. In the FC hybrid beamforming, each element is fully connected to all RF chains via a signal combiner, and the signal of an RF chain radiates over all antenna elements via an individual group of phase shifters. Any RF chain must be able to drive the entire large-scale antenna array, which is power-hungry. In particular, the use of a large number of phase shifters and combiners exacerbates the problems of high hardware cost and power consumption. In contrast, all elements in AoSA are divided into disjoint subsets called subarrays, and a subarray is only accessible to one specific RF chain [335]. AoSA conducts signal processing at a subarray level with fewer phase shifters, such that hardware cost, power consumption, and signal power loss can be dramatically reduced. In addition, beamforming and spatial multiplexing can be jointly optimized by cooperating with precoding in the baseband.
Recent literature shows strong interest from researchers in exploiting arrays of subarrays. For instance, Lin and Li published a series of works on this topic. In [336], they analyzed the ergodic capacity of an indoor single-user THz communications system and obtained an upper bound, providing guidance on the design of antenna subarray size and numbers for given long-term data rate requirements at different distances. In [337], an adaptive beamforming scheme was proposed for multi-user THz communications that considers the impact of transmission distance. In [338], they examined the array-of-subarrays structure for multi-user sub-THz communications and analyzed its spectral and energy efficiency. They then showcased a THz-based multi-user system for indoor usage that uses an array-of-subarrays architecture to handle hardware restrictions and channel characteristics in the THz band, which showed a clear advantage over the FC structure in both spectral and energy efficiency [334]. In [339], Tarboush _et al._ proposed an accurate stochastic simulator of statistical THz channels, named TeraMIMO, aiming at
catalyzing the research of UMMIMO antenna configurations. TeraMIMO adopted the AoSA antenna structure for hybrid beamforming and accounted for spatial sparsity.
To further reduce the complexity, various alternating optimization algorithms have been proposed for AoSA architectures [335]. In contrast to the FC architecture, the AoSA architecture has a restricted number of phase shifters, equal to the number of antennas. However, since the RF chains and antennas are connected exhaustively, the FC architecture can achieve data rates comparable to those of the optimal fully-digital beamforming architecture. Conversely, the data rate of the AoSA architecture is significantly lower than that of the FC architecture, which is attributed to the partial interconnection between antennas and RF chains. Hence, the power consumption and data rate of THz hybrid beamforming need to be balanced, a challenge inherent in designing large-scale antenna arrays for THz UMMIMO systems. To address this issue, some researchers introduced a new form of hybrid beamforming called dynamic array-of-subarrays (DAoSA) [340, 341, 342], which features a flexible hardware connection. DAoSA achieves a good balance between power consumption and data rates by intelligently determining the connection between subarrays and RF chains.
#### VII-A2 Widely-Spaced Multi-Subarray
Due to the tiny wavelength, the THz channel is usually sparse, consisting of a LoS path and a few reflection paths. The transmit power concentrates on the LoS path, and the overall angular spread of THz signals is small. For instance, a maximal angular spread of 40\({}^{\circ}\) has been observed for indoor environments in the THz band, compared to 120\({}^{\circ}\) for indoor scenarios at 60 GHz mmWave frequencies [343]. Since the number of spatial degrees of freedom is upper-bounded by the number of multipath components, the number of data streams, i.e., the potential spatial multiplexing gain, is small, limiting the achievable data rate at the THz band. A widely-spaced multi-subarray hybrid beamforming architecture was proposed in [344] to overcome the low spatial multiplexing gain caused by channel sparsity. Instead of critical spacing, the inter-subarray separation is over hundreds of wavelengths, reducing the correlation between the subarrays.
The widely-spaced multi-subarray (WSMS) hybrid beamforming architecture is promising by exploiting intra-path multiplexing for THz UMMIMO systems [345]. It was discovered in [346] that when the distance between antennas is expanded, the planar-wave assumption becomes invalid, and it is necessary to consider the propagation of spherical waves between antennas. Previous research has examined the use of intra-path multiplexing in LoS MIMO architecture operating at microwave and mmWave frequencies, which enables multiplexing gain to be achieved using just a single LoS path [347]. Given that the intra-path multiplexing gain is not restricted by the number of multipath, it is a highly viable and promising solution for THz communications, which are known to exhibit significant channel sparsity [348]. In [349], the authors demonstrated that the WSMS architecture can substantially improve the spectral efficiency of THz systems through the use of additional intra-path multiplexing gain, which sets it apart from existing hybrid beamforming that solely relies on inter-path multiplexing. As the follow-up, the authors designed an alternating optimization algorithm to maximize the sum rate [350] under the WSMS architecture.
#### VII-A3 True-Time-Delay-Based Hybrid Beamforming
Most of the current hybrid beamforming architectures rely on phase shifters, which are frequency-independent, inducing the same phase rotation at all frequency components of a signal. Under the ultra-wide bandwidth at the THz band, these shifters only provide the correct phase shift at a single frequency point, whereas other frequency points suffer from phase misalignment. As a result, the formed beam is squinted, with a substantial power loss, e.g., 5 dB claimed in [333]. To solve the problem of beam squint at the THz band, true-time delay (TTD) can be applied to substitute phase shifters [332]. The TTD is frequency-dependent, and the phase rotation produced by a TTD is proportional to the carrier frequency, which matches the requirements of ultra-wideband THz beamforming.
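The contrast between phase shifters and TTDs can be sketched numerically: for a ULA steered to an assumed 45-degree direction over an assumed 30 GHz bandwidth at 300 GHz, fixed phase shifts designed at the carrier lose most of the array gain at the band edges, whereas frequency-proportional TTD phases do not.

```python
import numpy as np

c, fc, bw, n = 3e8, 300e9, 30e9, 256             # assumed carrier, bandwidth, array size
d = c / fc / 2                                   # half-wavelength spacing at the carrier
theta = np.deg2rad(45)                           # assumed steering direction
tau = np.arange(n) * d * np.sin(theta) / c       # per-element propagation delays

for f in (fc - bw / 2, fc, fc + bw / 2):
    a_f = np.exp(-2j * np.pi * f * tau)          # array response at frequency f
    w_ps = np.exp(2j * np.pi * fc * tau)         # phase shifters: phases fixed at the carrier fc
    w_ttd = np.exp(2j * np.pi * f * tau)         # true-time delays: phases scale with f
    g_ps = 20 * np.log10(np.abs(w_ps @ a_f) / n)
    g_ttd = 20 * np.log10(np.abs(w_ttd @ a_f) / n)
    print("f = %.0f GHz: phase-shifter gain %6.1f dB, TTD gain %5.1f dB" % (f / 1e9, g_ps, g_ttd))
```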
According to [351], TTD-based phase shifting is aligned with the requirements of wideband THz hybrid beamforming, given its proportional relationship with the carrier frequency. While ideal TTDs with infinite or high resolution are capable of precise phase adjustments, they are often associated with high power consumption and hardware complexity [352]. From the perspective of practical THz systems, low-resolution TTDs that strike a balance between energy efficiency and performance are more suitable, as reported in the literature such as [352, 353, 354]. In [355], a novel hybrid precoding architecture named delay-phase precoding (DPP) was introduced to mitigate the issue of beam squint in THz communications systems. By incorporating a time delay network between digital and analog precoding, DPP generates frequency-dependent beamforming vectors. Similarly, Gao _et al._ [356] proposed a TTD-based hybrid beamforming scheme that addresses beam squint through virtual subarrays, as first presented in [357]. The proposed algorithm achieves performance close to that of fully-digital precoding.
In order to address the limitations of TTD, Nguyen and Kim [358] proposed a hybrid beamforming scheme that takes into account the relationship between the number of antennas and the required delay for TTD. They also carried out joint optimization under limited delay to create an optimal compensation scheme for beam squint. It is noted that most research work mentioned above focused on 2D hybrid beamforming, which is primarily designed for uniform linear arrays (ULAs). However, ULAs may not be suitable for UMMIMO systems due to their limited antenna aperture. In contrast, uniform planar arrays (UPAs), which can accommodate a large number of elements compactly, are more promising for deploying UMMIMO systems. There is a lack of research on beam squint compensation in 3D hybrid beamforming using UPAs for THz broadband UMMIMO systems. In response, the authors of [332] proposed a 3D beamforming architecture leveraging two-tier TTD, which is able to combat the beam squint effect in both the horizontal and vertical directions. The impact of the array structure on the beam squint has been analyzed in [359].
### _Lens Antenna Array_
Despite its high potential at the THz band, hybrid beamforming is still confined by high hardware and power costs due to the use of many analog phase shifters. Some studies [360] demonstrate that the power consumed by phase shifters becomes critical. In this context, a disruptive antenna technology called _lens antenna_[361] draws the focus of academia and industry.
James Clerk Maxwell predicted the existence of EM waves in 1873 and inferred that visible light is a kind of EM wave. To verify Maxwell's theory, early scientists who believed a radio wave is a form of invisible light concentrated on duplicating classic optics experiments with radio waves. Heinrich Hertz proved the existence of EM waves and also first demonstrated the refraction of radio waves at 450 MHz using a prism. These experiments revealed the possibility of focusing radio waves into a narrow beam, just as visible light is focused through an optical lens. In 1894, Oliver Lodge [362] successfully used an optical lens to concentrate 4 GHz radio waves. In the same year, Indian physicist Jagadish Chandra Bose [363] built a cylindrical sulfur lens to generate a beam in his microwave experiments over 6 GHz to 60 GHz, and Augusto Righi at the University of Bologna focused radio waves at 12 GHz with 32 cm lenses. In World War II, the race to develop radar technology fostered the emergence of modern lens antennas. Used as a radar reflector, the famous Luneberg lens [364] was invented in 1946; it is still attached to stealth fighters nowadays to make them detectable during training or to conceal their true EM signature.
Just as an optical lens refracts visible light, a lens antenna uses a shaped piece of radio-transparent material to bend and concentrate EM waves [365]. It usually comprises an emitter radiating radio waves and a piece of dielectric or composite material in front of the emitter acting as a converging lens to force the radio waves into a narrow beam. Conversely, in a receive antenna the lens directs the incoming radio waves into the feeder, which converts the induced electromagnetic waves into electric currents. To generate narrow beams, the lens needs to be much larger than the wavelength of the EM wave [366]. Hence, a lens antenna is more suitable for mmWave and THz communications, with their tiny wavelengths. As in an optical lens, radio waves travel at a different speed within the lens material than in free space, so the varying lens thickness delays the waves passing through it by different amounts, changing the shape of the wavefront and the direction of the waves.
On top of lens antennas, an advanced antenna structure referred to as a lens antenna array has been developed [367]. A lens antenna array usually consists of two major components: an EM lens and an array with antenna elements positioned in the focal region of the lens. EM lenses can be implemented in different ways, e.g., with dielectric materials, transmission lines of variable lengths, and periodic inductive and capacitive structures. Despite the various implementations, the function of an EM lens is to provide variable phase shifting for electromagnetic waves at different angles [368]. In other words, a lens antenna array can direct the signals emitted from different transmit antennas to different beams with sufficiently separated angles of departure. Conversely, a lens antenna array at the receiver can concentrate the incident signals from sufficiently separated directions onto different receive antennas [369].
Recent research work reported a few high-gain THz lens antennas, such as dielectric or metallic lens antennas [370, 371, 372]. Dielectric lens antennas have been demonstrated with high gain and wide operating bandwidth by integrating the dielectric lens with a standard rectangular waveguide feed [370] or a leaky-wave feed [371]. But their radiation efficiency needs to be improved. On the other hand, metallic lens antennas have no dielectric loss, making them suitable for THz communications and sensing. In [372], a high-gain THz antenna using a metallic lens composed of metallic waveguide delay lines was reported. For wideband signal transmission, recently, the authors of [373] presented a fully metallic lens antenna with a wide impedance bandwidth and high gain at the D band from 110 GHz to 170 GHz. A flared H-plane horn is used to achieve a large H-plane radiation aperture to further increase the radiation gain.
The deployment of MMIMO systems entails challenges associated with a huge number of antenna elements [374]. EM lens arrays with a reasonable number of elements can lower the required number of antennas and corresponding RF chains while maintaining high beamforming gain. However, dielectric EM lenses are difficult to integrate with multiple-antenna techniques due to their bulky size, high insertion loss, and the long focal lengths needed to control the beam gain. Metallic lens antennas are defined as artificial composites that obtain their electrical properties from their structure rather than their constituent materials. Some studies on metallic lens antennas have been done to achieve beam gain from a single antenna, such as [375]. Jaehyun Lee _et al._ proposed a large-aperture metallic lens antenna designed for multi-layer MIMO transmission for 6G, demonstrating that a single large-aperture metallic lens antenna can achieve a beam gain of up to 14 dB compared to the case without a lens. By adopting the proposed large-aperture metallic lens antenna, system-level simulations show that the data throughput of the user equipment is effectively increased [376].
### _Beam Alignment: Training of Beams_
The utilization of the promising THz spectrum range is hindered by significant propagation losses imposed on its frequencies. To counteract the losses, large antenna arrays, such as the UMMIMO and lens antenna arrays discussed above, can be employed, but this leads to highly directional and narrow beams [377]. To ensure a satisfactory signal-to-noise ratio (SNR) at the receiver and prevent connection loss, it is critical to maintain degree-level alignment between the transmitter and receiver beams. Therefore, beam alignment is a critical issue that must be addressed for establishing a reliable connection. This is accomplished by aligning the beams at the transmitter and receiver to the direction of the channel paths, where channel state information (CSI) is critical for implementing the fine alignment [378]. However, traditional channel sensing methods used at lower frequencies are not feasible at THz frequencies due to the significant path losses that render pilot signals undetectable during the link establishment stage.
As a result, significant research efforts have been made in recent years to understand the unique characteristics of the THz channel and to develop appropriate beam alignment algorithms. These efforts aim to establish beam alignment during the link establishment stage [379, 380] and to maintain alignment while the beams are in motion (beam tracking). Two categories of beam alignment techniques have been identified: _beam training_ [381] and _beam estimation_ [382], as shown in Fig. 9. The former involves transmitting known signals and adjusting beamforming parameters to align the beams. The latter involves estimating CSI from received signals and using it to refine the beamforming parameters. This part surveys the state-of-the-art advances in beam training techniques, while beam estimation is discussed in the subsequent part.
Beam training involves scanning the channel with directional beams from codebook to determine the beam pair at the transmitter and receiver that results in the highest SNR of the received signal [383]. Beam training can be broadly classified into two categories: exhaustive and hierarchical training, which are discussed as follows:
#### V-C1 Exhaustive Training
Many studies have adopted exhaustive training, which involves sequentially probing all the predefined directions in the codebook to find the optimal beam pair that maximizes the SNR at the receiver [384]. This approach is used in the IEEE 802.15.3c standard [385]. However, this method is time-consuming and not practical at THz frequencies, where beams from a large-scale antenna array tend to be very narrow, making it difficult to scan the entire space in a reasonable amount of time. Additionally, the accuracy of the training is limited by the codebook's resolution.
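As a concrete illustration of exhaustive training, the sketch below sweeps a codebook of steering vectors over a toy single-path channel and selects the codeword that maximizes the received power. The array size, codebook resolution, channel model, and noise level are all assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                             # transmit ULA elements (assumed)
angles = np.deg2rad(np.linspace(-60, 60, 128))     # candidate beam directions (codebook)

def steering(theta, n):
    # half-wavelength-spaced ULA steering vector
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

codebook = np.stack([steering(t, N) for t in angles])          # (128, N)

# Toy single-path channel with an unknown angle of departure
true_aod = np.deg2rad(17.3)
h = np.sqrt(N) * steering(true_aod, N).conj()

best = None
for i, w in enumerate(codebook):                   # probe every single codeword
    y = h @ w + 0.05 * (rng.standard_normal() + 1j * rng.standard_normal())
    snr = abs(y) ** 2
    if best is None or snr > best[1]:
        best = (i, snr)

print("estimated AoD: %.1f deg" % np.rad2deg(angles[best[0]]))
```

Probing all 128 codewords sequentially (at both link ends in practice) quickly becomes prohibitive as beams narrow, which motivates the improvements discussed next.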
Responding to these limitations, Tan and Dai [386] have improved the exhaustive search by exploiting the beam split effect. They used delay-phase-precoding architecture to control the beam split effect and accelerate the tracking process [387]. By doing so, the split beams have wider angular coverage and can scan many directions in a single shot. Another approach from the RF domain used leaky-wave antennas (LWA) at the transmitter and receiver of a THz link to estimate the angle-of-departure (AoD) and angle-of-arrival (AoA) [388]. The angular radiation of an LWA is frequency-dependent, and the received spectral peak determines the AoD. The bandwidth of the received signal is proportional to the rotation angle over the AoA of the LWA receiver, which speeds up the channel scanning process but requires additional hardware at both the transmitter and receiver.
#### V-C2 Hierarchical Training
From a practical point of view, many studies have adopted hierarchical training to reduce training overhead [381, 388, 389, 390, 391]. Hierarchical algorithms are based on multi-resolution codebooks, which contain wide beams at lower levels and narrow beams at higher levels. The search begins at the lowest level and gradually moves to higher levels to find the optimal narrow beam. In [390], the authors proposed a subarray-based multi-resolution codebook, where beams at each level are generated by the contribution of all subarrays. In [391], the authors proposed an accelerated hierarchical training that concurrently scans angular space with different RF chains. The authors of [392] proposed a multi-modal beam pattern-based training that simultaneously radiates beams targeting different directions using a single RF chain. The equally spaced activation approach has been proposed to generate the steering vector for multiple beam radiation. However, the loss at THz frequencies may render the training algorithm ineffective. The authors in [378] and [393] adopted hierarchical training that utilizes the power-angular spectral correlation between sub-6GHz and THz frequencies.
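The following sketch illustrates the hierarchical idea with a simple subarray-based multi-resolution search over a toy single-path channel: wide beams formed by small subarrays bisect the angular sector at the first levels, and progressively larger subarrays (narrower beams) refine the estimate. All parameters and the channel model are illustrative assumptions, not the exact codebooks of [390] or [391].

```python
import numpy as np

N = 64                                    # full ULA size (assumed)
true_aod = np.deg2rad(23.0)

def steering(theta, n):
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

def probe(center, n_active):
    """Received power when a wide beam (n_active-element subarray codeword)
    is pointed at `center`; deeper levels activate more elements (narrower beams)."""
    w = np.zeros(N, dtype=complex)
    w[:n_active] = steering(center, n_active)
    h = steering(true_aod, N).conj() * np.sqrt(N)
    return abs(h @ w) ** 2

lo, hi = np.deg2rad(-60), np.deg2rad(60)
n_active = 4                              # start with a very wide beam
while n_active <= N:
    mid = (lo + hi) / 2
    left_c, right_c = (lo + mid) / 2, (mid + hi) / 2
    if probe(left_c, n_active) >= probe(right_c, n_active):
        hi = mid                          # keep the half-sector with more power
    else:
        lo = mid
    n_active *= 2                         # refine with a narrower beam next level

print("estimated AoD: %.1f deg" % np.rad2deg((lo + hi) / 2))
```

Only two probes per level are needed here, in contrast to the full sweep of an exhaustive search, at the cost of per-level feedback between transmitter and receiver.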
In [394], Stratidakis _et al._ proposed a localization-aided hierarchical beam tracking algorithm that uses location information to reduce pilot overhead. This algorithm assumes the linear motion of a user, which may not be accurate in a realistic scenario. In [395], the authors proposed a unified 3D training and tracking framework based on a 3D hierarchical codebook built upon the quadruple-uniform-planar-array architecture. This proposal offers two advantages: a unified framework for training and tracking and 3D space coverage compared to 2D space coverage in most existing works. The training overhead of hierarchical algorithms is much lower compared to exhaustive ones. However, hierarchical algorithms suffer from a high overhead of feedback messages required for coordination between the transmitter and receiver. The number of levels in multi-resolution codebooks also leads to higher training overhead, which may not enhance performance, especially in multi-hop THz links [396]. A new approach is proposed in [397], where it is based on a multi-armed bandit algorithm and utilizes prior knowledge of channel frequency-selective fading. This algorithm is designed with a hierarchical structure to accelerate the beam alignment process.
### _Beam Alignment: Estimating of Beams_
Fig. 9: Illustration of beam alignment techniques.

Beam estimation is a method of acquiring channel information with the goal of reducing training overhead when compared to beam scanning techniques. The estimation process begins with initial training, which involves collecting channel measurements. These measurements are then processed to derive the angular information of the target channel. Prior studies have proposed a variety of algorithms, based on linear estimation [398, 399], compressive sensing (ComS)-based sparse estimation [400, 401, 402, 403, 404], beamspace-based estimation [405, 406], subspace-based estimation [382, 407, 408, 409], or deep learning-based estimation [382, 410, 411].
#### V-B1 Linear Estimation
The authors of [398] used an extended Kalman filter, which is a well-known example of linear estimation, to perform beam tracking for a mobile station (MS). The MS sends training symbols over the uplink during each time slot, and the extended Kalman filter-based algorithm at the base station (BS) iteratively estimates the channel parameters (the path gain, AoD, and AoA) from the observed signal. The proposed algorithm achieves milli-degree level angle estimation with moderate mobility of the MS and antenna array size. However, the study assumes the BS is equipped with a fully digital beamformer, which is not practical due to the high power consumption imposed by a large number of RF chains. Additionally, the study assumes that the MS is parallel to the BS during tracking, such that AoD equals AoA, which is not realistic because the orientation of the MS can be arbitrary in real-world scenarios. Other linear methods, such as maximum likelihood and least square, can also be applied [399]. However, these estimators require a large number of observations and do not exploit the sparsity feature of THz channels.
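To illustrate the filtering idea behind such trackers, the sketch below runs a linear Kalman filter with a constant-angular-velocity state model on noisy per-slot AoA observations. The actual scheme in [398] is an extended Kalman filter operating directly on received training symbols, so the observation model, noise statistics, and slot duration used here are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 1e-3                       # slot duration (s), assumed
F = np.array([[1.0, dt],        # constant-angular-velocity state transition
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])      # we observe a noisy AoA only
Q = np.diag([1e-8, 1e-6])       # process noise covariance (assumed)
R = np.array([[1e-4]])          # measurement noise variance (rad^2, assumed)

x = np.array([0.0, 0.0])        # state: [AoA (rad), angular rate (rad/s)]
P = np.eye(2) * 1e-2

true_aoa, true_rate = 0.2, 0.5  # rad, rad/s (mobile user, assumed)
for k in range(200):
    true_aoa += true_rate * dt
    z = true_aoa + 0.01 * rng.standard_normal()   # noisy per-slot AoA observation

    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([z]) - H @ x))
    P = (np.eye(2) - K @ H) @ P

print("tracked AoA: %.4f rad, true AoA: %.4f rad" % (x[0], true_aoa))
```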
#### V-B2 Compress Sensing-based Sparse Estimation
The sparsity property of THz channels can significantly reduce the computational complexity of beam estimation algorithms by transforming the problem into a sparse recovery problem. A technique referred to as ComS is considered an optimal approach to solving these problems, as discussed by the authors in [403]. They analyzed two ComS-based algorithms, i.e., _greedy compressive sampling matching_ and _Dantzig selector-based method_, for solving convex programs. Results show that the ComS-based methods have higher accuracy compared to linear estimation based on least square. The authors in [400] utilized ComS-based techniques to accelerate the training proposed in [378]. In this approach, the estimated angles from wide beams in the first stage are refined using an L1-norm regularized least squares method to obtain accurate estimates of AoD and AoA, reducing the scope of the narrow beam search in the second stage.
Another work in [404] proposed an orthogonal matching pursuit-based fast algorithm to estimate the AoA and AoD of a BS-MS link. This study considered the cost and power consumption of adopting an auxiliary fully-digital array for channel estimation and evaluated the effect of RF imperfections and low-resolution analogue-to-digital converter (ADC) per RF chain. This study adopts the virtual channel model, which assumes that the AoA and AoD are discretely distributed over a spatial grid utilizing the angular sparsity of the THz channel, making it a sparse recovery problem suitable for ComS-based algorithms. However, this discrete grid assumption reduces the estimation accuracy due to the grid resolution. To mitigate this limitation, the authors in [412] proposed an iterative reweight-based super-resolution estimation scheme, which optimizes the on-grid estimation iteratively to converge to neighboring off-grid angles. Simulation results show that the off-grid solution has improved accuracy and spectral efficiency compared to on-grid solutions. Similarly, the authors in [413] proposed a gridless ComS-based algorithm to estimate the AoA for arbitrary 3D antenna arrays, eliminating the quantization error of the grid assumption.
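A minimal orthogonal matching pursuit sketch in the spirit of the on-grid virtual channel model is given below: a sparse multipath channel is compressed through a random measurement matrix and its AoAs are recovered on a discrete angular grid. The dictionary size, number of measurements, number of paths, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 64, 16, 2              # antennas, measurements, paths (assumed)

grid = np.deg2rad(np.linspace(-60, 60, 181))
A = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(grid))) / np.sqrt(N)  # dictionary

# Sparse channel: K paths located exactly on the grid
idx_true = rng.choice(len(grid), K, replace=False)
h = A[:, idx_true] @ (rng.standard_normal(K) + 1j * rng.standard_normal(K))

Phi = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2 * M)
y = Phi @ h + 0.01 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
D = Phi @ A                      # effective dictionary seen by the measurements

# Orthogonal matching pursuit
support, r = [], y.copy()
for _ in range(K):
    j = np.argmax(np.abs(D.conj().T @ r))          # most correlated grid angle
    support.append(j)
    Ds = D[:, support]
    coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)  # least squares on the support
    r = y - Ds @ coef                              # update the residual

print("true AoAs (deg):", np.sort(np.rad2deg(grid[idx_true])).round(1))
print("OMP  AoAs (deg):", np.sort(np.rad2deg(grid[support])).round(1))
```

The on-grid assumption is exactly what limits the accuracy discussed above, which is what the off-grid and gridless refinements of [412] and [413] address.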
The mentioned ComS-based estimations are generally built on the assumption of angular sparsity of THz channels, which holds in the far field but is not valid in the near field. Therefore, the work in [401] considers ComS-based estimation in the near field where the angular sparsity assumption does not hold. Results show that the channel in the near field exhibits polar sparsity rather than angular sparsity, which was exploited by a ComS-based polar-domain simultaneous orthogonal matching pursuit algorithm.
#### V-B3 Beamspace-based Estimation
The MIMO beamspace channel can be realized through the use of a discrete lens array (DLA). Such arrays function as passive phase shifters that steer beams towards specific directions based on the incident point on the lens aperture [414]. The number of these directions is limited by the number of antenna elements, resulting in a beam-sparse channel. This artificially created sparsity reduces the pilot overhead required for channel estimation compared to conventional methods. In [405], the authors adopted the DLA-based MIMO system architecture to create the MIMO beamspace channel and utilized its sparsity for fast channel tracking. An _a priori_ information-aided tracking scheme was proposed for MIMO beamspace systems, where the channel was conventionally estimated in the first three time slots and the physical direction of the MS was then derived based on a temporal variation law. The estimated physical direction was used to determine the support of the beamspace channel, which corresponds to the dominant beam directions. However, the estimation accuracy is greatly dependent on the localization accuracy. In [406], the authors extended the work in [405] and proposed a cooperative localization-aided tracking algorithm with multiple BSs, each equipped with a DLA. These BSs cooperate to accurately localize the MS for improved channel tracking. While beamspace MIMO solutions significantly reduce the overhead in comparison with that of conventional estimation methods, their accuracy may be limited by the discrete directions generated by the DLA and a restricted number of beams.
#### V-B4 Subspace-based Estimation
When estimating continuously distributed AoAs and AoDs, another class of methods, referred to as subspace-based algorithms, can be applied, with the aim of avoiding the estimation error caused by the sparse solutions or the beam sparsity in beamspace solutions. In general, these algorithms collect channel measurements and identify the eigenvectors that correspond to the signal subspace. Two widely known algorithms, i.e., MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique), belong to subspace-based estimation [382, 407].
In [382], the authors adopted the MUSIC algorithm to achieve millidegree-level AoA estimation. This study utilized a hybrid AoA architecture and collected measurement data by probing random steering vectors. The covariance matrix of the measurement data was then calculated and decomposed into signal and noise subspaces. The AoA was estimated by searching the MUSIC pseudospectrum function for vectors orthogonal to the noise eigenvectors. The estimation was further refined by collecting new measurements based on the coarse estimated angles. In [407], the ESPRIT algorithm was adopted for super-resolution channel estimation, which involved multiple steps such as spatial smoothing, forward-backward averaging techniques [415], singular value decomposition (SVD), and joint diagonalization. While subspace solutions show improved performance compared to sparse solutions, their computational complexity is significantly higher.
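For concreteness, the sketch below runs a basic MUSIC estimator on synthetic snapshots from a fully digital uniform linear array: it forms the sample covariance, extracts the noise subspace, and scans the pseudospectrum for peaks. The fully digital front end, snapshot count, and noise level are simplifying assumptions relative to the hybrid architecture and refinement steps used in [382].

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 32, 200                               # array elements, snapshots (assumed)
true_aoas = np.deg2rad([-12.0, 25.0])

def steering(theta):
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

A = np.stack([steering(t) for t in true_aoas], axis=1)            # (N, 2)
S = (rng.standard_normal((2, T)) + 1j * rng.standard_normal((2, T))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))

R = X @ X.conj().T / T                       # sample covariance matrix
eigval, eigvec = np.linalg.eigh(R)           # ascending eigenvalues
En = eigvec[:, : N - 2]                      # noise subspace (smallest eigenvalues)

scan = np.deg2rad(np.linspace(-60, 60, 1201))
pseudo = np.array([1.0 / np.real(steering(th).conj() @ En @ En.conj().T @ steering(th))
                   for th in scan])          # MUSIC pseudospectrum

# Crude peak picking: take the two strongest local maxima
locmax = [(pseudo[i], scan[i]) for i in range(1, len(scan) - 1)
          if pseudo[i] >= pseudo[i - 1] and pseudo[i] > pseudo[i + 1]]
top2 = [th for _, th in sorted(locmax, reverse=True)[:2]]
print("estimated AoAs (deg):", np.sort(np.rad2deg(top2)).round(2))
```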
#### V-B5 Deep Learning-based Estimation
Recently, deep learning (DL)-based techniques have emerged as a promising alternative to replace conventional estimators. DL-based solutions are particularly effective for complex multi-user scenarios where the input and output of the channel are not directly related. In [382], a branch of DL referred to as deep convolutional neural network (DCNN) is used to estimate the AoA of a multipath channel. The measurement matrix is collected through random beamforming and combining matrices at the transmitter and receiver, respectively, and fed into the neural network. Three convolutional layers extract the spatial peculiarities of the channel, and two fully-connected layers capture the non-linear relationship between these peculiarities and AoA estimation. Results show higher estimation accuracy than the subspace-based MUSIC algorithm at high SNR.
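The following untrained PyTorch sketch mirrors the described structure, i.e., three convolutional layers followed by two fully connected layers mapping a complex measurement matrix (stacked as real and imaginary channels) to AoA estimates. The input size, channel widths, and number of estimated paths are illustrative assumptions and not the exact network of [382].

```python
import torch
import torch.nn as nn

class AoACNN(nn.Module):
    """Toy DCNN: three conv layers extract spatial features from a complex
    measurement matrix (2 real channels); two FC layers regress the AoAs."""
    def __init__(self, n_paths=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_paths),             # AoA estimates in radians
        )

    def forward(self, x):
        return self.head(self.features(x))

# Illustrative forward pass on a random batch of 16x16 measurement matrices
model = AoACNN()
meas = torch.randn(8, 2, 16, 16)                 # real/imag stacked as channels
print(model(meas).shape)                         # torch.Size([8, 2])
```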
In [410], a DCNN architecture is used to estimate the near-field channel under the spherical wave propagation model. The proposed DCNN-based approach addresses the spherical wave propagation model by considering the inter-subarray phase error as an output parameter of the network. In [411], deep kernel learning (DKL) combined with Gaussian process regression (GPR) is used to estimate the indoor THz channel in a multi-user scenario. In particular, a deep neural network (DNN) is trained to capture the non-linear relationship between the input and output of the channel. Results show that this DNN-based solution outperformed the minimum-mean square error (MMSE) and least squares-based linear estimators.
Prior studies have demonstrated the superiority of DL-based solutions over conventional solutions in complex scenarios. They have also revealed that DL networks need substantial computational and storage resources as well as intensive offline training and validation. Moreover, their efficiency in low-SNR scenarios requires further investigation. In order to achieve fast initial access in wireless networks, a DNN framework called DeepAI has been proposed, which maps the received signal strength (RSS) to identify the optimal beam direction [416]. The authors have introduced a sequential feature selection (SFS) algorithm that selects efficient and reliable beam pairs for DeepAI's inputs in LoS mmWave channels. However, the SFS algorithm fails to improve the accuracy and performance of DeepAI in NLoS scenarios. Simulation results show that DeepAI outperforms the conventional beam-sweeping method. Another DL-based beam selection algorithm suitable for 5G NR has been proposed by the authors in [417].
## VIII THz Systems and Networks Toward 6G and Beyond
The ongoing research and development of the 6G system is set to revolutionize the way various domains and layers of a mobile network interact and communicate with each other and with authorized third parties [418, 419]. As we stated numerous times throughout this paper, one of the key enablers of 6G is THz communications and sensing, which promise to deliver ultra-high data rates, ultra-low-latency connectivity, high-resolution sensing, and high-accuracy positioning in the coming decades. Nevertheless, the full potential of THz communications and sensing can only be realized through its integration with other emerging technologies.
In this section, we explore THz networks from a systematic point of view, with an emphasis on the synergy of THz communications and sensing with a range of 6G-enabling technologies, including massive multi-input multi-output, ultra-massive multi-input multi-output, non-orthogonal multiple access, reconfigurable intelligent surfaces, non-terrestrial networks, digital twins, and artificial intelligence/machine learning. Moreover, we discuss security, localization, joint communications and sensing, multi-connectivity, and channel awareness for THz systems and networks. By examining these synergies, we hope to shed light on the most significant research challenges and opportunities facing the development and deployment of THz communications and sensing in 6G and beyond networks, as well as the potential benefits for future applications, use cases, and services.
### _THz-MMIMO Systems and Networks_
Compared to lower frequencies at sub-6 GHz and mmWave, the THz bands have a much smaller signal wavelength, which leads to a tiny antenna size (i.e., a larger number of antennas within the same surface area) and narrower beams. Both factors are beneficial for MMIMO and grant it a greater potential in the THz band than at lower frequencies. For example, Akyildiz and Jornet reported that UMMIMO systems up to the dimension of \(1024\times 1024\) can be realized at 1 THz with arrays that occupy just 1 mm\({}^{2}\) [322]. However, the application of MMIMO and/or UMMIMO in practical 6G THz communications and sensing is challenged in various aspects. In addition to the barriers in the fabrication of nano-antenna arrays, the complexity and sparsity of THz channels also limit the exploitation of MMIMO in this band. Accurate channel models, physical (PHY) layer enabling technologies, as well as novel link layer design, are needed to release the full potential of THz MMIMO.
A lot of work in modeling THz MMIMO channels has been reported since the late 2010s, overwhelmingly with the ray-tracing methodology. Han _et al._ proposed in [420] a model for UMMIMO channels over distances up to 20 m and in the frequency windows of 0.3 THz to 0.4 THz and 0.9 THz to 1 THz. Busari _et al._ studied in [421] the 0.1 THz MMIMO channel in a specific outdoor street-side scenario, investigating the impacts of precoding scheme, carrier frequency, bandwidth, and antenna gain on the system regarding spectral and energy efficiencies. Sheikh _et al._ focused on the critical features of rough surface reflection and diffuse scattering at THz frequencies, and proposed in [422] a 3D indoor model for 0.3 THz and 0.35 THz MMIMO channels with different surface roughness levels, considering both LoS and NLoS scenarios. In the recent work [339], Tarboush _et al._ reported their channel model for wideband UMMIMO THz communications and a simulator based thereon.
Efforts on the physical layer deal with the problem of beamforming and combining from the perspectives of beam training and beam tracking, i.e., finding the best beam pattern and adjusting it online, in order to obtain the best link quality and maintain it against the time variation of the channel. As outlined by Ning _et al._ in their tutorial [57], there are two basic principles of beamforming: precoding/decoding that is executed in the digital domain, and beam steering that works in the analog domain. Each of the principles, as well as their hybrid, when applied in wideband THz systems, must be carefully designed to address two main issues: the spatial-wideband effect, whereby different antennas receive different symbols at the same time, and the frequency-wideband effect, whereby the beam pattern of a phased array codeword changes with the frequency of the signal (a.k.a. beam squint or beam split). While most existing methods leverage either the digital precoding approach [423, 355] or the precoding/steering hybrid [405, 424, 425, 356], new research interests in the steering approach based on RIS are arising [426, 427]. Whilst higher layer design has not been a major research focus of MMIMO THz systems so far, there is pioneering work on a multi-access scheme reported in [206].
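The frequency-wideband (beam squint) effect can be quantified with a simple relation: a frequency-flat phase-shifter codeword designed to point at angle \(\theta_c\) at the carrier \(f_c\) steers the sub-band at frequency \(f\) toward \(\sin\theta(f) = (f_c/f)\sin\theta_c\) for a half-wavelength-spaced array. The short sketch below evaluates this drift across an assumed 30 GHz band around 0.3 THz; all numbers are illustrative.

```python
import numpy as np

fc = 300e9                           # carrier frequency (assumed, 0.3 THz)
bw = 30e9                            # signal bandwidth (assumed, 30 GHz)
theta_c = np.deg2rad(45.0)           # intended steering angle at the carrier

f = np.linspace(fc - bw / 2, fc + bw / 2, 5)
# A frequency-flat phase-shifter codeword designed at fc actually points the
# sub-band at frequency f toward sin(theta_f) = (fc / f) * sin(theta_c).
theta_f = np.arcsin(np.clip((fc / f) * np.sin(theta_c), -1, 1))

for fi, ti in zip(f, theta_f):
    print("f = %.1f GHz -> beam points at %.2f deg" % (fi / 1e9, np.rad2deg(ti)))
print("total squint across the band: %.2f deg"
      % np.rad2deg(theta_f.max() - theta_f.min()))
```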
### _THz-NOMA Systems and Networks_
Another promising RAN technology for enabling THz-MIMO systems and networks is NOMA, which allows allocation of the same radio resources to more than one user simultaneously and invokes the so-called successive interference cancellation (SIC) approach on the receiver side to decode the information for different users successively [428]. Compared to lower frequencies, the low-rank channels in the THz bands can be much more correlated because of the limited-scattering transmission, which reduces the channel orthogonality between different users and makes NOMA a promising technique to improve the spectral efficiency [26]. Serghiou _et al._ [64] believe that in LoS scenarios where spatial processing approaches fail to separate the users from each other, a combination of NOMA with UMMIMO can provide fairer user access in terms of resource allocation, and therewith achieve better spectral efficiency of the overall network. Meanwhile, with a proper resource allocation algorithm, NOMA can also enhance the energy efficiency in THz communications and sensing systems [429].
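As a minimal numerical illustration of power-domain NOMA with SIC, the sketch below computes the achievable rates of a two-user downlink in which the weak user treats the strong user's signal as interference, while the strong user first decodes and cancels the weak user's signal. The power split, channel gains, and noise level are toy assumptions.

```python
import numpy as np

P = 1.0                        # total transmit power (normalized)
N0 = 0.01                      # noise power (assumed)
g_strong, g_weak = 1.0, 0.05   # channel power gains: CCU vs CEU (assumed)
a_weak = 0.8                   # power fraction allocated to the weak user
a_strong = 1.0 - a_weak

# Weak user (CEU): decodes its own signal, treating the strong user's as interference
sinr_weak = a_weak * P * g_weak / (a_strong * P * g_weak + N0)
r_weak = np.log2(1 + sinr_weak)

# Strong user (CCU): first decodes the weak user's signal (SIC), then its own
sinr_sic = a_weak * P * g_strong / (a_strong * P * g_strong + N0)  # must exceed sinr_weak
sinr_strong = a_strong * P * g_strong / N0                         # after cancellation
r_strong = np.log2(1 + sinr_strong)

print("CEU rate: %.2f bit/s/Hz, CCU rate: %.2f bit/s/Hz" % (r_weak, r_strong))
print("SIC feasible at CCU: %s" % (sinr_sic >= sinr_weak))
```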
To evaluate the feasibility of MIMO-NOMA systems in THz bands, Sabuj _et al._ proposed in [430] a finite blocklength (FBL) channel model and therewith evaluated the system performance regarding critical machine-type communications (CMTC) scenarios. In contrast to its good performance in LoS scenarios, THz-NOMA performs much poorer when the connected devices are blocked by obstacles [431]. To address this issue, RIS appears as a promising solution. In [432], Xu _et al._ proposed a smart RIS framework for THz-NOMA, which delivers significant enhancements in the system energy efficiency and the reliability of super-fast-experience users. The principle of NOMA requires users to be paired/clustered for sharing radio resources, and relies on an appropriate clustering to achieve satisfactory system performance. Shahjala _et al._ comparatively reviewed the user clustering techniques for MMIMO-NOMA THz systems in [433], and proposed a fuzzy C-means-based clustering approach in [434].
It shall be noted that many popular clustering policies tend to pair a user with a good channel, called the cell center user (CCU), with another with a poor channel, called the cell edge user (CEU). Such policies lead to a gain in the spectral efficiency of the overall system, but a degradation at the CEU due to power splitting. To address this issue, Ding _et al._ proposed in [435] a cooperative NOMA (CNOMA) scheme where the CCU always forwards the message for the CEU that it obtains during the SIC, so that the performance loss at the CEU is compensated. However, this design forces the CCU to work as a relay, which drains its battery. Therefore, simultaneous wireless information and power transfer (SWIPT) is often introduced into CNOMA systems so that the CCU is able to harvest energy from the radio signal to support the relaying [436]. At THz frequencies, due to the high spreading loss and atmospheric absorption, the power propagation loss is more critical than that at lower frequencies, and the SWIPT-assisted CNOMA solution can be more important. Oleiwi and Al-Raweshidy analyzed the performance of SWIPT THz-NOMA in [437], and correspondingly designed a channel-aware pairing mechanism [437].
### _THz-RIS Systems and Networks_
The 6G and beyond systems will be revolutionized by the tremendous potential of RIS [438, 439, 440] and THz [441], the two cutting-edge enablers for the access domain of a futuristic communications and sensing network. The synergy between RIS and THz lies in the fact that RIS can be utilized to improve the performance of THz systems by providing a cost-effective solution to the propagation challenges associated with THz frequencies [442]. By utilizing the reconfigurability and versatility features of RIS, it is possible to address the challenges of THz wave propagation, especially the use of bypassing the blockage of THz beams, thereby improving the overall performance of THz communications and sensing.
By controlling the phase, amplitude, and polarization, RIS can effectively steer, reflect, and amplify electromagnetic waves in THz systems and networks. Consequently, it enables a vast array of applications and use-case scenarios, including beamforming, wireless power transfer, and indoor localization, among others. In addition, by employing RIS-assisted spatial modulation, THz-RIS systems and networks have the potential to dramatically improve their spectral efficiency. More importantly, RIS can be used to generate virtual channels that compensate for the propagation losses of THz waves in order to increase the SNR and the coverage area of THz communications and sensing. Thus far, the intersection of RIS and THz has been intensively studied in the literature. There exist a number of overview and survey papers that provide insights into such synergies, including [443, 444, 445, 446]. In addition to the aforementioned survey papers, we discovered during our research that the intersection between these two technologies has been dramatic, including in the context of massive MIMO, millimeter wave, 3D beamforming, satellite networks, and many others. Moreover, a large number and various types of physical layer-related optimization problems have been jointly investigated.
### _THz-Aided Non-Terrestrial Networks_
With its ambition of ubiquitous 3D coverage, 6G and beyond are envisioned to include different non-terrestrial infrastructures, such as unmanned aerial vehicle (UAV), high-altitude platform (HAP), LEO satellites, and geostationary Earth orbit (GEO) satellites, as an indispensable part of its architecture. Since the air/space channels and air/space-to-ground channels are less subject to blockages w.r.t. terrestrial channels, the LoS link availability is much higher, implying a vast potential for THz communications and sensing [447]. On the one hand, the tremendous amount of spectral resources offers the feasibility of efficient interconnection among these terrestrial, air, and space platforms through THz communications links. On the other hand, non-terrestrial infrastructures enable the flexible deployment of a variety of THz sensing equipment at favorable altitudes and places.
Nevertheless, the practical deployment of THz non-terrestrial networks (NTNs) is still facing various technical challenges, including but not limited to: feasibility assessment of THz frequencies for space-to-Earth links, transceiver implementation, and accurate NTN platform positioning [448]. Regarding the characterization of THz-NTN channels, the authors of [449] proposed an analytical propagation model for low-altitude NTN platforms such as UAVs in the frequency range \(0.275\,\mathrm{THz}\) to \(3\,\mathrm{THz}\), while the authors of [450] modeled the cross-link interference for LEO satellites. The use of satellites to serve airplanes in the THz band and related channel models have been analyzed in [451]. Concerning the THz transceiver implementation, NTN systems pose high antenna design requirements. For example, the antennas are supposed to produce multiple high-gain beams to support dynamic networking and realize long-range communications. There are various approaches towards this aim, which are well summarized in the survey by Guo _et al._ [452]. For THz CubeSat networks, the antennas are required to provide a sufficient beamwidth angle to enable faster neighbor discovery, while simultaneously providing a high gain to overcome the path loss. To fulfill these requirements, Alqaraghuli _et al._ designed a two-stage Origami horn antenna [453].
On the PHY layer, digital signal processing techniques are studied to overcome the limitations of the analog front end in THz transceivers. Tamesue _et al._ propose to deploy digital predistortion in RF power amplifiers of THz-NTN systems to compensate the nonlinear distortion [454]. In [455], Kumar and Arnon reported a DNN beamformer to replace the phase shifters in THz-MMIMO antenna arrays for wideband LEO satellite communication. It also creates additional benefits for NTN by deploying THz communications and sensing in conjunction with other novel enabling technologies. For example, RIS can contribute to the deployment of THz in future integrated terrestrial/non-terrestrial networks by means of enhancing the beamforming [456]. By leveraging the ISAC technology, the differential absorption radars (DARs), which are traditionally used for weather sensing, can be granted an extra capability of communicating with LEO satellites [457].
### _Digital Twin-Aided THz Systems and Networks_
The digital-twin technology [458] is an emerging concept, also considered to be a key enabler of the 6G and beyond systems, in which a virtual replica of a physical system, object, process, network, or link is created employing accurate data collected in real time [459]. It enables the autonomous control, intelligent monitoring, and accurate self-optimization of physical networks, processes, and systems in a fully virtualized environment. To our knowledge, there exists a synergy between the digital-twin technology and THz communications/sensing networks that can produce a combined effect on the 6G and beyond systems, aimed at improving the overall performance in delivering data-driven services. This synergy stems from the fact that both rely on accurate and real-time data that is collected from their corresponding data nodes. On the one hand, THz communications can support the transmission of large amounts of data generated by digital-twin nodes by facilitating high-speed communications links [458], while THz sensing can help the acquisition of high-accuracy environment data for the digital twin. On the other hand, a digital twin can improve the overall performance of THz communications and sensing by offering a virtual testing, monitoring, decision-making, and optimization environment for the said THz systems and networks [458].
To be specific, digital twins can be utilized to generate and enable virtual replica (also known as virtual model) for manufacturing processes and systems, such as digital twins for the machines, links, services, materials, networks, and products contained in an industry. By controlling and monitoring the virtual replicas in a real-time manner, it can be feasible to detect and subsequently address any maintenance issues and bottleneck that may arise in the said industry, including device and machine complete failures, unsuccessful service delivery attempts, material shortages, among many others. To enable the digital twinning of manufacturing industry, THz communications/sensing systems and networks can be deployed to acquire, transmit, and receive data (and at some points enrichment information) between the physical objects and virtual replicas of the manufacturing systems, enabling real-time control, accurate decision-making, and autonomous optimization [460].
In the literature, the relationship between digital twin technology and THz systems has received scant attention. During the course of our research, we uncovered three references addressing this intersection of the two technologies. First, the authors in [461] proposed a THz signal guidance system in which a digital twin is utilized to model, control, and predict the indoor signal propagation features and characteristics. The authors claim that their methodology achieved the "best" THz signal path from a nearby base station to the targeted user equipment using a number of certain models. Second, in reference [462], the authors proposed a framework that is based on the THz communications system and aimed at implementing digital-twin prediction for enabling extremely security-sensitive systems and objects. Finally, reference [460] studies the delay minimization optimization problem within the context of THz communications and visible light communications systems. In their study, the authors claim that their approach reduces the transmission delay by up to 33.2% in comparison to traditional methods.
### _AI/ML-Aided THz Systems and Networks_
THz communication/sensing systems and AI/ML [463, 464, 465] can benefit from each other synergistically. There are several facets of THz systems that can benefit from the application of AI techniques and ML algorithms in 6G and beyond networks. For example, AI/ML can be employed for (a) signal processing to enhance the quality of THz signals and reduce noise; (b) THz channel estimation, so that links can be maintained over long distances and remain robust to atmospheric effects; and (c) the optimization of error correction codes and modulation schemes. On the one hand, the performance, effectiveness, and dependability of THz systems and networks can be improved through the utilization of AI/ML approaches. On the other hand, THz systems can offer high-speed wireless data transfer and high-accuracy sensing capabilities that can be helpful for the deployment of AI/ML services.
In addition, AI/ML algorithms can be utilized to create intelligent and data-driven THz communications and sensing systems capable of adapting to quickly changing environmental conditions. For instance, with advanced AI/ML algorithms, self-healing, self-optimizing, and self-regulating THz communications networks that can modify their parameters autonomously to maintain goal performance and service levels can be created. Moreover, AI/ML can be utilized in THz imaging and sensing application use-case scenarios, including security screening, medical diagnosis, industrial inspection, and many others. THz images can be processed using AI/ML algorithms to extract enrichment and/or useful information, resulting in more accurate and reliable results.
The strong synergistic relationship between AI/ML and THz systems has also been demonstrated by a large number of recent studies published in cutting-edge academic journals and conferences. During our research, we found three papers, [61, 105, 466], that provide a comprehensive overview of various aspects of the AI/ML applicability in THz systems and future research directions in this domain. To be specific, the authors in [61] provide a survey and overview of the current state-of-the-art research in THz communication, including signal processing, front-end chip design, channel modeling, modulation schemes, and resource management. The paper also highlights the challenges and opportunities in 6G THz communications systems and discusses the potential applications of THz communications in various fields. Reference [466] provides a comprehensive review of the recent achievements and future challenges of ML in THz communication. More specifically, the paper summarizes the state-of-the-art research on ML-based THz imaging, sensing, and communications systems, including signal processing, feature extraction, classification, and optimization. The paper also discusses the potential applications of ML in THz technology, such as medical diagnosis, security screening, and wireless communication, and outlines the future research directions and challenges in this field. The authors of [105] cover the fundamentals of THz sensing, including sources of THz radiation, detection techniques, and applications. This paper presents a comprehensive survey of signal processing techniques, including time-domain and frequency-domain methods, feature extraction, and classification. It also reviews recent developments in ML-based THz sensing and highlights the challenges and future directions for signal processing and ML techniques in THz sensing.
Finally, and in addition to the above three overview papers related to THz communications systems, there exist a number of papers that study ML techniques for time-domain spectroscopy and THz imaging [106], the application of AI in THz healthcare technologies [467], two types of low-cost THz tags using ML-assisted algorithms [468], and molecular screening for THz detection using ML-assisted techniques [469].
### _Security in THz Systems and Networks_
The security of information and data acquired, transmitted, and received over a THz communications and sensing system can be effectively improved by physical layer security in the access domain, which is the utilization of properties of the physical layer of the THz system. It is a novel methodology that is strongly believed to secure communications and does not rely only on cryptographic approaches. The intersection between physical layer security and the THz system results from the novel physical properties of THz waves, which can be utilized to improve the security of the transmitted information and data. To the best of our knowledge, physical layer security can be deployed in the following aspects of THz communications and sensing: channel authentication, PHY encryption, beamforming, and PHY key generation. These security techniques can improve the security of THz communications and sensing systems and make them suitable for a variety of 6G scenarios, such as hot spots, wireless backhaul, satellite interconnection, positioning, and imaging. Last but not least, the PHY security and the THz systems can be integrated to produce secure and reliable systems for 6G industrial applications.
As of the time of this writing, a number of studies have uncovered numerous facets of the intersection between PHY security and THz systems in 6G and beyond networks. The first studies that we reviewed were conducted in 2017, including [109, 470]. In the former paper, a hybrid physical and multi-dimensional coded modulation scheme for THz communications systems is proposed. In the latter paper, physical layer authentication in THz systems is presented. Following that, we also found that three studies were conducted (both based on simulation and calculation) on PHY security related to resiliency against eavesdropping using a directional atmosphere-limited LoS THz link [110, 112, 471]. Regarding eavesdropping probability, the authors of [472] have also investigated decreasing message detection using inherent multi-path THz systems. Finally, a physically secure THz system is studied at 0.31 THz using orbital angular momentum in [473].
### _Localization Services in THz Systems and Networks_
Localization services and THz systems are two completely different research areas. Nevertheless, they can be integrated synergistically aimed at improving their capabilities and opening up new avenues for a variety of applications and use cases in the 6G era. The intersection of localization and THz systems results from the unique propagation characteristics of THz frequencies, which can be utilized for localization purposes in the access network domain. THz frequencies are particularly sensitive to rapidly occurring environmental changes, such as the presence of obstacles (a.k.a. problematic objects) or changes in the refractive index of materials in an environment. This extreme sensitivity can be utilized to create THz-based localization services and systems that can operate in a variety of environments, including indoor environments where GPS-based systems may not function optimally.
The integration of localization services into THz systems and network has the potential to enable a variety of novel use cases and applications in the 6G era, such as intelligent factories, manufacturing, healthcare, and many others. THz-based localization services, for instance, could be used to track assets within a factory or warehouse, while THz communications could be used to enable high-speed data transfer between machines and objects in the said factory. THz-based localization services could be deployed to monitor and control patient movement within a hospital, while THz communications could be used to enable wireless video transmission for remote consultations and many other services.
The intersection between localization services and THz communications systems has been studied to some extent in the literature. During our research, we found one tutorial and survey paper that provides a comprehensive overview of THz-band localization techniques for 6G systems. The authors discuss various aspects of THz waves, including propagation characteristics, channel modeling, and antenna design. They also explore different localization methods such as time of arrival, angle of arrival, and hybrid techniques. The paper concludes by highlighting some potential applications of THz-band localization in 6G networks [474]. In addition to the above survey, we discovered in two research articles that researchers have conducted research on various aspects of this intersection. In [475], the authors proposed a deep learning model for 3D THz indoor localization using a structured bidirectional long short-term memory network. The authors claim that their proposed method achieves better localization accuracy than state-of-the-art methods, making it a promising solution for indoor localization in THz-band communications systems. Finally, reference [476] proposes a new deep learning method for THz indoor localization called SIABR, utilizing a structured intra-attention bidirectional recurrent neural network to learn features from the received signal and estimate the location of the target.
### _Multi-Connectivity for THz Systems and Networks_
Due to the high atmospheric absorption and low penetration capability, THz signals suffer from such strong propagation loss, fading, shadowing, and blockage that links are hard to maintain under mobility, even when beamforming and combining are ideally performed. To address this issue, THz systems will need multi-connectivity (MC) as an essential feature so that a continuous and stable data connection between the users and the network can be ensured by means of radio link redundancy in case a single radio link fails. The basic principle of MC is to keep multiple radio connections to different BSs simultaneously, but only use one of them at a time for signal transmission. The effectiveness of multi-connectivity in addressing the issue of blockage in the THz band has been confirmed by evidence from various studies: a higher density of BSs is proven to enhance the system performance from the perspectives of link probability [477], capacity [477, 478], and session completion rate [479, 480].
In particular, there are two different strategies for selecting the active radio link, namely closest line-of-sight multi-connectivity (CMC), where the closest BS with a LoS link is always selected for communications, and reactive multi-connectivity (RMC), in which the active radio link is only re-selected when the current LoS link is blocked. While the CMC strategy significantly outperforms the single-connectivity strategy, the RMC strategy brings only a marginal, sometimes even negative, gain, and is therefore discouraged despite its low signaling overhead [478]. It shall also be noted that the application of MC has an influence on the handover mechanism since it alters the link status model. In [481], Ozkoc _et al._ established an analytical framework to assess the joint impact of the MC degree and the handover constraints on the system performance of THz cellular networks.
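A toy Monte Carlo sketch of the benefit of multi-connectivity against blockage is given below: with an assumed independent per-BS LoS blockage probability, an outage occurs only when every link of the MC set is blocked, so the outage probability falls roughly as \(p^M\) with the MC degree \(M\). The blockage probability and the independence assumption are illustrative simplifications.

```python
import numpy as np

rng = np.random.default_rng(4)
p_block = 0.3              # per-BS LoS blockage probability (assumed)
trials = 100_000

for degree in (1, 2, 3, 4):                    # multi-connectivity degree
    blocked = rng.random((trials, degree)) < p_block
    outage = blocked.all(axis=1).mean()        # outage only if every link is blocked
    print("MC degree %d -> outage probability %.4f" % (degree, outage))
```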
An alternative and more advanced approach to exploit MC is to allow multiple BSs to simultaneously _serve_ multiple mobile stations, i.e., each user may be communicating with multiple BSs rather than one at a time. This is usually known as network MIMO or distributed MIMO (DMIMO), which exploits spatial diversity by densifying the BSs instead of the antenna units in each array as in classical MMIMO/UMMIMO. To minimize the cross-interference among adjacent BSs and maximize the throughput in DMIMO networks, coordinated multi-point (CoMP) technologies shall be invoked. CoMP allows different BSs to be clustered into small groups, and to coordinately optimize their user association and beamforming within each group. More specifically, there are two principles of CoMP: joint transmission (JT), where multiple BSs transmit the same signal simultaneously to the same user equipment (UE), and coordinated scheduling and beamforming (CSCB), where each BS sends a different signal and the signals are combined at the UE. An example of applying CoMP in the THz band is presented in [482], which combines joint power allocation and quantized co-phasing schemes to maximize the aggregated data rate.
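The following tiny sketch contrasts the received SNR of joint transmission, where the contributions of the coordinated BSs add coherently under ideal co-phasing, with that of a single serving BS whose neighbours are muted by coordinated scheduling. The per-BS channel amplitudes and the ideal-co-phasing assumption are illustrative.

```python
import numpy as np

N0 = 1.0
h = np.array([0.9, 0.6, 0.4])          # amplitude gains from three coordinated BSs (assumed)

# Joint transmission: all BSs send the same symbol, amplitudes add coherently
snr_jt = np.abs(h.sum()) ** 2 / N0

# Single serving BS with the other two muted by coordinated scheduling
snr_cs = np.abs(h[0]) ** 2 / N0

print("JT SNR: %.2f (%.2f dB)" % (snr_jt, 10 * np.log10(snr_jt)))
print("single-BS SNR: %.2f (%.2f dB)" % (snr_cs, 10 * np.log10(snr_cs)))
```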
Since the late 2010s, the concepts of DMIMO and CoMP have evolved into the cell-free network (CFN) paradigm, where all UEs in an area are jointly served by numerous single-antenna BSs in a CoMP manner [9]. While CFN has been well studied at mmWave frequencies, its applicability in the THz band still remains under-studied [102]. Pioneering work was reported in 2022 by Abbasi and Yanikomeroglu [483], considering an NTN scenario. In some research works [484], multi-connectivity also refers to establishing connections in different communications bands, e.g., transmitting control signaling in the sub-6 GHz band while delivering data in the THz band (and also delivering data in the lower band when the THz band is in an outage).
### _Channel Awareness for THz Systems and Networks_
While modern wireless data transmission technologies generally rely on the knowledge of channel state to achieve satisfactory performance, the acquisition of accurate CSI can be a critical challenge for THz systems and networks. First, the pilot symbols can be easily blocked due to the susceptibility of THz links to blockage, leading to a low efficiency of classical channel estimation methods. Second, THz channels are selective regarding many different parameters, e.g., time, frequency, beam pattern, beam direction, polarization, etc. Therefore, it takes much effort to comprehensively measure the CSI of a THz channel, in addition to a significant overhead to encode the high-dimensional and sparse CSI. Furthermore, similarly to the pilots, the CSI report from UEs to the network can also be blocked if transmitted in the THz band itself [66].
Regarding these challenges, out-of-band channel estimation emerges as a promising solution. This involves estimating the CSI of THz channels using channel measurements at lower frequencies, e.g., in the sub-6 GHz and/or mmWave bands, leveraging the potential spatial correlation among them. To assess the feasibility of this approach, the authors of [485] and [486] studied the spatial similarity among the THz, mmWave, and sub-6 GHz bands based on point cloud ray-tracing simulation and field measurements. Their results support the use of an out-of-band beam search strategy, not only in LoS scenarios but even in NLoS ones, when using well-designed antenna patterns in specific frequency bands. Meanwhile, Peng _et al._ demonstrated the feasibility of out-of-band channel estimation and beam searching with both ray-tracing simulations [487] and real-world experiments [488].
However, the exploitation of channel similarity for THz communications and sensing still faces technical challenges. First, the difference in the size of the antenna array within the same aperture leads to a mismatch in the beamwidth between the lower frequencies and the THz signals [66], which is proven to have a stronger impact on channel similarity than the frequency gap itself [486]. Second, the correlation matrix is difficult to estimate, considering its large size and the small dimension of the antenna arrays measuring at lower frequencies [66]. Third, despite the feasibility of out-of-band estimation for static THz channels, dynamics such as user mobility, scatterer mobility, and blockages further raise the difficulty of this task [485].
## IX Integrated THz Communications and Sensing
The novel concepts of "network as a sensor" and "Internet of Senses" become unprecedentedly essential in the upcoming 6G and beyond cellular networks so as to support a multitude of emerging use cases [19]. The two major functionalities, i.e., sensing (including localization and imaging) and communications, will be merged, synergized, and integrated, benefiting from each other rather than competing for network resources. In other words, future base stations are supposed to provide not only legacy communications services but also localization, sensing, and even electromagnetic imaging capabilities, acting as multi-functional ISAC transceivers [489, 490].
At the physical layer, ISAC broadly encompasses two widely adopted paradigms, i.e., radar-communications coexistence (RCC) and dual-functional radar-communications (DFRC) [491]. It aims for enhanced spectral and energy efficiency, reduced hardware cost, and decreased power consumption as well as deployment and computational complexity. In the literature, THz sensing, imaging, localization, and communications were treated separately in [492, 493, 494, 495, 496, 497]. Different from these, in this section we provide a holistic survey of the recent activities in integrated THz communications and sensing. Special attention is paid to use cases and KPIs, waveform design, algorithm development, RIS-boosted ISAC, and potential challenges and solutions. The architectural overview of THz ISAC is depicted in Fig. 10, and the selected major contributions related to THz ISAC are summarized in Table VII.
### _THz-ISAC Use Cases and KPIs_
Fig. 10: An architectural overview of THz ISAC.

Localization and sensing, including determining the 2D/3D location and EM properties of objects, alongside multi-scale communications, enable a multitude of emerging use cases, which may have different functional and non-functional QoS requirements in terms of accuracy, range, latency, velocity, update rate, reliability, and availability. The use case families consist of various vertical applications, e.g., massive twinning, immersive telepresence, wireless extended reality (XR), cooperative robots, THz internet-of-things (Tera-IoT), local trust zones, vehicular communication and radar sensing [511], and THz integrated access and backhaul (IAB). In general, these fall into the category of data-demanding and delay-sensitive applications. However, the QoS requirements differ from one use case to another. For instance, as two sub-categories of massive twinning, manufacturing has more demanding requirements on accuracy, data rate, latency, update rate, reliability, and availability than smart city [19]. A full list of selected use cases and their corresponding performance metrics can be found in [18, 19, 33, 108, 498, 500, 512, 513, 514, 515, 516, 517, 518, 519, 520].
### _THz-ISAC Algorithm Development_
In general, ISAC algorithms fall into three categories, i.e., data-driven AI-based approaches, model-based approaches, and hybrid approaches (a combination of the former two). AI techniques rely on large-volume data sets for training customized neural network (NN) models for sensing, localization, and signal detection, while tackling the mathematically intractable non-linearity issues from, e.g., phase noise and offset, the power amplifier, and mutual coupling. On the contrary, the majority of model-based ISAC algorithms need to harness well-justified domain knowledge and modeling, such as the geometric relationship among the transceivers and the environmental objects, and take full advantage of channel sparsity, in the form of rank deficiency of the channel matrix or a limited number of resolvable paths, so as to obtain satisfactory performance.
Under the framework of THz ISAC, joint data detection (signal recovery) and sensing parameter estimation are conducted with multi-task NNs in [516]. In a broad sense, the ML roles on ISAC can be classified into three categories:
* joint sensing and communication (JSAC)
* sensing-aided communications [39, 40]
* communication-aided sensing [41].
To be specific, the first category includes the following activities: JSAC waveform design, spatial beam pattern design, inter- and self-interference cancellation, resource allocation, etc. Without any doubt, communications and sensing can be mutually beneficial for each other. In the category of sensing-aided communication, sensing information (treated as prior information), e.g., the location of the transmitter, receiver, and environmental objects, can be leveraged for enhancing the beam prediction/alignment and reducing the overhead of beam training as well as channel sounding [497, 503]. In the dynamic scenario where the user is under mobility, regardless of low or high velocity, such sensing information can be utilized for predicting potential blockages and enabling smooth handovers [23]. Similarly, communications signals can also be exploited to boost the sensing performance during the data transmission phase. The back-scattered data signals can gradually refine/improve the sensing parameter estimation, similar to data-aided channel estimation in the literature [519].
The DL algorithms, alongside other counterparts, e.g., deep reinforcement learning and transfer learning, pave the way for integrated detectors and estimators for both communications and sensing, in terms of, e.g., sensing parameter estimation, interference mitigation/cancellation, beam tracking/prediction, and network resource allocation/management [501, 504]. Meanwhile, they can successfully tackle the mathematically intractable non-linearity issues and hardware impairments in ISAC systems [501]. Furthermore, some latent features related to sensing parameters can be more readily learned and extracted by the adoption of DL algorithms.
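As a toy illustration of this multi-task idea (and not the specific architecture of [516]), the following sketch uses a shared encoder that feeds a symbol-detection head and a sensing-parameter regression head; all layer sizes, the loss weighting, and the synthetic data are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ISACMultiTaskNet(nn.Module):
    """Illustrative multi-task network: a shared encoder feeds two heads, one
    for data (symbol) detection and one for sensing-parameter regression."""
    def __init__(self, n_in=64, n_hidden=128, n_symbols=16, n_params=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
        )
        self.detection_head = nn.Linear(n_hidden, n_symbols)  # symbol classification
        self.sensing_head = nn.Linear(n_hidden, n_params)     # e.g. (delay, Doppler)

    def forward(self, x):
        z = self.encoder(x)
        return self.detection_head(z), self.sensing_head(z)

# Joint training minimises a weighted sum of the two task losses (toy data only).
model = ISACMultiTaskNet()
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
x = torch.randn(32, 64)                  # received-signal features (synthetic)
sym = torch.randint(0, 16, (32,))        # ground-truth symbol indices (synthetic)
par = torch.randn(32, 2)                 # ground-truth sensing parameters (synthetic)
logits, est = model(x)
loss = ce(logits, sym) + 0.5 * mse(est, par)
loss.backward()
```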
In the DFT-s-OTFS system [502], a two-stage sensing parameter estimation approach was proposed, i.e., coarse on-grid search in the first stage followed by refined off-grid search in the second stage for extracting the sensing parameters. Under the framework of ISAC, data detection and sensing parameter estimation can be performed in an iterative manner until a certain preset stopping criterion is reached. Besides, ISAC performance can be further enhanced by multi-domain cooperation through joint active and passive sensing, and multi-user and multi-frequency operations [503]. The tensor decomposition approach is capable of leveraging the channel sparsity and guaranteeing a unique solution for each environmental sensing parameter without any ambiguity [513]. Such sensed information can then be utilized to reconstruct a high-resolution indoor map to further boost the prediction of blockages and the availability of the LoS path, and reduce the beam tracking frequency. Thus, higher spectrum efficiency in data transmission can be achieved accordingly.
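The coarse-to-fine idea can be illustrated on a minimal stand-in problem — estimating one normalised Doppler/frequency from noisy samples — where the first stage is an on-grid FFT search and the second stage is an off-grid refinement around the coarse estimate; the signal model and numbers below are illustrative and not those of [502].

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy two-stage estimation: coarse on-grid (FFT bins), then off-grid refinement.
rng = np.random.default_rng(0)
N, f_true = 256, 0.1234                      # illustrative values
n = np.arange(N)
x = np.exp(2j * np.pi * f_true * n) \
    + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Stage 1: on-grid search over FFT bins of width 1/N.
f_grid = np.fft.fftfreq(N)
f_coarse = f_grid[np.argmax(np.abs(np.fft.fft(x)))]

# Stage 2: off-grid refinement of the periodogram within one bin of the coarse estimate.
cost = lambda f: -np.abs(np.sum(x * np.exp(-2j * np.pi * f * n))) ** 2
res = minimize_scalar(cost, bounds=(f_coarse - 1 / N, f_coarse + 1 / N), method="bounded")
print(f"coarse = {f_coarse:.4f}, refined = {res.x:.6f}, true = {f_true}")
```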
The traditional model-based algorithms adopt compressive sensing techniques, either on-the-grid, off-the-grid, or the combination of the former two (e.g., in [502]), for extracting channel and sensing parameters by taking advantage of channel sparsity [520, 521, 522]. Model-driven end-to-end learning, belonging to the category of hybrid approaches, for joint single target sensing and MISO communications was studied in [505]. In particular, the authors jointly consider precoder design at the transmitter and target AoA estimation at the receiver while accounting for the hardware impairment.
### _RIS-Boosted THz-ISAC_
With the newly-introduced capability of manipulating the radio propagation environment, the RIS is able to expand the communications coverage and enhance the sensing performance [491, 523]. The potential roles that can be played by an RIS are multi-fold: scattering, reflection, refraction, absorption, polarization, and diffraction. With all the preceding degrees of freedom (DoFs), intelligent, programmable wireless propagation environments can be established for carrying out different tasks, e.g., sensing, communications, localization, and imaging. The RIS can enrich the LoS availability by establishing a virtual one when the real one is suffering from temporary blockage, which frequently occurs at THz frequencies.
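The basic co-phasing mechanism behind these gains can be sketched in a few lines: when the \(n\)-th reflection phase cancels the phase of the cascaded BS-RIS-user coefficient, the \(N\) reflected paths add coherently; the i.i.d. Rayleigh cascaded channel below is assumed purely for illustration.

```python
import numpy as np

# Toy RIS co-phasing: restore a link whose direct LoS path is blocked.
rng = np.random.default_rng(1)
N = 256                                                                   # RIS elements
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # BS -> RIS
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # RIS -> user

theta_opt = -np.angle(h * g)                       # co-phasing reflection coefficients
y_opt = np.sum(h * np.exp(1j * theta_opt) * g)
y_rand = np.sum(h * np.exp(1j * rng.uniform(0, 2 * np.pi, N)) * g)

# Coherent combining gives an O(N^2) power gain versus O(N) for random phases.
print("optimised / random received power:", np.abs(y_opt) ** 2 / np.abs(y_rand) ** 2)
```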
The various benefits of integrating RIS into ISAC were discussed in [524]. The gains against the RIS-free counterpart heavily rely on the cross-correlation between the sensing and communications channels. The more the mutual coupling, the more gain can be accomplished in terms of ISAC performance. By introducing the RIS, enhanced flexibility and adaptation to channel dynamics can be achieved by altering the coupling level of these channels [524]. The importance of tight coupling of communications and localization was also emphasized in [525] for the purpose of harnessing the full potential of RISs. That is to say, the brand-new concept of simultaneous localization and communications (SLAC) requires smart RIS control, co-design of communications and localization resources, and a flexible trade-off and reinforcement between the two functionalities.
In the rich RIS-boosted ISAC literature, various optimization problems are formulated with different objectives along with different constraints. These works can be cast into three different classes:
* sensing-centric design [506]
* communication-centric design [507]
* joint design and optimization [508, 20].
For the first category, the objective function is sensing-oriented, while the communications metrics are taken as constraints. For instance, the authors in [506] maximize the SNR at the radar while considering a communications SNR constraint. By addressing this optimization problem, semidefinite relaxation (SDR) along with bisection search is considered for transmit beamforming design while majority-minimization is considered for RIS design. With respect to the second category, the reference [507] takes interference among the communications users as the objective while treating the desired mean square error (MSE) of DoA estimation as a constraint. In terms of the last category, the weighted sum of two objectives, one for communications and the other for sensing, are usually considered [508]. All the categories share some common constraints, e.g., individual transmit power, sum transmit power, hardware (especially for RIS, e.g., phase quantization, constant modulus of amplitude), etc. A holistic comparison among the three classes can be referred to [499, 21].
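For concreteness, a schematic sketch of the first (sensing-centric) class is given below in a generic SDR form — maximise the radar SNR subject to a communications-SNR constraint and a transmit-power budget — rather than the exact formulation of [506]; the channel vectors, power budget, threshold, and the use of the cvxpy package are all assumptions made for illustration.

```python
import numpy as np
import cvxpy as cp

# Schematic SDR for a sensing-centric transmit design (toy, randomly drawn data).
rng = np.random.default_rng(2)
n = 8
a_r = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # radar steering vector
h = rng.standard_normal(n) + 1j * rng.standard_normal(n)     # communications channel
A = np.outer(a_r, a_r.conj())                                 # radar SNR matrix
B = np.outer(h, h.conj())                                     # comms SNR matrix
P, gamma = 1.0, 2.0                                           # power budget, comms target

X = cp.Variable((n, n), hermitian=True)                       # relaxation of X = w w^H
constraints = [X >> 0,
               cp.real(cp.trace(B @ X)) >= gamma,             # communications-SNR constraint
               cp.real(cp.trace(X)) <= P]                     # transmit-power budget
prob = cp.Problem(cp.Maximize(cp.real(cp.trace(A @ X))), constraints)
prob.solve()

# Rank-one beamformer extracted from the principal eigenvector of the SDR solution.
eigval, eigvec = np.linalg.eigh(X.value)
w = np.sqrt(max(eigval[-1], 0.0)) * eigvec[:, -1]
print("radar objective:", prob.value, "  ||w||^2:", np.linalg.norm(w) ** 2)
```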
Recently, a more promising type of RIS, termed as simultaneously transmitting (refracting) and reflecting reconfigurable intelligent surface (STAR-RIS), was introduced, which is able to offer additional benefits thanks to its inherent dual-mode operation and full-dimensional coverage [526, 527]. The STAR-RIS can concurrently reflect and refract the incident signals towards multiple desired MSs. Because of this, the STAR-RIS can further boost the ISAC performance compared to the sole-reflection-type RIS [528, 509, 529]. An outdoor BS is capable of providing both communications and sensing services to the users located indoors and outdoors by installing a STAR-RIS on a transparent glass window [526, 527].
### _Challenges and Solutions for THz-ISAC_
The open problems for THz-ISAC are listed and discussed in [516, 19]. For example, waveform design should be customized depending on the sensing applications. Dynamic beamforming control faces great challenges since the beam is narrow and highly directional [33]. As a consequence, the probability of beam misalignment can be inevitably high. A robust design of candidate beams for communications purposes requires a wider beam width. However, to enhance the sensing resolution and accuracy, narrow beams are preferred. Multiple concurrent beams, comprising one fixed sub-beam for point-to-point communications and multiple time-varying sub-beams for sensing purposes, can achieve a well-balanced performance between communications and sensing [108]. However, multiple simultaneous beams result in degraded beamforming gains.
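A minimal sketch of such a multi-beam transmission is given below for an assumed half-wavelength uniform linear array: the transmit weight vector superposes one fixed communications sub-beam with several sensing sub-beams, and the printed gains make the resulting beamforming-gain loss explicit; the angles and power split are illustrative assumptions.

```python
import numpy as np

# Toy multi-beam synthesis: one fixed comms sub-beam plus sensing sub-beams.
N = 64
n = np.arange(N)

def steering(theta_deg):
    """Unit-norm steering vector of a half-wavelength-spaced ULA."""
    return np.exp(1j * np.pi * n * np.sin(np.deg2rad(theta_deg))) / np.sqrt(N)

theta_comm = 10.0                    # fixed communications beam (assumed)
theta_sense = [-40.0, -20.0, 35.0]   # sensing beams for one snapshot (assumed)
rho = 0.8                            # fraction of power on the comms beam (assumed)

w = np.sqrt(rho) * steering(theta_comm)
for th in theta_sense:
    w = w + np.sqrt((1 - rho) / len(theta_sense)) * steering(th)
w = w / np.linalg.norm(w)            # normalise total transmit power

scan = np.linspace(-90, 90, 721)
af = np.array([np.abs(np.conj(steering(t)) @ w) ** 2 for t in scan])
for t in [theta_comm] + theta_sense:
    print(f"gain towards {t:+5.1f} deg: {af[np.argmin(np.abs(scan - t))]:.3f}")
# Splitting power across beams is exactly the beamforming-gain degradation noted above.
```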
Imperfections, resulting from IQ imbalance, PA nonlinear distortions, and phase noise at the local oscillator, need to be compensated for when designing robust THz-ISAC algorithms. The wide-band channel becomes highly selective with high Doppler spread, which may break the orthogonality of OFDM transmission and incur inter-carrier interference [491, 510]. Besides, the near-field propagation, where channel sparsity vanishes in the angular domain, makes the beamforming design intractable. The beam squint effect makes the designed beam deviate from the exact one, resulting in reduced array gain and performance degradation [510].
The beam squint and split effects become more obvious as the carrier frequency and bandwidth increase, causing significant performance degradation in sensing and communication. As examined in [491] for a broadside target, the beam split can reach as much as \(4^{\circ}\) at 0.3 THz with 30 GHz bandwidth, while it is only \(1.4^{\circ}\) at 60 GHz with 2 GHz bandwidth. This effect should be mitigated and compensated for when designing the beamforming patterns, e.g., via DPP [530]. Due to user mobility and frequent blockage, beam misalignment occurs when the LoS path is unavailable between the BS and MS. Provided that the user can be tracked and blockage can be predicted in advance, beam misalignment can be avoided. However, this requires high-precision sensing information. In line with the 5G CSI acquisition signals, the authors in [531] adopt the synchronization signal block (SSB) for blockage detection and the reference signal (RS) for user tracking.
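A rough feel for these numbers can be obtained from the usual phase-shifter array model, in which phases set at the centre frequency \(f_{c}\) make the beam at frequency \(f\) satisfy \(\sin\theta_{f}=(f_{c}/f)\sin\theta_{0}\); the steering angle assumed below is an arbitrary illustration, so the printed values only approximate the figures quoted from [491].

```python
import numpy as np

# Beam split across the band edges for a phase-shifter array (toy model).
def beam_split_deg(fc, bw, theta0_deg=30.0):   # theta0_deg is an assumed steering angle
    s0 = np.sin(np.deg2rad(theta0_deg))
    edges = np.array([fc - bw / 2, fc + bw / 2])
    theta_edges = np.rad2deg(np.arcsin(np.clip(fc / edges * s0, -1.0, 1.0)))
    return abs(theta_edges[0] - theta_edges[1])

print("0.3 THz, 30 GHz bandwidth:", round(beam_split_deg(300e9, 30e9), 2), "deg")
print("60 GHz,   2 GHz bandwidth:", round(beam_split_deg(60e9, 2e9), 2), "deg")
```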
Only when all the above-mentioned challenges are addressed in the forthcoming years can the vision that everything will be sensed, connected, and intelligent be fulfilled.
## X THz Trials and Experiments
In order to give readers an insightful view of the current status of the practical use of THz communications and sensing towards 6G and beyond, we summarize state-of-the-art THz trials and experiments worldwide in this section, where the achieved data rates at the certain THz bands with specific features are surveyed.
In the past decade, the electronic mixing technology was widely applied to generate high-frequency THz signals by up-converting a low-frequency microwave signal, as the traditional way to realize THz transmission as listed in Table VIII [532, 534, 535]. One of the most remarkable approaches was done by Bell Labs in 2011, where THz radiation at 625 GHz was generated by using an all-solid-state electric mixer. It achieved a data rate of 2.5 Gbps at a distance of 0.2 m under the transmission power of 1 mW [532]. In 2015, the researchers at the University of Stuttgart in Germany successfully transmitted 240 GHz THz signals to the receiver at a distance of 850 m. The trial achieved a peak data rate of 64 Gbps using quadrature phase-shift keying (QPSK) and 8-ary phase-shift keying (8PSK) modulation in a single-channel approach without the use of spatial diversity [534]. In the year 2017, a research team from the China Academy of Engineering Physics achieved ultra-long-distance THz wireless communications over up to 21 km and realized single-channel transmission speed up to 5 Gbps, taking advantage of two Cassegrain antennas with 50 dBi gain each [535].
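A back-of-the-envelope link budget shows why such high-gain antennas are indispensable over these distances; the sketch below assumes a 0.14 THz carrier and counts free-space path loss only (no molecular absorption, rain, or pointing loss), so it is indicative rather than a reproduction of the link budget in [535].

```python
import numpy as np

def fspl_db(d_m, f_hz):
    """Free-space path loss 20*log10(4*pi*d*f/c) in dB."""
    return 20 * np.log10(4 * np.pi * d_m * f_hz / 3e8)

d, f = 21e3, 0.14e12          # 21 km link; the 0.14 THz carrier is an assumed value
loss = fspl_db(d, f)
print(f"FSPL over {d/1e3:.0f} km at {f/1e9:.0f} GHz: {loss:.1f} dB")
print(f"Residual loss with two 50 dBi antennas: {loss - 100:.1f} dB")
```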
Because of the inherent properties of electronic devices, the parameters of high-frequency electronic devices gradually approach the theoretical limit, with relatively lower bandwidth and a limited transmission rate. Recently, much attention was
shifted to the photonics-assisted heterodyne beating technique for higher data rates and better signal quality, where the rates of THz transmission are able to reach hundreds of Gbps or even Tbps [533, 536, 537, 538, 539, 540]. It should be pointed out that the THz signal power generated by the photonics-assisted heterodyne beating method is usually limited to the mW level because of the lower responsivity of the uni-traveling carrier photodiode (UTC-PD), resulting in a limited transmission distance. Therefore, some researchers utilized high-gain THz amplifiers or high-gain lens antennas to extend distances to 100 m. In early 2013, the researchers in [533] utilized the large frequency range in the THz window between 200 GHz and 300 GHz to implement a single-input single-output (SISO) wireless 100 Gbps link with a carrier frequency of 237.5 GHz over a distance of 20 m. Several years later, a team from Fudan University successfully applied \(2\times 2\) MIMO and wavelength division multiplexing (WDM) technologies to THz signal transmission, achieving a data rate of 120 Gbps by using QPSK modulation [536]. Meanwhile, some researchers at Zhejiang University in China achieved THz signal transmission of 600 Gbps using 64QAM multi-carrier modulation [537]. However, the distances of THz signal transmission of the
above two approaches are only 1.42 m and 2.8 m, respectively.
In the past three years, some research teams have presented prominent improvements in THz communications. The wireless transmission distances were effectively extended to more than 100 m with the assistance of high-gain THz amplifiers or high-gain lens antennas. In 2020, a team at the Karlsruhe Institute of Technology (KIT) took advantage of THz amplifiers and the Kramers-Kronig method for simplifying the design of the receiver and launched an offline multi-carrier THz system. It offers a peak data rate of 115 Gbps at a carrier frequency of 300 GHz over a distance of 110 m [538]. One year later, Fudan University successfully transmitted a 44.8 Gbps 64QAM-modulated signal over a distance of 104 m without using the THz amplifier but utilizing both suitable dielectric lenses and digital signal processing (DSP) algorithms [539]. In the same year, Yannik Horst _et al._[540] from Switzerland demonstrated the transparent optical-THz-optical link, providing a transmission rate of 240 Gbps over a distance up to 115 m.
With the objective of achieving full-coverage and low-cost deployment towards future 6G mobile communications, the priority of the hybrid optoelectronic down-conversion solution was presented in [541] and [542], where a novel fiber-THz-fiber seamlessly converged real-time architecture was successfully demonstrated. It adopts both dual-polarization photonic up-conversion for THz signal generation and hybrid optoelectronic down-conversion for THz reception, by thoroughly revising commercial digital coherent optical modules. In the case of hybrid channel transmission with two hops consisting of a 20 km-long fiber and a 1 m-long THz wireless link, a THz signal with a net rate of 206.25 Gbps was successfully transmitted in real time [542]. It is also pointed out that the THz phased array techniques are key to realizing 6G THz mobile communications and sensing, which meet the needs of application scenarios such as multiple users and beam tracking.
In addition to the excellent demonstrations and validations that have been achieved by research teams around the world, some equipment suppliers and organizations have also presented great advances in THz commercialization. NYU WIRELESS is currently focusing on sub-THz bands at 140 GHz, 220 GHz, and higher. Radio-frequency integrated circuit (RFIC) probe stations working up to 220 GHz and channel sounders for propagation measurements at 140 GHz [92] are provided by Keysight Technologies. Keysight has also closely cooperated with Nokia Bell Labs on a sub-THz testbed, which was chosen to verify the performance of transceiver modules, power amplifiers, and antennas under both linear and nonlinear conditions. Recently, the Huawei 6G research team has developed and demonstrated a THz integrated sensing and communications (THz-ISAC) prototype. Using wireless electromagnetic waves, the prototype can sense and produce images of blocked objects with millimeter-level resolution and communicate at an ultra-high rate of 240 Gbps, opening up new service possibilities for 6G and beyond systems [543].
## XI Conclusions
In summary, the upcoming 6G and beyond cellular systems are envisioned to exploit the THz band beyond 100 GHz, which not only offers an abundant amount of spectral resources for globally ubiquitous, ultra-high-rate, super-reliable, hyper-low-latency, massive-density telecommunications services but also empowers high-resolution cognition through THz sensing, positioning, and imaging. The use of THz frequencies will bring novel applications such as tera-bits-per-second (Tbps) hot spots or links and, in addition, disruptive uses like nano-scale networks and on-chip communications. Despite its high potential, we do not expect the THz band to totally replace the sub-6GHz and mmWave bands, which have been employed as the basis of previous generations of cellular communications networks. Instead, the THz band will most probably be used as a complementary resource to aid the success of low-frequency bands in future generations of cellular systems. Meanwhile, there is still a tremendous amount of work to be done in terms of characterizing and modeling THz channels, developing affordable, usable THz antennas and devices, designing novel algorithms for long-range THz signal transmission, proposing efficient protocols for flexible THz networking, and elaborately considering its synergy with other 6G-enabling technologies. It is hoped that this survey provides researchers with a holistic view of all technical aspects and issues required to design and build THz communications and sensing for 6G and beyond from an application and implementation perspective. Although there is a long journey to go before the success of THz communications and sensing in 6G and beyond cellular systems, this survey may help speed up the research endeavors.
List of Acronyms
**RAN**: radio access network
**6G**: sixth generation
**1G**: first generation
**2D**: two-dimensional
**3D**: three-dimensional
**3G**: third generation
**3GPP**: Third Generation Partnership Project
**4G**: fourth generation
**5G**: fifth generation
**THz**: terahertz
**UAV**: unmanned aerial vehicle
**UE**: user equipment
**ULA**: uniform linear array
**UPAs**: uniform planar arrays
**UMMIMO**: ultra-massive multi-input multi-output
**WCDMA**: Wideband Code-Division Multiple Access
**WRC**: World Radiocommunication Conference
**ITU-T**: International Telecommunication Union - Telecommunication Standardization Sector
**ITU-R**: International Telecommunication Union - Radiocommunication Sector
**NGMN**: Next Generation Mobile Networks
**LEO**: low Earth orbit
**AI**: artificial intelligence
**AoSA**: array of subarrays
**AMPS**: Advanced Mobile Phone System
**GSM**: Global System for Mobile Communications
**LTE-Advanced**: Long-Term Evolution Advanced
**LoS**: line-of-sight
**MIMO**: multi-input multi-output
**MMIMO**: massive multi-input multi-output
**KPI**: key performance indicator
**NLoS**: non-line-of-sight
**BDCM**: beam-domain channel model
**CSI**: channel state information
**NOMA**: non-orthogonal multiple access
**ML**: machine learning
**PHY**: physical
**MC**: multi-connectivity
**CMC**: closest line of sight multi-connectivity
**RMC**: reactive multi-connectivity
**DMIMO**: distributed MIMO
**CoMP**: coordinated multi-point
**JT**: joint transmission
**CSCB**: coordinated scheduling and beamforming
**NTN**: non-terrestrial network
**CFN**: cell-free network
**SIC**: successive interference cancellation
**FBL**: finite blocklength
**CMTC**: critical machine-type communications
**RIS**: reconfigurable intelligent surfaces
**CCU**: cell center user
**CEU**: cell edge user
**CNOMA**: cooperative non-orthogonal multiple access
**SWIPT**: simultaneous wireless information and power transfer
**GEO**: geostationary Earth orbit
**RF**: radio frequency
**HAP**: high-altitude platform
**DNN**: deep neural network
**ISAC**: integrated sensing and communications
**DAR**: differential absorption radar
**FC**: fully-connected
**DAoSA**: dynamic array-of-subarrays
**FSPL**: free-space path loss
**SPP**: surface plasmon polariton
**WSMS**: widely-spaced multi-subarray
**TTD**: true-time-delay
**mmWave**: millimeter wave
**DKL**: deep kernel learning
**GPR**: Gaussian process regression
**TDMA**: time division multiple access
**EC**: European Commission
**NR**: new radio
**1024QAM**: 1024-ary quadrature amplitude modulation
**FCC**: Federal Communications Commission
**IR**: infrared
**IMT**: International Mobile Telecommunications
**QCL**: quantum cascade laser
**GaN**: Gallium Nitride
**InP**: Indium Phosphide
**SiGe**: Silicon Germanium
**CMOS**: complementary metal-oxide-semiconductor
**HBT**: heterojunction bipolar transistor
**LO**: local oscillator
**FET**: field-effect transistor
**GaAs**: Gallium Arsenide
**LTCC**: low-temperature co-fired ceramic
**SIW**: substrate-integrated waveguide
**UTC-PD**: uni-traveling-carrier photodiode
**RCC**: radar-communications coexistence
**DFRC**: dual-functional radar-communications
**EM**: electromagnetic
**QoS**: quality of service
**XR**: extended reality
**Tera-IoT**: THz internet-of-things
**IAB**: integrated access and backhaul
**PA**: power amplifier
**OFDM**: orthogonal frequency-division multiplexing
**DFT-s-OFDM**: DFT-spread-OFDM
**PAPR**: peak-to-average power ratio
**OTFS**: orthogonal time frequency space
**DFT-s-OTFS**: discrete Fourier transform spread OTFS
**NN**: neural network
**JCAS**: joint communications and sensing
**MISO**: multiple-input single-output
**IRS**: intelligent reflecting surfaces
**PSM**: phase-shift matrix
**CRB**: Cramer-Rao bound
**MUI**: multi-user interference
**JSAC**: joint sensing and communication
**DL**: deep learning
**CS**: channel sounder
**AoA**: angle-of-arrival
**AoD**: angle-of-departure
**DoF**: degree of freedom
**SLAC**: simultaneous localization and communications
**SNR**: signal-to-noise ratio
**SDR**: semidefinite relaxation
**DoA**: direction of arrival
**STAR-RIS**: simultaneously transmitting (refracting) and reflecting reconfigurable intelligent surface
**DPP**: delay-phase precoding
**SSB**: synchronization signal block
**RS**: reference signal
**MSE**: mean square error
**BS**: base station
**DRC**: dual-function radar and communication
**LWA**: leaky-wave antennas
**DLA**: discrete lens array
**OWC**: optical wireless communications
**MS**: mobile station
**IQ**: in-phase and quadrature
**TDD**: time-division duplexing
**EESS**: Earth Exploration Satellite Service
**AGV**: automated guided vehicle
**D2D**: device-to-device
**IoNT**: Internet of Nano-Things
**SLAM**: simultaneous localization and mapping |
2304.09320 | Oriented Colouring Graphs of Bounded Degree and Degeneracy | This paper considers upper bounds on the oriented chromatic number
$\chi_o(G)$, of an oriented graph $G$ in terms of its $2$-dipath chromatic
number $\chi_2(G)$, degeneracy $d(G)$, and maximum degree $\Delta(G)$. In
particular, we show that for all graphs $G$ with $\chi_2(G) \leq k$ where $k
\geq 2$ and $d(G) \leq t$ where $t \geq \log_2(k)$, $\chi_o(G) = 33/10(k t^2
2^t)$. This improves an upper bound of MacGillivray, Raspaud, and Swartz of the
form $\chi_o(G) \leq 2^{\chi_2(G)} -1$ to a polynomial upper bound for many
classes of graphs, in particular, those with bounded degeneracy. Additionally,
we asymptotically improve bounds for the oriented chromatic number in terms of
maximum degree and degeneracy. For instance, we show that $\chi_o(G) \leq
(2\ln2 +o(1))\Delta^2 2^\Delta$ for all graphs, and $\chi_o(G) \leq
(2+o(1))\Delta d 2^d$ for graphs where degeneracy grows sublinearly in maximum
degree. Here the asypmtotics are in $\Delta$. The former improves the
asymptotics of a results by Kostochka, Sopena, and Zhu
\cite{kostochka1997acyclic}, while the latter improves the asymptotics of a
result by Aravind and Subramanian \cite{aravind2009forbidden}. Both
improvements are by a constant factor. | Alexander Clow, Ladislav Stacho | 2023-04-18T22:08:47Z | http://arxiv.org/abs/2304.09320v3 | # Oriented colouring graphs of bounded
###### Abstract.
This paper considers upper bounds on the oriented chromatic number, \(\chi_{o}\), of graphs in terms of their maximum degree \(\Delta\) and/or their degeneracy \(d\). In particular we show that asymptotically, \(\chi_{o}\leq\chi_{2}f(d)2^{d}\) where \(f(d)\geq(\frac{1}{\log_{2}(e)-1}+\varepsilon)d^{2}\) and \(\chi_{2}\leq 2^{\frac{f(d)}{d}}\). This improves a result of MacGillivray, Raspaud, and Swartz [8] of the form \(\chi_{o}\leq 2^{\chi_{2}}-1\). The rest of the paper is devoted to improving prior bounds for \(\chi_{o}\) in terms of \(\Delta\) and \(d\) by refining the asymptotic arguments involved.
## 1. Introduction
An _oriented graph_\(G\) is a directed graph whose underlying graph is simple. Throughout this paper every graph we consider is an oriented graph, so it should be understood that stating \(G\) is a graph is synonymous with stating \(G\) is an oriented graph. We identify parameters of the underlying graph of an oriented graph \(G\) as would be normally done given simple graphs and parameters of the orientation of \(G\) as is standard with directed graphs. For example \(deg(v)\) is the degree of the vertex \(v\) independent of orientation whereas \(deg^{+}(v)\) denotes the out-degree, and \(deg^{-}(v)\) the in-degree of the vertex \(v\). Similarly, a path \(p\) in the graph need not be a directed path. When a path is directed we will call it a dipath.
An _oriented colouring_ of a graph \(G=(V,E)\) is a proper vertex colouring \(c:V\to\mathbb{N}\) such that if \((u,v),(x,y)\in E\), then
* \(c(u)=c(y)\) implies \(c(v)\neq c(x)\), and
* \(c(v)=c(x)\) implies \(c(u)\neq c(y)\).
If the image of \(c\) has cardinality \(k\), then we say \(G\) has an _oriented \(k\)-colouring_. For some examples consider Figure 1.
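To make the definition concrete, a small Python sketch of a checker is given below; the arc-list graph representation and the example colourings are illustrative choices, not part of the argument.

```python
def is_oriented_colouring(edges, c):
    """edges: arcs (u, v) of an oriented graph; c: dict vertex -> colour."""
    # proper colouring: the endpoints of every arc receive different colours
    if any(c[u] == c[v] for u, v in edges):
        return False
    # all arcs between any two colour classes must point in the same direction
    arc_colours = {(c[u], c[v]) for u, v in edges}
    return not any((b, a) in arc_colours for (a, b) in arc_colours)

# The directed triangle 0 -> 1 -> 2 -> 0 needs three distinct colours.
tri = [(0, 1), (1, 2), (2, 0)]
print(is_oriented_colouring(tri, {0: 0, 1: 1, 2: 2}))   # True
print(is_oriented_colouring(tri, {0: 0, 1: 1, 2: 1}))   # False
```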
Equivalently \(G\) has an _oriented \(k\)-colouring_ if there exists a graph \(H\) of order \(k\) and function \(h:V(G)\to V(H)\) so that for all \((u,v)\in E(G)\), \((h(u),h(v))\in E(H)\). Such a map \(h\) is called an _oriented homomorphism_. See Figure.2 for an example.
Meanwhile, the _oriented chromatic number_ of a graph \(G\), denoted \(\chi_{o}(G)\), or simply \(\chi_{o}\) when the choice of \(G\) is obvious, is the least integer \(k\) so that \(G\) has an oriented \(k\)-colouring.
This parameter was first studied by Courcelle [3] as a means to encode a graph orientation as a vertex labelling. Since its inception \(\chi_{o}\) has been extensively studied with the first major results coming from
Figure 1. Three examples of oriented colourings (consider each component as its own graph).
Raspaud and Sopena [14], who proved \(\chi_{o}(G)\leq\chi_{a}(G)2^{\chi_{a}(G)-1}\) where \(\chi_{a}(G)\) is the acyclic chromatic number of \(G\). As Borodin [2] had previously shown the acyclic chromatic number of a planar graph is at most \(5\), Raspaud and Sopena in fact proved that the oriented chromatic number of a planar graph is at most \(80\), a bound that has not been improved in the nearly \(30\) years since its publication, despite many efforts to do so [17]. Moreover, it is unknown if there exists a planar graph that requires more than \(18\) colours in an oriented colouring [10, 11, 16, 17].
Bounding the oriented chromatic number in terms of the maximum degree, \(\Delta\), was first considered by Sopena [15] who showed \(\chi_{o}(G)\leq(2\Delta-1)4^{\Delta-1}\). This was later improved by Kostochka, Sopena, and Zhu [7] to \(\chi_{o}(G)\leq 2\Delta^{2}2^{\Delta}\), later being expanded on by Aravind and Subramanian [1] who showed \(\chi_{o}\leq 16\Delta d2^{d}\) where \(d\) is the degeneracy of the graph in question. More recently, this was slightly improved to \(\chi_{o}\leq(\Delta-1)^{2}2^{\Delta}+2\) by Das, Nandi, and Sen [4] in the more general context of connected \((m,n)\)-colouring mixed graphs. In the same paper the authors also show that if \(d(G)<\Delta(G)\), then the constant term can be dropped implying \(\chi_{o}\leq(\Delta-1)^{2}2^{\Delta}\). Meanwhile, attempts to lower this bound for small values of \(\Delta\) have seen some notable progress and remain an active area of research [5, 6, 18].
For a more complete picture of the literature with regard to oriented colouring we recommend Sopena's 2015 survey paper [17].
First proposed by Chen and Wang [12] a \(2\)_-dipath colouring_ of a graph \(G=(V,E)\) is a proper vertex colouring \(c:V\to\mathbb{N}\) such that, if \((u,v),(v,w)\in E\), then \(c(u)\neq c(w)\). Equivalently \(c:V\to\mathbb{N}\) is a \(2\)-dipath colouring if and only if \(c\) is a proper colouring of \(G^{2}\) where \(G^{2}\) is formed by squaring the adjacency matrix of \(G\). Recall that \(G\) is oriented and all the edges involved are directed. The \(2\)-dipath chromatic number of a graph \(G\), denoted \(\chi_{2}(G)\) or simply \(\chi_{2}\) when our choice of \(G\) is obvious, is the least integer \(k\) such that \(G\) admits a \(2\)-dipath \(k\)-colouring.
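A companion sketch of a checker for this definition, in the same illustrative arc-list representation as above:

```python
def is_2dipath_colouring(edges, c):
    """Proper colouring with distinct colours at the ends of every dipath u -> v -> w."""
    if any(c[u] == c[v] for u, v in edges):
        return False
    out = {}
    for u, v in edges:
        out.setdefault(u, []).append(v)
    return all(c[u] != c[w] for u, v in edges for w in out.get(v, []))

path = [(0, 1), (1, 2)]                                  # the directed 2-path
print(is_2dipath_colouring(path, {0: 0, 1: 1, 2: 0}))    # False: the two ends agree
print(is_2dipath_colouring(path, {0: 0, 1: 1, 2: 2}))    # True
```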
It should be clear from this definition alone that \(\chi_{o}\) and \(\chi_{2}\) are related parameters. In particular, we can view \(\chi_{2}\) as a localized version of \(\chi_{o}\) as every oriented colouring is a \(2\)-dipath colouring and \(2\)-dipath colourings must only satisfy local constraints unlike oriented colourings. Perhaps the best example of this local versus global behaviour is that for all graphs \(G\), \(\chi_{2}(G)=\max\{\chi_{2}(C):C\) is a connected component in \(G\}\) whereas there exist graphs \(H\) such that \(\chi_{o}(H)>\max\{\chi_{o}(C):C\) is a connected component in \(H\}\). For an example of such a graph \(H\) see Figure.3. This relationship between oriented and \(2\)-dipath colouring is underscored by a result of MacGillivray and Sherk [9], which characterizes if \(\chi_{2}\leq k\) in terms of oriented homomorphisms. The current best bounds relating \(\chi_{2}\) and \(\chi_{o}\) are \(\chi_{2}\leq\chi_{o}\leq 2^{\chi_{2}}-1\) where the lower bound is trivial and the upper bound is from [8].
In Section 2, we improve the upper bound from [8] (see Theorem 2.1) by applying an approach similar to that used in the literature to bound \(\chi_{o}\) in terms of \(\Delta\). Section 3 is dedicated to improving the upper bound \(\chi_{o}\leq 2\Delta^{2}2^{\Delta}\) from [7] for all graphs and the bound of \(\chi_{o}\leq(\Delta-1)^{2}2^{\Delta}+2\) for connected graphs (see Theorem 3.1). In Section 4, we improve the upper bound of \(\chi_{o}\leq 16\Delta d2^{d}\) from [1]. The improvements seen
Figure 3. A \(2\)-dipath colouring which is not an oriented colouring.
Figure 2. An example of an oriented homomorphism.
in Section 3 and Section 4 are by a constant factor and result from refinements of existing arguments rather than wholly new ideas on the part of the authors.
## 2. Relating \(\chi_{2}\) and \(\chi_{o}\)
In this section we will apply arguments similar to those used in the case of graphs with bounded degree to obtain an upper bound for \(\chi_{o}\) in terms of \(\chi_{2}\) (see Theorem 2.1). In order to do this we must define several pieces of notation.
Let \(G=(V,E)\) be a fixed but arbitrary graph. Let \(v\in V\) and let \(A\subset N(v)\). Suppose the vertices of \(G\) are ordered and let \(A=\{u_{1},u_{2},\ldots,u_{|A|}\}\). Then we define \(F(A,v,G)\in\{-1,1\}^{|A|}\) to be the vector with entry \(i\) equal to \(1\) if \((v,u_{i})\in E\) and equal to \(-1\) if \((u_{i},v)\in E\). The ordering on the vertices here has no significance beyond allowing us to define \(F\).
For positive integers \(t\) and \(k>1\), an orientation of the complete \(k\)-partite graph \(K:=K_{N,\ldots,N}=(P_{1},\ldots,P_{k},E)\) is _\((k,t,N)\)-full_ if for all \(i\in[k]\) and \(A\subset\cup_{j\neq i}P_{j}\) of cardinality at most \(t\) and vectors \(\mathbf{a}\in\{-1,1\}^{t}\), there exists a vertex \(v\in P_{i}\) where
\[F(A,v,K)=\mathbf{a}.\]
Observe that a \((k,t,N)\)-full graph has exactly \(kN\) vertices. Recall that \(d(G)\) or simply \(d\) denotes the degeneracy, that is the smallest integer \(k\) such that \(\delta(H)\leq k\) for all subgraphs \(H\) of \(G\).
**Theorem 2.1**.: _Let \(c=\frac{1}{\log_{2}(e)-1}+\varepsilon\) and \(f(d)\geq cd^{2}\). If \(\chi_{2}\leq 2^{\frac{f(d)}{d}}\), then for sufficiently large \(d\),_
\[\chi_{o}\leq\chi_{2}f(d)2^{d}.\]
_In particular, if \(\chi_{2}\leq 2^{cd}\), then \(\chi_{o}\leq c\chi_{2}d^{2}2^{d}\)._
To prove Theorem 2.1 we show that a \((\chi_{2},d,N)\)-full graph is a universal target for graphs with \(2\)-dipath chromatic number \(\chi_{2}\) and degeneracy \(d\). As a first step towards this, we must show that \((k,t,N)\)-full graphs of a certain order exist. To do this we proceed by the first moment method.
**Lemma 2.2**.: _Let \(k\) and \(t\) be integers, let \(c=\frac{1}{\log_{2}(e)-1}+\varepsilon\) and let \(f(t)\geq ct^{2}\). If \(k\leq 2^{\frac{f(t)}{t}}\) and \(N=f(t)2^{t}\), then for sufficiently large \(t\) there exists a \((k,t,N)\)-full graph._
Proof.: Consider a random orientation of \(K=(P_{1},\ldots,P_{k},E)\) in which each edge's orientation is chosen independently and uniformly at random. For each fixed value \(i\in\{1,2,\ldots,k\}\) and a subset \(A\subset\cup_{j\neq i}P_{j}\) satisfying \(|A|=t\), let \(X_{i,A}\) be the random variable
\[X_{i,A}:=\sum_{\mathbf{a}\in\{-1,1\}^{t}}\mathds{1}_{\forall v\in P_{i},F(A,v,K)\neq\mathbf{a}}.\]
That is, \(X_{i,A}\) counts the number of vectors \(\mathbf{a}\in\{-1,1\}^{t}\) such that no vertex in \(P_{i}\) has orientation \(\mathbf{a}\) with respect to \(A\). Observe that the orientations of \(A\) towards two distinct vertices in \(P_{i}\) are independent. Furthermore, observe that this implies \(X_{i,A}=0\) is equivalent to a random function from a domain of size \(N\) to a codomain of size \(2^{t}\) being surjective. Hence,
\[\mathbb{P}(X_{i,A}>0)\leq 2^{t}(1-2^{-t})^{N}\leq 2^{t}e^{-2^{-t}N}=2^{t}e^{-f( t)}\]
Applying the union bound,
\[\mathbb{P}(\bigcup_{i\in\{1,\ldots,k\}}\bigcup_{A\subset\cup_{j\neq i }P_{j},|A|=t}X_{i,A}>0)\leq k\binom{(k-1)N}{t}\mathbb{P}(X_{i,A}>0)\] \[\leq k\binom{(k-1)N}{t}2^{t}e^{-f(t)}\] \[\leq k\frac{(kN)^{t}}{t!}2^{t}e^{-f(t)}\] \[<k^{t+1}N^{t}e^{-f(t)}\] \[\leq k2^{f(t)}N^{t}e^{-f(t)}\]
now observe that if we apply a logarithm to the final term we are left with
\[\log_{2}(k)+f(t)+t\log_{2}(N)-\log_{2}(e)f(t)\] \[\leq\frac{f(t)}{t}+f(t)+t\log_{2}(f(t)2^{t})-\log_{2}(e)f(t)\] \[=\frac{f(t)}{t}+f(t)+t\log_{2}(f(t))+t^{2}-\log_{2}(e)f(t)\to-\infty.\]
by our choice of \(f(t)\). It follows that for all \(i\) and \(A\), \(X_{i,A}=0\) asymptotically almost surely.
With the existence of \((\chi_{2},d,N)\)-full graphs of a given order guaranteed we can proceed to prove that there is a homomorphism from \(G\) to any \((\chi_{2},d,N)\)-full graph.
Proof of Theorem 2.1.: Let \(G=(V,E)\) be a fixed but arbitrary graph and suppose \(f(d)\geq cd^{2}\) such that \(\chi_{2}(G)\leq 2^{\frac{f(d)}{d}}\). By Lemma 2.2, for sufficiently large \(d\), there is a \((\chi_{2},d,f(d)2^{d})\)-full graph \(K\). We will establish \(\chi_{o}(G)\leq\chi_{2}f(d)2^{d}\) by constructing an oriented homomorphism from \(G\) to \(K\).
Let \(c:V\to\{1,\ldots,\chi_{2}(G)\}\) be a \(2\)-dipath colouring of \(G\). We will build an oriented homomorphism \(h:G\to K\) satisfying \(h(u)\in P_{i}\) if and only if \(i=c(u)\).
Let \(v_{1},v_{2},\ldots,v_{n}\) be a degeneracy ordering of \(G\), that is \(|N(v_{j})\cap\{v_{1},\ldots,v_{j-1}\}|\leq d\) for all \(1\leq j\leq n\). Suppose we have defined \(h(v_{j})\) for all \(j<s\). Then let \(A=N(v_{s})\cap\{v_{1},\ldots,v_{s-1}\}\). By our choice of ordering \(|A|\leq d\). Now consider \(h(A)\), the image of \(A\) under \(h\). We have \(|h(A)|\leq|A|\leq d\) and by construction \(h(u)\notin P_{c(v_{s})}\) for all \(u\in A\).
Also note that as \(c\) is a \(2\)-dipath colouring of \(G\), if \(h(u)=h(w)\) for \(u,w\in A\), then the orientation of the edges \(\{u,v_{s}\}\) and \(\{w,v_{s}\}\) is the same. Let \(\mathbf{a}\in\{-1,1\}^{|A|}\) such that \(F(A,v_{s},G)=\mathbf{a}\). Then let \(\mathbf{b}\in\{-1,1\}^{|h(A)|}\) be defined by \(\mathbf{b}(h(u))=\mathbf{a}(u)\).
Given \(K\) is \((\chi_{2},d,f(d)2^{d})\)-full there exists a \(x\in P_{c(v_{s})}\) such that \(F(h(A),x,K)=\mathbf{b}\). Let \(h(v_{s})=x\).
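As an illustration of this greedy step, the following sketch embeds a toy directed \(4\)-cycle, vertex by vertex along a degeneracy ordering, into a randomly oriented complete multipartite target standing in for a genuine \((\chi_{2},d,N)\)-full graph (so a larger part size \(N\) may be needed before every step succeeds); the graph, colouring, and part sizes are illustrative assumptions.

```python
import random

random.seed(0)

def orient(u, v):
    """Random but fixed orientation of the target edge {u, v}: +1 means u -> v."""
    key = (min(u, v), max(u, v))
    if key not in orient.cache:
        orient.cache[key] = random.choice([+1, -1])
    return orient.cache[key] if u < v else -orient.cache[key]
orient.cache = {}

def embed(order, in_arcs, out_arcs, colour, N):
    """Greedily map G into parts {(colour, 0), ..., (colour, N-1)}; None on failure."""
    h = {}
    for v in order:
        prev = [u for u in in_arcs[v] + out_arcs[v] if u in h]
        for cand in ((colour[v], i) for i in range(N)):
            if all(orient(h[u], cand) == (+1 if u in in_arcs[v] else -1) for u in prev):
                h[v] = cand
                break
        else:
            return None
    return h

# Toy input: the directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0 with a 2-dipath colouring.
in_arcs = {0: [3], 1: [0], 2: [1], 3: [2]}
out_arcs = {0: [1], 1: [2], 2: [3], 3: [0]}
colour = {0: 0, 1: 1, 2: 2, 3: 3}
for N in (4, 8, 16, 32, 64):
    h = embed([0, 1, 2, 3], in_arcs, out_arcs, colour, N)
    if h is not None:
        print("embedded with part size N =", N, ":", h)
        break
```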
## 3. Graphs of Bounded Degree
The primary result of this section are asymptotic improvements of the bound \(\chi_{o}\leq(\Delta-1)^{2}2^{\Delta}+2\) from [4] for connected graphs and \(\chi_{o}\leq 2\Delta^{2}2^{\Delta}\) from [7] for general graphs. The improvements are as follows.
**Theorem 3.1**.: _If \(G\) is a graph with \(d<\Delta\) or \(G\) is connected, then for all \(\varepsilon>0\) and sufficiently large \(\Delta\), \(\chi_{o}\leq(\ln 2+\varepsilon)(\Delta-\omega)^{2}2^{\Delta}\) for any \(\omega=o(\Delta)\). If \(G\) is a disconnected \(\Delta\)-regular graph, then \(\chi_{o}\leq 2(\ln 2+\varepsilon)(\Delta-\omega)^{2}2^{\Delta}\)._
We say a tournament is \((k,t)\)-comprehensive if for all \(U\subset V(T)\) where \(|U|=k\) and \(\mathbf{a}\in\{-1,1\}^{k}\) there exist at least \(t\) vertices \(z\in V(T)\) where \(F(U,z,T)=\mathbf{a}\). This notation is introduced here for convenience and because no standard notation for this property has been adopted in the literature, but it should be understood that this property has been used extensively in connection with upper bounds on the oriented chromatic number. Specifically, this idea features prominently in [1, 4, 7].
In all of the former citations, bounds of the same order as Theorem 3.1 are proven by constructing homomorphisms to \((k,t)\)-comprehensive tournaments. Here \(k\) and \(t\) are functions of \(\Delta\) and \(d\). All proofs proceed similarly, where vertices are coloured in a degeneracy order subject to the condition that when a vertex is coloured it receives distinct colours from every already coloured vertex of distance at most \(2\) from it. This leaves \(k=d\) and \(t=\Delta d+1\) in [1] and \(k=\Delta\) and \(t=\Delta+1\) in [7]. The values of \(k\) and \(t\) in [4] are complicated by the fact that they prove a much more general colouring result, but are of the same order.
Also note that in all of these cases \((k,t)\)-comprehensiveness is formulated in a slightly different, but for our purposes, equivalent manner. That is prior papers considered the property that, for all \(U\subset V\), \(|U|=i\leq k\) and \(\mathbf{a}\in\{-1,1\}^{k}\), there are \(t(i)\) vertices \(z\in V(T)\) where \(F(U,z,T)=\mathbf{a}\). Notice that if for all \(0\leq j\leq k\), \(t(k-j)\leq 2^{j}t(k)\), then a tournament being \((k,t(k))\)-comprehensive implies the former property. It is because of this fact that we conclude the prior values of \(k\) and \(t\) are sufficient.
Our improvement of the bound from [4] uses the same techniques as [7]. The improvement comes from a more careful consideration of the asymptotics of Lemma.1 in [7] along with a trick used in [4] that reduces the problem to the case where \(d<\Delta\) for connected graphs. So the first step is proving the existence of a \((\Delta-1,\Delta+1)\)-comprehensive tournament of the desired order. In Lemma 3.2 we consider a \((k-1,k+1)\)-comprehensive tournament with the understanding that we will take \(k=\Delta\) later.
**Lemma 3.2**.: _For sufficiently large \(k\), there exists a \((k-1,k+1)\)-comprehensive tournament of order \((\ln 2+\varepsilon)(k-\omega)^{2}2^{k}\) for any \(\omega=o(k)\)._
Proof.: Let \(T=(V,E)\) be a random orientation of the complete graph on \(n\) vertices, where \(n\) is to be chosen later, such that each edge is assigned an orientation uniformly and independently. Let \(\mathbf{a}\in\{-1,1\}^{k-1}\) and \(U\subset V\) such that \(|U|=k-1\) be fixed but arbitrary. Let \(X_{U,\mathbf{a}}\) be the random variable which counts the number of vertices \(z\in V\setminus U\) such that \(F(U,z,T)=\mathbf{a}\). Then the expectation \(\mu:=\mathbb{E}(X_{U,\mathbf{a}})=(n-k+1)2^{1-k}\).
Letting \(\delta=1-\frac{k+1}{\mu}\), observe that a version of Chernoff's bound found in [13] (see Chapter 4) implies the following,
\[\mathbb{P}(X_{U,\mathbf{a}}<k+1)=\mathbb{P}(X_{U,\mathbf{a}}<(1-\delta)\mu)< exp(-\frac{\mu\delta^{2}}{2})=exp(\frac{-1}{2}(\mu-2(k+1)+\frac{(k+1)^{2}}{\mu}))\]
whenever \(\frac{k+1}{\mu}\leq 1\).
Let \(n=(\ln 2+\varepsilon)(k-\omega)^{2}2^{k}\), then for sufficiently large \(k\), \(\mu=(n-k+1)2^{1-k}=2(\ln 2+\varepsilon)(k-\omega)^{2}-o(1)\). Hence, for large \(k\),
\[\mathbb{P}(X_{U,\mathbf{a}}<k+1)<exp(-\frac{1}{2}(2(\ln 2+\varepsilon)(k- \omega)^{2}-2(k+1)+O(1)))=exp(-(\ln 2+\varepsilon-o(1))k^{2}).\]
Applying the union bound,
\[\mathbb{P}(\exists U,\mathbf{a},X_{U,\mathbf{a}}<k+1)\leq{n\choose k-1}2^{k-1 }\mathbb{P}(X_{U,\mathbf{a}}<k+1)<\frac{n^{k}}{k!}2^{k}exp(-(\ln 2+\varepsilon-o(1 ))k^{2})\]
\[\leq n^{k}exp(-(\ln 2+\varepsilon-o(1))k^{2})<k^{2k}exp(-(\ln 2+\varepsilon-o(1))k^{2}+\ln 2(k^{2}))=k^{2k}e^{-O(k^{2})}\to 0\]
as \(k\to\infty\). Thus, asymptotically almost surely a random tournament on \((\ln 2+\varepsilon)(k-\omega)^{2}2^{k}\) vertices is \((k-1,k+1)\)-comprehensive.
We can now proceed to the proof of Theorem 3.1.
Proof of Theorem 3.1.: Let \(G=(V,E)\) be a graph and let \(\omega\to\infty\) as \(\Delta\to\infty\) such that \(\omega=o(\Delta)\).
Case.1: \(d<\Delta\). Letting \(k=\Delta\), Lemma 3.2 implies that for sufficiently large \(\Delta\) there is a \((\Delta-1,\Delta+1)\)-comprehensive tournament of order \((\ln 2+\varepsilon)(\Delta-\omega)^{2}2^{\Delta}\). Let \(T\) be such a tournament. Let \(v_{1},v_{2},\ldots,v_{n}\) be
a fixed degeneracy ordering of \(V(G)\). We define \(h:V\to V(T)\) inductively on the degeneracy ordering of \(V\) satisfying
1. \(h|_{\{v_{1},\ldots,v_{i}\}}\) is a homomorphism from \(G[\{v_{1},\ldots,v_{i}\}]\) to \(T\), and
2. for all \(v_{j}\) where \(j>i\), \(h(v_{r})\neq h(v_{s})\) for all \(v_{r},v_{s}\in N(v_{j})\cap\{v_{1},\ldots,v_{i}\}\).
Suppose \(h|_{\{v_{1},\ldots,v_{i}\}}\) is already defined and let \(A=N(v_{i+1})\cap\{v_{1},\ldots,v_{i}\}\). By assumption \(|A|\leq\Delta-1\) and for all distinct \(v_{r},v_{s}\in A\), \(h(v_{r})\neq h(v_{s})\). Hence, \(|h(A)|=|A|\) implying that \(\mathbf{b}=h(\mathbf{a})\) where \(\mathbf{a}\in\{-1,1\}^{|A|}\) is well defined. By our choice of \(T\), if \(|A|=\Delta-1-a\), then there are at least \(2^{a}(\Delta+1)\) vertices \(z\in V(T)\) such that \(F(h(A),z,T)=\mathbf{b}\). As there are at most \((\Delta-|A|)(\Delta-1)\) vertices \(v_{j}\) where \(j\leq i\) and \(v_{j}\) has a common neighbour in \(\{v_{i+2},\ldots,v_{n}\}\), it follows that there is a vertex \(z\in V(T)\) as required such that for all such vertices \(v_{j}\), \(h(v_{j})\neq z\).
Let \(h(v_{i+1})=z\). Clearly, \(h|_{\{v_{1},\ldots,v_{i},v_{i+1}\}}\) is a homomorphism as required.
Case.2:\(d=\Delta\) and \(G\) is connected. Choose \(\omega_{0}\) to be a function satisfying \(\omega=o(\omega_{0})\) and \(\omega_{0}=o(\Delta)\). It is well known that a connected graph has \(d=\Delta\) if and only if it is \(\Delta\)-regular. Let \(e=(u,v)\in E\) be fixed but arbitrary and let \(H=G-e\). Then, \(H\) has \(d<\Delta\). So by the argument in Case.1, there is a homomorphism \(h_{0}\) from \(H\) to \(T_{0}\) where \(T_{0}\) is a \((\Delta-1,\Delta+1)\)-comprehensive tournament of order \((\ln 2+\varepsilon)(\Delta-\omega_{0})^{2}2^{\Delta}\).
Form \(T\) from \(T_{0}\) by adding two vertices \(x,y\) to \(T_{0}\) such that \(x\) is a twin (i.e., has the same neighbourhood and the same orientation) of \(h_{0}(u)\), \(y\) is a twin of \(h_{0}(v)\), and \((x,y)\in E(T)\). Next, let \(h:V\to V(T)\) be defined by \(h(u)=x\), \(h(v)=y\), \(h(w)=h_{0}(w)\) for all \(w\in V\setminus\{u,v\}\). Clearly, \(h\) is a homomorphism from \(G\) to \(T\). As \(T\) is of order \((\ln 2+\varepsilon)(\Delta-\omega_{0})^{2}2^{\Delta}+2\leq(\ln 2+\varepsilon)(\Delta-\omega)^{2}2^{\Delta}\) for large \(\Delta\), the result follows as \(\Delta\to\infty\).
Case.3: \(G\) is a disconnected, \(\Delta\)-regular graph. By Lemma 3.2, letting \(k=\Delta+1\), there is a tournament \(T\) of order \(2(\ln 2+\varepsilon)(\Delta+1-\omega)^{2}2^{\Delta}\). Applying an argument similar to that in Case.1, we can see that \(G\) has a homomorphism to \(T\) and by choosing \(\omega_{0}\) as in Case.2, we can refine our choice of \(T\) to give the desired bound.
## 4. Graphs with \(d<<\Delta\).
Given that the improvement in the previous section is only concerned with bounding \(\chi_{o}\) in terms of \(\Delta\), a natural question to ask is what happens when the degeneracy \(d\) is much smaller than the max degree \(\Delta\). This is particularly of interest to the authors as little attention has been paid to this particular topic beyond the work done in [1]. As before, we focus on optimizing the asymptotics of prior work, specifically [1], thereby improving their bound of \(\chi_{o}\leq 16\Delta d2^{d}\) by a constant factor (see Theorem 4.1).
**Theorem 4.1**.: _Let \(d=\alpha\Delta\) for \(\alpha<1\). Then, for sufficiently large \(\Delta\), \(\chi_{o}\leq c\Delta d2^{d}-\omega\) for all \(c>2\alpha\ln 2+1\) and \(\omega=o(\Delta d2^{d})\). In particular this implies that if \(d=o(\Delta)\), then \(\chi_{o}=(1+o(1))\Delta d2^{d}-\omega\)._
Of course, if \(d=\Delta\), this bound is worse than that of Theorem 3.1, however it is easy to see that when \(d<<\Delta\) this is a vast improvement. The proof of Theorem 4.1 is identical to that of Theorem 5.1 in [1] except we show in Lemma 4.2 that there is a \((d,\Delta d+1)\)-comprehensive tournament of the required order rather than of order \(16\Delta d2^{d}\). To see this, letting \(k=\Delta\) and \(d=\alpha\Delta\), observe the following lemma.
**Lemma 4.2**.: _Let \(\alpha<1\) with the possibility that \(\alpha=o(1)\). For sufficiently large \(k\), there exists a \((\alpha k,\alpha k^{2}+1)\)-comprehensive tournament of order \(c\alpha(k-\omega)^{2}2^{\alpha k}\) for all \(c>2\alpha\ln 2+1\) and \(\omega=o(k)\)._
Proof.: Let \(\alpha<1\) with the possibility that \(\alpha=o(1)\). As in Lemma 3.2 we suppose \(T\) is a random tournament, let \(|U|=\alpha k\) be fixed but arbitrary, let \(X_{U,\mathbf{a}}\) be the random variable which counts the number of vertices \(z\in V\setminus U\) such that \(F(U,z,T)=\mathbf{a}\) and \(\mu:=\mathbb{E}(X_{U,\mathbf{a}})=(n-\alpha k)2^{-\alpha k}\).
Applying Chernoff's inequality with \(\varepsilon=1-\frac{\alpha k^{2}}{\mu}\) as in Lemma 3.2, we see that
\[\mathbb{P}(X_{U,\mathbf{a}}<\alpha k^{2}+1)<exp(\frac{-1}{2}(\mu-2\alpha k^{2}+ \frac{(\alpha k^{2})^{2}}{\mu}))\]
whenever \(\frac{\alpha k^{2}}{\mu}\leq 1\). Let \(n=c\alpha(k-\omega)^{2}2^{\alpha k}\) for \(c>2\alpha\ln 2+1\), then \(\mu=(c\alpha(k-\omega)^{2}2^{\alpha k}-\alpha k)2^{-\alpha k}=c\alpha(k-\omega)^ {2}-o(1)\). Hence,
\[\mathbb{P}(X_{U,\mathbf{a}}<\alpha k^{2}+1)<exp(\frac{-1}{2}(c\alpha(k-\omega) ^{2}-2\alpha k^{2}+\frac{(\alpha k^{2})^{2}}{\alpha k^{2}}))=exp((\frac{1-c}{2} -o(1))\alpha k^{2}).\]
Applying the union bound,
\[\mathbb{P}(\exists U,\mathbf{a},X_{U,\mathbf{a}}<\alpha k^{2}+1)<\binom{n}{ \alpha k}2^{\alpha k}\mathbb{P}(X_{U,\mathbf{a}}<\alpha k^{2})<\frac{n^{ \alpha k}}{\alpha k!}2^{\alpha k}exp((\frac{1-c}{2}-o(1))\alpha k^{2})\]
\[<(ck^{2}2^{\alpha k})^{\alpha k}exp((\frac{1-c}{2}-o(1))\alpha k^{2})=(ck^{2}) ^{\alpha k}exp((\frac{1-c}{2}-o(1))\alpha k^{2}+\ln 2\alpha^{2}k^{2})\]
\[=(ck^{2})^{\alpha k}exp((\frac{1-c}{2}+\ln 2\alpha-o(1))\alpha k^{2})\]
by our choice of \(c\), \(\frac{1-c}{2}+\ln 2\alpha<0\), hence, \(\mathbb{P}(\exists U,\mathbf{a},X_{U,\mathbf{a}}<\alpha k^{2}+1)\to 0\) as \(k\to\infty\).
See Table 1 for the resulting bounds from Theorem 4.1 when \(d\) is bounded by several natural functions that are \(o(\Delta)\). In the table, suppose \(0<\alpha<1\) and \(k\) is constant.
## 5. Future Work
We improved bounds for the oriented chromatic number in terms of \(\chi_{2}\), \(\Delta\), and \(d\). In every case, we colour vertices inductively on a degeneracy ordering, where the existence of a colour for the next vertex is guaranteed by a special property of a large graph which acts as a universal target. Hence, our arguments hinge on establishing the existence of such graphs (either \((k,t,N)\)-full or \((k,t)\)-comprehensive) of a given order, which we prove using a direct application of the probabilistic method. Some natural questions immediately follow from this.
Perhaps most obvious is, how close are the upper bounds of this paper from being tight? That is, prove or disprove the existence of graphs \(G\) with oriented chromatic number near our bounds. At time of writing little is known about this in the literature, with the only result of this kind being from [7] which shows that there are graphs of maximum degree \(\Delta\) and \(\chi_{o}\approx 2^{\frac{\Delta}{2}}\). As a result it seems that progress on lower bounds of this kind would require new and creative ideas that might provide new insights into the oriented chromatic number.
Alternately, one might ask if there are \((k,t,N)\)-full or \((k,t)\)-comprehensive graphs whose order is lower than we have shown? Again, little is known in this regard but there seems to be the potential for great improvements. Arguments that employ a direct use of probabilistic method seem to have hit their natural ceiling, while more involved probabilistic tools such as the local lemma are challenging to apply as variables are seldom independent. Thus, it seems that different approaches are warranted. Even if this is not feasible for large \(k\) and \(t\) work such as [5, 6, 18] demonstrate that there is interest in improved bounds for \(\chi_{o}\) when \(\Delta\), \(d\), or \(\chi_{2}\) are small, meaning that there is significant room for computational, algebraic, and classical arguments to build on this research.
Finally, as demonstrated by the proof that every planar graph has \(\chi_{o}\leq 80\) in [14] or that \(\chi_{o}\leq 2^{O(g^{\frac{1}{2}+\varepsilon})}\) in [1] where \(g\) is the genus of the graph, many of the best upper bounds for \(\chi_{o}\) come from relating \(\chi_{o}\) to other
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(d\leq\) & \(\chi_{o}\leq\) & \(\chi_{o}=O(-)\) \\ \hline \(\alpha\Delta\) & \(\alpha(2\alpha\ln 2+1+\varepsilon)\Delta^{2}2^{\alpha\Delta}\) & \(O(\Delta^{2}2^{\alpha\Delta})\) \\ \hline \(\Delta^{\alpha}\) & \((1+o(1))\Delta^{1+\alpha}2^{\Delta^{\alpha}}\) & \(O(\Delta^{1+\alpha}2^{\Delta^{\alpha}})\) \\ \hline \(\log_{2}\Delta\) & \((1+o(1))\Delta^{2}\log_{2}\Delta\) & \(O(\Delta^{2}\log\Delta)\) \\ \hline \(k\) & \((k2^{k}+o(1))\Delta\) & \(O(\Delta)\) \\ \hline \end{tabular}
\end{table}
Table 1. Bounds from Theorem 4.1 where \(d<<\Delta\).
parameters. Thus, Theorem 2.1 should be of particular interest to future research as it presents another tool for reducing the study of \(\chi_{o}\) to analysing better behaved graph colouring parameters. Given the relatively small literature regarding 2-dipath colouring and the large improvement between Theorem 2.1 and prior results, this is a particularly promising area for study.
## Acknowledgements
We would like to thank Dr. Peter Bradshaw (University of Illinois Urbana-Champaign) for providing feedback during the process of drafting this paper. We would also like to acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through the Canadian Graduate Scholarship - Master program and through the NSERC Discovery Grant R611368.
|
2305.18104 | Tamed loops: A way to obtain finite loop results without UV divergences | For loops with UV divergences, finite physical results obtained via $\infty -
\infty$ mean the physical transition amplitudes of loops are not well-defined.
In this paper, a presumption that the physical contributions of loops from UV
regions are insignificant is proposed, and a method of UV-free scheme described
by an equation is introduced to derive loop results without UV divergences.
This scheme provides a new perspective to an open question of the hierarchy
problem of Higgs mass, i.e., an alternative interpretation without fine-tuning
within the standard model. | Lian-Bao Jia | 2023-05-29T14:17:49Z | http://arxiv.org/abs/2305.18104v3 | # Tamed loops: A way to obtain finite loop results without UV divergences
###### Abstract
For loops with UV divergences, finite physical results obtained via \(\infty-\infty\) mean the physical transition amplitudes of loops are not well-defined. In this paper, a presumption that the physical contributions of loops from UV regions are insignificant is proposed, and a new method of UV-free scheme described by an equation is introduced to derive finite loop results without UV divergences. This scheme gives a solution to the hierarchy problem of Higgs mass without fine-tuning.
## I Introduction
In quantum field theory, Feynman diagrams are used to describe perturbative contributions to the transition amplitudes of particle interactions, including tree and loop diagrams. For a loop diagram, the four-momentum of particles in the loop is not uniquely determined by the conservation of energy and momentum, and there is a free momentum \(k^{\mu}\) in the loop. All possibilities contribute equally, and the evaluation is often ultraviolet (UV) divergent when we directly integrate over all possible \(k^{\mu}\) that could travel around the loop. Hence, infinities from loop integrals at large energy and momentum regions (\(k^{\mu}\to\infty\)) indicate that constructions of loop contributions are not well-defined.
The actual physics is obscured by infinities. How can one make sense of infinities and obtain physical quantities when evaluating loop integrals? The first step of the standard paradigm is to express the divergences mathematically through regularization, followed by canceling the divergences by renormalization with counterterms introduced. In Pauli-Villars regularization [1], massive fictitious particles are involved to cancel out divergences at large momenta. A popular method is dimensional regularization [2], in which a fictitious fractional number of spacetime dimensions is introduced into the integral (see e.g. Refs. [3; 4; 5; 6; 7; 8] for more methods). In the scheme of regularization followed by renormalization, the actual physics is extracted from infinities via \(\infty-\infty=\) finite physical results with divergences mathematically expressed. With this method, for example, the electron anomalous magnetic moment predicted by the standard model (SM) [9; 10; 11; 12] agrees with the value measured by experiments [13; 14; 15] at an accuracy of \(10^{-12}\).
There are generally two types of UV divergences, i.e., logarithmic divergence and power-law divergence. Despite the comparative success of the regularization and renormalization procedure, the feeling remains that there ought to be a more economical way to acquire loop contributions. If we believe physical contributions from loops are finite, then an open question is how to find an appropriate way to directly obtain physically finite results without UV divergences. This is the concern of this paper. A new method is explored here to obtain finite loop contributions without UV divergences, and applications of the new method in specific processes are discussed.
## II New method for loops
As described in the Introduction, the UV divergences of loop integrals indicate that the transition amplitudes directly obtained by loop integration are not well-defined in these cases. For this issue, a presumption on loops is proposed, i.e., the physical contributions of loops are finite, with contributions from UV regions being insignificant. Hence, we assume that the physical transition amplitude \(\mathcal{T}_{\text{P}}\) with propagators can be described by an equation of
\[\mathcal{T}_{\text{P}}\!=\!\left[\int\!d\xi_{1}\cdots d\xi_{i}\frac{\partial \mathcal{T}_{\text{F}}(\xi_{1},\cdots,\xi_{i})}{\partial\xi_{1}\cdots\partial \xi_{i}}\right]_{\{\xi_{1},\cdots,\xi_{i}\}\to 0}+C\,, \tag{1}\]
where a Feynman-like amplitude \(\mathcal{T}_{\text{F}}(\xi_{1},\cdots,\xi_{i})\) is introduced, which is written with the Feynman rules just with parameters \(\xi_{1},\cdots,\xi_{i}\) added to the denominators of propagators. \(C\) is a boundary constant related to the transition process. If Eq. (1) is applied to tree-level and loop-level processes without UV divergences, \(C=0\) is adopted. For loop processes with UV divergences, \(C\) can be set by renormalization conditions, symmetries and naturalness. For the integral over \(\xi\), here we introduce a definition of the primary antiderivative \([\int\!d\xi_{1}\cdots d\xi_{i}\frac{\partial\mathcal{T}_{\text{F}}(\xi_{1},\cdots,\xi_{i})}{\partial\xi_{1}\cdots\partial\xi_{i}}]\) with the constant term being absorbed into \(C\) (for example, for the integral \(\int\!xdx=\frac{x^{2}}{2}+C\), the primary antiderivative is \([\int\!xdx]=\frac{x^{2}}{2}\)). After integration, \(\mathcal{T}_{\text{P}}\) will be obtained in the limit of the parameters \(\xi_{1}\to 0\), \(\cdots\), \(\xi_{i}\to 0\). The number of \(\xi\) parameters introduced is the minimum needed for the loop integral to become UV-convergent. For a loop with UV divergences, one parameter \(\xi\) is introduced for a logarithmic divergence, and two \(\xi\) parameters are introduced for a quadratic divergence (at most three \(\xi\) parameters are needed for a loop to become convergent). For multi-loops, a set of \(\xi\) parameters is introduced for each loop. The new method above is the UV-free scheme.
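A purely illustrative, one-dimensional analogue of Eq. (1) (not one of the physical processes considered below) can be checked symbolically: the toy integral \(\int_{0}^{\infty}dk\,k/(k^{2}+m^{2})\) is logarithmically divergent, yet the prescription returns its finite, \(m\)-dependent part, with the divergent constant replaced by \(C\); the hard-cutoff result is included only for comparison.

```python
import sympy as sp

k, xi, m, Lam = sp.symbols('k xi m Lambda', positive=True)

# T_F(xi) for the toy integral (divergent if evaluated directly):
T_F = sp.Integral(k / (k**2 + m**2 + xi), (k, 0, sp.oo))

# Differentiating with respect to xi makes the k-integral convergent,
dT = sp.integrate(sp.diff(k / (k**2 + m**2 + xi), xi), (k, 0, sp.oo))
# and the primary antiderivative in xi, evaluated at xi -> 0, gives the finite part.
T_P = sp.integrate(dT, xi)
print(T_P.subs(xi, 0))                                  # -log(m**2)/2  (plus C)

# Comparison: a hard cutoff gives log(Lambda**2 + m**2)/2 - log(m**2)/2.
print(sp.integrate(k / (k**2 + m**2), (k, 0, Lam)))
```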
## III Applications
Here, the new method is applied to specific processes as examples (see the Appendix for additional examples), and a solution to the hierarchy problem of the Higgs mass is described in the UV-free scheme.
### Some examples
#### iii.1.1 The \(\phi^{4}\) theory
Let's first apply this new method to the \(\phi^{4}\) theory. The Lagrangian of \(\phi^{4}\) theory is
\[{\cal L}=\frac{1}{2}(\partial_{\mu}\phi)^{2}-\frac{1}{2}m^{2}\phi^{2}-\frac{ \lambda}{4!}\phi^{4}\,. \tag{2}\]
The one-loop diagrams of two-particle scatterings in s, t and u channels are shown in Fig. 1, and the scattering amplitude has logarithmic UV divergences when evaluating loop integrals. Taking the approach described in Eq. (1), the Feynman-like scattering amplitude \({\cal T}_{\rm F}(\xi)\) in s channel can be written as
\[{\cal T}_{\rm F}(\xi)\!=\!\frac{(-i\lambda)^{2}}{2}\!\!\int\!\!\frac{d^{4}k}{( 2\pi)^{4}}\frac{i}{k^{2}\!-\!m^{2}\!+\!\xi}\frac{i}{(k+q)^{2}\!-\!m^{2}}\,, \tag{3}\]
where \(q\) is the momentum transfer in the scattering process, with \(q^{2}\) being equal to the Mandelstam \(s\). The physical scattering amplitude \({\cal T}_{\rm P}(s)\) in this channel is
\[{\cal T}_{\rm P}(s) = \Bigl{[}\int\!d\xi\frac{\partial{\cal T}_{\rm F}(\xi)}{\partial \xi}\Bigr{]}_{\xi\to 0}\!+C_{1}\] \[= \Bigl{[}\frac{-\lambda^{2}}{2}\!\!\int\!d\xi\!\!\int\!\frac{d^{4 }k}{(2\pi)^{4}}\frac{-i}{(k^{2}\!-\!m^{2}\!+\!\xi)^{2}}\frac{i}{(k\!+\!q)^{2} \!-\!m^{2}}\Bigr{]}_{\xi\to 0}\] \[+C_{1}\,,\]
and it is UV-convergent when evaluating the integral over the loop momentum \(k\). After integration, one has
\[{\cal T}_{\rm P}(s)\!=\frac{-i\lambda^{2}}{32\pi^{2}}\!\!\!\int_{0}^{1}\!\!dx \log[m^{2}-x(1-x)s]+\!C_{1}\,. \tag{5}\]
Considering the renormalization conditions, the amplitudes are taken to be zero at \(s=4m^{2}\), \(t=u=0\). Thus, the constant \(C_{1}\) here is
\[C_{1}=\frac{i\lambda^{2}}{32\pi^{2}}\!\!\!\int_{0}^{1}\!\!dx\log[m^{2}-4m^{2} x(1-x)]\,. \tag{6}\]
For t and u channels, similar results can be obtained for \({\cal T}_{\rm P}(t)\) and \({\cal T}_{\rm P}(u)\), with \(s\) in Eq. (5) replaced by \(t\) and \(u\) respectively. The total one-loop physical amplitude \({\cal T}_{\rm P}\) is
\[{\cal T}_{\rm P} = {\cal T}_{\rm P}(s)+{\cal T}_{\rm P}(t)+{\cal T}_{\rm P}(u)\] \[= \frac{-i\lambda^{2}}{32\pi^{2}}\!\!\int_{0}^{1}\!\!dx\Bigl{[}\log\frac{m^{2}-x(1-x)s}{m^{2}-4m^{2}x(1-x)}\] \[+\log\frac{m^{2}-x(1-x)t}{m^{2}}+\log\frac{m^{2}-x(1-x)u}{m^{2}}\Bigr{]}\,.\]
We can see that the same finite result is obtained with the new method as with the procedure of dimensional regularization and renormalization, and there is no troublesome UV divergence in the calculation. From another point of view, this gives an explanation of why the universal constant parts (\(\gamma_{E}\), \(\log(4\pi)\)) should be subtracted along with the infinity in \(\overline{\rm MS}\).
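As a small numerical companion to the total one-loop amplitude above, the following Python/SciPy sketch evaluates the Feynman-parameter integral for one arbitrary choice of kinematics (the values of \(m\), \(\lambda\), \(s\) and \(t\) are ours and are chosen below threshold so that all logarithms stay real); the point is only that the renormalized amplitude is finite and requires no regulator.

```python
# Numerical evaluation of the renormalized one-loop phi^4 amplitude (s, t and u channels).
# Illustrative values only: m = 1, a sample lambda, and kinematics below threshold
# (s < 4 m^2) so that every log argument stays positive.
import numpy as np
from scipy.integrate import quad

m, lam = 1.0, 0.5
s, t = 2.0, -1.0
u = 4*m**2 - s - t            # Mandelstam constraint s + t + u = 4 m^2

def integrand(x):
    # the s-channel subtraction term m^2 - 4 m^2 x(1-x) = m^2 (1-2x)^2 produces an
    # integrable logarithmic singularity at x = 1/2, handled via the `points` hint
    log_s = np.log(m**2 - x*(1 - x)*s) - np.log(m**2*(1 - 2*x)**2)
    log_t = np.log((m**2 - x*(1 - x)*t)/m**2)
    log_u = np.log((m**2 - x*(1 - x)*u)/m**2)
    return log_s + log_t + log_u

val, err = quad(integrand, 0.0, 1.0, points=[0.5])
T_P = -1j*lam**2/(32*np.pi**2)*val
print(T_P, err)               # a finite number; no cutoff appears anywhere
```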
#### iii.1.2 The axial anomaly
The axial vector current \(j^{\mu 5}\) is not conserved for massless fermions, with
\[\partial_{\mu}j^{\mu 5}=-\frac{e^{2}}{16\pi^{2}}\varepsilon^{\alpha\beta\mu\nu}F_{\alpha\beta}F_{\mu\nu}\,. \tag{8}\]
This equation is the Adler-Bell-Jackiw anomaly [16; 17; 18]. In addition, the axial anomaly can be checked by the transition of axial vector current \(\to\) two photons being nonzero. The one-loop diagrams contributing to the two-photon matrix element of the divergence of axial vector current are shown in Fig. 2. The physical transition amplitude \({\cal T}_{\rm P}^{\mu\nu\lambda}\) to the divergence of the axial current can be written as
\[iq_{\mu}{\cal T}_{\rm P}^{\mu\nu\lambda} = iq_{\mu}\Bigl{(}\Bigl{[}\int\!d\xi_{1}\frac{\partial{\cal T}_{\rm F}^{\mu\nu\lambda}(\xi_{1})}{\partial\xi_{1}}\Bigr{]}_{\xi_{1}\to 0}+C_{1}^{\mu\nu\lambda}\] \[+[\nu\leftrightarrow\lambda,p_{1}\leftrightarrow p_{2}]\Bigr{)}\] \[= iq_{\mu}(-ie)^{2}(-i)\Bigl{(}\Bigl{[}\int\!d\xi_{1}\!\int\!\!\frac{d^{4}k}{(2\pi)^{4}}\] \[\times\mbox{tr}\bigl{(}\gamma^{\mu}\gamma^{5}\frac{\not{k}-\not{p}_{2}}{((k-p_{2})^{2}\!+\!\xi_{1})^{2}}\gamma^{\lambda}\frac{\not{k}}{k^{2}}\gamma^{\nu}\frac{\not{k}+\not{p}_{1}}{(k\!+\!p_{1})^{2}}\bigr{)}\Bigr{]}_{\xi_{1}\to 0}\] \[+C_{1}^{\mu\nu\lambda}+[\nu\leftrightarrow\lambda,p_{1}\leftrightarrow p_{2}]\Bigr{)}\,.\]
Figure 2: The one-loop diagrams contributing to the divergence of axial vector current.
Figure 1: The one-loop diagrams of two-particle scatterings in \(\phi^{4}\) theory.
Taking the trace of \(\gamma-\)matrices and evaluating the integrals, one has
\[iq_{\mu}{\cal T}_{\rm P}^{\mu\nu\lambda} = \frac{(-ie)^{2}}{4\pi^{2}}\biggl{(}\!\int_{0}^{1}\!\!dx_{1}dx_{2}dx_{3}\delta(1\!-\!x_{1}\!-\!x_{2}\!-\!x_{3}) \tag{10}\] \[\times\bigl{[}6(1-\frac{x_{1}+x_{3}}{2})\log\frac{1}{2x_{1}x_{3}p_{1}\cdot p_{2}}\] \[+(x_{1}+x_{3}-2)+C_{1}\bigr{]}\varepsilon^{\alpha\lambda\beta\nu}p_{1\alpha}p_{2\beta}\] \[+[\nu\leftrightarrow\lambda,p_{1}\leftrightarrow p_{2}]\biggr{)}\] \[= \frac{(-ie)^{2}}{4\pi^{2}}\!\int_{0}^{1}\!\!dx_{1}dx_{2}dx_{3}\delta(1\!-\!x_{1}\!-\!x_{2}\!-\!x_{3})\] \[\times\bigl{[}6(1-\frac{x_{1}+x_{3}}{2})\log\frac{1}{2x_{1}x_{3}p_{1}\cdot p_{2}}\] \[+(x_{1}+x_{3}-2)+C_{1}\bigr{]}2\varepsilon^{\alpha\lambda\beta\nu}p_{1\alpha}p_{2\beta}\,.\]
Note that the term \((x_{1}+x_{3}-2)\) is finite from the outset. Supposing that the axial anomaly is independent of the energy scale, the term \(C_{1}\) can be written as
\[C_{1} = 6(1\!-\!\frac{x_{1}+x_{3}}{2})\log(2x_{1}x_{3}p_{1}\!\cdot\!p_{2} )-C_{0}\,, \tag{11}\]
with \(C_{0}\) being a constant. In this case, we have
\[iq_{\mu}{\cal T}_{\rm P}^{\mu\nu\lambda} = \!\!\!\frac{(-ie)^{2}}{4\pi^{2}}\!\int_{0}^{1}\!\!dx_{1}dx_{2}dx_{ 3}\delta(1\!-\!x_{1}\!-\!x_{2}\!-\!x_{3})\] \[\times\bigl{[}x_{1}+x_{3}-2-C_{0}\bigr{]}2\varepsilon^{\alpha \lambda\beta\nu}p_{1\alpha}p_{2\beta}\] \[= \!\!\!-\frac{(-ie)^{2}}{2\pi^{2}}\bigl{(}\frac{2}{3}+\frac{C_{0}} {2}\bigr{)}\varepsilon^{\alpha\lambda\beta\nu}p_{1\alpha}p_{2\beta}\,.\]
Now, the result is
\[\partial_{\mu}j^{\mu 5} = \!\!\!iq_{\mu}{\cal T}_{\rm P}^{\mu\nu\lambda}\epsilon_{\nu}^{*}( p_{1})\epsilon_{\lambda}^{*}(p_{2})\] \[= \!\!\!-\frac{e^{2}}{16\pi^{2}}(\frac{2}{3}+\frac{C_{0}}{2}) \varepsilon^{\alpha\nu\beta\lambda}F_{\alpha\nu}F_{\beta\lambda}\,.\]
The value of \(C_{0}\) is estimated by naturalness to be of order one. If Eq. (8) is taken as a relation that the axial vector current should obey, the value \(C_{0}=\frac{2}{3}\) is obtained, with the SM being a self-consistent theory. Moreover, the values \(\frac{2}{3}\) and \(\frac{1}{3}\) are equal to the charge values of quarks; it is not known whether this is a coincidence or whether there is some correlation between them.
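The factor \(\frac{2}{3}\) above arises from an elementary integral over the Feynman-parameter simplex; the following short Python/SymPy check (ours, not part of the original derivation) verifies it, together with the \(C_{0}/2\) contribution of the constant term.

```python
# Check of the simplex integral behind the factor 2/3: integrating (x1 + x3 - 2) over
# the Feynman-parameter simplex x1 + x2 + x3 = 1 (x2 eliminated by the delta function)
# gives -2/3, while a constant C0 integrates to C0/2, hence the combination 2/3 + C0/2.
import sympy as sp

x1, x3 = sp.symbols('x1 x3', nonnegative=True)
C0 = sp.Symbol('C0')

print(sp.integrate(x1 + x3 - 2, (x1, 0, 1 - x3), (x3, 0, 1)))   # -2/3
print(sp.integrate(C0, (x1, 0, 1 - x3), (x3, 0, 1)))            # C0/2
```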
### The hierarchy problem
With the discovery of the Higgs boson with a mass of 125 GeV at the LHC [19; 20], a Higgs mass that is not too heavy accentuates the hierarchy problem, i.e. the naturalness of the fine-tuning originating from the radiative corrections to the Higgs mass. The one-loop radiative corrections to the Higgs mass have power-law divergences, as depicted in Fig. 3. What prevents the Higgs mass from receiving quantum corrections from very high energy scales (the Grand Unification or the Planck scale)? Here we try to give an answer in the UV-free scheme.
The radiative correction from the Higgs boson in the first diagram of Fig. 3 is
\[{\cal T}_{\rm P}^{H1} = \!\!\!\Bigl{[}\int\!d\xi_{1}d\xi_{2}\frac{\partial{\cal T}_{\rm F }^{H1}(\xi_{1},\xi_{2})}{\partial\xi_{1}\partial\xi_{2}}\Bigr{]}_{\{\xi_{1}, \xi_{2}\}\to 0}\!+C\] \[= \!\!\!\Bigl{[}(-3i)\frac{m_{H}^{2}}{2v^{2}}\!\!\!\int\!d\xi_{1}d \xi_{2}\!\!\int\!\frac{d^{4}k}{(2\pi)^{4}}\] \[\times\frac{2i}{(k^{2}-m_{H}^{2}+\xi_{1}+\xi_{2})^{3}}\Bigr{]}_{\{ \xi_{1},\xi_{2}\}\to 0}\!+C\,.\]
After integration, one has
\[{\cal T}_{\rm P}^{H1} = \!\!\!i\frac{3m_{H}^{4}}{32\pi^{2}v^{2}}(\log\frac{1}{m_{H}^{2}}+ 1)+C\] \[= \!\!\!i\frac{3m_{H}^{4}}{32\pi^{2}v^{2}}(\log\frac{\mu^{2}}{m_{H}^ {2}}+1)\,.\]
Now we turn to the loop of vector boson V (V=W,Z) shown in the first diagram of Fig. 3. In unitary gauge, the corresponding superficial degree of divergence is increased to 4. The radiative corrections with these quartic divergences can be calculated in UV-free scheme, with
\[{\cal T}_{\rm P}^{V1} = \Bigl{[}\int\!d\xi_{1}d\xi_{2}d\xi_{3}\frac{\partial{\cal T}_{\rm F}^{V1}(\xi_{1},\xi_{2},\xi_{3})}{\partial\xi_{1}\partial\xi_{2}\partial\xi_{3}}\Bigr{]}_{\{\xi_{1},\xi_{2},\xi_{3}\}\to 0}\!+C\] \[= \Bigl{[}i\frac{2m_{V}^{2}}{v^{2}s_{V}}\!\!\int\!d\xi_{1}d\xi_{2}d\xi_{3}\!\!\int\!\frac{d^{4}k}{(2\pi)^{4}}g_{\mu\nu}\] \[\times\frac{6i(g^{\mu\nu}-k^{\mu}k^{\nu}/m_{V}^{2})}{(k^{2}-m_{V}^{2}+\xi_{1}+\xi_{2}+\xi_{3})^{4}}\Bigr{]}_{\{\xi_{1},\xi_{2},\xi_{3}\}\to 0}\!+C\,,\]
where the symmetry factor is \(s_{V}=1,2\) for W, Z, respectively. After integration, one has
\[{\cal T}_{\rm P}^{V1} = \!\!\!i\frac{2m_{V}^{2}}{v^{2}s_{V}}\frac{m_{V}^{2}}{16\pi^{2}}(3 \log\frac{1}{m_{V}^{2}}+\frac{5}{2})\!+C\] \[= \!\!\!i\frac{2m_{V}^{2}}{v^{2}s_{V}}\frac{3m_{V}^{2}}{16\pi^{2}}( \log\frac{\mu^{2}}{m_{V}^{2}}+\frac{5}{6})\.\]
The top quark loop is shown in the second diagram of Fig. 3, and the corresponding radiative correction is
\[{\cal T}_{\rm P}^{t} = \!\!\!\!\Bigl{[}\int\!d\xi_{1}d\xi_{2}\frac{\partial{\cal T}_{\rm F }^{t}(\xi_{1},\xi_{2})}{\partial\xi_{1}\partial\xi_{2}}\Bigr{]}_{\{\xi_{1},\xi_{2 }\}\to 0}\!+C\] \[= \!\!\!\Bigl{[}\frac{3m_{t}^{2}}{v^{2}}\!\!\int\!d\xi_{1}d\xi_{2} \!\!\int\!\frac{d^{4}k}{(2\pi)^{4}}\] \[\times{\rm tr}\bigl{(}\frac{2i(\!\not\!k+m_{t})}{(k^{2}\!-\!m_{t}^ {2}+\xi_{1}\!+\!\xi_{2})^{3}}\frac{i(\!\not\!p+\!k+m_{t})}{(p+k)^{2}\!-\!m_{t}^ {2}}\bigr{)}\Bigr{]}_{\{\xi_{1},\xi_{2}\}\to 0}\!+C\,,\]
with \(p\) being the external momentum. After integration, one has
\[{\cal T}_{\rm P}^{t} = -\frac{3m_{t}^{2}}{v^{2}}\frac{i}{4\pi^{2}}\!\!\int_{0}^{1}\!dx[m_{t }^{2}-p^{2}x(1-x)]\] \[\times(3\log\frac{1}{m_{t}^{2}-p^{2}x(1-x)}+2)+C\] \[= -\frac{3m_{t}^{4}}{v^{2}}\frac{3i}{4\pi^{2}}\!\int_{0}^{1}\!dx[1- \frac{p^{2}}{m_{t}^{2}}x(1-x)]\] \[\times(\log\frac{\mu^{2}}{m_{t}^{2}-p^{2}x(1-x)}+\frac{2}{3})\,.\]
The radiative correction of Higgs loop shown in the third diagram of Fig. 3 is
\[{\cal T}_{\rm P}^{H3} = \Bigl{[}\int\!d\xi_{1}\frac{\partial{\cal T}_{\rm F}^{H3}(\xi_{1})}{\partial\xi_{1}}\Bigr{]}_{\xi_{1}\to 0}+C\] \[= \Bigl{[}(-3i)^{2}\frac{m_{H}^{4}}{2v^{2}}\!\!\!\int\!d\xi_{1}\!\!\int\!\frac{d^{4}k}{(2\pi)^{4}}\] \[\times\frac{-i}{(k^{2}-m_{H}^{2}+\xi_{1})^{2}}\frac{i}{(k+p)^{2}-m_{H}^{2}}\Bigr{]}_{\xi_{1}\to 0}+C\,.\]
After integration, one has
\[{\cal T}_{\rm P}^{H3} = \frac{9m_{H}^{4}}{2v^{2}}\frac{i}{16\pi^{2}}\!\int_{0}^{1}\!dx\, \log\frac{1}{m_{H}^{2}-x(1-x)p^{2}}+C \tag{21}\] \[= i\frac{9m_{H}^{4}}{32\pi^{2}v^{2}}\!\int_{0}^{1}\!dx\,\log\frac{ \mu^{2}}{m_{H}^{2}-x(1-x)p^{2}}\,.\]
The radiative correction of vector boson V loop shown in the third diagram of Fig. 3 is
\[{\cal T}_{\rm P}^{V3} = \Bigl{[}\int\!d\xi_{1}d\xi_{2}d\xi_{3}\frac{\partial{\cal T}_{ \rm F}^{V3}(\xi_{1},\xi_{2},\xi_{3})}{\partial\xi_{1}\partial\xi_{2}\partial \xi_{3}}\Bigr{]}_{\{\xi_{1},\xi_{2},\xi_{3}\}\to 0}+C\] \[= \Bigl{[}-\frac{4m_{V}^{4}}{v^{2}s_{V}}\!\!\int\!\!d\xi_{1}d\xi_{2 }d\xi_{3}\!\!\int\!\!\frac{d^{4}k}{(2\pi)^{4}}\frac{6i(g^{\mu\nu}-k^{\mu}k^{ \nu}/m_{V}^{2})}{(k^{2}-m_{V}^{2}+\xi_{1}+\xi_{2}+\xi_{3})^{4}}\] \[\times\frac{-i(g_{\mu\nu}\!-\!(k\!+\!p)_{\mu}(k\!+\!p)_{\nu}/m_{V} ^{2})}{(k+p)^{2}-m_{V}^{2}}\Bigr{]}_{\{\xi_{1},\xi_{2},\xi_{3}\}\to 0}+C\,.\]
After integration, one has
\[{\cal T}_{\rm P}^{V3} = \frac{4m_{V}^{4}}{v^{2}s_{V}}\frac{6i}{16\pi^{2}}\!\int_{0}^{1}\!dx\Bigl{(}\Bigl{[}\frac{1}{2}-\frac{p^{2}}{m_{V}^{2}}(x-x^{2}+\frac{1}{12})+\frac{p^{4}}{m_{V}^{4}}\frac{x(1-x)(20x-20x^{2}-1)}{12}\Bigr{]}\log\frac{1}{m_{V}^{2}-x(1-x)p^{2}} \tag{23}\] \[+\frac{1}{12}-\frac{p^{2}}{12m_{V}^{2}}(22x(1-x)-1)-\frac{p^{4}x(1-x)}{12m_{V}^{4}}(-21x(1-x)+1)\Bigr{)}+C\] \[= \frac{m_{V}^{4}}{v^{2}s_{V}}\frac{3i}{2\pi^{2}}\!\int_{0}^{1}\!dx\Bigl{(}\Bigl{[}\frac{1}{2}\!-\!\frac{p^{2}}{m_{V}^{2}}(x-x^{2}+\frac{1}{12})+\frac{p^{4}}{m_{V}^{4}}\frac{x(1\!-\!x)(20x\!-\!20x^{2}\!-\!1)}{12}\Bigr{]}\log\frac{\mu^{2}}{m_{V}^{2}\!-\!x(1\!-\!x)p^{2}}\] \[+\frac{1}{12}-\frac{p^{2}(22x(1\!-\!x)\!-\!1)}{12m_{V}^{2}}-\frac{p^{4}x(1\!-\!x)(-\!21x(1\!-\!x)\!+\!1)}{12m_{V}^{4}}\Bigr{)}.\]
Considering a typical energy scale \(\mu\) of the order of the electroweak scale, the above corrections (multiplied by \(i\)) to the Higgs mass are not very large, and no fine-tuning is required. Moreover, if the on-shell renormalization conditions are adopted, the results can be written as
\[{\cal T}_{\rm P}^{H1}={\cal T}_{\rm P}^{V1}=0\,, \tag{24}\]
\[{\cal T}_{\rm P}^{t} = -\frac{3m_{t}^{4}}{v^{2}}\frac{3i}{4\pi^{2}}\!\!\int_{0}^{1}\!dx[1 -\frac{p^{2}}{m_{t}^{2}}x(1-x)]\] \[\times\log\frac{m_{t}^{2}-m_{H}^{2}x(1-x)}{m_{t}^{2}-p^{2}x(1-x) }\,,\]
\[{\cal T}_{\rm P}^{H3}=i\frac{9m_{H}^{4}}{32\pi^{2}v^{2}}\!\int_{0}^{1}\!dx\, \log\frac{m_{H}^{2}-m_{H}^{2}x(1-x)}{m_{H}^{2}-x(1-x)p^{2}}\,, \tag{26}\]
\[{\cal T}_{\rm P}^{V3} = \frac{m_{V}^{4}}{v^{2}s_{V}}\frac{3i}{2\pi^{2}}\!\int_{0}^{1}\!dx \bigl{(}\bigl{[}\frac{1}{2}-\frac{p^{2}}{m_{V}^{2}}(x-x^{2}+\frac{1}{12})\] \[+\frac{p^{4}}{m_{V}^{4}}\frac{x(1\!-\!x)(20x\!-\!20x^{2}\!-\!1)}{1 2}\bigr{]}\log\!\frac{m_{V}^{2}\!-\!x(1\!-\!x)m_{H}^{2}}{m_{V}^{2}\!-\!x(1\!-\!x)p ^{2}}\] \[-\frac{(p^{2}-m_{H}^{2})(22x(1-x)-1)}{12m_{V}^{2}}\] \[-\frac{(p^{4}-m_{H}^{4})x(1-x)(-21x(1-x)+1)}{12m_{V}^{4}}\bigr{)}.\]
## IV Conclusion and discussion
The UV divergences of loops, with finite physical results obtained via \(\infty-\infty\), indicate that transition amplitudes directly obtained are not always well-defined, as pointed out by Dirac [21] and Feynman [22]. Going one step further, the transition amplitude directly obtained from the Feynman rules is taken as the physical input, and the physical result is taken as the physical output. Thus, the physical output depends on the physical input, but does not directly equal the physical input. In this paper, a presumption that the physical contributions of loops from UV regions are insignificant is proposed. With this presumption, we find that the finite physical output can be described by Eq. (1) with a new method, the UV-free scheme, i.e. there is a series of integral forms with the same physical output. Regarding gauge invariance when a change is performed on a gauge field propagator, gauge invariance can be considered either as a required physical input or as being formally restored after taking the \(\xi\) integrals. In the UV-free scheme, finite loop results can be obtained without UV divergences, the \(\gamma^{5}\) matrix retains its original form, and the unitary gauge can be adopted for massive gauge bosons. In addition, the hierarchy problem of the Higgs mass has a solution without fine-tuning. Moreover, if the SM is considered as an effective field theory at low energy scales, loop corrections from possible new physics at very high energy scales (e.g. the Planck scale) are insignificant.
Here we give a brief discussion of loops in different schemes. The usual procedure for UV divergences of loops is regularization (e.g. cutoff regularization, Pauli-Villars regularization and dimensional regularization) followed by renormalization, and this paradigm is based on the Bogoliubov-Parasiuk-Hepp-Zimmermann (BPHZ) renormalization scheme [23], i.e. all UV divergences can be removed by the corresponding counterterms for a renormalizable quantum field theory. In this paper, a new framework, the UV-free scheme described by Eq. (1), is introduced to obtain loop results. Since it is not yet possible to calculate loops to all orders in order to compare different schemes, let us look at the question from another perspective, namely the divergences. For logarithmic divergences, both a suitable regulator with the BPHZ scheme and Eq. (1) can cure the UV divergences and yield finite loop results. For power-law divergences (e.g. loop corrections to the Higgs mass), the results are fine-tuned for regulators with the BPHZ scheme [24], while finite loop results can be obtained in the UV-free scheme without fine-tuning. The UV-free scheme thus seems to be an alternative way to describe loop transitions, especially in the case of power-law divergences.
###### Acknowledgements.
This work was partly supported by the open project of the theoretical physics academic exchange platform of Chongqing University.
## Appendix A Additional Examples
#### a.1 The gauge field propagator
If the new method is applied to a gauge field propagator without free loop momentum, e.g., the photon propagator \(\frac{-ig_{\mu\nu}}{p^{2}+i\epsilon}\), the result can be written as \(\mathcal{T}_{\rm F}(\xi)\!=\!\frac{-ig_{\mu\nu}}{p^{2}+\xi+i\epsilon}\), \(\frac{\partial\mathcal{T}_{\rm F}(\xi)}{\partial\xi}=\frac{-ig_{\mu\nu}(-1)}{(p^{2}+\xi+i\epsilon)^{2}}\), \([\int\!d\xi\frac{\partial\mathcal{T}_{\rm F}(\xi)}{\partial\xi}]=\frac{-ig_{\mu\nu}}{p^{2}+\xi+i\epsilon}\), with the boundary constant \(C=0\) adopted since there is no free loop momentum. The final result is \(\left[\int\!d\xi\frac{\partial\mathcal{T}_{\rm F}(\xi)}{\partial\xi}\right]_{\xi\to 0}=\frac{-ig_{\mu\nu}}{p^{2}+i\epsilon}\), with the gauge field propagator restored.
#### a.2 The electron self-energy
Now, we turn to the electron self-energy. The one-loop diagram is shown in Fig. 4, and the transition amplitude has logarithmic UV divergence when evaluating the loop integral. The physical transition amplitude \(\mathcal{T}_{\rm P}\) is
\[\mathcal{T}_{\rm P} = \left[\int\!d\xi\frac{\partial\mathcal{T}_{\rm F}(\xi)}{\partial\xi}\right]_{\xi\to 0}\!\!+C\] \[= \left[\!(-ie)^{2}\!\!\!\int\!\!d\xi\!\!\int\!\!\frac{d^{4}k}{(2\pi)^{4}}\gamma^{\mu}\frac{-i(\not\!k+m)}{(k^{2}\!-\!m^{2}\!+\!\xi\!+\!i\epsilon)^{2}}\gamma_{\mu}\right.\] \[\left.\times\frac{-i}{(p\!-\!k)^{2}\!+\!i\epsilon}\right]_{\xi\to 0}+C\,.\]
After integration, one has
\[\mathcal{T}_{\rm P} = -i\frac{\alpha}{2\pi}\!\!\int_{0}^{1}\!\!\!dx(2m\!-\!x\!\not{p}) \!\log\!\frac{1}{(1\!-\!x)(m^{2}\!-\!xp^{2})}\,. \tag{10}\]
If \(C\) is absorbed into the log term in the form of a typical energy scale (renormalization scale) \(\mu^{2}\) to make the log term dimensionless, the result is
\[\mathcal{T}_{\rm P}\!=\!-i\frac{\alpha}{2\pi}\!\!\int_{0}^{1}\!\!\!dx(2m-x\! \not{p})\!\log\!\frac{\mu^{2}}{(1\!-\!x)(m^{2}\!-\!xp^{2})}\,. \tag{11}\]
If the on-shell renormalization conditions are adopted for this process, the result is
\[\mathcal{T}_{\rm P}\!=\!-i\frac{\alpha}{2\pi}\!\!\int_{0}^{1}\!\!dx(2m-x\! \not{p})\!\log\!\frac{(1-x)m^{2}}{m^{2}-xp^{2}}\,. \tag{12}\]
#### a.3 The vacuum polarization
The one-loop diagram of the vacuum polarization is shown in Fig. 5, and the superficial degree of divergence is 2. The transition amplitude is UV divergent when evaluating the loop integral. The physical transition amplitude \(\mathcal{T}_{\rm P}^{\mu\nu}\) of this process is
\[\mathcal{T}_{\rm P}^{\mu\nu} = \left[\int\!d\xi_{1}d\xi_{2}\frac{\partial\mathcal{T}_{\rm F}^{ \mu\nu}(\xi_{1},\xi_{2})}{\partial\xi_{1}\partial\xi_{2}}\right]_{\{\xi_{1}, \xi_{2}\}\to 0}\!+C^{\mu\nu}\] \[= \left[\!(-ie)^{2}(-1)\!\!\int\!\!d\xi_{1}d\xi_{2}\!\!\int\!\frac{ d^{4}k}{(2\pi)^{4}}\right.\] \[\left.\times\mathrm{tr}\!\left(\gamma^{\mu}\frac{2i(\not\!k+m)}{(k^ {2}\!-\!m^{2}\!+\!\xi_{1}\!+\!\xi_{2})^{3}}\gamma^{\nu}\frac{i(\not\!p\!+\!k \!+\!m)}{(p\!+\!k)^{2}\!-\!m^{2}}\right)\right]_{\{\xi_{1},\xi_{2}\}\to 0}\] \[+C^{\mu\nu}\,.\]
Figure 5: The one-loop diagram of vacuum polarization.
Taking the trace of \(\gamma-\)matrices, one has
\[{\cal T}^{\mu\nu}_{\rm P} = \Big{[}-8e^{2}\!\!\int\!d\xi_{1}d\xi_{2}\!\!\int\!\frac{d^{4}k}{(2 \pi)^{4}}\] \[\times\frac{k^{\mu}(k\!+\!p)^{\nu}\!+\!k^{\nu}(k\!+\!p)^{\mu}\!-\!g ^{\mu\nu}(k\!\cdot\!(k\!+\!p)\!-\!m^{2})}{(k^{2}-m^{2}+\xi_{1}+\xi_{2})^{3}((p +k)^{2}-m^{2})}\Big{]}_{\{\xi_{1},\xi_{2}\}\to 0}\] \[+C^{\mu\nu}\,.\]
After integration, one has
\[{\cal T}^{\mu\nu}_{\rm P} = -\frac{ie^{2}}{4\pi^{2}}\!\!\int_{0}^{1}\!dx\] \[\times\Big{[}2x(1-x)(-g^{\mu\nu}p^{2}+p^{\mu}p^{\nu})\log(m^{2}-p ^{2}x(1-x))\] \[-g^{\mu\nu}(m^{2}-p^{2}x(1-x))\Big{]}+C^{\mu\nu}\,.\]
Considering that the Ward identity is preserved, a physical choice is
\[{\cal T}^{\mu\nu}_{\rm P} = -\frac{ie^{2}}{4\pi^{2}}\!\!\int_{0}^{1}\!dx(-g^{\mu\nu}p^{2}+p^{ \mu}p^{\nu})x(1-x)\] \[\times\Big{[}2\log(m^{2}-p^{2}x(1-x))-1+C\Big{]}\,.\]
The contribution is zero at \(p^{2}=0\), and in this case, the result is
\[i{\cal T}^{\mu\nu}_{\rm P} = -\frac{2\alpha}{\pi}\!\int_{0}^{1}\!dx(-g^{\mu\nu}p^{2}+p^{\mu}p^ {\nu})x(1-x)\] \[\times\log(\frac{m^{2}}{m^{2}-p^{2}x(1-x)})\,.\]
#### a.4 The electron vertex function
The one-loop contribution to the electron vertex function is shown in Fig. 6. The physical transition amplitude \({\cal T}^{\mu}_{\rm P}\) of this process is
\[{\cal T}^{\mu}_{\rm P} = \Big{[}\!\int\!d\xi\frac{\partial{\cal T}^{\mu}_{\rm F}(\xi)}{ \partial\xi}\Big{]}_{\xi\to 0}\!+\!C^{\mu}\] \[= \Big{[}\!(-ie)^{3}\!\!\int\!d\xi\!\int\!\!\frac{d^{4}k}{(2\pi)^{4 }}\frac{-ig_{\nu\rho}(-1)}{((k-p_{1})^{2}+\xi+i\epsilon)^{2}}\bar{u}(p_{2}) \gamma^{\nu}\] \[\times\frac{i(\not{k}+\not{q}+m)}{(k\!+\!q)^{2}\!-\!m^{2}\!+\!i \epsilon}\gamma^{\mu}\frac{i(\not{k}+m)}{\!k^{2}\!-\!m^{2}\!+\!i\epsilon} \gamma^{\rho}u(p_{1})\Big{]}_{\xi\to 0}\!+\!C^{\mu}\,.\]
After a bit of algebra, one has
\[{\cal T}^{\mu}_{\rm P} = \Big{[}\!(-ie)^{3}\!\!\!\int_{0}^{1}\!\!dx_{1}dx_{2}dx_{3}\!\! \int\!d\xi\!\!\int\!\!\frac{d^{4}k}{(2\pi)^{4}}\frac{\delta(1\!-\!x_{1}\!-\!x _{2}\!-\!x_{3})}{(k^{2}-\Delta+i\epsilon)^{4}}x_{3} \tag{11}\] \[\times 12i\bar{u}(p_{2})[\gamma^{\mu}(-\frac{k^{2}}{2}+(1\!-\!x_{1})(1 \!-\!x_{2})q^{2}\!+\!(1\!-\!4x_{3}\!+\!x_{3}^{2})m^{2})\] \[+\frac{i\sigma^{\mu\nu}q_{\nu}}{2m}2m^{2}x_{3}(1\!-\!x_{3})]u(p_{ 1})\Big{]}_{\xi\to 0}+C^{\mu}\,,\]
where the parameter \(\Delta\) is \(\Delta=(1-x_{3})^{2}m^{2}-x_{1}x_{2}q^{2}-x_{3}\xi\).
In this case, the form factor \(F_{1}(q^{2})\) is
\[F_{1}(q^{2}) = 1\!+\!\Big{(}\!\Big{[}\!(-ie)^{2}\!\!\!\int_{0}^{1}\!\!\!dx_{1}dx_ {2}dx_{3}\!\!\int\!\!d\xi\!\int\!\!\frac{d^{4}k}{(2\pi)^{4}}\frac{\delta(1\!-\!x _{1}\!-\!x_{2}\!-\!x_{3})}{(k^{2}\!-\!\Delta\!+\!i\epsilon)^{4}} \tag{12}\] \[\times 12x_{3}i[-\frac{k^{2}}{2}\!+\!(1\!-\!x_{1})(1\!-\!x_{2})q^{2}\! +\!(1\!-\!4x_{3}\!+\!x_{3}^{2})m^{2}]\Big{]}_{\xi\to 0}\] \[+C\Big{)}+{\cal O}(\alpha^{2})\,,\]
and hence
\[F_{1}(q^{2}) = 1\!+\!\Big{(}\frac{\alpha}{2\pi}\Big{[}\!\int_{0}^{1}\!\!\!dx_{ 1}dx_{2}dx_{3}\delta(1\!-\!x_{1}\!-\!x_{2}\!-\!x_{3})\] \[\times[\log\frac{1}{\Delta}\!+\!\frac{(1\!-\!x_{1})(1\!-\!x_{2}) q^{2}\!+\!(1\!-\!4x_{3}\!+\!x_{3}^{2})m^{2}}{\Delta}]\Big{]}_{\xi\to 0}\] \[+C\Big{)}+{\cal O}(\alpha^{2})\,.\]
Considering the one-loop correction to \(F_{1}\) being zero at \(q^{2}=0\), the form factor \(F_{1}(q^{2})\) can be rewritten as
\[F_{1}(q^{2}) = 1\!+\!\frac{\alpha}{2\pi}\Big{[}\!\int_{0}^{1}\!\!\!dx_{1}dx_{2} dx_{3}\delta(1\!-\!x_{1}\!-\!x_{2}\!-\!x_{3})\] \[\times[\log\frac{(1-x_{3})^{2}m^{2}-x_{3}\eta}{\Delta}+\frac{(1 \!-\!x_{1})(1\!-\!x_{2})q^{2}}{\Delta}\] \[+\frac{(1\!-\!x_{3}\!+\!x_{3}^{2})m^{2}}{\Delta}\frac{x_{1}x_{2} q^{2}}{(1\!-\!x_{3})^{2}m^{2}\!-\!x_{3}\xi}]\Big{]}_{\xi\to 0}+{\cal O}(\alpha^{2})\,,\]
where, as a bookkeeping trick, the parameter \(\eta\) is set equal to the value of the introduced \(\xi\) in the limit \(\xi\!\to\!0\). The form factor \(F_{2}(q^{2})\) is
\[F_{2}(q^{2}) = \Big{[}\!(-ie)^{2}\!\!\int_{0}^{1}\!\!\!dx_{1}dx_{2}dx_{3}\!\!\int \!d\xi\!\!\int\!\!\frac{d^{4}k}{(2\pi)^{4}}\frac{\delta(1\!-\!x_{1}\!-\!x_{2}\!- \!x_{3})}{(k^{2}-\Delta+i\epsilon)^{4}} \tag{15}\] \[\times 24im^{2}x_{3}^{2}(1\!-\!x_{3})\Big{]}_{\xi\to 0}+{\cal O}( \alpha^{2})\] \[= \frac{\alpha}{2\pi}\Big{[}\!\int_{0}^{1}\!\!\!dx_{1}dx_{2}dx_{3} \delta(1\!-\!x_{1}\!-\!x_{2}\!-\!x_{3})\frac{2m^{2}}{\Delta}x_{3}(1\!-\!x_{3}) \!\Big{]}_{\xi\to 0}\] \[+{\cal O}(\alpha^{2})\,.\]
At \(q^{2}=0\), one has
\[F_{2}(0) = \frac{\alpha}{2\pi}\!\!\int_{0}^{1}\!\!\!dx_{1}dx_{2}dx_{3}\delta( 1\!-\!x_{1}\!-\!x_{2}\!-\!x_{3})\frac{2x_{3}}{(1\!-\!x_{3})}\!+\!{\cal O}( \alpha^{2}) \tag{16}\] \[= \frac{\alpha}{2\pi}+{\cal O}(\alpha^{2})\,.\]
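The last Feynman-parameter integral is elementary, and the following Python/SymPy sketch (our own verification) confirms that it equals one, reproducing Schwinger's value \(F_{2}(0)=\alpha/2\pi\); the numerical value below uses \(\alpha\approx 1/137\) as an illustrative approximation.

```python
# Verify the simplex integral behind F_2(0): integrating 2*x3/(1-x3) over the
# Feynman-parameter simplex x1 + x2 + x3 = 1 (x2 eliminated by the delta function)
# gives 1, so F_2(0) = alpha/(2*pi), Schwinger's leading contribution to (g-2)/2.
import sympy as sp

x1, x3, alpha = sp.symbols('x1 x3 alpha', positive=True)

inner = sp.cancel(sp.integrate(2*x3/(1 - x3), (x1, 0, 1 - x3)))   # = 2*x3
simplex_integral = sp.integrate(inner, (x3, 0, 1))
print(simplex_integral)                                # 1

F2_0 = alpha/(2*sp.pi)*simplex_integral
print(F2_0)                                            # alpha/(2*pi)
print(float(F2_0.subs(alpha, sp.Rational(1, 137))))    # ~0.00116 (approximate alpha)
```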
Figure 6: The one-loop contribution to the electron vertex function.
#### a.5 A two-loop example
Here the new method is applied to a two-loop transition with overlapping divergences in the \(\phi^{4}\) theory, as shown in Fig. 7. There are two free loop momenta \(k_{A}\) and \(k_{B}\), and the physical transition amplitude \({\cal T}_{\rm P}\) is
\[{\cal T}_{\rm P} = \left[\int\!d\xi\frac{\partial{\cal T}_{\rm F}(\xi)}{\partial\xi} \right]_{\xi\to 0}+C\] \[= \left[\frac{(-i\lambda)^{3}}{2}\!\!\int\!\!d\xi\!\!\int\!\!\!\frac {d^{4}k_{A}\;d^{4}k_{B}}{(2\pi)^{4}(2\pi)^{4}}\frac{i}{k_{A}^{2}-m^{2}}\frac{ i}{(k_{A}\!+\!q)^{2}\!-\!m^{2}}\right.\] \[\left.\times\frac{-i}{(k_{B}^{2}\!-\!m^{2}\!+\!\xi)^{2}}\frac{i}{( k_{B}+k_{A}\!+\!p_{3})^{2}\!-\!m^{2}}\right]_{\xi\to 0}+C\,,\]
with \(q\!=\!p_{1}\!+\!p_{2}\). After the \(k_{B}\) integral, one has
\[{\cal T}_{\rm P} = \Big{[}\frac{(-i\lambda)^{3}}{2}\!\!\int_{0}^{1}\!\!\!dx\!\!\int\!\!\!d\xi\!\!\int\!\!\!\frac{d^{4}k_{A}}{(2\pi)^{4}}\frac{i}{k_{A}^{2}\!-\!m^{2}}\frac{i}{(k_{A}\!+\!q)^{2}\!-\!m^{2}}\] (A.18) \[\times\frac{x}{16\pi^{2}\bigl{[}(k_{A}\!+\!p_{3})^{2}x(1\!-\!x)\!-\!m^{2}\!+\!x\xi\bigr{]}}\Big{]}_{\xi\to 0}+C\,.\]
The expression can be rewritten as
\[{\cal T}_{\rm P} = \Big{[}\frac{(-i\lambda)^{3}}{2}\!\int_{0}^{1}\!\!\!dx\!\int_{0} ^{1}\!\!\!dy\!\int\!\!\!d\xi\!\!\int\!\!\!\frac{d^{4}k_{A}}{(2\pi)^{4}(k_{A}^ {2}\!+\!2yk_{A}\!\cdot\!q\!+\!yq^{2}\!-\!m^{2})^{2}}\] \[\times\frac{x}{16\pi^{2}(k_{A}\!+\!p_{3})^{2}x(1\!-\!x)\!-\!m^{2} \!+\!x\xi}\Big{]}_{\xi\to 0}+C\] \[= \Big{[}\frac{(-i\lambda)^{3}}{2}\!\int_{0}^{1}\!\!\!dx\!\int_{0} ^{1}\!\!\!dy\!\int_{0}^{1}\!\!\!dz\!\int\!\!\!d\xi\!\!\int\!\!\!\frac{d^{4}k_ {A}}{(2\pi)^{4}}\frac{-i}{16\pi^{2}(1\!-\!x)}\] \[\times\frac{2(1-z)}{[zD_{B}\!+\!(1\!-\!z)D_{A}]^{3}}\Big{]}_{\xi \to 0}+C\,,\]
with \(D_{A}\!=\!k_{A}^{2}\!+\!2yk_{A}q\!+\!yq^{2}\!-\!m^{2}\), \(D_{B}\!=\!(k_{A}\!+\!p_{3})^{2}\!-\!m^{2}/x(1\!-\!x)+\!\xi/(1\!-\!x)\). After evaluating the \(k_{A}\) integral, one has
\[{\cal T}_{\rm P} = \Big{[}\frac{(-i\lambda)^{3}}{2}\!\int_{0}^{1}\!\!\!dx\!\int_{0}^{1}\!\!\!dy\!\int_{0}^{1}\!\!\!dz\!\int\!\!\!d\xi\,\frac{-ix}{16\pi^{2}}\frac{-i(1\!-\!z)}{16\pi^{2}(\Delta\!-\!xz\xi)}\Big{]}_{\xi\to 0}+C\] (A.19) \[= \frac{(-i\lambda)^{3}}{2(4\pi)^{4}}\!\int_{0}^{1}\!\!\!dx\!\int_{0}^{1}\!\!\!dy\!\int_{0}^{1}\!\!\!dz\frac{(1\!-\!z)}{z}\!\log\Delta+C\,,\]
with \(\Delta\!=\![(y(1\!-\!z)q\!+\!zp_{3})^{2}\!-\!(yq^{2}\!-\!m^{2})(1\!-\!z)\!-\!p _{3}^{2}z]x(1\!-\!x)+m^{2}z\). Considering the renormalization conditions that the corrections should be zero at \(q^{2}=4m^{2}\), the result can be written as
\[{\cal T}_{\rm P} = \frac{(-i\lambda)^{3}}{2(4\pi)^{4}}\!\!\int_{0}^{1}\!\!\!dx\!\int _{0}^{1}\!\!\!dy\!\int_{0}^{1}\!\!\!dz\frac{1}{z}\Big{[}(1\!-\!z)\log\Delta\] \[-\log[(y^{2}q^{2}\!-\!yq^{2}\!+\!m^{2})x(1\!-\!x)]\Big{]}\!-\!C_ {0}\] \[= \frac{(-i\lambda)^{3}}{2(4\pi)^{4}}\!\!\int_{0}^{1}\!\!\!dx\! \int_{0}^{1}\!\!\!dy\!\int_{0}^{1}\!\!\!dz\frac{1}{z}\Big{[}(1\!-\!z)\!\log \frac{\Delta}{\Delta_{0}}\] \[-\log\frac{y^{2}q^{2}\!-\!yq^{2}\!+\!m^{2}}{(4y^{2}\!-\!4y\!+\!1) m^{2}}\Big{]}\,,\]
with \(\Delta_{0}=[y(1\!-\!z)(y+z-yz)4m^{2}\!+\!(z^{2}\!-\!z)p_{3}^{2}\!-\!(4y\!-1)m^{2} (1\!-\!z)]x(1\!-\!x)\!+\!m^{2}z\).
|
2303.15976 | Polynomial Bounds in Koldobsky's Discrete Slicing Problem | In 2013, Koldobsky posed the problem to find a constant $d_n$, depending only
on the dimension $n$, such that for any origin-symmetric convex body
$K\subset\mathbb{R}^n$ there exists an $(n-1)$-dimensional linear subspace
$H\subset\mathbb{R}^n$ with \[
|K\cap\mathbb Z^n| \leq d_n\,|K\cap H\cap \mathbb
Z^n|\,\mathrm{vol}(K)^{\frac 1n}. \] In this article we show that $d_n$ is
bounded from above by $c\,n^2\,\omega(n)/\log(n)$, where $c$ is an absolute
constant and $\omega(n)$ is the flatness constant. Due to the recent best known
upper bound on $\omega(n)$ we get a ${c\,n^3\log(n)^2}$ bound on $d_n$. This
improves on former bounds which were exponential in the dimension. | Ansgar Freyer, Martin Henk | 2023-03-28T13:47:30Z | http://arxiv.org/abs/2303.15976v2 | # Polynomial bounds in Koldobsky's discrete slicing problem
###### Abstract.
In 2013, Koldobsky posed the problem to find a constant \(d_{n}\), depending only on the dimension \(n\), such that for any origin-symmetric convex body \(K\subset\mathbb{R}^{n}\) there exists an \((n-1)\)-dimensional linear subspace \(H\subset\mathbb{R}^{n}\) with
\[|K\cap\mathbb{Z}^{n}|\leq d_{n}\,|K\cap H\cap\mathbb{Z}^{n}|\,\operatorname{ vol}(K)^{\frac{1}{n}}.\]
In this article we show that \(d_{n}\) is bounded from above by \(c\,n^{2}\,\omega(n)\), where \(c\) is an absolute constant and \(\omega(n)\) is the flatness constant. Due to the best known upper bound on \(\omega(n)\), this gives a \(c\,n^{10/3}\log(n)^{a}\) bound on \(d_{n}\), where \(a\) is another absolute constant. This bound improves on former bounds, which were exponential in the dimension.
## 1. Introduction
By a convex body we mean a non-empty convex compact set \(K\subset\mathbb{R}^{n}\). The class of convex bodies in \(\mathbb{R}^{n}\) is denoted by \(\mathcal{K}^{n}\) and the subclass of convex bodies that are origin-symmetric is denoted by \(\mathcal{K}^{n}_{os}\).
The classical and central slicing problem in convex geometry due to Bourgain [4, 5] asks for the optimal constant \(b_{n}>0\) such that for any \(K\in\mathcal{K}^{n}_{os}\) there exists a hyperplane \(H\) such that
\[\operatorname{vol}(K)\leq b_{n}\operatorname{vol}_{n-1}(K\cap H)\,\operatorname {vol}(K)^{\frac{1}{n}}. \tag{1.1}\]
Here \(\operatorname{vol}(S)\) denotes the volume, i.e., \(n\)-dimensional Lebesgue measure of \(S\subset\mathbb{R}^{n}\), and the \(d\)-dimensional volume of a set \(S\) contained in a \(d\)-dimensional affine plane is denoted by \(\operatorname{vol}_{d}(S)\).
It is conjectured that \(b_{n}\) in (1.1) is an absolute constant, and the current best known bound, due to a very recently announced result of Klartag [19], is of order \(O(\sqrt{\log(n)})\). This conjecture is equivalent to a multitude of other problems in Convex Geometry and Geometric Analysis, such as the isotropic constant conjecture. It is considered to be one of the major open problems in Convex Geometry, and for more information we refer to [6, 7, 20, 27].
Koldobsky considered generalizations of (1.1) to arbitrary measures (see, e.g., [23, 24]). For instance, in [21, 22] it is shown that the best-possible constant \(k_{n}>0\) such that for any measure \(\nu\) with non-negative even continuous density on \(K\) there exists a hyperplane \(H\subset\mathbb{R}^{n}\) with
\[\nu(K)\leq k_{n}\,\nu(K\cap H)\operatorname{vol}(K)^{\frac{1}{n}} \tag{1.2}\]
is of order \(O(\sqrt{n})\). While the measures considered in (1.2) are continuous, Koldobsky also asked for a discrete variant in a similar spirit. Here the problem is to determine the best possible constant \(d_{n}>0\) such that for any \(K\in\mathcal{K}^{n}_{os}\) with \(\dim K\cap\mathbb{Z}^{n}=n\) there exists a central hyperplane \(H\subset\mathbb{R}^{n}\), i.e., a hyperplane passing through the origin, with
\[\operatorname{G}(K)\leq d_{n}\operatorname{G}(K\cap H)\operatorname{vol}(K)^{ \frac{1}{n}},\]
where \(\mathrm{G}(K)=|K\cap\mathbb{Z}^{n}|\) is the lattice point enumerator. In [1] it was shown \(d_{n}\in O(n2^{n})\) and the best known lower bound is of order \(\Omega(n)\)[1, Theorem 1.6]. The main reason for this exponential gap is the unfortunate circumstance that, even though \(K\) is origin-symmetric, the maximal (with respect to lattice points) hyperplane section does not need to pass through the origin. In fact, given a direction \(y\neq 0\) in \(\mathbb{R}^{n}\) the maximal affine hyperplane section of \(K\) orthogonal to \(y\) might contain \(2^{n}\) times as many lattice points as the parallel section through the origin (see, e.g., [11, Lemma 1.3]).
On the other hand it is known [11, Theorem 1.4] that for \(K\in\mathcal{K}_{os}^{n}\) there always exists an affine hyperplane \(A\) such that
\[\mathrm{G}(K)^{(n-1)/n}\leq O(n)\mathrm{G}(K\cap A),\]
and in this paper we show that there exists a (not necessarily parallel) central hyperplane \(H\) such that \(\mathrm{G}(K\cap H)\) does not deviate too much from \(\mathrm{G}(K\cap A)\). To this end we have to distinguish between "large" and "small" affine sections \(K\cap A\), measured with respect to the covering radius.
The covering radius in turn is related to the well-known flatness constant \(\omega(n)\), which is one of the main ingredients of our main result. For precise definitions we refer to Section 2. In order to get a polynomial bound on \(d_{n}\), we need, in particular, the following bound on \(\omega(n)\) (see [3, 31])
\[\omega(n)\leq O(n^{4/3}\log(n)^{a}), \tag{1.3}\]
where \(a>0\) is an absolute constant.
**Theorem 1.1**.: _Let \(K\in\mathcal{K}^{n}\), \(\dim K=n\), with centroid at the origin and let \(k\in\{1,\dots,n-1\}\). There exists a \(k\)-dimensional central plane \(L\subset\mathbb{R}^{n}\) such that_
\[\mathrm{G}(K)^{\frac{k}{n}}\leq O(\omega(n))^{n-k}O\left(\max\left\{\left( \frac{n+1}{k+1}\right)^{k},\omega(k)\,k\,n\right\}\right)\mathrm{G}(K\cap L).\]
As a special case of our investigation and (1.3), we obtain the desired polynomial upper bound for \(d_{n}\) in Koldobsky's discrete slicing problem (1.2).
**Corollary 1.2**.: _Let \(K\in\mathcal{K}_{os}^{n}\) with \(\dim K\cap\mathbb{Z}^{n}=n\). There exists a central hyperplane \(H\subset\mathbb{R}^{n}\) such that_
\[\mathrm{G}(K)\leq O(n^{2}\,\omega(n))\mathrm{G}(K\cap H)\operatorname{vol}(K) ^{\frac{1}{n}}.\]
_In particular,_
\[\mathrm{G}(K)\leq O(n^{10/3}\log(n)^{a})\mathrm{G}(K\cap H)\operatorname{vol}( K)^{\frac{1}{n}}. \tag{1.4}\]
It is quite likely that the correct order is linear in the dimension, which would also be in line with a result of Regev [29], where it is shown by a randomized construction that \(d_{n}\in O(n)\), provided the volume of \(K\) is at most \(c^{n^{2}}\), where \(c\) is an absolute constant.
## 2. Preliminaries
Here we provide further necessary concepts and results from Convex Geometry and Geometry of Numbers which we need for our proof. For more information we refer to the books [2, 13, 14, 15, 32].
Regarding the volume of convex bodies, we will need two classical inequalities. First, we make use of the following bound due to Kuperberg [25] in the so called reverse Blaschke-Santalo inequality
\[\frac{\pi^{n}}{n!}<\operatorname{vol}(K^{\star})\operatorname{vol}(K), \tag{2.1}\]
where \(K\in\mathcal{K}^{n}_{os}\) and \(K^{\star}=\{y\colon x\cdot y\leq 1,\ \text{for all }x\in K\}\) denotes the polar body of \(K\). The famous Mahler conjecture states that the optimal bound is \(4^{n}/n!\) (see, e.g., [10]).
Secondly, we utilize a well-known result by Rogers and Shephard [30] which allows us to compare the volume of \(K\in\mathcal{K}^{n}\) to the volume of its difference body \(K-K\in\mathcal{K}^{n}_{os}\):
\[\operatorname{vol}(K-K)\leq\binom{2n}{n}\operatorname{vol}(K). \tag{2.2}\]
The bound is attained if and only if \(K\) is a simplex, and we note that \(\binom{2n}{n}<4^{n}\).
We recall that a lattice \(\Lambda\subset\mathbb{R}^{n}\) is a discrete subgroup of \(\mathbb{R}^{n}\). If \(\dim\Lambda=d\) and \(K\in\mathcal{K}^{n}_{os}\) is contained in the linear hull of \(\Lambda\), i.e, \(K\subset\operatorname{lin}\Lambda\), then for \(1\leq i\leq d\), the \(i\)th successive minimum of \(K\) with respect to \(\Lambda\) is given by
\[\lambda_{i}(K,\Lambda)=\min\{\lambda\geq 0\colon\dim(\lambda K\cap\Lambda) \geq i\}.\]
The successive minima are related to the volume by Minkowski's second theorem [14, Theorem 23.1]:
\[\lambda_{1}(K,\Lambda)\cdots\lambda_{d}(K,\Lambda)\operatorname{vol}_{d}(K) \leq 2^{d}\det\Lambda, \tag{2.3}\]
where \(\det\Lambda\) is the determinant of the lattice, i.e., the \(d\)-dimensional volume of a fundamental domain of the action of \(\Lambda\) on \(\operatorname{lin}\Lambda\). For \(K\in\mathcal{K}^{n}\) we denote by
\[\operatorname{G}_{\Lambda}(K)=|K\cap\Lambda|\]
the number of its lattice points, and in the case \(\Lambda\subset\mathbb{Z}^{n}\) we just write \(\operatorname{G}(K)\). In order to bound \(\operatorname{G}_{\Lambda}(K)\) of a convex body \(K\subset\operatorname{lin}\Lambda\) in terms of the number of lattice points of lower dimensional sections we need its lattice width with respect to \(\Lambda\) which is given by
\[\operatorname{w}_{\Lambda}(K)=\min_{y\in\Lambda^{\star}\setminus\{0\}}\max_{ x_{1},x_{2}\in K}(x_{1}-x_{2})\cdot y. \tag{2.4}\]
It can be also expressed as
\[\operatorname{w}_{\Lambda}(K)=\lambda_{1}((K-K)^{\star},\Lambda^{\star}),\]
where \(\Lambda^{\star}=\{y\in\operatorname{lin}\Lambda\colon x\cdot y\in\mathbb{Z}\text { for all }x\in\Lambda\}\) is the polar lattice of \(\Lambda\). In particular, we have \((\mathbb{Z}^{n})^{\star}=\mathbb{Z}^{n}\).
A \(k\)-dimensional plane will be called a lattice plane if it contains \(k+1\) affinely independent points of \(\Lambda\). The orthogonal complement \(L^{\perp}\) of a \(k\)-dimensional lattice plane \(L\) containing the origin is an \((n-k)\)-dimensional lattice plane of \(\Lambda^{\star}\). The \((\dim\Lambda-1)\)-dimensional lattice planes are called lattice hyperplanes and can be parameterized via the primitive vectors of \(\Lambda^{\star}\), i.e., such a lattice hyperplane \(H\) is given by
\[H(y,\beta)=\{x\in\operatorname{lin}\Lambda\colon x\cdot y=\beta\}\]
where \(y\in\Lambda^{\star}\setminus\{0\}\) is a generator of the \(1\)-dimensional lattice \(\mathrm{lin}\{y\}\cap\Lambda^{\star}\) and \(\beta\in\mathbb{Z}\). The lattice \(\Lambda\) can be decomposed as
\[\Lambda=\bigcup_{\beta\in\mathbb{Z}}\left(\Lambda\cap H(y,\beta)\right),\]
where none of the sections is empty. From (2.4), we can see that \(\mathrm{w}_{\Lambda}(K)\) describes, up to a rounding, the minimum number of parallel lattice planes intersecting \(K\). In particular, we have
\[\mathrm{G}_{\Lambda}(K)\leq(\mathrm{w}_{\Lambda}(K)+1)\mathrm{G}_{\Lambda}(K \cap H(\overline{y}^{\star},\overline{\beta})), \tag{2.5}\]
where \(\overline{y}^{\star}\in\lambda_{1}((K-K)^{\star},\Lambda^{\star})(K-K)^{\star}\cap\Lambda^{\star}\) and \(\overline{\beta}\) is chosen such that \(\mathrm{G}_{\Lambda}(K\cap H(\overline{y}^{\star},\beta))\) is maximized among \(\beta\in\mathbb{Z}\). If \(\mathrm{dim}(K\cap\Lambda)=\mathrm{dim}\,\Lambda\) we have \(\mathrm{w}_{\Lambda}(K)\geq 1\) and, thus, (2.5) yields
\[\mathrm{G}_{\Lambda}(K)\leq 2\mathrm{w}_{\Lambda}(K)\mathrm{G}_{\Lambda}(K \cap H(\overline{y}^{\star},\overline{\beta})) \tag{2.6}\]
for such convex bodies.
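To make (2.5) and (2.6) concrete, the following is a small brute-force Python sketch for the planar lattice \(\mathbb{Z}^{2}\); the disk used as \(K\), its radius and center, and the search ranges are our own arbitrary choices and are not taken from the text.

```python
# Brute-force illustration of the slicing bound (2.5) on Z^2. The body K is a disk of
# radius R centered at c; the disk and all numerical values are arbitrary choices.
import itertools

R, c = 3.6, (0.3, 0.2)
grid = range(-10, 11)

# lattice points of K
pts = [(x, y) for x in grid for y in grid if (x - c[0])**2 + (y - c[1])**2 <= R**2]

# lattice width (2.4): for a disk, max_{x1,x2 in K} (x1-x2).y = 2*R*|y|_2, minimized
# over nonzero integer directions y (a small search range suffices here)
dirs = [y for y in itertools.product(range(-6, 7), repeat=2) if y != (0, 0)]
y_min = min(dirs, key=lambda y: 2*R*(y[0]**2 + y[1]**2)**0.5)
w = 2*R*(y_min[0]**2 + y_min[1]**2)**0.5

# lattice points per parallel lattice line H(y_min, beta) = {x : x.y_min = beta}
counts = {}
for p in pts:
    beta = p[0]*y_min[0] + p[1]*y_min[1]
    counts[beta] = counts.get(beta, 0) + 1
best_slice = max(counts.values())

print(len(pts), w, best_slice)
print(len(pts) <= (w + 1)*best_slice)   # the bound (2.5) holds
```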
The final lattice parameter that we take into account is the covering radius \(\mu_{\Lambda}(K)\) of \(K\). It is commonly defined as
\[\mu_{\Lambda}(K)=\mathrm{min}\{\mu\geq 0\colon\mu K+\Lambda=\mathrm{lin}\,\Lambda\}.\]
Due to a result of Khinchine [18], there exists a constant depending only on the dimension \(d\) of \(\Lambda\) that bounds the product \(\mathrm{w}_{\Lambda}(K)\mu_{\Lambda}(K)\) from above for all convex bodies \(K\subset\mathrm{lin}\,\Lambda\). The smallest number \(\omega(d)\) with
\[\mathrm{w}_{\Lambda}(K)\mu_{\Lambda}(K)\leq\omega(d) \tag{2.7}\]
is the so-called flatness constant. To this day, the best known upper bound on \(\omega(d)\), stated in (1.3), follows from a result of Rudelson [31], which in turn builds upon [3]. On the other hand, it is easy to see that \(\omega(d)\geq d\), and the current best lower bound is due to a recent result of Mayrhofer, Schade and Weltge [26], showing that \(\omega(d)\geq 2d-o(d)\). Moreover, \(\omega(d)\) is monotone in \(d\), as can be seen by extending a \(d\)-dimensional lattice \(\Lambda\) to a \((d+1)\)-dimensional lattice \(\overline{\Lambda}\) via \(\overline{\Lambda}=\Lambda\oplus\mathbb{Z}e_{d+1}\) and replacing \(K\subseteq\mathrm{lin}\,\Lambda\) by \(\overline{K}=K\times[0,\omega(d-1)]\cdot e_{d+1}\): we have \(\mathrm{w}_{\overline{\Lambda}}(\overline{K})=\mathrm{w}_{\Lambda}(K)\) and \(\mu_{\overline{\Lambda}}(\overline{K})=\mu_{\Lambda}(K)\).
Since the lattice width and covering radius are translation invariant, their definition and properties extend naturally to affine lattice \(\Lambda+t\), where \(\Lambda\subset\mathbb{R}^{n}\) is a lattice and \(t\in\mathbb{R}^{n}\), together with convex bodies \(K\subset t+\mathrm{lin}\,\Lambda\).
Another key ingredient of our proofs is the following result that has been obtained recently in [12, Proposition 1.6] (where the lower bound was proven independently by Dadush [8]):
\[(1-\mu_{\Lambda}(K))^{d}\frac{\mathrm{vol}_{d}(K)}{\det\Lambda}\leq\mathrm{G} _{\Lambda}(K)\leq\frac{\mathrm{vol}_{d}(K)}{\det\Lambda}(1+\mu_{\Lambda}(K))^{ d}. \tag{2.8}\]
For the lower bound it is necessary to assume \(\mu_{\Lambda}(K)\leq 1\). Although (2.8) is stated in [12] only for the case \(\Lambda=\mathbb{Z}^{n}\), the above generalization follows easily by applying a linear isomorphism that maps \(\mathbb{Z}^{d}\) to \(\Lambda\).
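The bounds in (2.8) can be sanity-checked on the simplest example; the following minimal Python sketch (the cube example is ours) verifies them for axis-parallel cubes \(K=[0,N]^{d}\), for which \(\mu_{\mathbb{Z}^{d}}(K)=1/N\) and the upper bound is attained with equality.

```python
# Quick check of the bounds (2.8) for the cube K = [0, N]^d and the lattice Z^d:
# here vol(K) = N^d, G(K) = (N+1)^d, and the covering radius is mu = 1/N, so the
# bounds read (N-1)^d <= (N+1)^d <= (N+1)^d (the upper bound is attained).
def check_cube(N: int, d: int) -> bool:
    vol = N**d
    lattice_points = (N + 1)**d
    mu = 1.0/N                        # covering radius of [0, N]^d with respect to Z^d
    lower = (1 - mu)**d * vol
    upper = (1 + mu)**d * vol
    return lower <= lattice_points <= upper

print(all(check_cube(N, d) for N in range(1, 20) for d in range(1, 8)))   # True
```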
For the sake of brevity, if \(\Lambda=\mathbb{Z}^{n}\), we write \(\mu_{\Lambda}(K)=\mu(K)\) and, likewise, \(\lambda_{i}(K,\Lambda)=\lambda_{i}(K)\) and \(\mathrm{w}(K)=\mathrm{w}_{\mathbb{Z}^{n}}(K)\). Affine planes containing the origin will be called central planes.
## 3. Affine slices
In [11] it was already shown that for any \(K\in\mathcal{K}^{n}\) there exists an affine hyperplane \(A\subset\mathbb{R}^{n}\) such that
\[\mathrm{G}(K)^{\frac{n-1}{n}}\leq O(n^{2})\mathrm{G}(K\cap A).\]
Here we refine this inequality by replacing the constant \(O(n^{2})\) with \(O(\omega(n))\). More generally, we show the following.
**Theorem 3.1**.: _Let \(K\in\mathcal{K}^{n}\), \(n\geq 2\), and let \(k\in\{1,\ldots,n-1\}\). There exists a \(k\)-dimensional affine plane \(A\subset\mathbb{R}^{n}\) such that_
\[\mathrm{G}(K)^{\frac{k}{n}}\leq O(\omega(n))^{n-k}\,\mathrm{G}(K\cap A). \tag{3.1}\]
Before we come to its proof, we remark that Rabinowitz [28] settled the case \(k=1\) as he showed that for any convex body \(K\in\mathcal{K}^{n}\) there exists a line \(\ell\subset\mathbb{R}^{n}\) with
\[\mathrm{G}(K)^{\frac{1}{n}}\leq\mathrm{G}(K\cap\ell). \tag{3.2}\]
We also need the following lemma.
**Lemma 3.2**.: _Let \(n\geq 2\), \(K\in\mathcal{K}^{n}\) with \(\dim K=n\) and let \(m\in\{1,\ldots,n-1\}\). Then we have_
\[\left(\lambda_{1}((K-K)^{\star})\cdots\lambda_{m}((K-K)^{\star})\right)^{n} \leq\left(n!\left(\frac{8}{\pi}\right)^{n}\mathrm{vol}(K)\right)^{m}\]
Proof.: Let \(\lambda_{i}^{\star}=\lambda_{i}((K-K)^{\star})\), \(1\leq i\leq n\). As
\[(\lambda_{1}^{\star}\cdots\lambda_{m}^{\star})^{n}\leq(\lambda_{1}^{\star} \cdots\lambda_{n}^{\star})^{m},\]
Minkowski's second theorem (2.3) yields
\[(\lambda_{1}^{\star}\cdots\lambda_{m}^{\star})^{n}\leq\left(\frac{2^{n}}{\mathrm{vol}((K-K)^{\star})}\right)^{m}.\]
The bound now follows by first applying Kuperberg's reverse Blaschke-Santalo inequality (2.1) to \(\mathrm{vol}((K-K)^{\star})\), followed by the Rogers-Shephard inequality (2.2).
Proof of Theorem 3.1.: We prove (3.1) by induction on the ambient dimension \(n\) and so we may assume \(\dim(K\cap\mathbb{Z}^{n})=n\). In view of (3.2), we may also assume \(k\geq 2\). We distinguish two cases depending on the lattice width of \(K\).
First we assume that \(\mathrm{w}(K)\leq\omega(n)+n\). Let \(H=H(\overline{y}^{\star},\overline{\beta})\) be as in (2.6). Then we have
\[\mathrm{G}(K)\leq O(\mathrm{w}(K))\mathrm{G}(K\cap H)=O(\omega(n))\mathrm{G}( K\cap H). \tag{3.3}\]
If \(k=n-1\) we are done, so we can assume \(k<n-1\). By induction, there exists a \(k\)-dimensional affine plane \(A\subset H\) with
\[\mathrm{G}(K\cap H)^{\frac{k}{n-1}}\leq O(\omega(n-1))^{n-1-k}\mathrm{G}(K \cap A). \tag{3.4}\]
Substituting (3.4) into (3.3) gives
\[\mathrm{G}(K) \leq O(\omega(n))^{1+\frac{(n-1)(n-k-1)}{k}}\mathrm{G}(K\cap A)^ {\frac{n-1}{k}}\] \[\leq O(\omega(n))^{\frac{n(n-1)}{k}}\mathrm{G}(K\cap A)^{\frac{n} {k}},\]
where we used the monotonicity of the flatness constant. Taking powers, we obtain
\[\mathrm{G}(K)^{\frac{k}{n}}\leq O(\omega(n))^{n-k}\mathrm{G}(K\cap A)\]
as desired.
Next, we assume that \(\mathrm{w}(K)\geq\omega(n)+n\). In this case, (2.7) implies
\[\mu(K)\leq\frac{\omega(n)}{\omega(n)+n}<1. \tag{3.5}\]
Let \(y_{1},\ldots,y_{n-k}\in\mathbb{Z}^{n}\setminus\{0\}\) be linearly independent with
\[y_{i}\in\lambda_{i}((K-K)^{\star})\,(K-K)^{\star}\cap\mathbb{Z}^{n},1\leq i \leq n-k,\]
and let \(L=\operatorname{lin}\{y_{1},\ldots,y_{n-k}\}\). Moreover, let \(\widetilde{K}=K|L\) be the orthogonal projection of \(K\) onto \(L\), and we also consider the lattice \(\widetilde{\Lambda}=\mathbb{Z}^{n}|L\).
As pointed out in [17, Proposition 4.1] we have
\[\mathrm{G}_{\widetilde{\Lambda}}(\widetilde{K})\leq\prod_{i=1}^{n-k}\left( \frac{1}{2}\lambda_{i}((\widetilde{K}-\widetilde{K})^{\star},\widetilde{ \Lambda}^{\star})+1\right)\leq\prod_{i=1}^{n-k}\lambda_{i}((\widetilde{K}- \widetilde{K})^{\star},\widetilde{\Lambda}^{\star}), \tag{3.6}\]
where for the last inequality we exploit the fact that \(\mathrm{w}(K)\geq 2\) in the present case. The polarity operations above on \(\widetilde{K}\) and \(\widetilde{\Lambda}\) are carried out within the space \(L\) and so we have
\[(\widetilde{K}-\widetilde{K})^{\star}=(K-K)^{\star}\cap L,\qquad\widetilde{ \Lambda}^{\star}=\mathbb{Z}^{n}\cap L.\]
Hence, by the choice of \(L\) we have \(\lambda_{i}((\widetilde{K}-\widetilde{K})^{\star},\widetilde{\Lambda}^{\star})=\lambda_{i}((K-K)^{\star})\) for \(1\leq i\leq n-k\), and therefore (3.6) and Lemma 3.2 yield
\[\mathrm{G}_{\widetilde{\Lambda}}(\widetilde{K})\leq O(n)^{n-k}\mathrm{vol}(K )^{\frac{n-k}{n}}. \tag{3.7}\]
In view of (3.5) we can apply the volume approximation by the covering radius (2.8) and obtain
\[\mathrm{vol}(K)\leq(1-\mu(K))^{-n}\mathrm{G}(K)\leq\left(\frac{\omega(n)+n}{ n}\right)^{n}\mathrm{G}(K).\]
Combining this with (3.7) gives
\[\mathrm{G}_{\widetilde{\Lambda}}(\widetilde{K})\leq O(\omega(n))^{n-k} \mathrm{G}(K)^{\frac{n-k}{n}}.\]
Hence we obtain
\[\mathrm{G}(K) =\sum_{x\in\widetilde{K}\cap\widetilde{\Lambda}}\mathrm{G}(K \cap(x+L^{\perp}))\] \[\leq\mathrm{G}_{\widetilde{\Lambda}}(\widetilde{K})\max_{x\in \widetilde{\Lambda}}\mathrm{G}(K\cap(x+L^{\perp}))\] \[\leq O(\omega(n))^{n-k}\mathrm{G}(K)^{\frac{n-k}{n}}\max_{x\in \widetilde{\Lambda}}\mathrm{G}(K\cap(x+L^{\perp})).\]
Thus
\[\mathrm{G}(K)^{\frac{k}{n}}\leq O(\omega(n))^{n-k}\mathrm{G}(K\cap A),\]
where \(A=\widetilde{x}+L^{\perp}\) for some \(\widetilde{x}\in\widetilde{\Lambda}\) is a \(k\)-dimensional lattice plane such that \(\mathrm{G}(K\cap A)=\max_{x\in\widetilde{\Lambda}}\mathrm{G}(K\cap(x+L^{\perp }))\)
## 4. From affine to central slices
For an origin-symmetric convex body \(K\in\mathcal{K}^{n}_{os}\) the classical concavity principle of Brunn states that for any \(k\)-dimensional plane \(A\)
\[\operatorname{vol}_{k}(K\cap A)\leq\operatorname{vol}_{k}(K\cap(A-A)), \tag{4.1}\]
where \(A-A\) is the central plane parallel to \(A\) passing through the origin (see, e.g., [2, Theorem 1.2.1]). If the centroid of a convex body \(K\in\mathcal{K}^{n}\) is at the origin, i.e., we have \(0=\operatorname{vol}(K)^{-1}\int_{K}x\operatorname{d}x\), we will call the body \(K\) centered. For those bodies the following analogue of (4.1) has been obtained by Grunbaum [16] for \(k=n-1\) and by Fradelizi [9] for general \(k\):
\[\operatorname{vol}_{k}(K\cap A)\leq\left(\frac{n+1}{k+1}\right)^{k} \operatorname{vol}_{k}(K\cap(A-A)). \tag{4.2}\]
For \(k=n-1\), the constant in the above inequality is bounded from above by \(\operatorname{e}\). In the discrete setting, however, this factor must be replaced by \(2^{n-1}\) (even in the symmetric case) as the example \(K=\operatorname{conv}(\pm([0,1]^{n-1}\times\{1\}))\) with \(A=\{x\in\mathbb{R}^{n}\colon x_{n}=1\}\) shows (see, e.g., [1, 11]). Nonetheless, we will show that a central plane containing "many" lattice points does still exist.
**Proposition 4.1**.: _Let \(n\geq 2,\ k\in\{1,\ldots,n-1\}\), and let \(A\subset\mathbb{R}^{n}\) be a \(k\)-dimensional plane._
* _Let_ \(K\in\mathcal{K}^{n}\)_,_ \(\dim K=n\)_, be centered. Then there exists a_ \(k\)_-dimensional central plane_ \(L\subset\mathbb{R}^{n}\) _such that_ \[\operatorname{G}(K\cap A)\leq O\left(\max\left\{\left(\frac{n+1}{k+1}\right)^ {k},k\,n\,\omega(k)\right\}\right)\operatorname{G}(K\cap L).\]
* _Let_ \(K\in\mathcal{K}^{n}_{os}\)_,_ \(\dim K=n\)_. Then there exists a_ \(k\)_-dimensional central plane_ \(L\subset\mathbb{R}^{n}\) _such that_ \[\operatorname{G}(K\cap A)\leq O\left(\omega(k)k\right)\operatorname{G}(K\cap L).\]
In both bounds, if \(k\in O(1)\), the asymptotic order of the constant is the same as in the corresponding continuous inequalities (4.1) and (4.2), respectively. Moreover, for \(k=n-1\) the maximum in i) is of order \(O(\omega(n)n^{2})\).
Proof of Proposition 4.1.: Let \(\Lambda=\mathbb{Z}^{n}\cap A\). We may assume that \(\Lambda\) is a \(k\)-dimensional (affine) lattice.
For i), we first assume that
\[\mu_{\Lambda}(K\cap A)\leq\tfrac{1}{(k+1)(n+2)}. \tag{4.3}\]
Let \(L=A-A\) and let \(\Lambda_{0}=\mathbb{Z}^{n}\cap L\). Then \(\Lambda=t+\Lambda_{0}\) for some \(t\in\mathbb{R}^{n}\) and as \(K\) is centered we have \((K-K)\subseteq(n+1)K\) (cf. [32, Lemma 2.3.3]). Hence,
\[(K\cap A)-(K\cap A)\subseteq(n+1)(K\cap L), \tag{4.4}\]
and as the covering radius is translation invariant and homogeneous of degree (-1) we obtain
\[\mu_{\Lambda_{0}}(K\cap L)\leq(n+1)\mu_{\Lambda}(K\cap A)<1. \tag{4.5}\]
Next we apply again the volume approximation via the covering radius (2.8) and together with (4.2) and on account of (4.5) and \(\det\Lambda=\det\Lambda_{0}\) we obtain
\[\begin{split}\mathrm{G}(K\cap A)&\leq\frac{\mathrm{ vol}_{k}(K\cap A)}{\det\Lambda}(1+\mu(K\cap A,\Lambda))^{k}\\ &\leq\left(\frac{n+1}{k+1}\right)^{k}\frac{\mathrm{vol}(K\cap L )}{\det\Lambda}(1+\mu(K\cap A,\Lambda))^{k}\\ &\leq\left(\frac{n+1}{k+1}\right)^{k}\left(\frac{1+\mu_{\Lambda} (K\cap A)}{1-\mu_{\Lambda_{0}}(K\cap L)}\right)^{k}\mathrm{G}(K\cap L)\\ &\leq\left(\frac{n+1}{k+1}\right)^{k}\left(\frac{1+\mu_{\Lambda} (K\cap A)}{1-(n+1)\mu_{\Lambda}(K\cap A)}\right)^{k}\mathrm{G}(K\cap L).\end{split} \tag{4.6}\]
With (4.3) we find
\[\begin{split}\left(\frac{1+\mu_{\Lambda}(K\cap A)}{1-(n+1)\mu_{ \Lambda}(K\cap A)}\right)^{k}&\leq\left(\frac{(k+1)(n+2)+1}{(k+1 )(n+2)-(n+1)}\right)^{k}\\ &=\left(\frac{(k+1)(n+2)+1}{k(n+2)+1}\right)^{k}\leq\left(\frac{ k+1}{k}\right)^{k}\leq\mathrm{e}.\end{split}\]
Combining this with (4.6) yields
\[\mathrm{G}(K\cap A)\leq O\left(\left(\frac{n+1}{k+1}\right)^{k}\right)\mathrm{ G}(K\cap L). \tag{4.7}\]
Now suppose that (4.3) does not hold. Then (2.7) gives
\[\mathrm{w}(K\cap A)\leq\omega(k)\left(k+1\right)(n+2).\]
In the special case that \(\dim(K\cap A\cap\mathbb{Z}^{n})<k\), it suffices to consider the central plane \(L=\operatorname{lin}(K\cap A\cap\mathbb{Z}^{n})\) whose dimension is at most \(k\). For this choice of \(L\) we clearly have \(\mathrm{G}(K\cap A)\leq\mathrm{G}(K\cap L)\). We are done after extending \(L\) to a linear \(k\)-space, if necessary.
So we can assume that \(K\cap A\) contains \(k\) affinely independent lattice points. Thus, it follows from (2.6) that there exists an affine \((k-1)\)-dimensional lattice plane \(\widetilde{A}\subseteq A\) such that
\[\begin{split}\mathrm{G}(K\cap A)&\leq O(\mathrm{w}(K\cap A))\mathrm{G}(K\cap\widetilde{A})\\ &\leq O\left(\omega(k)\,k\,n\right)\mathrm{G}(K\cap\operatorname{lin}\widetilde{A}).\end{split} \tag{4.8}\]
If \(\mathbf{0}\in\widetilde{A}\subset A\), then \(A\) is a linear space itself and the statement of the proposition is obvious. Otherwise, \(\operatorname{lin}\widetilde{A}\) is a \(k\)-dimensional central plane. Taking the maximum of the upper bounds in (4.7) and (4.8) yields the claim in the centered case.
The argument for ii) follows the same lines. Here we replace the threshold value \(\frac{1}{(n+2)(k+1)}\) in the case distinction (4.3) by \(\frac{1}{3(k+1)}\). Since \(K\) is symmetric, we can improve the inclusion bound (4.4) to
\[\mu_{\Lambda_{0}}(K\cap L)\leq 2\mu_{\Lambda}(K\cap A).\]
Moreover, we can estimate the volume of the affine section against the volume of the central section using Brunn's concavity principle (4.1) instead of Fradelizi's bound (4.2). With these improvements, the bound in (4.7) becomes
\[\mathrm{G}(K\cap A)\leq O(1)\mathrm{G}(K\cap L).\]
On the other hand, due to the new threshold value in (4.3), the bound in (4.8) is
\[\operatorname{G}(K\cap A)\leq O\left(\omega(k)\,k\right)\operatorname{G}(K\cap L).\qed\]
From here on, the proofs of Theorem 1.1 and Corollary 1.2 can be obtained easily by first considering a large affine slice and then estimating it against a central one using Proposition 4.1.
Proof of Theorem 1.1.: With the help of Theorem 3.1 we obtain an affine plane \(A\subset\mathbb{R}^{n}\) of dimension \(k\) such that
\[\operatorname{G}(K)^{\frac{k}{n}}\leq O(\omega(n))^{n-k}\operatorname{G}(K \cap A).\]
Proposition 4.1 i) yields a linear \(k\)-dimensional central plane \(L\subset\mathbb{R}^{n}\) with
\[\operatorname{G}(K\cap A)\leq O\left(\max\left\{\left(\frac{n+1}{k+1}\right) ^{k},\omega(k)\,k\,n\right\}\right)\operatorname{G}(K\cap L).\]
The theorem is proven.
Proof of Corollary 1.2.: Lemma 3.2 with \(m=1\) tells us that
\[\operatorname{w}(K)\leq O(n)\operatorname{vol}(K)^{\frac{1}{n}}\]
Hence, (2.6) gives the existence of an affine hyperplane \(A\) such that
\[\operatorname{G}(K)\leq O(\operatorname{w}(K))\operatorname{G}(K\cap A)\leq O (n)\operatorname{G}(K\cap A)\operatorname{vol}(K)^{\frac{1}{n}}. \tag{4.9}\]
Applying Proposition 4.1 ii) to \(\operatorname{G}(K\cap A)\) shows that there exists a central hyperplane \(H\) with
\[\operatorname{G}(K\cap A) \leq O\left(n\omega(n)\right)\operatorname{G}(K\cap H)\] \[\leq O\left(n^{7/3}\log(n)^{a}\right)\operatorname{G}(K\cap H),\]
where for the last inequality we used the bound (1.3).
Clearly, any strengthening of the flatness theorem (1.3) directly yields an improvement of (1.4) in Corollary 1.2. On the other hand, the affine estimate (4.9) is sharp as the cross-polytope \(C_{n}^{\star}=\operatorname{conv}\{\pm e_{1},\dots,\pm e_{n}\}\), where \(e_{i}\) denotes the \(i\)th standard basis vector, shows. \(C_{n}^{\star}\) contains \(2n+1\) lattice points, its vertices together with the origin. Its volume is \(2^{n}/n!\). Moreover, it is easy to check that any hyperplane section of \(C_{n}^{\star}\) can contain at most \(2n-1\) lattice points of \(C_{n}^{\star}\). This value is attained by the coordinate sections \(C_{n}^{\star}\cap e_{i}^{\perp}\). Hence,
\[\frac{\operatorname{G}(C_{n}^{\star})}{\max_{A}\operatorname{G}(C_{n}^{\star} \cap A)\operatorname{vol}(C_{n}^{\star})^{\frac{1}{n}}}=O(n),\]
where \(A\) ranges over all affine hyperplanes. This shows that the linear order in (4.9) cannot be improved.
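For small \(n\), the cross-polytope claims can be checked directly by brute force; the following Python/NumPy sketch (our own verification) enumerates the lattice points of \(C_{n}^{\star}\) and the hyperplanes they span, confirms that no affine hyperplane contains more than \(2n-1\) of the \(2n+1\) points, and evaluates the ratio above.

```python
# Brute-force check of the cross-polytope example for small n: C_n^* = conv{+-e_i}
# contains 2n+1 lattice points (the origin and +-e_i), no affine hyperplane contains
# more than 2n-1 of them, and vol(C_n^*) = 2^n/n! gives a ratio growing with n.
from itertools import combinations
from math import factorial
import numpy as np

def max_hyperplane_section(n: int) -> int:
    pts = [np.zeros(n)] + [v for i in range(n) for v in (np.eye(n)[i], -np.eye(n)[i])]
    best = 0
    for subset in combinations(pts, n):          # candidate hyperplanes through n points
        A = np.vstack([np.append(p, 1.0) for p in subset])
        if np.linalg.matrix_rank(A) < n:         # points not affinely independent
            continue
        _, _, vh = np.linalg.svd(A)              # hyperplane {x : a.x = b} from null space
        a, b = vh[-1][:n], -vh[-1][n]
        best = max(best, sum(abs(a @ p - b) < 1e-9 for p in pts))
    return best

for n in (2, 3, 4):
    G, sec, vol = 2*n + 1, max_hyperplane_section(n), 2**n/factorial(n)
    print(n, sec, sec == 2*n - 1, G/(sec*vol**(1/n)))
```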
## Acknowledgement
Ansgar Freyer is partially supported by the Austrian Science Fund (FWF) Project P34446-N. |
2305.17237 | Asymptotically locally flat and AdS higher-dimensional black holes of
Einstein-Horndeski-Maxwell gravity in the light of EHT observations: shadow
behavior and deflection angle | Unification of gravity with other interactions, achieving the ultimate
framework of quantum gravity, and fundamental problems in particle physics and
cosmology motivate to consider extra spatial dimensions. The impact of these
extra dimensions on the modified theories of gravity has attracted a lot of
attention. One way to examine how extra dimensions affect the modified
gravitational theories is to analytically investigate astrophysical phenomena,
such as black hole shadows. In this study, we aim to investigate the behavior
of the shadow shapes of higher-dimensional charged black hole solutions
including asymptotically locally flat (ALF) and asymptotically locally AdS
(ALAdS) in Einstein-Horndeski-Maxwell (EHM) gravitational theory. We utilize
the Hamilton-Jacobi method to find photon orbits around these black holes as
well as the Carter approach to formulate the geodesic equations. We examine how
extra dimensions, negative cosmological constant, electric charge, and coupling
constants of the EHM gravity affect the shadow size of the black hole. Then, we
constrain these parameters by comparing the shadow radius of these black holes
with the shadow size of M87* supermassive black hole captured by the Event
Horizon Telescope (EHT) collaborations. We discover that generally the presence
of extra dimensions within the EHM gravity results in reducing the shadow size
of higher-dimensional ALF and ALAdS charged black holes, whereas the impact of
electric charge on the shadow of these black holes is suppressible.... | Kourosh Nozari, Sara Saghafi | 2023-05-26T19:51:45Z | http://arxiv.org/abs/2305.17237v2 | Asymptotically locally flat and AdS higher-dimensional black holes of Einstein-Horndeski-Maxwell gravity in the light of EHT observations: shadow behavior and deflection angle
###### Abstract
Unification of gravity with other interactions, achieving the ultimate framework of quantum gravity, and fundamental problems in particle physics and cosmology motivate to consider extra spatial dimensions. The impact of these extra dimensions on the modified theories of gravity has attracted a lot of attention. One way to examine how extra dimensions affect the modified gravitational theories is to analytically investigate astrophysical phenomena, such as black hole shadows. In this study, we aim to investigate the behavior of the shadow shapes of higher-dimensional charged black hole solutions including asymptotically locally flat (ALF) and asymptotically locally AdS (ALAdS) in Einstein-Horndeski-Maxwell (EHM) gravitational theory. We utilize the Hamilton-Jacobi method to find photon orbits around these black holes as well as the Carter approach to formulate the geodesic equations. We examine how extra dimensions, negative cosmological constant, electric charge, and coupling constants of the EHM gravity affect the shadow size of the black hole. Then, we constrain these parameters by comparing the shadow radius of these black holes with the shadow size of M87* supermassive black hole captured by the Event Horizon Telescope (EHT) collaborations. We discover that generally the presence of extra dimensions within the EHM gravity results in reducing the shadow size of higher-dimensional ALF and ALAdS charged black holes, whereas the impact of electric charge on the shadow of these black holes is suppressible. Interestingly, we observe that decreasing the negative cosmological constant, i.e., increasing its absolute value, leads to increase the shadow size of the ALAdS charged higher-dimensional black hole in the EHM gravity. Surprisingly, based on the constraints from EHT observations, we discover that only the shadow size of the four dimensional ALF charged black hole lies in the confidence levels of EHT data, whereas owing to the presence of the negative cosmological constant, the shadow radius of the four, five, and seven dimensional ALAdS charged black holes lie within the EHT data confidence levels.
Black Hole Shadow, Extra Dimensions, Cosmological Constant, Deflection Angle, Horndeski Gravity, EHT, M87*.

pacs: 04.50.Kd, 04.70.-s, 04.20.Ha, 04.25.dg, 04.50.-h, 04.50.Gh, 97.60.Lf
###### Contents
* I Introduction
* II EHM gravity with arbitrary dimensions and its black hole solutions
  * II.1 ALF black hole with extra dimensions in EHM gravity
  * II.2 ALAdS black hole with extra dimensions in EHM gravity
* III General formalism for shadow and deflection angle of higher-dimensional black holes and shadow observables
  * Null geodesics
  * Geometrical shapes of shadow
  * Energy emission rate
  * Deflection angle
  * Shadow observables
* IV Shadow and deflection angle of the higher-dimensional ALF and ALAdS black holes in the EHM gravity
  * ALF charged black hole with extra dimensions
    * Effective potential
    * Geometrical shapes of shadow
    * Energy emission rate
    * Deflection angle
    * Constraints from EHT observations of M87*
  * ALAdS charged black hole with extra dimensions
    * Effective potential
    * Geometrical shapes of shadow
    * Energy emission rate
    * Deflection angle
    * Constraints from EHT observations of M87*
* V Summary and Conclusions
## I Introduction
The breakthrough successes in capturing the first images of the shadows of the supermassive black holes M87* [1; 2; 3; 4; 5; 6; 7; 8] and Sgr A* [9; 10; 11; 12; 13; 14] by the Event Horizon Telescope (EHT) collaboration shed light on the physics of black holes and opened a wide gate to a deeper understanding of these mysterious celestial objects. The event horizon of a black hole, i.e., the boundary of no return for any crossing matter or radiation, is not directly observable, since it emits no light. Instead, what we can observe is the black hole "shadow", the dark region on a light background that appears around the event horizon due to the gravitational lensing phenomenon [15; 16; 17]. Since the release of the shadow images of M87* and Sgr A*, many efforts have been devoted to improving the measurements to reach higher-resolution images [18]. As a result, theoretical efforts in investigating black hole physics, particularly the shadow behavior in various gravitational theories, become significant for exploiting the achievable resolution. In this regard, analytic and numerical studies of the apparent geometrical shape of various black hole spacetimes supply new theoretical shadow templates for future observations [19; 20; 21; 22; 23]. The shape and size of the shadow are determined by the black hole parameters, i.e., mass, electric charge, and angular momentum [24], in addition to spacetime properties [25; 26] and the position of the observer. For non-rotating black holes, the shape of the shadow is a perfect circle. The angular momentum parameter can, however, cause rotating black holes to have non-trivial shadow shapes [27].
In 1914, Nordstrom first proposed the idea of extra dimensions [28]. According to his idea, one can unify the electromagnetic and gravitational fields by treating four-dimensional spacetime as a surface in a five-dimensional spacetime. Today, unifying the gravitational and gauge interactions of elementary particles, quantizing the gravitational interaction, the Higgs mass hierarchy problem, and the cosmological constant problem are the main motivations for the enormous number of studies on extra dimensions. In this regard, the Kaluza-Klein (KK) theory [29; 30], built on Einstein's General Theory of Relativity (GR), introduces a compact space constructed from compact extra dimensions with a certain compactification scale to unify the gravitational interaction with the electromagnetic or even non-Abelian gauge fields characterizing the weak and strong interactions. Furthermore, string theory (M-theory), the well-known candidate theory for quantum gravity, requires ten (eleven) spacetime dimensions and hence several compact extra spatial dimensions [31; 32]. In addition to these compact extra dimensions, with extensions up to the order of the Planck length, there are also ideas for large extra dimensions of the order of a millimeter. This new gate to the topic of extra dimensions was opened by the Arkani-Hamed-Dimopoulos-Dvali (ADD) braneworld model [33; 34], which addresses the Higgs mass hierarchy problem by employing large extra dimensions. It is worth noting that the dramatic feature of these large extra dimensions is that their impacts may be detectable in future accelerator, astrophysical, and tabletop experiments. Surprisingly, the ADD model can be incorporated in string theory [35]. Besides the compact and large extra dimensions, the Randall-Sundrum (RS) braneworld model [36; 37] suggests warped extra dimensions to address the Higgs mass hierarchy problem. In addition to these types of extra dimensions, there are also theories with infinite-volume extra dimensions, such as the Dvali-Gabadadze-Porrati (DGP) braneworld scenario [38], in which even at very low energies the spacetime is not four-dimensional and the extra dimensions are neither compact nor warped. Such theories are candidates for addressing the cosmological constant problem [39; 40], since in these theories gravity is modified at large distances thanks to the presence of infinite-volume extra dimensions. Some detailed reviews on higher-dimensional models can be found in Refs. [41; 42; 43; 44]. Within the framework of black hole physics, different methods and approaches have been employed to extend and investigate various black hole models in arbitrary dimensions [45; 46], such as the Tangherlini method [47] for generalizing the Schwarzschild solution to \(n\) dimensions.
Detecting extra dimensions is a priority for physicists in high-energy and particle experiments. The Large Hadron Collider (LHC) at CERN and future colliders are promising tools for exploring such extra dimensions and the effects of the strong-gravity regimes corresponding to higher-dimensional black holes [48; 49; 50; 51; 52; 53; 54]. Moreover, studies of the Hydrogen atom in higher dimensions [55; 56; 57], spectroscopy experiments [58; 59; 60], and ideas to address the proton radius puzzle [61; 62; 63] support the existence of extra dimensions. On the other hand, two recent achievements towards probing black hole strong-field regimes are the detection of Gravitational Waves (GW) by the LIGO/Virgo collaborations [64] and the above-mentioned images captured by the EHT collaborations. GW signals may carry traces of extra dimensions, encoding information about the associated amplitudes and the dynamics of fluctuation modes. Hence, many works have focused on revealing such physics [65; 66; 67; 68; 69]; for a detailed review, see Ref. [70]. The EHT has now provided new possibilities to continue the exploration of extra dimensions. Recently, in seminal works [71; 72; 73], the authors found noteworthy constraints from EHT observations on warped and compact extra dimensions within the RS model and M-theory. Therefore, one can utilize the EHT data to explore all types of extra dimensions and see whether they can be detected, as we aim to do in this study. In this regard, it seems that extra dimensions affect the shadows of black holes by reducing the shadow size in various black hole models and gravitational theories [74; 75; 76; 77; 78; 79]. However, the exact effect of extra dimensions on black hole shadows is still an area of active research and is not yet fully understood. Apparently, the impacts of large and infinite-volume extra dimensions have a better chance of being detected in the future.
Besides, the size and shape of black hole shadows (similar to other astrophysical phenomena [80; 81; 82; 83; 84]) may differ in extended theories of gravity through additional degrees of freedom arose from these theories. Therefore, investigating the size and shape of the black hole shadows may aid in evaluating parameters of black hole metrics and testing alternative theories of gravity. Theoretical motivations [85] or dark energy, dark matter, and cosmological modeling
[86; 87; 88; 89; 90; 91; 92] are only a few examples of the many hypotheses that make up the enormous field of extended theories of gravity beyond GR. Among them, some novel ghost-free special classes of theories have been developed, such as \(f(R)\) theories [93; 94], Lovelock theories [95; 96], and the scalar-tensor theories initially formulated by Horndeski [97] (for detailed reviews, see Refs. [98; 99]). The Horndeski theory is the most general scalar-tensor gravitational theory with second-order equations of motion. Many studies in the literature have examined the Einstein-Horndeski scalar-tensor modified theory of gravity in various astrophysical settings and in cosmological modeling [100; 101; 102; 103; 104; 105; 106; 107; 108; 109]. To obtain stable black hole solutions of Einstein-Horndeski gravity, actions containing a non-minimal kinetic coupling of a scalar field to the Einstein tensor have received significant attention. Spherically symmetric solutions with non-minimal derivative coupling were investigated without a cosmological constant in Ref. [110] and with a negative cosmological constant in Ref. [111]. The asymptotically locally flat and asymptotically locally anti-de Sitter (AdS) black hole solutions in Einstein-Horndeski gravity were first found in Ref. [112]. The asymptotically locally flat and asymptotically locally AdS black hole solutions in the Einstein-Horndeski-Maxwell (EHM) gravitational theory in four and higher dimensions were obtained in Ref. [113]. The thermodynamics of the latter solutions is also studied in Refs. [114; 115].
A vast number of works have been focused on the issue of black hole shadow to find what and how degrees of freedom, arose from extended theories of gravity other than black holes parameters, affect the shadow behavior [116]. Some examples are as follows: the shadow behavior of the Kerr-Newman family of solutions of the Einstein-Maxwell equations is investigated in Refs. [117; 118; 119]; the shadow of a black hole with NUT-charges [120; 121]; the black hole shadows in Einstein-Maxwell-dilaton gravity [122; 123], in Chern-Simons modified gravity [124]; the apparent shape of the Sen black hole [125; 126; 127]; shadows of colliding and multi-black holes [128; 129]; shadow behavior of rotating black holes in \(f(R)\) gravity [130], conformal Weyl gravity [131], and Einstein-dilaton-Gauss-Bonnet black holes [132]; shadow behavior of the non-commutative geometry inspired, quantum-corrected, and magnetically charged black holes [133; 134; 135; 136]; shadow behavior of Einstein-Born-Infeld black holes [137]; shadow behavior of Ayon-Beato-Garcia black hole and also, rotating Hayward and rotating Bardeen regular black holes [138] and hairy black holes [139; 140; 141]; chaotic shadow of a non-Kerr rotating compact objects with quadrupole mass moment and a magnetic dipole [142; 143], and black holes with exotic matter [144; 145; 146; 147; 148; 149]; and also, shadow behavior of wormholes and naked singularities [150; 151; 152].
In this study, we aim to investigate the shadow behavior and deflection angle of the asymptotically locally flat (ALF) and asymptotically locally AdS (ALAdS) charged black hole solutions in EHM gravity with extra dimensions and also, estimate the energy emission rate associated with these black holes. We want to examine how extra dimensions together with electric charge and negative cosmological constant within the EHM gravity affect the shadow and deflection angle of the black holes to gain a new template of black hole shadow for future theoretical and observational applications. Additionally, we want to constrain extra dimensions, the electric charge, negative cosmological constant, and the coupling constants of EHM gravity by comparing the shadow size of the higher-dimensional ALF and ALAdS charged black holes in EHM gravity with the shadow size of M87* supermassive black hole captured by EHT. This paper is organized as follows. In Section II we first briefly introduce the EHM gravitational theory with arbitrary dimensions and then describe the line elements of the higher-dimensional ALF and ALAdS charged black holes in the theory. In Section III, we provide the general formalism to study the shadow behavior of the higher-dimensional black holes by utilizing the Hamilton-Jacobi approach and Carter method to formulate the null geodesic equations. We specify the shadow shape of the black holes on the observer's sky in celestial coordinates, and estimate the energy emission rate and deflection angle formulas in higher dimensions. Also, we introduce the black hole shadow observables. In Section IV, utilizing the framework introduced in the previous section, we study the shadow behavior, deflection angle, and energy emission rates of the ALF and ALAdS charged black holes in EHM gravity with extra dimensions. We analyze the significant impacts of the electric charge, cosmological constant, extra dimensions, and the coupling constants of EHM gravity on the shadow and deflection angle of the black holes within the setup and then, we constrain these parameters by EHT data. Finally, Section V is devoted to discussing and concluding our main results.
## II EHM gravity with arbitrary dimensions and its black hole solutions
The action of the higher-dimensional Einstein-Horndeski gravity, which is minimally coupled to a Maxwell field to construct EHM gravity with arbitrary dimensions, has the following form [113; 114]
\[I=\frac{1}{16\pi}\int d^{n}x\sqrt{-\tilde{g}}\mathcal{L}\,, \tag{1}\]
in which \(n\) counts the number of spacetime dimensions, and the Lagrangian is to the form of
\[\mathcal{L}=R-2\Lambda-\frac{1}{4}F_{ab}F^{ab}-\frac{1}{2}\left(\alpha g^{ab}- \gamma G^{ab}\right)\partial_{a}\chi\,\partial_{b}\chi\,, \tag{2}\]
where \(\alpha\) and \(\gamma\) are the coupling constants, \(F_{ab}=\partial_{a}A_{b}-\partial_{b}A_{a}\) is the electromagnetic field strength with gauge potential \(A\), and \(G_{ab}\equiv R_{ab}-\frac{1}{2}R\,g_{ab}\) is the Einstein tensor, in which \(R_{ab}\) is the Ricci tensor, \(R\) is the Ricci scalar, and \(g_{ab}\) is the metric tensor with determinant \(\tilde{g}\). The Lagrangian contains only derivatives of the axionic scalar field \(\chi\), which makes it invariant under the shift \(\chi\rightarrow\chi+C\). Here, however, this symmetry is not used to generate the non-minimally coupled Einstein-vector gravity [153]. The strength of the non-minimal kinetic coupling to the Einstein tensor is governed by \(\gamma\).
By varying the action (1) with respect to the metric tensor, axionic scalar field, and the gauge potential, one can find the corresponding equations of motion in the EHM gravity [113; 114]. In order to find the static charged black hole solutions of the setup, one can take into account the following general spherically symmetric ansatz with arbitrary dimensions as the line element (metric tensor) of the background spacetime
\[ds^{2}=-h(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega_{n-2}^{2}\,, \tag{3}\]
where \(d\Omega_{n-2}^{2}=d\theta_{1}^{2}+\sin^{2}[\theta_{1}]d\theta_{2}^{2}+\ldots+ \prod_{i=1}^{n-3}\sin^{2}[\theta_{i}]d\theta_{n-2}^{2}\) is the metric of the unit \(S^{n-2}\) hypersphere, which has the volume
\[\omega_{n-2}=\frac{2\pi^{\frac{n-1}{2}}}{\Gamma[\frac{n-1}{2}]}\,, \tag{4}\]
where \(\Gamma\) is the gamma function. By this ansatz, one can solve the equations of motion in EHM gravity to obtain two classes of higher-dimensional black hole solutions, which are ALF and ALAdS black holes as constructed and reviewed in Refs. [113; 114].
### ALF black hole with extra dimensions in EHM gravity
Setting \(\alpha=\Lambda=0\) and also \(\gamma<0\) (for a real scalar field outside the event horizon), the equations of motion of the EHM gravity result in the higher-dimensional ALF charged black hole solution for which we have (for more details, see Refs. [113; 114])
\[f(r)=\frac{16(n-2)^{2}(n-3)^{2}r^{4n}}{(q^{2}r^{6}-4(n-2)(n-3)r^{2n})^{2}}h(r)\,, \tag{5}\]
\[h(r)=1-\frac{\mu}{r^{n-3}}+\frac{q^{2}}{2(n-2)(n-3)r^{2(n-3)}}-\frac{q^{4}}{4 8(n-2)^{2}(n-3)^{2}r^{4(n-3)}}\,, \tag{6}\]
where \(\mu\) and \(q\) are two non-trivial parameters, which parameterise the mass and the electric charge, respectively in such a way that
\[M=\frac{1}{16\pi}(n-2)\mu\,\omega_{n-2}\,,\qquad Q=\frac{1}{8\pi}q\,\omega_{n -2}\sqrt{2(n-2)(n-3)}\,. \tag{7}\]
It is worth noting that the expression for the parameter \(q\) as introduced in [114] is not correct; we have provided its corrected form in Eq. (7).
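For concreteness, Eqs. (4) and (7) can be inverted to obtain \(\mu\) and \(q\) from the physical mass and charge used in the figures and tables below. The following Python sketch is only an illustration of this bookkeeping (the helper names are ours, not part of the original work); for \(n=4\) and \(M=1\) it recovers the familiar \(\mu=2M\) and \(q=Q\).

```python
import math

def omega(n):
    """Volume of the unit (n-2)-sphere, Eq. (4)."""
    return 2.0 * math.pi ** ((n - 1) / 2) / math.gamma((n - 1) / 2)

def mu_q_from_MQ(M, Q, n):
    """Invert Eq. (7): mass/charge parameters (mu, q) from the physical (M, Q)."""
    mu = 16.0 * math.pi * M / ((n - 2) * omega(n))
    q = 8.0 * math.pi * Q / (omega(n) * math.sqrt(2.0 * (n - 2) * (n - 3)))
    return mu, q

# for n = 4 and M = 1 one recovers mu = 2M and q = Q
print(mu_q_from_MQ(1.0, 0.5, 4))   # -> (2.0, 0.5)
print(mu_q_from_MQ(1.0, 0.5, 5))
```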
The higher-dimensional ALF charged black hole solution possesses two curvature singularities at \(r=0\) and \(r=r_{*}\), respectively, so that \(r_{*}\) can be obtained through the following equation
\[4(n-2)(n-3)r_{*}^{2n-6}-q^{2}=0\,. \tag{8}\]
On the other hand, the event horizon of the higher-dimensional ALF charged black hole is located at \(r=r_{eh}\), which is the largest root of \(h(r)=0\). Furthermore, the higher-dimensional ALF charged black hole satisfies the condition \(r_{eh}>r_{*}\), which implies [113; 114]
\[\frac{\mu}{q}>\frac{4}{3\sqrt{(n-2)(n-3)}}\,. \tag{9}\]
The Hawking temperature associated with the higher-dimensional ALF charged black hole can be found as follows [114]
\[T_{ALF}=\frac{4(n-2)(n-3)r_{eh}^{2n-6}-q^{2}}{16\pi(n-2)r_{eh}^{2n-5}}\,. \tag{10}\]
This temperature is always positive, i.e., \(T_{ALF}>0\), owing to the above-mentioned condition that the curvature singularity \(r_{*}\) must lie inside the event horizon \(r_{eh}\). Therefore, the Hawking temperature of the higher-dimensional ALF charged black hole can approach zero but never reaches this vanishing value. This feature is more in agreement with the behavior of physical systems respecting the third law of thermodynamics, and it is not shared by the Reissner-Nordström black hole.
### ALAdS black hole with extra dimensions in EHM gravity
Assuming \(\alpha\neq 0\) and \(\Lambda\neq 0\) so that \((\alpha+\gamma\Lambda)<0\) (to have a real scalar field outside the event horizon), the EHM gravity field equations result in the higher-dimensional ALAdS charged black hole solution for which we have (for more details, see Refs. [113; 114])
\[f(r)=\frac{(n-2)^{2}(4+\beta\gamma)^{2}\left((n-1)g^{2}r^{2}+n-3\right)^{2}} {\left((n-2)(n-1)(4+\beta\gamma)g^{2}r^{2}+4(n-2)(n-3)-q^{2}r^{2(3-n)}\right)^ {2}}h(r)\,, \tag{11}\]
\[h(r)=\bar{h}(r)+h_{q}(r)\,, \tag{12}\]
where
\[\begin{split} h_{q}(r)&=\frac{2q^{2}}{(n-2)(n-3)(4+ \beta\gamma)r^{2n-6}}-\frac{2\beta\gamma(n-3)q^{2}}{g^{2}(n-1)^{2}(n-2)(4+ \beta\gamma)^{2}r^{2n-4}}\\ &+\frac{2\beta\gamma(n-3)^{2}q^{2}}{g^{4}(n+1)(n-1)^{2}(n-2)(4+ \beta\gamma)^{2}r^{2n-2}}{}_{2}{\rm F}_{1}\left[1,\frac{n+1}{2};\frac{n+3}{2} ;\frac{3-n}{(n-1)g^{2}r^{2}}\right]\\ &-\frac{q^{4}}{g^{2}(n-1)(n-2)^{2}(3n-7)(4+\beta\gamma)^{2}r^{2 (2n-5)}}{}_{2}{\rm F}_{1}\left[1,\frac{3n-7}{2};\frac{3n-5}{2};\frac{3-n}{(n-1) g^{2}r^{2}}\right]\end{split} \tag{13}\]
where \({}_{2}{\rm F}_{1}\) is the hypergeometric function, which is well-defined for \(n\geq 4\). Furthermore, when the dimension number \(n\) is even, the function \(\bar{h}(r)\) is to the following form
\[\bar{h}_{\rm even}(r)=-\frac{\mu}{r^{n-3}}+\frac{8g^{2}r^{2}(2+\beta\gamma)+1 6}{(4+\beta\gamma)^{2}}+\frac{\beta^{2}\gamma^{2}g^{2}r^{2}}{(4+\beta\gamma)^ {2}}{}_{2}{\rm F}_{1}\left[1,\frac{1-n}{2};\frac{3-n}{2};\frac{3-n}{(n-1)g^{2 }r^{2}}\right]\,. \tag{14}\]
The function \(\bar{h}_{\rm even}(r)\) diverges for odd values of the dimension number. When \(n\) is odd (\(n\geq 5\)), the function \(\bar{h}(r)\) takes the following form
\[\bar{h}_{\rm odd}(r)=-\frac{\mu}{r^{n-3}}+\frac{8g^{2}r^{2}(2+\beta\gamma)+16}{( 4+\beta\gamma)^{2}}+\frac{(n-1)\beta^{2}\gamma^{2}g^{4}r^{4}}{(n-3)(4+\beta \gamma)^{2}}{}_{2}{\rm F}_{1}\left[1,\frac{n+1}{2};\frac{n+3}{2};\frac{(n-1)g ^{2}r^{2}}{3-n}\right]\,. \tag{15}\]
\(\alpha\) and \(\gamma\) must possess the same sign to achieve the ALAdS spacetime [113; 114]. In Eqs. (11)-(15) two parameters \(g\) and \(\beta\) are substituted for \(\alpha\) and the cosmological constant \(\Lambda\) so that
\[\alpha=\frac{1}{2}(n-1)(n-2)g^{2}\gamma\,,\qquad\Lambda=-\frac{1}{4}(n-1)(n-2 )g^{2}(2+\beta\gamma)\,. \tag{16}\]
Again, \(\mu\) and \(q\) parameterise the mass and the electric charge as follows
\[M=\frac{1}{64\pi}(n-2)(4+\beta\gamma)\mu\,\omega_{n-2}\,,\qquad Q=\frac{1}{8 \pi}q\,\omega_{n-2}\sqrt{2(n-2)(n-3)}\,. \tag{17}\]
The higher-dimensional ALAdS charged black hole solution has two curvature singularities at \(r=0\) and \(r=r_{*}\), respectively. The curvature singularity \(r_{*}\) is a root of
\[(n-2)(n-1)(4+\beta\gamma)g^{2}r^{2}+4(n-2)(n-3)-q^{2}r^{2(3-n)}=0 \tag{18}\]
and is located within the event horizon of the black hole, \(r_{eh}\), which is a root of \(h(r)=0\). Moreover, the Hawking temperature of the higher-dimensional ALAdS charged black hole can be found as follows [114]
\[T_{ALAdS}=\frac{(n-1)g^{2}r_{eh}}{4\pi}+\frac{4(n-2)(n-3)r_{eh}^{2n-6}-q^{2}}{4 \pi(n-2)(4+\beta\gamma)r_{eh}^{2n-5}}\,. \tag{19}\]
## III General formalism for shadow and deflection angle of higher-dimensional black holes and shadow observables
When a black hole is in front of a light source, part of the light is deflected by the gravitational field of the black hole and reaches the observer. Some photons, however, fall into the black hole, creating a dark zone known as the shadow; the apparent shape of the black hole is the boundary of this shadow. In this section, we present the general formulas required to obtain the shape of the shadow, the energy emission rate, and the deflection angle for the general ansatz (3) with higher dimensions, which necessitates the study of the motion of a test particle in the spacetime.
### Null geodesics
We start with the Lagrangian of the test particle, which is to the form of
\[\tilde{\mathcal{L}}=\frac{1}{2}g_{ab}\dot{x}^{a}\dot{x}^{b}\,, \tag{20}\]
where an over dot shows the derivative with respect to the affine parameter \(\tau\). The components of canonically conjugate momentum corresponding with the general ansatz (3) can be found as follows
\[P_{t}=h(r)\dot{t}=E\,, \tag{21}\]
\[P_{r}=\frac{1}{f(r)}\dot{r}\,, \tag{22}\]
\[P_{\theta_{i}}=r^{2}\sum_{i=1}^{n-3}\prod_{n=1}^{i-1}\sin^{2}[\theta_{n}]\dot{ \theta}_{i}\,, \tag{23}\]
\[P_{\theta_{n-2}}=r^{2}\prod_{i=1}^{n-3}\sin^{2}[\theta_{i}]\dot{\theta}_{n-2}=L\,, \tag{24}\]
where \(i=1,2,\cdots,n-3\) and also, \(E\) and \(L\) are the energy and angular momentum of the test particle, respectively.
We utilize the Hamilton-Jacobi method to analyze photon orbits around the black hole, in addition to the Carter approach to investigate the geodesic equations [154]. In this regard, we generalize these methods to higher dimensions. Consequently, in higher dimensions, the Hamilton-Jacobi method reads
\[\frac{\partial S}{\partial\tau}=-\frac{1}{2}g^{ab}\frac{\partial S}{\partial x ^{a}}\frac{\partial S}{\partial x^{b}}\,, \tag{25}\]
where \(S\) is the Jacobi action of the test particle. Inserting the general ansatz (3) with arbitrary dimensions into Eq. (25), one can yield
\[-2\frac{\partial S}{\partial\tau}=-\frac{1}{h(r)}\left(\frac{ \partial S_{t}}{\partial t}\right)^{2}+f(r)\left(\frac{\partial S_{r}}{ \partial r}\right)^{2}+\sum_{i=1}^{n-3}\frac{1}{\left(r^{2}\prod_{n=1}^{i-1} \sin^{2}[\theta_{n}]\right)}\left(\frac{\partial S_{\theta_{i}}}{\partial \theta_{i}}\right)^{2}+\frac{1}{\left(r^{2}\prod_{i=1}^{n-3}\sin^{2}[\theta_{ i}]\right)}\left(\frac{\partial S_{\theta_{n-2}}}{\partial\theta_{n-2}}\right)^{2}\,. \tag{26}\]
Taking into account a separable solution for Jacobi action allows one to express the action as
\[S=\frac{1}{2}m^{2}\tau-Et+L\theta_{n-2}+S_{r}(r)+\sum_{i=1}^{n-3}S_{\theta_{ i}}(\theta_{i})\,, \tag{27}\]
where \(m\) is the rest mass of the test particle. Since the test particle in the study of black hole shadows is a photon, we set \(m=0\). Therefore, applying the Jacobi action (27) to Eq. (26) results in the following expression
\[\begin{split} 0&=\left\{\frac{E^{2}}{h(r)}-f(r) \left(\frac{\partial S_{r}}{\partial r}\right)^{2}-\frac{1}{r^{2}}\left(\frac{ L^{2}}{\prod_{i=1}^{n-3}\sin^{2}[\theta_{i}]}+\mathcal{K}-\prod_{i=1}^{n-3}L^{2} \cot^{2}[\theta_{i}]\right)\right\}\\ &-\left\{\frac{1}{r^{2}}\left(\sum_{i=1}^{n-3}\frac{1}{\prod_{n=1 }^{i-1}\sin^{2}[\theta_{n}]}\left(\frac{\partial S_{\theta_{i}}}{\partial \theta_{i}}\right)^{2}-\mathcal{K}+\prod_{i=1}^{n-3}L^{2}\cot^{2}[\theta_{i}] \right)\right\}\,,\end{split} \tag{28}\]
where \(\mathcal{K}\) is the Carter constant. After some manipulations, one can obtain the following set of equations
\[r^{4}f^{2}(r)\left(\frac{\partial S_{r}}{\partial r}\right)^{2}=r^{4}\frac{f( r)}{h(r)}E^{2}-r^{2}\left(L^{2}+\mathcal{K}\right)f(r)\,, \tag{29}\]
\[\sum_{i=1}^{n-3}\frac{1}{\prod_{n=1}^{i-1}\sin^{2}[\theta_{n}]}\left(\frac{ \partial S_{\theta_{i}}}{\partial\theta_{i}}\right)^{2}=\mathcal{K}-\prod_{i=1 }^{n-3}L^{2}\cot^{2}[\theta_{i}]\,. \tag{30}\]
Finally, employing Eqs. (29) and (30) and the components of the canonically conjugate momentum (21)-(24), the complete equations of motion for photon, i.e., the null geodesics within the higher-dimensional spacetime (3) can be read as follows
\[\dot{t}=\frac{E}{f(r)}\,, \tag{31}\]
\[r^{2}\dot{r}=\pm\sqrt{\mathcal{R}}\,, \tag{32}\]
\[r^{2}\sum_{i=1}^{n-3}\prod_{n=1}^{i-1}\sin^{2}[\theta_{n}]\dot{\theta}_{i}=\pm \sqrt{\Theta_{i}}\,, \tag{33}\]
\[\dot{\theta}_{n-2}=\frac{L}{r^{2}\prod_{i=1}^{n-3}\sin^{2}[\theta_{i}]}\,, \tag{34}\]
where "\(+\)" and "\(-\)" signs denote the outgoing and ingoing radial directions of the motion of photon, respectively. Furthermore, we have
\[\mathcal{R}=r^{4}\frac{f(r)}{h(r)}E^{2}-r^{2}\left(L^{2}+\mathcal{K}\right)f(r )\,,\quad\Theta_{i}=\mathcal{K}-\prod_{i=1}^{n-3}L^{2}\cot^{2}[\theta_{i}]\,. \tag{35}\]
The motion of photon in the spacetime is governed by Eqs. (31)-(34).
It is critical to discuss the effective potential for determining the boundary of the shadow of black holes. The effective potential can be calculated by rewriting the radial null geodesic equation (32) as follows
\[\left(\frac{dr}{d\tau}\right)^{2}+V_{eff}=0\,, \tag{36}\]
in which the effective potential is to the following form
\[V_{eff}=\frac{f(r)}{r^{2}}\left(\mathcal{K}+L^{2}\right)-\frac{f(r)}{h(r)}E^{ 2}\,. \tag{37}\]
The unstable circular orbits of photons determine the boundary of the apparent shape of the black hole. They correspond to the maximum value of the effective potential, which occurs at a distance known as the photon sphere radius \(r_{0}\), satisfying the following equations
\[V_{eff}\big{|}_{r_{0}}=\frac{dV_{eff}}{dr}\bigg{|}_{r_{0}}=0\,,\quad\mathcal{ R}\big{|}_{r_{0}}=\frac{d\mathcal{R}}{dr}\bigg{|}_{r_{0}}=0\,. \tag{38}\]
Consequently, the photon sphere radius \(r_{0}\) associated with the maximum of the effective potential for the black hole in the spacetime (3) with arbitrary dimensions is the smallest root of the following equation
\[r_{0}h^{\prime}(r_{0})-2h(r_{0})=0\,, \tag{39}\]
where a prime stands for radial derivative.
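As a minimal numerical illustration of Eq. (39), the sketch below brackets and refines the smallest root of \(r h^{\prime}(r)-2h(r)=0\) for a user-supplied metric function \(h(r)\). The helper name, the finite-difference derivative, and the scan resolution are our own choices; the Schwarzschild case \(h=1-2M/r\), for which \(r_{0}=3M\), is used only as a sanity check.

```python
import numpy as np
from scipy.optimize import brentq

def photon_sphere_radius(h, r_min, r_max, eps=1e-6):
    """Smallest root of r h'(r) - 2 h(r) = 0, Eq. (39), found by scanning and refining."""
    def condition(r):
        dh = (h(r + eps) - h(r - eps)) / (2 * eps)   # numerical h'(r)
        return r * dh - 2.0 * h(r)
    rs = np.linspace(r_min, r_max, 2000)
    vals = [condition(r) for r in rs]
    for a, b, fa, fb in zip(rs[:-1], rs[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:                              # sign change brackets a root
            return brentq(condition, a, b)
    raise ValueError("no photon sphere found in the given bracket")

# sanity check in the Schwarzschild limit h(r) = 1 - 2M/r, where r_0 = 3M
M = 1.0
print(photon_sphere_radius(lambda r: 1.0 - 2.0 * M / r, 2.1, 10.0))   # ~3.0
```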
### Geometrical shapes of shadow
In this section we aim to find the shadow shape and size of the black holes in the spacetime (3) with arbitrary dimensions. To do this, we begin with the definition of two impact parameters \(\xi\) and \(\eta\). These impact parameters, as functions of the constants of motion \(E\), \(L\), and \(\mathcal{K}\), characterize the properties of photons near black holes. They are defined as follows
\[\xi=\frac{L}{E}\,,\qquad\eta=\frac{\mathcal{K}}{E^{2}}\,. \tag{40}\]
Therefore, one can rewrite the effective potential and also, the function \(\mathcal{R}\) in terms of these impact parameters as
\[V_{eff}=E^{2}\left\{\frac{f(r)}{r^{2}}\left(\eta+\xi^{2}\right)-\frac{f(r)}{ h(r)}\right\}\,,\qquad\mathcal{R}=E^{2}\left\{r^{4}\frac{f(r)}{h(r)}-r^{2}f(r) \left(\eta+\xi^{2}\right)\right\}\,. \tag{41}\]
Finally, by inserting Eq. (41) into Eq. (38) one can find the following equation for two unknowns \(\xi\) and \(\eta\)
\[\eta+\xi^{2}=\frac{r_{0}^{2}}{2f(r_{0})+r_{0}f^{\prime}(r_{0})}\left\{4\left( \frac{f(r_{0})}{h(r_{0})}\right)+r_{0}\left(\frac{f^{\prime}(r_{0})h(r_{0})-f( r_{0})h^{\prime}(r_{0})}{h^{2}(r_{0})}\right)\right\}\,. \tag{42}\]
Therefore, the photon sphere radius obtained from Eq. (39) yields the quantity \(\eta+\xi^{2}\) through Eq. (42). Note that \(r_{0}\) has the dimension of length and the quantity \(\eta+\xi^{2}\) has the dimension of length squared.
The celestial coordinates \(\lambda\) and \(\psi\)[155] are employed to characterize the geometrical shape of the shadow as seen on the observer's frame. Fig. 1 is a schematic of the celestial coordinates used in this paper. These coordinates can be read as follows
\[\lambda=\lim_{r_{o}\to\infty}\left(\frac{r_{o}^{2}P^{(\theta_{n-2})}}{P^{(t)}} \right)\,,\qquad\psi=\lim_{r_{o}\to\infty}\left(\frac{r_{o}^{2}P^{(\theta_{i}) }}{P^{(t)}}\right)\,, \tag{43}\]
where \(\left[P^{(t)},P^{(\theta_{n-2})},P^{(\theta_{i})}\right]\) are the tetrad components of the photon momentum and \(r_{o}\) is the distance between the observer and the black hole. In the equatorial plane, one finds \(\lambda=-\xi\) and \(\psi=\pm\sqrt{\eta}\). Therefore, we obtain the following result
\[R_{s}^{2}\equiv\eta+\xi^{2}=\lambda^{2}+\psi^{2}\,, \tag{44}\]
in which \(R_{s}\) is the shadow radius in celestial coordinates. For non-rotating (static) black holes, the geometrical shape of the shadow is a circle with radius \(R_{s}\).
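A short numerical sketch of Eqs. (42) and (44): given the photon sphere radius \(r_{0}\) and the metric functions \(h\) and \(f\), it evaluates \(\eta+\xi^{2}\) and returns \(R_{s}\). The function name and the finite-difference derivatives are ours; the Schwarzschild limit \(f=h=1-2M/r\), for which \(R_{s}=3\sqrt{3}M\), serves only as a consistency check.

```python
import math

def shadow_radius(h, f, r0, eps=1e-6):
    """R_s = sqrt(eta + xi^2) from Eqs. (42) and (44), given the photon sphere r0."""
    fp = (f(r0 + eps) - f(r0 - eps)) / (2 * eps)
    hp = (h(r0 + eps) - h(r0 - eps)) / (2 * eps)
    eta_plus_xi2 = r0**2 / (2 * f(r0) + r0 * fp) * (
        4 * f(r0) / h(r0) + r0 * (fp * h(r0) - f(r0) * hp) / h(r0) ** 2
    )
    return math.sqrt(eta_plus_xi2)

# Schwarzschild check (f = h = 1 - 2M/r, r0 = 3M): R_s = 3*sqrt(3)*M ~ 5.196 for M = 1
M = 1.0
g = lambda r: 1.0 - 2.0 * M / r
print(shadow_radius(g, g, 3.0 * M))
```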
### Energy emission rate
Black holes can radiate through the phenomenon known as Hawking radiation. At very high energies, the absorption cross-section generally oscillates around a limiting constant \(\sigma_{lim}\). For a very distant observer, the absorption cross-section approaches the black hole shadow [79; 123]. One can show that \(\sigma_{lim}\) is approximately equal to the photon sphere area, which in arbitrary dimensions can be represented as follows [156; 157; 123]
\[\sigma_{lim}\approx\frac{\pi^{\frac{n-2}{2}}}{\Gamma\left[\frac{n}{2}\right]} R_{s}^{n-2}\,. \tag{45}\]
Thus, the complete form of the energy emission rate of higher-dimensional black holes can be read as
\[\frac{d^{2}E(\varpi)}{d\varpi dt}=\frac{2\pi^{2}\sigma_{lim}}{e^{\frac{\varpi}{T}}-1}\varpi^{n-1}\,, \tag{46}\]

where \(\varpi\) is the emission frequency and \(T\) is the Hawking temperature of the black hole.

Figure 1: _The schematic of the celestial coordinates on the far observer’s sky in which \(r_{o}\) is the spatial separation between the far distant observer and the black hole, and \(\tilde{\theta}_{o}\) is the angular coordinate of the far observer. Therefore, the location of the far observer is characterized with \((r_{o},\tilde{\theta}_{o})\). The coordinates \((\lambda,\psi)\) are the apparent perpendicular distance of the image as seen from the axis of symmetry, and from its projection on the equatorial plane, respectively._
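Assuming the Planck-type exponent \(e^{\varpi/T}\) in Eq. (46), the emission spectrum can be evaluated directly once \(R_{s}\), \(T\), and \(n\) are known. The sketch below is illustrative only: the function name is ours, and the Schwarzschild temperature \(T=1/(8\pi M)\) is inserted merely as an example input rather than the ALF or ALAdS temperatures of Eqs. (10) and (19).

```python
import math

def emission_rate(varpi, T, R_s, n):
    """d^2E/(d varpi dt) from Eqs. (45)-(46) for an n-dimensional black hole."""
    sigma_lim = math.pi ** ((n - 2) / 2) / math.gamma(n / 2) * R_s ** (n - 2)
    return 2.0 * math.pi**2 * sigma_lim * varpi ** (n - 1) / math.expm1(varpi / T)

# illustrative values: Schwarzschild-like T = 1/(8*pi*M) with M = 1 and R_s = 3*sqrt(3)
T_example = 1.0 / (8.0 * math.pi)
print(emission_rate(0.2, T_example, 3.0 * math.sqrt(3.0), n=4))
```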
### Deflection angle
Here we aim to provide the framework for studying the deflection angle of higher-dimensional black holes in the spacetime (3). In this regard, we utilize the Gauss-Bonnet theorem [158; 159]. We first need to find the optical metric on the equatorial hyperplane \(\theta_{i}=\pi/2\) in the spacetime (3). Then, on this hyperplane, we set \(d\theta_{n-2}^{2}=d\phi^{2}\) to find
\[ds^{2}=-h(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\phi^{2}\,. \tag{47}\]
Then, for the considered null geodesics for which \(ds^{2}=0\), the optical metric reads as follows
\[dt^{2}=\frac{dr^{2}}{h(r)f(r)}+\frac{r^{2}}{h(r)}d\phi^{2}\,. \tag{48}\]
For this optical metric, we can calculate the Gaussian optical curvature \(K=\frac{\bar{R}}{2}\) in which \(\bar{R}\) is the Ricci scalar of the metric (48) as follows
\[K=\frac{2\,rh(r)f(r)h^{\prime\prime}(r)-2\,rf(r)h^{\prime}(r)^{2}+h(r)h^{ \prime}(r)\left\{rf^{\prime}(r)+2f(r)\right\}-2f^{\prime}(r)h(r)^{2}}{2\,rh(r )}\,. \tag{49}\]
In order to calculate the deflection angle, one should consider a non-singular manifold \(\mathcal{D}_{\tilde{R}}\) with a geometrical size \(\tilde{R}\) to employ the Gauss-Bonnet theorem, so that [158; 159]
\[\int\int_{\mathcal{D}_{\tilde{R}}}KdS+\oint_{\partial\mathcal{D}_{\tilde{R}}} kdt+\sum_{i}\varphi_{i}=2\pi\zeta(\mathcal{D}_{\tilde{R}})\,, \tag{50}\]
where \(dS=\sqrt{\bar{g}}drd\phi\) and \(dt\) are the surface and line elements of the optical metric (48), respectively, \(\bar{g}\) is the determinant of the optical metric, \(k\) denotes the geodesic curvature of the boundary \(\partial\mathcal{D}_{\tilde{R}}\), \(\varphi_{i}\) is the jump (exterior) angle at the \(i\)-th vertex, and \(\zeta(\mathcal{D}_{\tilde{R}})\) is the Euler characteristic number of \(\mathcal{D}_{\tilde{R}}\). One can set \(\zeta(\mathcal{D}_{\tilde{R}})=1\). Then, considering a smooth curve \(y\) with tangent vector \(\dot{y}\) and acceleration vector \(\ddot{y}\), and employing the unit speed condition \(\tilde{g}\left(\dot{y},\dot{y}\right)=1\), the geodesic curvature \(k\) of \(y\) can be defined as follows
\[k=\tilde{g}\left(\nabla_{\dot{y}}\dot{y},\ddot{y}\right)\,, \tag{51}\]
which is a measure of the deviation of \(y\) from being a geodesic. In the limit \(\tilde{R}\rightarrow\infty\), the two jump angles \(\varphi_{s}\) (of the source) and \(\varphi_{o}\) (of the observer) become \(\pi/2\), i.e., \(\varphi_{s}+\varphi_{o}\rightarrow\pi\). Considering \(C_{\tilde{R}}:=r(\phi)\), we have \(k(C_{\tilde{R}})=|\nabla_{\dot{C}_{\tilde{R}}}\dot{C}_{\tilde{R}}|\rightarrow 1/\tilde{R}\) as \(\tilde{R}\rightarrow\infty\), and therefore \(\lim_{\tilde{R}\rightarrow\infty}dt=\tilde{R}\,d\phi\). Hence, \(k(C_{\tilde{R}})dt=d\phi\). Consequently, the Gauss-Bonnet theorem reduces to the following form
\[\int\int_{\mathcal{D}_{\tilde{R}}}KdS+\oint_{C_{\tilde{R}}}k\,dt\;\overset{\tilde{R}\rightarrow\infty}{=}\;\int\int_{\mathcal{D}_{\infty}}KdS+\int_{0}^{\pi+\Theta}d\phi=\pi\,. \tag{52}\]
Finally, using the straight light ray approximation \(r(\phi)=\xi/\sin[\phi]\), the Gauss-Bonnet theorem results in the following expression to calculate the deflection angle (for more details, see Refs. [158; 159] and references therein)
\[\Theta=\pi-\int_{0}^{\pi+\Theta}d\phi=-\int_{0}^{\pi}\int_{\frac{\xi}{\sin[ \phi]}}^{\infty}KdS\,. \tag{53}\]
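Equation (53) can also be evaluated numerically once the Gaussian optical curvature \(K\) is known. The sketch below (function name and integration limits are ours) performs the double integral over the region \(r>\xi/\sin\phi\) with \(dS\approx r\,dr\,d\phi\); as a sanity check it uses the well-known leading-order Schwarzschild optical curvature \(K\approx-2M/r^{3}\), for which the integral reproduces the weak-field deflection \(\Theta\approx 4M/\xi\).

```python
import numpy as np
from scipy.integrate import dblquad

def deflection_angle(K, xi):
    """Theta = -iint K dS over r > xi/sin(phi), Eq. (53), with dS ~ r dr dphi."""
    integrand = lambda r, phi: -K(r) * r            # dblquad expects f(y, x) = f(r, phi)
    val, _ = dblquad(integrand, 0.0, np.pi,
                     lambda phi: xi / np.sin(phi),  # lower radial limit r = xi/sin(phi)
                     np.inf)                        # integrate out to spatial infinity
    return val

# weak-field Schwarzschild check: K ~ -2M/r^3 gives Theta ~ 4M/xi
M, xi = 1.0, 50.0
print(deflection_angle(lambda r: -2.0 * M / r**3, xi), 4.0 * M / xi)
```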
### Shadow observables
Black hole shadow observables can provide strong evidence for the existence of black holes. These observables refer to features of the shadow cast by a black hole against its surrounding bright accretion disk. They are obtained from the images of the event horizon of a black hole, which can currently be captured using the EHT. Studying black hole shadow observables can provide us with valuable information about the properties of black holes [160; 161; 162; 163]. The size and shape of the shadow can give us insights into the black hole parameters, which are expected to be constrained by the EHT data. Overall, studying black hole shadow observables is an important tool for understanding the mysterious and fascinating phenomena of black holes. To introduce the shadow observables, we assume that the observer is in the equatorial plane, where the angular coordinate of the observer, or the inclination angle, is \(\tilde{\theta}_{o}=\pi/2\).
Hioki and Maeda [164] suggested two characteristic observables, \(\tilde{R}_{s}\) and \(\delta_{s}\), in order to investigate the size and distortion of black hole shadows. Based on the Hioki-Maeda method, one can approximately describe the shadow of the black hole by a reference circle with radius \(\tilde{R}_{s}\), where \(\delta_{s}\) is the deviation of the left edge of the real shape of the shadow from the boundary of this reference circle [164]. In other words, \(\tilde{R}_{s}\) is the shadow size and \(\delta_{s}\) indicates the deformation of the shadow shape from this circle of reference. The top, bottom, right, and left edges of the reference circle in the celestial coordinates are located at \((\lambda_{t},\psi_{t})\), \((\lambda_{b},\psi_{b})\), \((\lambda_{r},0)\), and \((\lambda_{l}^{\prime},0)\), respectively. Moreover, the leftmost edge of the shadow is located at \((\lambda_{l},0)\). We note that the indices \(t\), \(b\), \(r\), and \(l\) stand for the top, bottom, right, and left edges of the shadow. Figure 2 is the schematic of the shadow reference circle in the celestial coordinates. With these preliminaries, one can define these observables as
\[\tilde{R}_{s}=\frac{(\lambda_{t}-\lambda_{r})^{2}+\psi_{t}^{2}}{2\left|\lambda _{r}-\lambda_{t}\right|}\,, \tag{54}\]
and
\[\delta_{s}=\frac{\left|\lambda_{l}-\lambda_{l}^{\prime}\right|}{\tilde{R}_{s} }\,. \tag{55}\]
Figure 2: _Illustration of the shadow reference circle in the celestial coordinates._

Kumar and Ghosh [165] noted that the shadows of some irregular black holes cannot be correctly characterized by \(\tilde{R}_{s}\) and \(\delta_{s}\) because of certain symmetry requirements on the shadow shapes. Furthermore, due to noisy data, the shadow may not be perfectly circular. Therefore, they introduced two new characteristic observables, the shadow area \(A_{s}\) and the oblateness \(D_{s}\), to describe haphazard shadows of any shape (not just circular ones), which are defined as follows
\[A_{s}=2\int\psi(r_{0})\,d\lambda(r_{0})=2\int_{r_{0}^{-}}^{r_{0}^{+}}\left(\psi(r_{0})\frac{d\lambda(r_{0})}{dr_{0}}\right)dr_{0}\,, \tag{56}\]
and
\[D_{s}=\frac{\lambda_{r}-\lambda_{l}}{\psi_{t}-\psi_{b}}\,, \tag{57}\]
where \(r_{0}^{\pm}\) are the radii of the retrograde and prograde orbits in the equatorial plane, respectively.
For non-rotating (spherically symmetric) black holes, as in the present study, one can verify that the shadow distortion can be eliminated so that \(\delta_{s}=0\) and the shadow oblateness equals unity, i.e., \(D_{s}=1\)[162; 166]. This indicates that for non-rotating black holes, the shadow shape is a perfect circle. Additionally, the retrograde and prograde orbits are not accessible for non-rotating black holes [162; 166]. In the subsequent section, however, we compare the shadow size of M87* supermassive black hole with the ALF and ALAdS charged higher-dimensional black holes in EHM gravity to constrain the electric charge and cosmological constant together with coupling constants of the EHM theory by following the procedure introduced by the EHT collaborations in Ref. [167].
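The observables of Eqs. (54), (55), and (57) can be extracted directly from sampled boundary points of a shadow. The following sketch (helper name ours) assumes a reflection-symmetric shadow whose reference-circle centre lies on the \(\lambda\) axis, as in Fig. 2; for the circular shadows of the non-rotating solutions considered here it returns \(\delta_{s}\approx 0\) and \(D_{s}\approx 1\), in line with the statement above.

```python
import numpy as np

def shadow_observables(lam, psi):
    """Hioki-Maeda R~_s and delta_s (Eqs. 54-55) and the oblateness D_s (Eq. 57)
    from sampled shadow-boundary points (lambda_i, psi_i)."""
    lam, psi = np.asarray(lam), np.asarray(psi)
    i_t = np.argmax(psi)                       # topmost point (lambda_t, psi_t)
    lam_t, psi_t = lam[i_t], psi[i_t]
    lam_r, lam_l = lam.max(), lam.min()        # rightmost / leftmost shadow points
    psi_b = psi.min()
    R_tilde = ((lam_t - lam_r) ** 2 + psi_t ** 2) / (2.0 * abs(lam_r - lam_t))
    lam_c = lam_r - R_tilde                    # reference-circle centre on the lambda axis
    lam_l_ref = lam_c - R_tilde                # left edge of the reference circle
    delta = abs(lam_l - lam_l_ref) / R_tilde
    D = (lam_r - lam_l) / (psi_t - psi_b)
    return R_tilde, delta, D

# a perfectly circular shadow of radius 3*sqrt(3) gives delta_s ~ 0 and D_s ~ 1
theta = np.linspace(0.0, 2.0 * np.pi, 400)
print(shadow_observables(5.196 * np.cos(theta), 5.196 * np.sin(theta)))
```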
## IV Shadow and deflection angle of the higher-dimensional ALF and ALAdS black holes in the EHM gravity
In this section, we study the shadow and deflection angle of the higher-dimensional ALF and ALAdS black holes in the EHM gravity utilizing the general framework presented in the previous section. To this end, we apply the line elements of these black holes to the formulas derived in that framework to investigate how dimensionality, electric charge, and the cosmological constant in EHM gravity affect the shadow and deflection angle behavior. In this regard, we will see whether the shadow behavior of the black holes depends on dimensionality, electric charge, and the cosmological constant as spacetime features, in addition to the black hole parameters.
### ALF charged black hole with extra dimensions
To study the shadow and deflection angle of the ALF black hole in EHM gravity with extra dimensions, we arbitrarily consider the electric charge values \(Q=0.1,0.5,1,1.5\), and \(2\). Also, we take the dimension number to be \(n=4,5,\ldots,11\) (note that \(n=4\) stands for one temporal plus three spatial dimensions, as usual).
#### iv.1.1 Effective potential
First, we want to check the behavior of the effective potential for the ALF black hole with extra dimensions in the EHM gravity. Inserting Eqs. (5) and (6) into Eq. (37) results in the effective potential for the higher-dimensional ALF charged black hole as follows
\[\begin{split} V_{eff}&=\frac{16(n-2)^{2}(n-3)^{2}r ^{4n-2}}{\left(q^{2}r^{6}-4(n-2)(n-3)r^{2n}\right)^{2}}\\ &\times\left\{\left(\mathcal{K}+L^{2}\right)\left(1-\frac{\mu}{r ^{n-3}}+\frac{q^{2}}{2(n-2)(n-3)r^{2(n-3)}}-\frac{q^{4}}{48(n-2)^{2}(n-3)^{2}r ^{4(n-3)}}\right)-E^{2}r^{2}\right\}.\end{split} \tag{58}\]
Fig. 3 depicts the behavior of the effective potential for the ALF charged black hole with extra dimensions as a function of radial coordinate \(r\) for different values of \(n\) and \(Q\). In this figure, the effective potential peaks at the photon
sphere radius \(r_{0}\) associated with each value of \(n\) and \(Q\), and in the limit \(r\to\infty\) the effective potential approaches a constant value. From Fig. 3a, we see that for a fixed value of \(Q\), the effective potential for the higher-dimensional ALF charged black hole increases with growing \(n\). Also, we find from Fig. 3b that increasing the value of \(Q\) for \(n=4\) leads to an amplification of the effective potential for the black hole. However, from Fig. 3c we find that for \(n\geq 5\), although this amplification of the effective potential persists, the curves of the effective potential corresponding to different values of \(Q\) eventually coincide. This fact shows that the impact of higher dimensions dominates the effect of the electric charge in the ALF black hole with higher dimensions. Since the location of the maximum of the effective potential for the black hole, i.e., the photon sphere radius \(r_{0}\), characterizes the shadow boundary of the black hole, Fig. 3 shows how \(n\) and \(Q\) affect the shadow boundary of the ALF black hole in the EHM gravity with extra dimensions.
#### iv.1.2 Geometrical shapes of shadow
Now we are going to illustrate the geometrical shape of the shadow of the ALF charged black hole with extra dimensions on the observer's sky in the celestial coordinates introduced in the previous section. To this end, we first collect some numerical data for \(r_{*},r_{eh},r_{0}\), and \(\sqrt{\eta+\xi^{2}}\) associated with the black hole. Inserting Eq. (6) into Eq. (39) yields the photon sphere radius for the higher-dimensional ALF black hole in the EHM gravity. Moreover, applying Eqs. (5) and (6) to Eq. (42) gives the radius of the shadow circles of the higher-dimensional ALF charged black hole in celestial coordinates. In Table 1 we collect the numerical data associated with \(r_{*}\), \(r_{eh}\), and \(r_{0}\) for \(n=4,5,\cdots,11\) and several different values of \(Q\).

Figure 3: _The graph of the radial evolution of the effective potential for the ALF charged black hole with extra dimensions in EHM gravity for different values of \(n\) and \(Q\) in which we set \(M=1\)._
\begin{table}
\begin{tabular}{|c||c|c|c|c||c|c|c||c|c|c||c|c|c|c|} \hline & \multicolumn{4}{|c||}{\(Q=0.1\)} & \multicolumn{4}{|c||}{\(Q=0.5\)} & \multicolumn{4}{|c||}{\(Q=1\)} & \multicolumn{4}{|c||}{\(Q=1.5\)} & \multicolumn{4}{|c|}{\(Q=2\)} \\ \cline{2-13} \(n\) & \(r_{*}\) & \(r_{eh}\) & \(r_{0}\) & \(r_{*}\) & \(r_{eh}\) & \(r_{0}\) & \(r_{*}\) & \(r_{eh}\) & \(r_{0}\) & \(r_{*}\) & \(r_{eh}\) & \(r_{0}\) & \(r_{*}\) & \(r_{eh}\) & \(r_{0}\) \\ \hline \hline \(n=4\) & 0.03 & 1.99 & 2.99 & 0.17 & 1.96 & 2.95 & 0.35 & 1.86 & 2.82 & 0.53 & 1.66 & 2.56 & 0.70 & 1.23 & 2.05 \\ \hline \(n=5\) & 0.08 & 0.92 & 1.302 & 0.19 & 0.919 & 1.301 & 0.27 & 0.91 & 1.29 & 0.33 & 0.90 & 1.28 & 0.38 & 0.89 & 1.27 \\ \hline \(n=6\) & 0.14 & 0.782 & 1.061 & 0.24 & 0.781 & 1.06 & 0.30 & 0.779 & 1.059 & 0.34 & 0.777 & 1.057 & 0.38 & 0.774 & 1.054 \\ \hline \(n=7\) & 0.19 & 0.755 & 0.993 & 0.29 & 0.754 & 0.992 & 0.34 & 0.753 & 0.992 & 0.38 & 0.752 & 0.991 & 0.41 & 0.751 & 0.99 \\ \hline \(n=8\) & 0.24 & 0.761 & 0.978 & 0.33 & 0.76 & 0.976 & 0.38 & 0.759 & 0.975 & 0.42 & 0.758 & 0.975 & 0.44 & 0.757 & 0.975 \\ \hline \(n=9\) & 0.29 & 0.78 & 0.98 & 0.38 & 0.778 & 0.979 & 0.43 & 0.777 & 0.979 & 0.46 & 0.777 & 0.979 & 0.48 & 0.776 & 0.979 \\ \hline \(n=10\) & 0.34 & 0.8011 & 0.993 & 0.42 & 0.801 & 0.993 & 0.47 & 0.80 & 0.992 & 0.50 & 0.80 & 0.992 & 0.52 & 0.80 & 0.992 \\ \hline \(n=11\) & 0.38 & 0.828 & 1.0114 & 0.47 & 0.827 & 1.0114 & 0.51 & 0.826 & 1.0113 & 0.54 & 0.826 & 1.0113 & 0.56 & 0.826 & 1.0112 \\ \hline \end{tabular}
\end{table}
Table 1: Values of \(r_{*}\), \(r_{eh}\), and \(r_{0}\) of the higher-dimensional ALF charged black hole for different values of \(Q\) and \(n\).
Figure 4: _Geometrical shape of the shadow of the higher-dimensional ALF charged black hole in celestial plane with \(M=1\)._
Based on the data provided in Table 1, one can plot the shadow circles of the ALF charged black hole with extra dimensions for different values of \(Q\) and \(n\). In Fig. 4, we show the geometrical shapes of the shadow of the ALF charged black hole with extra dimensions in the celestial coordinates for different values of \(Q\) and \(n\). Each plot in Fig. 4 is for a fixed value of \(Q\). From Fig. 4 we see that for a fixed value of \(Q\), the shadow size of the black hole decreases with increasing dimension. Therefore, the extra dimensions significantly affect the shadow of the black hole by reducing the size of its geometrical shape.
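As a numerical cross-check of the \(r_{0}\) column of Table 1, the photon sphere condition (39) can be solved for the ALF metric function (6) with \(\mu\) and \(q\) fixed by Eq. (7). The sketch below (function name and root brackets are ours) should reproduce, e.g., \(r_{0}\approx 2.95\) for \(n=4\) and \(r_{0}\approx 1.30\) for \(n=5\) at \(Q=0.5\) and \(M=1\).

```python
import math
from scipy.optimize import brentq

def alf_photon_sphere(n, M, Q, r_lo, r_hi, eps=1e-6):
    """Photon sphere radius of the n-dimensional ALF charged black hole, Eqs. (6), (7), (39)."""
    w = 2.0 * math.pi ** ((n - 1) / 2) / math.gamma((n - 1) / 2)       # Eq. (4)
    mu = 16.0 * math.pi * M / ((n - 2) * w)                            # Eq. (7)
    q = 8.0 * math.pi * Q / (w * math.sqrt(2.0 * (n - 2) * (n - 3)))
    a, b = n - 2, n - 3
    h = lambda r: (1.0 - mu / r**b + q**2 / (2 * a * b * r**(2 * b))
                   - q**4 / (48 * a**2 * b**2 * r**(4 * b)))           # Eq. (6)
    cond = lambda r: r * (h(r + eps) - h(r - eps)) / (2 * eps) - 2.0 * h(r)
    return brentq(cond, r_lo, r_hi)                                    # root of Eq. (39)

# cross-check against Table 1 for Q = 0.5 and M = 1
print(alf_photon_sphere(4, 1.0, 0.5, 2.0, 10.0))   # expected ~2.95
print(alf_photon_sphere(5, 1.0, 0.5, 1.0, 5.0))    # expected ~1.30
```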
Also, in Fig. 5, we show the shadow circles of the ALF charged black hole with extra dimensions in the celestial coordinates for different values of \(Q\), with \(n=4\) in Fig. 5a and \(n=5\) in Fig. 5b. In Fig. 5a, for \(n=4\), we see that increasing the electric charge decreases the size of the shadow circles. In Fig. 5b, however, for \(n=5\) the shadow sizes of the ALF charged black hole with higher dimensions for the different values of the electric charge approach each other, while they are reduced in comparison with the corresponding ones for \(n=4\). One can also see this behavior for \(n>5\). Therefore, Fig. 5 shows that for \(n\geq 5\), the shadow circles associated with different values of \(Q\) coincide with each other. Consequently, the impact of the electric charge on the shadow of the black hole in EHM gravity is negligible.
reducing the energy emission rate, especially for \(Q=2\). From Fig. 6d, however, one can find that for \(n=5\) the energy emission rates of the ALF charged black hole with extra dimensions associated with each value of the electric charge approach each other while they experience an amplification in their values in comparison with corresponding ones for \(n=4\) in Fig. 6c. One can also verify such a behavior for \(n>5\). Therefore, Figs. 6c and 6d show us that for \(n\geq 5\), the energy emission rates associated with different values of \(Q\) coincide with each other. So, although the impact of the electric charge is to amplify the energy emission rate of the black hole in EHM gravity, which causes black hole evaporation to accelerate, its effect is dominated by the impact of extra dimensions.
#### iv.1.4 Deflection angle
Inserting Eqs. (5) and (6) into Eqs. (48) and (49) results in the Gaussian optical curvature for the higher-dimensional ALF black hole in EHM gravity, which up to first order in the mass and second order in the electric charge
Figure 6: _The energy emission rate as a function of \(\varpi\) for the higher-dimensional charged ALF black hole in EHM gravity for different values of \(n\) and \(Q\)._
of the black hole can be approximately found as follows
\[K\approx-4\,\Gamma\left[\frac{n-1}{2}\right]\left\{\frac{M\pi^{\frac{3-n}{2}}(n-2 )(n-3)^{2}r^{1-n}-2Q^{2}\pi^{3-n}r^{4-2n}\Gamma\left[\frac{n-1}{2}\right]}{(n- 2)(n-3)}\right\}\,. \tag{59}\]
Furthermore, the surface element of the optical metric (48) for the higher-dimensional ALF black hole in EHM gravity corresponding to the metric coefficients (5) and (6) can be approximately found as follows
\[dS=\sqrt{\bar{g}}\,drd\phi=\frac{r}{h(r)\sqrt{f(r)}}drd\phi\approx rdrd\phi\,. \tag{60}\]
Now, employing Eqs. (59) and (60) in the deflection angle formula (53) leads to the deflection angle of the higher-dimensional ALF black hole in EHM gravity as follows
\[\begin{split}\Theta&=-\int_{0}^{\pi}\int_{\frac{ \xi}{\sin[\phi]}}^{\infty}KdS\\ &\approx-\int_{0}^{\pi}\int_{\frac{\xi}{\sin[\phi]}}^{\infty} \Gamma\left[\frac{n-1}{2}\right]\left\{\frac{M\pi^{\frac{3-n}{2}}(n-2)(n-3)^{ 2}r^{1-n}-2Q^{2}\pi^{3-n}r^{4-2n}\Gamma\left[\frac{n-1}{2}\right]}{(n-2)(n-3)} \right\}rdrd\phi\\ &=\frac{1}{\pi^{n}\xi^{2n-3}}\left\{4M\xi^{n}\pi^{\frac{n+4}{2} }\Gamma\left[\frac{n-2}{2}\right]-\frac{Q^{2}\xi^{3}\pi^{\frac{7}{2}}\Gamma \left[\frac{2n-5}{2}\right]\left(\Gamma\left[\frac{n-3}{2}\right]\right)^{2}}{ \Gamma[n-1]}\right\}\,.\end{split} \tag{61}\]
Figure 7: _The behavior of deflection angle of the higher-dimensional ALF charged black hole in EHM gravity in terms of \(\xi\) for different values of \(n\) and \(Q\)._
The behavior of the deflection angle of the higher-dimensional ALF charged black hole is illustrated in Fig. 7 for different values of \(n\) with \(Q=0.5\) in Fig. 7a, and for different values of \(Q\) with \(n=4,5\) in Figs. 7b and 7c, respectively. From Fig. 7a, we see that decreasing the value of the impact parameter \(\xi\) results in a sharp increase of the deflection angle of the black hole. Also, Fig. 7a shows that for a fixed value of the electric charge, the deflection angle of the black hole decreases as the number of extra dimensions grows. In Fig. 7b we see that for a fixed value of \(n\), the deflection angle of the black hole decreases with increasing electric charge \(Q\). However, as Fig. 7c shows, for \(n\geq 5\) the deflection angle curves of the black hole corresponding to the different values of \(Q\) coincide. This again shows that the effect of the electric charge is dominated by the impact of extra dimensions in the ALF charged black hole.
#### iv.1.5 Constraints from EHT observations of M87*
Here we compare the deduced shadow radius of the higher-dimensional ALF charged black hole in EHM gravity with the shadow size of the supermassive black hole M87* captured by the EHT. Within the 1-\(\sigma\) (68%) confidence level, the shadow size of the M87* supermassive black hole captured by the EHT lies within the interval [167]
\[4.31\leq R_{s,M87^{*}}\leq 6.08\,. \tag{62}\]
Comparing this with the shadow size of the higher-dimensional ALF charged black hole in EHM gravity enables us to constrain the electric charge values.
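One rough way to implement this comparison is to evaluate the ALF shadow radius of Eqs. (42) and (44) on a grid of charges and test it against the band (62). The self-contained sketch below repeats the ALF metric setup for convenience (all helper names, brackets, and sampled charges are ours); for \(n=4\) and \(M=1\) the radius should decrease with \(Q\) towards the lower edge of the band, consistent with the bound \(Q\lesssim 1.8\) discussed below.

```python
import math
from scipy.optimize import brentq

def alf_shadow_radius(n, M, Q, r_lo=2.05, r_hi=20.0, eps=1e-6):
    """Shadow radius of the n-dimensional ALF charged black hole (Eqs. 5-7, 39, 42, 44)."""
    w = 2.0 * math.pi ** ((n - 1) / 2) / math.gamma((n - 1) / 2)
    mu = 16.0 * math.pi * M / ((n - 2) * w)
    q = 8.0 * math.pi * Q / (w * math.sqrt(2.0 * (n - 2) * (n - 3)))
    a, b = n - 2, n - 3
    h = lambda r: (1.0 - mu / r**b + q**2 / (2 * a * b * r**(2 * b))
                   - q**4 / (48 * a**2 * b**2 * r**(4 * b)))
    f = lambda r: 16 * a**2 * b**2 * r**(4 * n) / (q**2 * r**6 - 4 * a * b * r**(2 * n))**2 * h(r)
    r0 = brentq(lambda r: r * (h(r + eps) - h(r - eps)) / (2 * eps) - 2.0 * h(r), r_lo, r_hi)
    fp = (f(r0 + eps) - f(r0 - eps)) / (2 * eps)
    hp = (h(r0 + eps) - h(r0 - eps)) / (2 * eps)
    Rs2 = r0**2 / (2 * f(r0) + r0 * fp) * (4 * f(r0) / h(r0)
          + r0 * (fp * h(r0) - f(r0) * hp) / h(r0) ** 2)
    return math.sqrt(Rs2)

# scan the charge of the 4D ALF black hole against the 1-sigma band of Eq. (62)
for Q in (0.1, 0.5, 1.0, 1.5, 1.8):
    Rs = alf_shadow_radius(4, 1.0, Q)
    print(Q, round(Rs, 3), 4.31 <= Rs <= 6.08)
```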
Figure 8 shows the behavior of the shadow radius of the higher-dimensional ALF charged black hole in EHM gravity versus the electric charge, in comparison with the EHT shadow size of M87* within the 1-\(\sigma\) uncertainties given in Eq. (62). In Fig. 8, the white (unshaded) region denotes the 1-\(\sigma\) confidence level, while the brown (shaded) areas are the excluded regions, which are incompatible with the EHT observations of the shadow radius of M87*. From Fig. 8, we see that the shadows of the higher-dimensional ALF charged black hole in EHM gravity with \(n=5,\ldots,11\) are incompatible with the observations of EHT. However, the shadow of the ALF charged black hole in EHM gravity with \(n=4\) lies in the 1-\(\sigma\) confidence level, so that in the range \(0\leq Q<1.8\) the shadow radius of the four dimensional black hole in EHM gravity has a good consistency with EHT observations. Moreover, in line with Table 1 and Fig. 5a, we see from Fig. 8 that increasing the electric charge reduces the shadow radius of the four dimensional ALF charged black hole. The electric charge, however, has essentially no effect on the shadow of the higher-dimensional ALF charged black holes in EHM gravity with \(n=5,\dots,11\), since its impact is negligible in comparison with the effect of the extra dimensions.

Figure 8: _The shadow radius of the higher-dimensional ALF charged black hole in EHM gravity in comparison with the shadow size of M87* captured via EHT within \(1\)-\(\sigma\) confidence level versus the electric charge. The brown (shaded) area is the excluded region, which is inconsistent with the observations of EHT, while the white (unshaded) region is the \(1\)-\(\sigma\) confidence level of EHT data._
### ALAdS charged black hole with extra dimensions
Due to the complexity of the metric coefficients of the ALAdS charged black hole in EHM gravity with extra dimensions, we consider \(n=4\) in addition to two odd dimensions, \(n=5\) and \(n=7\). Moreover, based on the previously mentioned condition \((\alpha+\gamma\Lambda)<0\) for the ALAdS case, we utilize two different sets, \((\alpha=0.01,\,\gamma=0.51)\) and \((\alpha=0.015,\,\gamma=0.81)\), together with five different values of the negative cosmological constant, \(\Lambda=-0.02,-0.04,-0.06,-0.08\), and \(-0.10\). Finally, we set \(Q=0.5\) to study the impact of extra dimensions and the negative cosmological constant.
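For reference, the substitution parameters \(g\) and \(\beta\) entering Eqs. (11)-(19) follow from a chosen \((\alpha,\gamma,\Lambda)\) by inverting Eq. (16). A small sketch of this inversion is given below (the function name is ours); it also verifies the ALAdS condition \((\alpha+\gamma\Lambda)<0\) for the first parameter set.

```python
import math

def g_beta_from_alpha_lambda(alpha, gamma, Lam, n):
    """Invert Eq. (16): substitution parameters (g, beta) from (alpha, gamma, Lambda)."""
    g2 = 2.0 * alpha / ((n - 1) * (n - 2) * gamma)
    beta = (-4.0 * Lam / ((n - 1) * (n - 2) * g2) - 2.0) / gamma
    return math.sqrt(g2), beta

# first parameter set used in the text: alpha = 0.01, gamma = 0.51, Lambda = -0.02, n = 4
alpha, gamma, Lam, n = 0.01, 0.51, -0.02, 4
print(g_beta_from_alpha_lambda(alpha, gamma, Lam, n))
print("(alpha + gamma*Lambda) < 0 :", alpha + gamma * Lam < 0)   # ALAdS condition
```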
#### iv.2.1 Effective potential
As mentioned before, the effective potential plays a key role in studying shadow. We can find the effective potential for the ALAdS charged black hole with extra dimensions by inserting Eqs. (11)-(13) together with (14) for \(n=4\), and (15) for \(n=5\) and \(n=7\) into Eq. (37), which for \(n=4\) yields
\[V_{eff} =\frac{4r^{4}(4+\beta\gamma)^{2}\left(3g^{2}r^{2}+1\right)^{2}}{ \left(6g^{2}r^{4}(4+\beta\gamma)-q^{2}+8r^{2}\right)^{2}}\Bigg{\{}\frac{\left( \mathcal{K}+L^{2}\right)}{12r^{6}(4+\beta\gamma)^{2}}\Bigg{(}-\sqrt{3}\,r^{3 }\cot^{-1}\left[\sqrt{3}\,gr\right]\left(3g^{2}q^{2}-2\beta\gamma\right)^{2}g ^{-1} \tag{63}\] \[+q^{4}\left(9g^{2}r^{2}-1\right)+48q^{2}r^{2}+12r^{3}(4+\beta \gamma)\left(r\left(-\beta\gamma+g^{2}r^{2}(4+\beta\gamma)+4\right)-\mu(4+ \beta\gamma)\right)\Bigg{)}-E^{2}\Bigg{\}}\]
and for \(n=5\) and \(n=7\) results in
\[V_{eff} =\frac{(n-2)^{2}(4+\beta\gamma)^{2}\left(g^{2}r^{2}(n-1)+n-3 \right)^{2}}{\left(g^{2}r^{2}(n-2)(n-1)(4+\beta\gamma)-q^{2}r^{6-2n}+4(n-3)(n -2)\right)^{2}} \tag{64}\] \[\times\Bigg{\{}\frac{\left(\mathcal{K}+L^{2}\right)}{r^{2}} \bigg{(}-\frac{\mu}{r^{n-3}}+\frac{8g^{2}r^{2}(2+\beta\gamma)+16}{(4+\beta \gamma)^{2}}+\frac{2q^{2}}{(n-2)(n-3)(4+\beta\gamma)r^{2n-6}}\] \[-\frac{2\beta\gamma(n-3)q^{2}}{g^{2}(n-1)^{2}(n-2)(4+\beta\gamma )^{2}r^{2n-4}}+\frac{(n-1)\beta^{2}\gamma^{2}g^{4}r^{4}}{(n-3)(4+\beta\gamma)^ {2}}{}_{2}{\rm F}_{1}\left[1,\frac{n+1}{2};\frac{n+3}{2};\frac{(n-1)g^{2}r^{2} }{3-n}\right]\] \[+\frac{2\beta\gamma(n-3)^{2}q^{2}}{g^{4}(n+1)(n-1)^{2}(n-2)(4+ \beta\gamma)^{2}r^{2n-2}}{}_{2}{\rm F}_{1}\left[1,\frac{n+1}{2};\frac{n+3}{2}; \frac{3-n}{(n-1)g^{2}r^{2}}\right]\] \[-\frac{q^{4}}{g^{2}(n-1)(n-2)^{2}(3n-7)(4+\beta\gamma)^{2}r^{2(n -5)}}{}_{2}{\rm F}_{1}\left[1,\frac{3n-7}{2};\frac{3n-5}{2};\frac{3-n}{(n-1)g^{ 2}r^{2}}\right]\Bigg{)}-E^{2}\Bigg{\}}\,.\]
Fig. 9 is the illustration of the effective potential versus \(r\) for the ALAdS charged black hole with higher dimensions for different values of \(n\) and \(\Lambda\) for which we utilized the set \((\alpha=0.01,\,\gamma=0.51)\). We see from Fig. 9a that for the fixed value of the cosmological constant \(\Lambda=-0.02\), the effective potential for the higher-dimensional ALAdS black hole (like ALF one) in EHM gravity increases by growing \(n\). Furthermore, Figs. 9b, 9c, and 9d show that for a fixed value of \(n\), increasing the value of \(\Lambda\) (i.e., decreasing its absolute value) leads to amplify the effective potential for the higher-dimensional ALAdS charged black hole. This amplification becomes more remarkable by increasing the
number of dimensions \(n\), since in Fig. 9d for \(n=7\) the curve of the effective potential related to \(\Lambda=-0.02\) has much larger values than the corresponding ones in Figs. 9b and 9c. Consequently, from Fig. 9, we find that the number of extra dimensions and the cosmological constant simultaneously have an amplifying impact on the effective potential for the higher-dimensional ALAdS charged black hole in EHM gravity.
#### iv.2.2 Geometrical shapes of shadow
One can characterize the geometrical shape of the shadow of the ALAdS charged black hole in EHM gravity with extra dimensions on the observer's frame utilizing the celestial coordinates. Applying Eqs. (12) and (13), together with (14) for \(n=4\) and (15) for \(n=5\) and \(n=7\), to Eq. (39) results in the radius of the photon sphere of the black hole. Also, one can obtain the radius of the shadow circles of the ALAdS black hole in EHM gravity with higher dimensions by inserting Eqs. (11)-(13), together with (14) for \(n=4\) and (15) for \(n=5\) and \(n=7\), into Eq. (42) and making use of Eq. (44). The numerical data associated with \(r_{*}\), \(r_{eh}\), and \(r_{0}\) for different values of \(\Lambda\) and \(n\), utilizing the considered set (\(\alpha=0.01\), \(\gamma=0.51\)), are provided in Table 2.
Using the data collected in Table 2, we illustrate the shadow shapes in celestial coordinates of the higher-dimensional ALAdS black hole in EHM gravity in Fig. 10. Each plot in Fig. 10 corresponds to a fixed value of \(\Lambda\). We see from Fig. 10 that when the cosmological constant \(\Lambda\) is fixed, the radius of the shadow circles of the ALAdS charged black hole (like the ALF one) decreases as the number of dimensions increases.
Figure 9: _The graph of radial evolution of the effective potential for the higher-dimensional ALAdS charged black hole in EHM gravity for different values of \(n\) and \(\Lambda\) in which we set \(Q=0.5\) and \(M=1\) using the set (\(\alpha=0.01\), \(\gamma=0.51\))._
Now, fixing the number of dimensions \(n\), we plot the shadow circles of the ALAdS charged black hole with extra dimensions in EHM gravity in Fig. 11 for different values of \(\Lambda\). From Fig. 11, we find that decreasing the value of the negative cosmological constant (i.e., increasing its absolute value) noticeably enlarges the radius of the shadow circles of the ALAdS charged black hole. This means that turning off the cosmological constant yields smaller shadow sizes. Consequently, for the higher-dimensional charged ALAdS black hole in EHM gravity, Figs. 10 and 11 show that the impact of the extra dimensions (cosmological constant) on the shadow of the black hole is to reduce (amplify) its size.
#### iv.2.3 Energy emission rate
One can insert the \(r_{eh}\) values from Table 2 into Eq. (19) to find numerical values of the Hawking temperature of the extra-dimensional ALAdS black hole in EHM gravity. Then, the numerical values of \(\sigma_{lim}\) in Eq. (45) are obtained for different \(\Lambda\) and \(n\) from the shadow radius \(\sqrt{\eta+\xi^{2}}\) of the black hole. Consequently, one can find the energy emission rate for different values of \(\Lambda\) and \(n\) for the higher-dimensional ALAdS black hole in EHM gravity by inserting the Hawking temperature and \(\sigma_{lim}\) values into Eq. (46).
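For readers who want to follow this pipeline numerically, a minimal Python sketch is given below. Since Eqs. (19), (45), and (46) are not reproduced in this excerpt, the sketch substitutes the commonly used geometric-optics stand-ins \(\sigma_{lim}\approx\pi R_{s}^{2}\) (with \(R_{s}\) the shadow radius) and \(d^{2}E/(d\varpi\,dt)=2\pi^{2}\sigma_{lim}\varpi^{3}/(e^{\varpi/T_{H}}-1)\); the Hawking temperature and shadow radius are treated as plain input numbers (as would be read off from Eq. (19) and Table 2), and the specific values used here are illustrative only.

```python
import numpy as np

def emission_rate(varpi, T_hawking, r_shadow):
    """Greybody-limit estimate of the energy emission rate.
    Uses sigma_lim ~ pi*R_s^2 and the standard Planck-like form; these
    common expressions stand in for the paper's Eqs. (45)-(46)."""
    sigma_lim = np.pi * r_shadow**2
    return 2.0 * np.pi**2 * sigma_lim * varpi**3 / np.expm1(varpi / T_hawking)

# Hypothetical inputs: the Hawking temperature from Eq. (19) and the shadow
# radius from Table 2 would be substituted here for each (n, Lambda) pair.
varpi = np.linspace(0.05, 2.0, 5)
print(emission_rate(varpi, T_hawking=0.08, r_shadow=5.0))
```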
Fig. 12 shows the energy emission rate of the charged ALAdS higher-dimensional black hole in EHM gravity in terms of the emission frequency \(\varpi\). In Fig. 12(a) the behavior of the energy emission rate is shown for the fixed value of the cosmological constant \(\Lambda=-0.02\) for \(n=4,5,\) and \(7\). Additionally, Figs. 12(b) and 12(c) correspond to \(n=5\) and \(n=7\), respectively. From Fig. 12(a) we see that for a fixed value of \(\Lambda\), growing \(n\) significantly increases the energy emission rate of the ALAdS charged black hole with extra dimensions. Thus, as for the ALF charged black hole, we find that extra dimensions accelerate the evaporation of the higher-dimensional ALAdS charged black hole. Moreover, from Figs. 12(b) and 12(c) we see that for a fixed number of dimensions \(n\), decreasing \(\Lambda\) (i.e., increasing its absolute value) reduces the energy emission rate of the ALAdS charged black hole with extra dimensions. This means that turning off the cosmological constant amplifies the energy emission rate. Also, the energy emission curves of the black hole in Fig. 12(c) for \(n=7\) take larger values than the corresponding ones in Fig. 12(b) for \(n=5\). Consequently, the impact of the cosmological constant is to reduce the energy emission rate of the charged ALAdS higher-dimensional black hole, which decelerates its evaporation, while the effect of extra dimensions is to accelerate it.
Figure 11: _Geometrical shape of the shadow of the higher-dimensional ALAdS charged black hole in celestial plane with \(M=1\) using the set \((\alpha=0.01,\,\gamma=0.51)\)._
#### iv.2.4 Deflection angle
We apply Eqs. (11)-(13) and (15), for odd \(n\) for simplicity, to Eqs. (48) and (49) to obtain the Gaussian optical curvature for the higher-dimensional ALAdS charged black hole. The Gaussian optical curvature of the black hole, up to first order in the source mass and the cosmological constant, and second order in the electric charge as well as in the coupling constants \(\gamma\) and \(\alpha\), can be found as follows
\[\begin{split} K\approx\frac{-8(n-3)\pi^{1-n}r^{-2n}}{\gamma^{2}( n-3)^{4}(n-2)^{5}(n-1)(n+1)}\bigg{\{}&\alpha^{2}M(n-3)(n-2)^{2}(n^{2}-1 )(n-4)(5n-7)\pi^{\frac{n+1}{2}}r^{n+5}\Gamma\left[\frac{n-1}{2}\right]\\ &+6\alpha\gamma\Lambda\pi^{n-1}(n-3)(n-2)^{3}(n+1)(2n-3)r^{2-2n} \\ &+540\pi^{2}Q^{2}r^{4}\gamma^{2}\left(\Gamma\left[\frac{n-1}{2} \right]\right)^{2}\bigg{\}}\,.\end{split} \tag{65}\]
Furthermore, the surface element of the optical metric (48) for the higher-dimensional ALAdS charged black hole in EHM gravity, corresponding to the metric coefficients (11)-(13) and (15) for odd \(n\), is approximately the same as in Eq. (60). Now, by inserting Eqs. (60) and (65) into the deflection angle expression (53), one can get
Figure 12: _The energy emission rate as a function of \(\varpi\) for the higher-dimensional ALAdS charged black hole in EHM gravity for different values of \(n\) and \(\Lambda\)._
approximately the deflection angle of the higher-dimensional ALAdS black hole in EHM gravity as follows
\[\begin{split}\Theta&=-\int_{0}^{\pi}\int_{\frac{\xi}{\sin\phi}}^{\infty}K\,dS\\ &=-\int_{0}^{\pi}\int_{\frac{\xi}{\sin\phi}}^{\infty}K\,r\,dr\,d\phi\\ &\approx\frac{\xi^{4-4n}}{(n-3)^{2}(n-2)^{4}}\Bigg\{\frac{56\alpha^{2}M(n-5)(n-3)(n-2)\pi^{\frac{4-n}{2}}\xi^{3n+3}\Gamma\left[\frac{n-6}{2}\right]}{\gamma^{2}(n-7)}+\frac{3\sqrt{\pi}\alpha\Lambda(n-2)\Gamma\left[\frac{4n-3}{2}\right]}{\gamma(n-1)^{3}\Gamma[2n-4]}\\ &+\frac{540\pi^{\frac{7-2n}{2}}Q^{2}\xi^{2n+2}\left(\Gamma\left[\frac{n-3}{2}\right]\right)^{2}\Gamma\left[\frac{2n-5}{2}\right]}{(n+1)\Gamma[n]}\Bigg\}\,.\end{split} \tag{66}\]
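As a quick numerical check, the short Python sketch below evaluates the closed-form expression in Eq. (66) exactly as printed; it is a direct transcription, valid for odd \(n\) where the Gamma-function arguments avoid poles, as in the text. The parameter values \(M=1\), \(Q=0.5\), \(\alpha=0.01\), \(\gamma=0.51\) are assumed from the earlier figure settings, and all quantities are treated as dimensionless.

```python
from math import pi, sqrt, gamma

def deflection_angle(xi, n, alpha, gam, Lam, M, Q):
    """Direct numerical transcription of Eq. (66); odd n only."""
    pref = xi**(4 - 4*n) / ((n - 3)**2 * (n - 2)**4)
    t_mass = (56 * alpha**2 * M * (n - 5) * (n - 3) * (n - 2)
              * pi**((4 - n) / 2) * xi**(3*n + 3) * gamma((n - 6) / 2)
              / (gam**2 * (n - 7)))
    t_lambda = (3 * sqrt(pi) * alpha * Lam * (n - 2) * gamma((4*n - 3) / 2)
                / (gam * (n - 1)**3 * gamma(2*n - 4)))
    t_charge = (540 * pi**((7 - 2*n) / 2) * Q**2 * xi**(2*n + 2)
                * gamma((n - 3) / 2)**2 * gamma((2*n - 5) / 2)
                / ((n + 1) * gamma(n)))
    return pref * (t_mass + t_lambda + t_charge)

# Parameters as in Fig. 13 (n = 5, Lambda = -0.02); M, Q, alpha, gamma are
# assumed from the earlier figure settings.
for xi in (5.0, 10.0, 20.0, 40.0):
    print(xi, deflection_angle(xi, n=5, alpha=0.01, gam=0.51, Lam=-0.02, M=1.0, Q=0.5))
```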
Fig. 13 shows the deflection angle of the higher-dimensional ALAdS black hole in EHM gravity versus the impact parameter \(\xi\) for \(\Lambda=-0.02\) and \(n=5\), chosen for simplicity. From Fig. 13, we see that reducing the impact parameter \(\xi\) again increases the deflection angle of the black hole.
#### iv.2.5 Constraints from EHT observations of M87*
Now we want to compare the shadow radius of the higher-dimensional ALAdS charged black hole in EHM gravity with the shadow size of the M87* supermassive black hole captured by EHT, Eq. (62), in order to constrain the values of the cosmological constant and the coupling constants of the EHM theory.
Figure 14 shows the behavior of the shadow radius of the higher-dimensional ALAdS charged black hole in EHM gravity in comparison with the shadow radius of M87* given by EHT within the 1-\(\sigma\) uncertainties of Eq. (62). In Fig. 14, the white (unshaded) region indicates the 1-\(\sigma\) confidence level, while the brown (shaded) areas are the excluded regions, which are inconsistent with the EHT observations of the shadow radius of M87*. In Figs. 14a and 14b, using the sets (\(\alpha=0.01\), \(\gamma=0.51\)) and (\(\alpha=0.015\), \(\gamma=0.81\)), respectively, the comparison between the shadow radius of the higher-dimensional ALAdS charged black hole and M87* is shown versus the cosmological constant. From Fig. 14a we see that for \(n=4\), the shadow radius of the black hole is consistent with the M87* shadow for \(-0.025<\Lambda<0\), while for \(n=5\) and \(n=7\) such consistency is found in \(-0.07<\Lambda<-0.05\) and for \(\Lambda<-0.09\), respectively. Comparing Fig. 14a and Fig. 14b shows that these ranges shift slightly towards larger values of \(\Lambda\), while the shadow radius of the black hole experiences a tiny amplification upon increasing \(\alpha\) and \(\gamma\) in Fig. 14b. It should be noted that Figs. 14a and 14b show that omitting the cosmological constant leads to a reduction of the shadow
Figure 13: _The behavior of deflection angle of the higher-dimensional ALAdS black hole in EHM gravity in terms of \(\xi\) for \(n=5\) and \(\Lambda=-0.02\)._
radius of the black hole. Moreover, Fig. 14c compares the shadow radii of the higher-dimensional ALAdS charged black hole and M87* versus the coupling constant \(\gamma\) for the set (\(\alpha=0.015\), \(\Lambda=-0.10\)). From Fig. 14c, we see that for \(n=4\), the shadow radius of the black hole is compatible with the M87* shadow in the range \(0.12<\gamma<0.17\), while for \(n=5\) and \(n=7\) such compatibility appears in \(0.26<\gamma<0.36\) and \(0.46<\gamma\), respectively. Also, Fig. 14c indicates that increasing the coupling constant \(\gamma\) amplifies the shadow radius of the black hole. Additionally, Fig. 14d compares the shadow radius of the higher-dimensional ALAdS charged black hole and M87* versus the coupling constant \(\alpha\) using the set (\(\gamma=0.51\), \(\Lambda=-0.10\)). In Fig. 14d, we see that for \(n=4\), the shadow radius of the black hole is compatible with the M87* shadow in the interval \(0.03<\alpha<0.044\), while for \(n=5\) and \(n=7\) such compatibility appears in \(0.016<\alpha<0.02\) and \(\alpha<0.012\), respectively. We see from Fig. 14d that increasing the coupling constant \(\alpha\) reduces the shadow size of the black hole. The key point here is that, from Fig. 14, one can expect that the extra dimensions, especially \(n=5\), could apparently be observed from the shadow of black holes captured by EHT, thanks to the presence of the cosmological constant in the EHM theory.
Figure 14: _The shadow radius of the higher-dimensional ALAdS charged black hole in EHM gravity in comparison with the shadow size of M87* captured via EHT within \(1\)-\(\sigma\) confidence level. The brown (shaded) areas are the excluded regions, which are inconsistent with the observations of EHT while the white (unshaded) region is the \(1\)-\(\sigma\) confidence level of EHT data._
## Summary and Conclusions
In this study, motivated by string theory, braneworld models, and the AdS/CFT correspondence, we considered the higher-dimensional ALF and ALAdS charged black hole solutions of the EHM theory to investigate the behavior of the corresponding shadow and deflection angle. Our main goal was to discover how extra dimensions and the other parameters of the theory affect the shadow of the black holes. To do this, we first provided the required general formalism to study the shadow behavior of these higher-dimensional black holes, utilizing the Hamilton-Jacobi approach and the Carter method to formulate the null geodesics around them and derive the corresponding effective potentials. Next, we introduced the celestial coordinates to specify the shadow shape of the higher-dimensional black holes on the observer's sky. We also derived the energy emission rate and deflection angle formulas in the higher-dimensional scenario. Additionally, we introduced the black hole shadow observables, including shadow size and distortion as well as shadow area and oblateness, following the Hioki-Maeda and Kumar-Ghosh proposals, respectively. Then, employing the constructed framework, we studied the shadow behavior, deflection angle, and energy emission rate of the ALF and ALAdS charged black holes in EHM gravity with extra dimensions. We computed and analyzed the significant impacts of the electric charge, cosmological constant, and extra dimensions on the shadow, deflection angle, and energy emission rate of the black holes within this setup. Moreover, we constrained these parameters by comparing the shadow size of M87* from EHT observations with the shadow radius of the higher-dimensional ALF and ALAdS charged black holes.
For the higher-dimensional charged ALF case, we found that for a fixed value of the electric charge \(Q\), the shadow size of the black hole decreases as the number of extra dimensions \(n\) increases. Also, when the electric charge increases at fixed \(n\), the shadow size of the black hole again decreases, although the effect of the electric charge on the shadow of the charged higher-dimensional ALF black hole is negligible in comparison with that of the extra dimensions. We also saw that for a fixed value of \(Q\), the energy emission rate of the ALF charged black hole with extra dimensions increases sharply with growing \(n\). Growing the electric charge also increases the energy emission rate, but its effect can be neglected. Therefore, we found that extra dimensions accelerate the evaporation of the higher-dimensional ALF black hole in EHM gravity. Then, using the Gauss-Bonnet theorem, we calculated the leading terms of the deflection angle in the weak-limit approximation and discussed the impact of the charge and the extra dimensions on this optical quantity. For a fixed value of the electric charge, the deflection angle of the ALF black hole in EHM gravity with extra dimensions decreases as the number of dimensions grows. Also, for a fixed value of \(n\), the deflection angle of the black hole decreases with increasing electric charge \(Q\). However, the effect of the electric charge on the deflection angle is dominated by the impact of the extra dimensions. Furthermore, by comparing the shadow radius of the black hole with the M87* shadow released by EHT, we observed that only the shadow of the four-dimensional ALF charged black hole with \(0\leq Q<1.8\) lies within the 1-\(\sigma\) uncertainties of the EHT data.
On the other hand, for the higher-dimensional ALAdS charged black hole in EHM gravity we observed that when the negative cosmological constant \(\Lambda\) is fixed, the radii of the shadow circles of the higher-dimensional ALAdS charged black hole decrease as the number of extra dimensions \(n\) increases. However, for a fixed \(n\), the shadow radius of the higher-dimensional ALAdS charged black hole increases upon decreasing the negative cosmological constant (i.e., increasing its absolute value). Also, we found that for a fixed value of \(\Lambda\), the energy emission rate of the ALAdS charged black hole with extra dimensions increases sharply with growing \(n\), whereas for a fixed \(n\), the energy emission rate of the black hole decreases upon decreasing the negative cosmological constant (i.e., increasing its absolute value). Hence, we found that extra dimensions accelerate the evaporation of the higher-dimensional ALAdS charged black hole, whereas a more negative cosmological constant decelerates it. Moreover, we observed that increasing the coupling constant \(\gamma\) of EHM gravity amplifies the shadow radius of the black hole, whereas increasing the coupling constant \(\alpha\) of the theory reduces the shadow size of the black hole. Surprisingly, by comparing the shadow of M87* captured
by EHT with the shadow of the black hole, we showed that the four-, five-, and seven-dimensional ALAdS charged black holes are compatible with the EHT data, thanks to the presence of the negative cosmological constant.
In summary, we conclude that the shadows of the higher-dimensional ALF and ALAdS charged black holes in EHM theory are characterized by the extra dimensions in addition to the other parameters of the theory. In this regard, the extra dimensions within EHM theory affect the shadows of the black holes by significantly reducing their size. On the other hand, owing to the existence of the negative cosmological constant within EHM theory, we concluded that it seems possible to detect the effects of the extra dimensions via EHT. The key point is that, from Fig. 14, one can expect that the extra dimensions, especially \(n=5\), could apparently be observed from the shadow of black holes captured by EHT, thanks to the presence of the cosmological constant in the EHM theory. These outcomes may open the possibility of testing the higher-dimensional charged black hole solutions of EHM gravity with astrophysical observations.
###### Acknowledgements.
The authors would like to thank Milad Hajebrahimi for fruitful comments and discussions. The authors also thank the referees for carefully reading the manuscript; their insightful comments considerably improved the quality of the paper.
|
2305.10558 | Weakened Topological Protection of the Quantum Hall Effect in a Cavity | We study the quantum Hall effect in a two-dimensional homogeneous electron
gas coupled to a quantum cavity field. As initially pointed out by Kohn,
Galilean invariance for a homogeneous quantum Hall system implies that the
electronic center of mass (CM) decouples from the electron-electron
interaction, and the energy of the CM mode, also known as Kohn mode, is equal
to the single particle cyclotron transition. In this work, we point out that
strong light-matter hybridization between the Kohn mode and the cavity photons
gives rise to collective hybrid modes between the Landau levels and the
photons. We provide the exact solution for the collective Landau polaritons and
we demonstrate the weakening of topological protection at zero temperature due
to the existence of the lower polariton mode which is softer than the Kohn
mode. This provides an intrinsic mechanism for the recently observed
topological breakdown of the quantum Hall effect in a cavity [Appugliese et
al., Science 375, 1030-1034 (2022)]. Importantly, our theory predicts the
cavity suppression of the thermal activation gap in the quantum Hall transport.
Our work paves the way for future developments in the cavity control of quantum
materials. | Vasil Rokaj, Jie Wang, John Sous, Markus Penz, Michael Ruggenthaler, Angel Rubio | 2023-05-17T20:31:01Z | http://arxiv.org/abs/2305.10558v3 | # On the Topological Protection of the Quantum Hall Effect in a Cavity
###### Abstract
We study the quantum Hall effect in a two-dimensional homogeneous electron gas coupled to a quantum cavity field. As initially pointed out by Kohn, Galilean invariance for a homogeneous quantum Hall system implies that the electronic center of mass (CM) decouples from the electron-electron interaction, and the energy of the CM mode, also known as Kohn mode, is equal to the single particle cyclotron transition. In this work, we point out that strong light-matter hybridization between the Kohn mode and the cavity photons gives rise to collective hybrid modes between the Landau levels and the photons. We provide the exact solution for the collective Landau polaritons and we demonstrate the weakening of topological protection at zero temperature due to the existence of the lower polariton mode which is softer than the Kohn mode. This provides an intrinsic mechanism for the recently observed topological breakdown of the quantum Hall effect in a cavity [Appugliese et al., Science 375, 1030-1034 (2022)]. Importantly, our theory predicts the cavity suppression of the thermal activation gap in the quantum Hall transport. Our work paves the way for future developments in the cavity control of quantum materials.
Interaction and topology give rise to rich exotic phases of matter, among which the integer quantum Hall (IQH) effect and the fractional quantum Hall (FQH) effect stand out [1; 2; 3; 4]. On the other hand, over the last decade, great progress has been achieved in the manipulation of quantum materials with the use of quantum fields originating from a cavity [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15]. Specifically, for two-dimensional (2d) materials in magnetic fields, ultrastrong coupling of the Landau levels to the cavity field and the observation of Landau polariton quasiparticles have been achieved [16; 17; 18; 19; 20]. Recently, modifications of the magnetotransport properties inside a cavity due to Landau polaritons were reported [21; 22], and most significantly cavity modification of the IQH transport was demonstrated [23; 24]. The experimental phenomenon was argued to originate from disorder-assisted cavity-mediated long-range hopping [25].
In this work, given that in experiments the GaAs samples have low disorder and that the cavity field is homogeneous in the bulk of the cavity [23], we study the quantum Hall system in the homogeneous limit with vanishing disorder and we propose an alternative theory for the observed cavity modified IQH transport [23]. Our theory highlights the importance of the hybridization between cavity photons and the collective Kohn mode in the quantum Hall system, and provides the exact solution for the polariton modes of the light-matter system. In connection to the experimental findings [23], our theory draws the picture that the transport in the hybrid system is strongly influenced by the polariton states, in contrast to the standard quantum Hall transport which is purely electronic. Crucially, the low energy physics is dictated by the lower polariton mode which is softer than the cyclotron mode. The softening of the cyclotron mode signals the weakened topological protection and provides an intrinsic mechanism for the recently observed topological breakdown [23]. Importantly, our theory predicts that the cavity suppresses the thermal activation gap which can be studied experimentally in the temperature dependence of the quantum Hall transport in the cavity.
_Model Hamiltonian._--The model considers a two-dimensional electron gas coupled to a strong magnetic field and a single-mode homogeneous cavity field, as schematically depicted in Fig. 1(a). The system is described by the Pauli-Fierz Hamiltonian [26; 27; 28]
\[\hat{H}=\sum_{i=1}^{N}\frac{\left(\mathbf{\pi}_{i}+e\hat{\mathbf{A}}\right)^{2}}{2m_{ e}}+\hbar\omega\left(\hat{a}^{\dagger}\hat{a}+\frac{1}{2}\right)+\sum_{i<j}W( \mathbf{r}_{i}-\mathbf{r}_{j}), \tag{1}\]
where \(\mathbf{\pi}_{i}=\mathrm{i}\hbar\mathbf{\nabla}_{i}+e\mathbf{A}_{\mathrm{ext}}(\mathbf{r}_{i})\) are the dynamical momenta of the electrons and \(\mathbf{A}_{\mathrm{ext}}(\mathbf{r})=-\mathbf{e}_{x}By\) describes the homogeneous magnetic field \(\mathbf{B}=\mathbf{\nabla}\times\mathbf{A}_{\mathrm{ext}}(\mathbf{r})=B\mathbf{e}_{z}\). The cavity field \(\hat{\mathbf{A}}=\sqrt{\frac{\hbar}{2\epsilon_{0}\mathcal{V}\omega}}\mathbf{e}_{x}\left(\hat{a}+\hat{a}^{\dagger}\right)\) is characterized by the in-plane polarization vector \(\mathbf{e}_{x}\) and the photon's bare frequency \(\omega\). Here, \(\mathcal{V}\) and \(\epsilon_{0}\) are the effective mode volume and the dielectric constant, respectively, and the ladder operators \(\hat{a}\) and \(\hat{a}^{\dagger}\) represent the bare photon fields. We have parameterized the bare electron dispersion by an effective mass term and assumed Galilean invariance. With Galilean invariance in a purely homogeneous system, the
CM is decoupled from the relative motion of the electrons, regardless of the interaction strength [29]. Note that besides Galilean invariance, our theory assumes a homogeneous cavity field. The kinetics of the CM and its coupling to light is best described in terms of the CM coordinate \(\mathbf{R}=(X,Y)=\sum_{i=1}^{N}\mathbf{r}_{i}/\sqrt{N}\) where \(N\) is the total particle number. The Hamiltonian describing the coupling of the CM to light reads
\[\hat{H}_{\rm cm}=\frac{1}{2m_{e}}\left(\mathbf{\Pi}+e\sqrt{N}\hat{\mathbf{A}} \right)^{2}+\hbar\omega\left(\hat{a}^{\dagger}\hat{a}+\frac{1}{2}\right) \tag{2}\]
where \(\mathbf{\Pi}=\mathrm{i}\hbar\mathbf{\nabla}_{\mathbf{R}}+e\mathbf{A}_{\rm ext }(\mathbf{R})\) is the canonical momentum of the CM. It is important to mention that if we break either Galilean invariance or consider a spatially inhomogeneous cavity field, the relative degrees of freedom will couple to quantum light. The CM Hamiltonian has the form of two coupled harmonic oscillators, one for the Landau level transition and one for the photons. In many cases such a Hamiltonian is known as the Hopfield Hamiltonian which can be solved by the Hopfield transformation [30]. The Hopfield model has been employed in previous works for the description of single-particle Landau level transitions coupled to cavity photons [19; 22]. Here, it shows up for the collective coupling of the electrons which emerges naturally through the CM. After the Hopfield transformation we find
\[\hat{H}_{\rm cm}=\hbar\Omega_{+}\left(\hat{b}^{\dagger}_{+}\hat{b}_{+}+\frac{ 1}{2}\right)+\hbar\Omega_{-}\left(\hat{b}^{\dagger}_{-}\hat{b}_{-}+\frac{1}{2}\right) \tag{3}\]
where \(\{\hat{b}^{\dagger}_{\pm},\hat{b}_{\pm}\}\) are the creation and annihilation operators of the Landau polariton quasiparticles. We provide the details about the CM Hamiltonian and its diagonalization in the Supplementary Material. The \(\Omega_{\pm}\) are the upper and lower Landau polariton modes, respectively,
\[\Omega_{\pm}^{2}=\frac{\omega^{2}+\omega_{d}^{2}+\omega_{c}^{2}}{2}\pm\sqrt{ \omega_{d}^{2}\omega_{c}^{2}+\left(\frac{\omega^{2}+\omega_{d}^{2}-\omega_{c}^ {2}}{2}\right)^{2}} \tag{4}\]
where \(\omega_{d}=\sqrt{e^{2}N/m_{e}\epsilon_{0}\mathcal{V}}\) is the diamagnetic frequency originating from the \(\hat{\mathbf{A}}^{2}\) which depends on the number of electrons \(N\) and the effective mode volume \(\mathcal{V}\). To define the polariton operators we represent the photon annihilation operator in terms of a displacement coordinate \(q\) and its conjugate momentum as \(\hat{a}=(q+\partial_{q})/\sqrt{2}\), with \(\hat{a}^{\dagger}\) obtained via conjugation [26; 27]. In this representation the polariton operators \(\{\hat{b}_{\pm},\hat{b}^{\dagger}_{\pm}\}\) can be written in terms of mixed, polaritonic coordinates as \(S_{\pm}=\sqrt{\hbar/2\Omega_{\pm}}\left(\hat{b}_{\pm}+\hat{b}^{\dagger}_{\pm}\right)\) with
\[S_{+}=\frac{\sqrt{m_{e}}\bar{Y}+q\Lambda\sqrt{\hbar/\omega}}{\sqrt{1+\Lambda^ {2}}}\text{ and }S_{-}=\frac{-q\sqrt{\hbar/\omega}+\sqrt{m_{e}}\Lambda\bar{Y}}{\sqrt{1+ \Lambda^{2}}}\]
where \(\bar{Y}=Y+\frac{\hbar K_{x}}{eB}\) is the guiding center and \(K_{x}\) is the electronic wave number in the \(x\)-direction. Also, we introduced the parameter \(\Lambda=\alpha-\sqrt{1+\alpha^{2}}\) with \(\alpha=\left(\omega_{c}^{2}-\omega^{2}-\omega_{d}^{2}\right)/2\omega_{d}\omega_{c}\), which quantifies the mixing between the electronic and photonic degrees of freedom.
_Behavior of polaritons._--The Landau polariton modes \(\Omega_{\pm}\) depend on the cavity frequency \(\omega\) and on the number of electrons through the diamagnetic frequency \(\omega_{d}\). The behavior of the polariton modes \(\Omega_{\pm}\) as a function of the cavity frequency can be understood from their exact expressions in Eq. (4), also shown in Fig. 1(b). Before the avoided crossing, the upper polariton \(\Omega_{+}\) follows the cyclotron frequency \(\omega_{c}\), while the lower polariton follows the bare cavity mode \(\omega\). After the avoided crossing the situation is inverted. At the resonance point \(\omega=\omega_{c}\) the two modes are separated by the Rabi splitting \(\Omega_{R}=\Omega_{+}-\Omega_{-}\), which is approximately equal to the diamagnetic frequency, \(\Omega_{R}\approx\omega_{d}\). The lower polariton is the most important mode for the low energy physics of the system, and we will show that its behavior controls the transport properties of the 2d quantum Hall system. Approaching the limit \(\omega\to 0\), the lower polariton mode becomes gapless, reproducing the result in Refs. [15; 20]. In addition, \(\Omega_{-}\) decreases as a function of the light-matter coupling strength, controlled via the diamagnetic frequency \(\omega_{d}\), i.e., \(\Omega_{-}<\omega_{c}\) when \(\omega_{d}>0\). In what follows, we discuss the implications of the polariton states for the quantum Hall transport at zero and finite temperature.
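As a minimal numerical illustration of these limits (not the code behind the figures), the Python sketch below evaluates Eq. (4) for the parameters of Fig. 1(b), \(\omega_{c}=1\)THz and \(\omega_{d}=0.5\)THz, treating all frequencies as plain numbers in THz.

```python
import numpy as np

def polariton_modes(omega, omega_c, omega_d):
    """Upper/lower Landau polariton frequencies from Eq. (4)."""
    s = (omega**2 + omega_d**2 + omega_c**2) / 2.0
    root = np.sqrt(omega_d**2 * omega_c**2
                   + ((omega**2 + omega_d**2 - omega_c**2) / 2.0)**2)
    return np.sqrt(s + root), np.sqrt(s - root)

omega_c, omega_d = 1.0, 0.5          # THz, as in Fig. 1(b)
for omega in [0.01, 0.5, 1.0, 2.0]:  # sweep of the bare cavity frequency
    up, low = polariton_modes(omega, omega_c, omega_d)
    print(f"w={omega:4.2f} THz  Omega+={up:5.3f}  Omega-={low:5.3f}")

# At resonance (omega = omega_c) the splitting Omega+ - Omega- is roughly
# omega_d, and Omega- -> 0 as omega -> 0: the lower polariton becomes gapless.
```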
_Fragility of topological protection against polariton lifetimes and ultrastrong light-matter coupling._--A clean or weakly disordered quantum Hall system at zero temperature, \(T=0\), as long as it is gapped, is expected to be topologically protected [31]. However, the softening of the cyclotron mode, due to the emergence of the lower polariton, indicates that the topological protection of the hybrid system is weakened by the light-matter coupling.
Figure 1: (a) Two-dimensional material confined inside a cavity. The distance between the cavity mirrors is \(L_{z}\), while the whole system is placed perpendicular to a homogeneous magnetic field \(\mathbf{B}\). (b) Real part of the linear current response function \(\chi_{xy}(w)\) for a quantum Hall system coupled to a terahertz cavity. The light-matter coupling is controlled by the diamagnetic frequency \(\omega_{d}\), which is in the strong coupling regime, \(\omega_{d}=0.5\)THz, while the broadening parameter is chosen as \(\delta=0.05\)THz. We see the two hybrid modes, the upper \(\Omega_{+}\) and lower \(\Omega_{-}\) polariton. Compared to the cyclotron mode \(\omega_{c}=1\)THz, \(\Omega_{-}\) is softer. This signals the weakened topological protection of the hybrid system.
Due to the gap reduction of the lower polariton the transport of the system can be more easily affected by disorder, which leads to a finite lifetime for the polariton quasiparticles. The polariton lifetimes will be included phenomenologically and we will see that their effect combined with ultrastrong coupling enables the breakdown of topological protection [23; 24].
The gauge-invariant current operator in the case of homogeneous fields depends solely on the CM canonical momentum and the cavity field [32; 15], \(\hat{\mathbf{J}}=-\frac{e\sqrt{N}}{m_{e}}\left(\mathbf{\Pi}+e\sqrt{N}\hat{\mathbf{A}}\right)\). Due to this property and the separability of \(\hat{H}_{\mathrm{cm}}\) from the electronic correlations, we can compute the transport of the system by focusing only on the states of \(\hat{H}_{\mathrm{cm}}\). At \(T=0\) the system is in the polariton vacuum \(|\Psi_{\mathrm{gs}}\rangle=|0_{+}\rangle|0_{-}\rangle\), which is annihilated by both polariton operators \(\hat{b}_{\pm}\). Given this state, we employ the standard Kubo formalism [33] for the computation of the current correlators \(\chi_{ab}(t)=-\mathrm{i}\Theta(t)\langle\Psi_{\mathrm{gs}}|[\hat{J}_{a}(t),\hat{J}_{b}]|\Psi_{\mathrm{gs}}\rangle/\hbar\) in the time domain, which we transform to the frequency domain in order to obtain the optical conductivities [33], \(\sigma_{ab}(w)=\frac{\mathrm{i}}{w+\mathrm{i}\delta}\left(\frac{e^{2}n_{2d}}{m_{e}}\delta_{ab}+\frac{\chi_{ab}(w)}{A}\right)\), where \(A\) is the area of the 2d material, \(\delta\) is the broadening parameter, and \(\delta_{ab}\) is the Kronecker delta with \(a,b\in\{x,y\}\). The full details of the transport computations are provided in the Supplementary Material.
\[\sigma_{xy} =\frac{e^{2}\nu}{h(1+\Lambda^{2})}\left[\frac{\Lambda(\Lambda+ \eta)}{\Omega_{-}^{2}/\omega_{c}^{2}+\delta^{2}/\omega_{c}^{2}}+\frac{1-\eta \Lambda}{\Omega_{+}^{2}/\omega_{c}^{2}+\delta^{2}/\omega_{c}^{2}}\right]\] \[\sigma_{yy} =\sigma_{D}\left[1-\frac{1}{1+\Lambda^{2}}\left(\frac{\Omega_{+ }^{2}}{\Omega_{+}^{2}+\delta^{2}}+\frac{\Lambda^{2}\Omega_{-}^{2}}{\Omega_{-}^ {2}+\delta^{2}}\right)\right] \tag{5}\]
where \(\eta=\omega_{d}/\omega_{c}\). We note that \(\sigma_{D}=e^{2}n_{2d}/m_{e}\delta\) is the Drude DC conductivity, and that in the expression for the Hall conductance we introduced the Landau level filling factor \(\nu=n_{2d}h/eB\)[34; 35]. Taking the value of the broadening parameter to zero \(\delta\to 0\) we find that the Hall conductance is quantized \(\sigma_{xy}=e^{2}\nu/h\), consistent with the Thouless flux insertion argument [31]. In the last step we used two properties of the mixing parameter \(1-\eta\Lambda=\Omega_{+}^{2}/\omega_{c}^{2}\) and \(\Lambda(\Omega_{-}^{2}/\omega_{c}^{2}-1)=\eta\) which can be exactly deduced from the definition of \(\Lambda\).
The polariton lifetimes are responsible for the broadening in the transmission spectra observed experimentally in quantum Hall systems coupled to cavities [17; 18; 23]. The total lifetime results from several mechanisms: scattering by impurities in the material, radiative decay due to interaction with the electromagnetic vacuum [27], coupling to phonons, as well as coupling to the substrate. Here, we phenomenologically model the polariton lifetime as \(\tau=1/\delta\) by keeping a finite broadening \(\delta\), which allows us to model the experimental optical spectra, as for example in Fig. 1(b).
Motivated by the experimental measurements in Refs. [18; 23; 21], we choose \(\delta=10^{-3}\)THz, and in Fig. 2 we plot \(\sigma_{xy}\) and \(\sigma_{yy}\) under ultrastrong light-matter coupling, which is quantified by the diamagnetic frequency \(\omega_{d}\). In the regime where the cavity frequency is much smaller than the cyclotron frequency, we see that \(\sigma_{xy}/(e^{2}/h)\) deviates from unity and at the same time \(\sigma_{yy}\) deviates from zero. Both phenomena signal the breakdown of topological protection. The deviations from the expected values occur off-resonance, for a small cavity frequency, because in this regime the lower polariton gap \(\Omega_{-}\) is significantly reduced (see Fig. 1(b)). Importantly, in Fig. 2 we see that the effects on transport are significantly enhanced as we increase \(\omega_{d}\) from 0.1THz to 0.2THz. This demonstrates that it is the interplay between the ultrastrong light-matter coupling and the finite polariton lifetime that causes the effects on transport. This intuitive physical picture is in agreement with the observed breakdown of topological protection recently reported in Ref. [23], and with the disorder-assisted cavity-mediated hopping mechanism [25].
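For concreteness, the following Python sketch evaluates Eq. (5) with the parameters quoted above (\(\omega_{c}=1\)THz, \(\delta=10^{-3}\)THz). It is an illustrative re-implementation of the printed formulas rather than the simulation code behind Fig. 2, and it reports \(\sigma_{xy}\) in units of \(e^{2}\nu/h\) and \(\sigma_{yy}\) in units of \(\sigma_{D}\).

```python
import numpy as np

def mixing(omega, omega_c, omega_d):
    """Mixing parameter Lambda = alpha - sqrt(1 + alpha^2)."""
    alpha = (omega_c**2 - omega**2 - omega_d**2) / (2.0 * omega_d * omega_c)
    return alpha - np.sqrt(1.0 + alpha**2)

def dc_conductivities(omega, omega_c, omega_d, delta):
    """sigma_xy in units of e^2*nu/h and sigma_yy in units of sigma_D, Eq. (5)."""
    lam = mixing(omega, omega_c, omega_d)
    eta = omega_d / omega_c
    s = (omega**2 + omega_d**2 + omega_c**2) / 2.0
    root = np.sqrt(omega_d**2 * omega_c**2
                   + ((omega**2 + omega_d**2 - omega_c**2) / 2.0)**2)
    om_p2, om_m2 = s + root, s - root            # Omega_+^2 and Omega_-^2
    sxy = (lam * (lam + eta) / (om_m2 / omega_c**2 + delta**2 / omega_c**2)
           + (1.0 - eta * lam) / (om_p2 / omega_c**2 + delta**2 / omega_c**2))
    sxy /= (1.0 + lam**2)
    syy = 1.0 - (om_p2 / (om_p2 + delta**2)
                 + lam**2 * om_m2 / (om_m2 + delta**2)) / (1.0 + lam**2)
    return sxy, syy

omega_c, delta = 1.0, 1e-3                       # THz, values quoted in the text
for omega_d in (0.1, 0.2):
    for omega in (0.005, 0.05, 0.5):
        sxy, syy = dc_conductivities(omega, omega_c, omega_d, delta)
        print(f"w_d={omega_d} w={omega:5.3f}  sxy={sxy:.5f}  syy={syy:.2e}")
```

For \(\delta\to 0\) the routine returns exactly \(\sigma_{xy}=1\) and \(\sigma_{yy}=0\) by virtue of the two identities for \(\Lambda\) stated below Eq. (5); with a finite \(\delta\) and a small cavity frequency the normalized \(\sigma_{xy}\) falls below one and \(\sigma_{yy}\) rises above zero, which is the trend shown in Fig. 2.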
It is worth mentioning that the above analysis is consistent with the result in the long-wavelength limit (also known as the optical limit) \(\omega\to 0\) and \(\delta=0\) [36]. The Hall conductivity for \(\delta=0\) is perfectly quantized for all \(\omega>0\) (finite energy gap) but drops to \(e^{2}\nu/h/(1+\eta^{2})\) for \(\omega\to 0\) (gapless), as shown in Ref. [36]. This is the point where the canonical transformation to the polariton basis becomes singular. In this sense, the phenomenological broadening parameter \(\delta\) is a physical way to regularize the optical-limit result.
_Cavity suppression of the thermal activation gap.--_ In addition, the finite temperature transport properties are strongly influenced by coupling the electrons to the cavity field. This can be understood from the formula for the thermal behavior of the longitudinal transport, \(\sigma_{yy}(T)/\sigma_{yy}(T=0)\approx\exp{(-\beta\Delta)}\), where \(\Delta\) is the activation gap of the system and \(\beta=1/k_{B}T\). For the light-matter coupled system, \(\Delta=\Omega_{-}\). Thereby, for the IQH effect, the coupling to the cavity generally speaking reduces the
Figure 2: Quantum Hall transport in a cavity at \(T=0\) with a finite broadening \(\delta=10^{-3}\)THz which models the finite lifetime of the polariton excitations. For ultrastrong light-matter coupling \(\omega_{d}=0.1-0.2\)THz and for a small cavity frequency, \(\omega\), when compared to the cyclotron mode \(\omega_{c}=1\)THz, we see that the Hall and the longitudinal conductivities deviate from the topologically expected values, \(1\) and \(0\), respectively.
activation gap from the bare Landau level gap \(\omega_{c}\) to \(\Omega_{-}\) and makes the Hall transport more easily modified by temperature.
The quantitative description of the thermal activation gap can be obtained from the finite temperature linear-response Kubo formula [33], through the retarded current-current correlation function \(\chi_{ab}(w)\),
\[\chi_{ab}(w)=\sum_{M,Q}\frac{e^{-\beta E_{M}}-e^{-\beta E_{Q}}}{\mathcal{Z}} \frac{\langle\Psi_{M}|\hat{J}_{a}|\Psi_{Q}\rangle\langle\Psi_{Q}|\hat{J}_{b}| \Psi_{M}\rangle}{w+(E_{M}-E_{Q})/\hbar+\mathrm{i}\delta} \tag{6}\]
where \(|\Psi_{M}\rangle,|\Psi_{Q}\rangle\) are the many-body states with eigenenergies \(E_{M},E_{Q}\), respectively, and \(\mathcal{Z}\) is the partition function, \(\mathcal{Z}=\sum_{M}\exp(-\beta E_{M})\). The dominant contribution to the transport is from the ground state, and the next leading order contribution is from the first excited state. The activation gap is accounted for by the exponential suppression of the contribution to the conductivity from the excited states. Since the lowest excitation in the system is \(\Omega_{-}\) (see Fig. 1), we expect the lower polariton to control the low temperature transport of the system. The details of the temperature dependent transport formalism are given in the Supplementary Material.
For the temperature dependent computations presented in Fig. 3 we use parameters in the same regime as the ones reported experimentally in Ref. [23]. The magnetic field strength is chosen as \(B=1\)T, where a quantum Hall plateau is reported in Ref. [23], and the cavity is in the terahertz regime, as in the experiments. For the geometry we consider in Fig. 1, the diamagnetic frequency \(\omega_{d}\), which controls the light-matter hybridization, can be estimated through the electron density \(n_{2d}\) as \(\omega_{d}=\sqrt{e^{2}N/m_{e}\epsilon_{0}\mathcal{V}}=\sqrt{e^{2}n_{2d}\omega/\pi c m_{e}\epsilon_{0}}\), where we also used the expression for the fundamental cavity frequency \(\omega=\pi c/L_{z}\) [20]. Here, the electron density is in the range \(n_{2d}=5-10\times 10^{11}\,\mathrm{cm}^{-2}\), in accordance with experimentally reported values [23].
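As a rough back-of-the-envelope check of this estimate (our own illustration, not taken from the paper), the sketch below evaluates \(\omega_{d}\) for the quoted densities; the GaAs band effective mass \(m^{*}\approx 0.067\,m_{0}\) and the reading of \(\omega\) as an angular frequency are assumptions on our part. With these assumptions the result comes out at a few tenths of a THz, the same order as the couplings used above.

```python
import numpy as np

e, eps0, c, m0 = 1.602e-19, 8.854e-12, 2.998e8, 9.109e-31   # SI constants
m_eff = 0.067 * m0            # assumed GaAs band effective mass (not from the text)

def omega_d(n2d, omega):
    """Diamagnetic frequency from omega_d^2 = e^2 n_2d omega / (pi c m eps0)."""
    return np.sqrt(e**2 * n2d * omega / (np.pi * c * m_eff * eps0))

for n2d_cm2 in (5e11, 10e11):                 # sheet densities quoted in the text
    n2d = n2d_cm2 * 1e4                       # convert cm^-2 to m^-2
    w = 0.1e12                                # cavity frequency ~0.1 THz, rad/s assumed
    print(f"n2d={n2d_cm2:.0e} cm^-2 -> omega_d ~ {omega_d(n2d, w)/1e12:.2f} THz")
```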
Figure 3 demonstrates that the transport properties of the quantum Hall system can indeed be modified by coupling strongly to the cavity field. From the behavior of both conductivities it is evident that the dependence of transport on temperature is enhanced for the lower cavity frequency \(\omega=0.1\)THz. This can be directly connected to the gap reduction in the system, as the lower polariton \(\Omega_{-}\) takes a smaller value for a smaller cavity frequency. In addition to this important finding, comparing Figs. 3(a) and (b) to Figs. 3(c) and (d) shows that the temperature effect is also enhanced by the electron density \(n_{2d}\). This is to be expected, since the electron density is crucial for the light-matter coupling strength as it controls the diamagnetic frequency \(\omega_{d}\).
_Connections to Experiments and Future Directions._-- The above analysis suggests that the activation gap of the hybrid system is strongly suppressed by coupling to cavity modes. Importantly, our model enables the theoretical estimate of the activation gap and direct comparison to experiment. It is an interesting prospect to test experimentally our prediction that the activation gap should follow the lower polariton excitation.
Further, we comment on the FQH effect, which is stabilized by electron-electron interactions. In samples with low disorder, the activation gap of the FQH effect is given by the many-body gap, typically determined by the magneto-roton energy [37], which we assume to be smaller than \(\Omega_{-}\); it is therefore protected from the cavity-induced phenomena. This picture is consistent with the experimental observations that the FQH plateaus are much less modified by the cavity in comparison to the integer ones [23]. From this analysis we anticipate that the FQH effect starts to be modified at low temperature when the lower polariton becomes softer than the many-body gap.
To summarize, using a Galilean invariant quantum Hall model coupled to a homogeneous single-mode cavity field, we are able to provide the exact solution for the hybrid polariton states and discuss their experimental implications for quantum Hall transport in cavities. We find that the lower polariton \(\Omega_{-}\) is significantly softer than the bare cyclotron mode and leads to a weakening of the topological protection of the hybrid system. Thus, our theory provides an intrinsic mechanism for the recently observed breakdown of the topological protection of the IQH effect due to cavity vacuum fluctuations [23]. In addition, our theory predicts that the modification of transport by temperature is enhanced by the cavity, as the thermal activation gap is suppressed due to the lower polariton mode. Having analytically understood the homogeneous
Figure 3: Low temperature transport of the quantum Hall system at magnetic field strength \(B=1T\) coupled to a cavity, for different values of the light-matter coupling. In (a) and (b) \(n_{2d}=5\times 10^{11}\mathrm{cm}^{-2}\) while in (c) and (d) \(n_{2d}=10\times 10^{11}\mathrm{cm}^{-2}\). We see that the light-matter coupling strongly affects the quantum Hall transport. For the smaller cavity frequency \(\omega=0.1\)THz and the larger electron density the deviation from the topologically protected values maximizes. This relates to the behavior of lower polariton mode \(\Omega_{-}\) which controls the thermal activation in the system. The broadening parameter is chosen very small \(\delta=10^{-4}\)THz in order to guarantee numerical convergence without influencing transport.
setting, our work paves the way for future investigations going beyond the homogeneous approximation for the cavity field, such that the interaction between the CM polariton modes and the electron-electron correlations comes into play. This could potentially lead to polariton-induced topological order and novel correlated phases between light and matter. Another important direction is the inclusion of disorder and impurity scattering in our model, which will be crucial for a more precise understanding of transport in materials under strong coupling. Finally, incorporating leakage and the multimode structure of the cavity will enable a more realistic description of transport phenomena in complex electromagnetic environments [38].
###### Acknowledgements.
We would like to thank J. Faist, F. Appugliese, J. Enkner, and L. Graziotto for fruitful discussions. V. R. acknowledges support from the NSF through a grant for ITAMP at Harvard University. J. S. acknowledges support from the Gordon and Betty Moore Foundation's EPiQS Initiative through Grant GBMF8686 at Stanford University. This work is also supported by the Cluster of Excellence 'CUI: Advanced Imaging of Matter' - EXC 2056 - project ID 390715994, SFB-925 "Light induced dynamics and control of correlated quantum systems" - project 170620586, and Grupos Consolidados (IT1453-22). We acknowledge support from the Max Planck-New York City Center for Non-Equilibrium Quantum Phenomena. J. W. acknowledges the support of the Flatiron Institute, where this project was initialized. The Flatiron Institute is a division of the Simons Foundation.
|
2305.08317 | By-Software Branch Prediction in Loops | Load-Dependent Branches (LDB) often do not exhibit regular patterns in their
local or global history and thus are inherently hard to predict correctly by
conventional branch predictors. We propose a software-to-hardware branch
pre-resolution mechanism that allows software to pass branch outcomes to the
processor frontend ahead of fetching the branch instruction. A compiler pass
identifies the instruction chain leading to the branch (the branch backslice)
and generates the pre-execute code that produces the branch outcomes ahead of
the frontend observing them. The loop structure helps to unambiguously map the
branch outcomes to their corresponding dynamic instances of the branch
instruction. Our approach also allows for covering the loop iteration space
selectively, with arbitrarily complex patterns. Our method for pre-execution
enables important optimizations such as unrolling and vectorization, in order
to substantially reduce the pre-execution overhead. Experimental results on
select workloads from SPEC CPU 2017 and graph analytics workloads show up to
95% reduction of MPKI (21% on average), up to 39% speedup (7% on average), and
up to 3x improvement on IPC (23% on average) compared to a core with
TAGE-SC-L-64KB branch predictor. | Maziar Goudarzi, Reza Azimi, Julian Humecki, Faizaan Rehman, Richard Zhang, Chirag Sethi, Tanishq Bomman, Yuqi Yang | 2023-05-15T03:08:09Z | http://arxiv.org/abs/2305.08317v2 | # By-Software Branch Prediction in Loops
###### Abstract
Load-Dependent Branches (LDB) often do not exhibit regular patterns in their local or global history and thus are inherently hard to predict correctly by conventional branch predictors. We propose a software-to-hardware branch pre-resolution mechanism that allows software to pass branch outcomes to the processor frontend ahead of fetching the branch instruction. A compiler pass identifies the instruction chain leading to the branch (the branch _backslice_) and generates the pre-execute code that produces the branch outcomes ahead of the frontend observing them. The loop structure helps to unambiguously map the branch outcomes to their corresponding dynamic instances of the branch instruction. Our approach also allows for covering the loop iteration space selectively, with arbitrarily complex patterns. Our method for pre-execution enables important optimizations such as unrolling and vectorization, in order to substantially reduce the pre-execution overhead. Experimental results on select workloads from SPEC CPU 2017 and graph analytics workloads show up to 95% reduction of MPKI (21% on average), up to 39% speedup (7% on average), and up to 3x improvement on IPC (23% on average) compared to a core with TAGE-SC-L-64KB branch predictor.
B.1.4.b Languages and compilers, B.8 Performance and Reliability, C.0.b Hardware/software interfaces, C.1.1.b Pipeline processors, C.1.5.a Instruction fetch, D.3.4.b Compliers, D.4.8.b Modeling and prediction
## 1 Introduction
Virtually all conventional branch predictors rely on the history of branch outcomes. While effective in many workloads, such history-based predictors often fail on load-dependent branches (LDB) due to the lack of predictable patterns in the branch history. Previous studies have shown the significance and impact of such LDBs [1, 2] on overall performance. Furthermore, LDBs incur a long resolution latency if the load misses in the caches, and thus the pipeline is fed with many instructions from a potentially incorrect execution path, only to be squashed later on a mispredicted LDB.
Fig. 1 shows an example from Leela (SPEC 2017) with an LDB dependent on multi-level indirect loads. The key observations here are that (a) a loop repeats the LDB, and (b) the Taken/Not-taken outcomes of the LDB instances can be computed earlier in parallel since there is no loop-carried dependence. Existing hardware-only techniques [3] attempt to automatically form and pre-execute the branch _backslices_ (i.e., the chain of instructions feeding the LDB). Major drawbacks of these approaches are: (i) cost, i.e., they need to dedicate hardware for backslice identification at runtime, (ii) latency, i.e., the initial iterations of the loop cannot benefit since hardware has not yet identified the pattern (this is especially a problem for loops with low trip counts, such as the one shown in Fig. 1), and (iii) aliasing, as multiple copies of the same branch (due to loop unrolling or other compiler optimizations based on the code structure) may confuse the hardware learning or engagement.
To address these challenges, we propose a SW/HW mechanism to pre-compute the branch outcomes in software and tell the BPU which dynamic instance of the branch they correspond to. This mechanism can be considered as the _branch-counterpart for software data prefetching_. If the branch information, which is computed in SW, is provided early enough to the HW (i.e., before the frontend observes that dynamic instance of the branch), a prospective misprediction is avoided. Even if late, the pre-computation reduces the branch misprediction penalty since the branch resolution latency is reduced because the data elements needed to resolve the LDB are brought to caches earlier, similar to [4]. We call this proposed mechanism BOSS, Branch-Outcome Side-channel Stream.
We define the BOSS SW/HW interface by using memory-mapped I/O channels. As a result, we do not need any extension to the base processor ISA (ARM in our current evaluation). The memory layout for channels is defined such that multiple branch outcomes can be communicated to HW in parallel, thus enabling the vectorization of the pre-execute loop. BOSS needs two new operations: (i) **BOSS_open** operation for each BOSS channel to tell the Branch Prediction Unit (BPU) which static branch(es) this channel will be feeding, and (ii) **BOSS_write** operation to pass branch outcomes and their corresponding indexes to the BPU through the configured BOSS channel. Fig. 2 shows the instrumented source code for LDB in Fig. 1. Before the outer loop, the BOSS channel is configured by the BOSS_open() operation; then the pre-execute loop produces the branch outcomes and passes them to hardware via BOSS_write() operation, detailed below. Note that the pre-execute loop does not have to cover the _entire_ iteration
Fig. 1: An example LDB in a loop from leela, SPEC 2017.
space of the target loop; the programmer (or compiler via Profile-Guided Optimization) may decide to only partially cover the iteration space.
BOSS is designed for loops where each dynamic instance of the target branch can be identified by a loop iteration counter. For other forms of repetition, such as recursion, the scheme does not work outright. We employ strip mining to partition loops with large trip counts into multiple loops with small trip counts. Any duplication of the target branch does not hinder BOSS since the same channel can feed multiple static branch instances. Success of the scheme depends on two factors: (i) the pre-execute loop is run early enough before the frontend reaches the branch, and (ii) as in software prefetching, the overhead of the additional instructions does not outweigh the gains; unrolling and vectorization can help to reduce the overhead. The pre-execute loop can also be skipped statically (a decision made by profiling) or dynamically (only executed if the target branch is mispredicted beyond a set limit).
Our contributions are:
* A software-hardware mechanism for by-software pre-resolution of hard-to-predict branches in loops. This resembles software prefetching, but for branch prediction.
* Design of the SW/HW interface to allow vectorization of the pre-execute loop to reduce the scheme overhead.
* Design of the microarchitecture, and modeling and evaluation of the scheme on Gem5.
## 2 BOSS for Software Branch Resolution
### System Overview
Each BOSS channel is configured by software to feed one (or more) static branch instances, and stores the branch outcome (Taken/Not Taken) for the dynamic instances of the target branch, identified by their corresponding _iteration_ number in the loop. The hardware monitors fetch, commit, and squash of the target branches to keep track of the appropriate iteration number to consume next. Each iteration of the outer loop marks one _generation_ of the inner loop. BOSS keeps track of the generation number that is currently in the channel (by monitoring the End PC, which was passed during channel configuration) so as to avoid mis-consumption in case of an early break from the inner loop. Note that the frontend may speculatively go multiple generations ahead through an early break or another exit from the inner loop. Nevertheless, the outcomes for only one generation are kept in the BOSS channel (this can obviously be extended). When a BOSS_write for a new generation commits, which is always later than the commit of all instances of prior generations, any remaining items from the previous generation in the channel are discarded.
### SW-HW Interface
The BOSS \(\mathrm{SW/HW}\) interface consists of two operations: 1. **BOSS_open** for configuration of the channel per target branch (done only once for each branch) and 2. **BOSS_write** to pass the \(\mathrm{T/NT}\) branch outcomes of the chosen iteration-space of the loop to the processor frontend. We avoided introducing new instructions to the base ISA, and instead used ordinary memory write instructions (the ST instruction in the ARM ISA) to a dedicated address-range. We allocate 8 Bytes for configuration and 256 Bytes for outcomes for each channel. The 256-Byte address-range corresponds to iteration numbers (dynamic instance numbers) 0 to 255 of the target branch; higher iteration numbers rotate over the same address-range. The choice of an address-range, instead of a single byte address, to pass the outcomes to hardware makes it possible to vectorize the pre-execute loop unless a loop-carried dependence prevents it.
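To make the layout concrete, a possible address mapping is sketched below in Python; the base address and the one-outcome-per-byte packing are our own assumptions for illustration, and in instrumented code the same arithmetic would simply be emitted as ordinary store instructions.

```python
# Hypothetical memory map: per channel, an 8-byte config word followed by a
# 256-byte outcome window; one outcome byte per loop iteration, wrapping at 256.
BOSS_BASE = 0xFFFF_0000            # assumed MMIO base, not specified in the paper
CHANNEL_STRIDE = 8 + 256           # config bytes + outcome bytes

def config_addr(channel):
    return BOSS_BASE + channel * CHANNEL_STRIDE

def outcome_addr(channel, iteration):
    # Iteration numbers beyond 255 rotate over the same 256-byte window.
    return config_addr(channel) + 8 + (iteration & 0xFF)

# A vectorized pre-execute loop can write a whole contiguous chunk at once,
# e.g. outcomes for iterations 0..15 of channel 0 land at consecutive addresses.
addrs = [outcome_addr(0, i) for i in range(16)]
assert addrs == list(range(outcome_addr(0, 0), outcome_addr(0, 0) + 16))
```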
BOSS extends the architectural state, but since the added state is only a set of hints to the BPU, it is safe to skip saving/restoring it upon a context switch. Nevertheless, ordinary load instructions (LD in the ARM ISA) from the same dedicated address-range are employed to discover open BOSS channels and read out the BOSS state of the open ones. Employing lazy save/restore, similar to the case of floating-point registers, is another possible improvement.
### Compiler Support
The target branch is either annotated by the programmer, or automatically identified by PGO analysis on representative profiles. Then a compiler transformation pass produces the pre-execute loop through the following steps:
1. Identify the loop induction variable (loop counter), and its start, end and increment variables/values.
2. Starting from the target branch, trace back the chain of instructions feeding the branch back to the loop induction variable.
3. Generate code for the pre-execute loop before the target loop, with the same range and increment. Within the loop, compute the branch outcome for a given value of the induction variable and use BOSS_write to communicate the outcome (the sketch after this list illustrates the resulting shape).
4. Employ strip-mining if the range of values of the induction variable is large (more than the BOSS channel capacity) or not known statically.
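The sketch below illustrates the shape of this transformation on a toy loop. It is written in Python for readability (the actual pass operates on compiler IR and the workloads are C/C++), and `boss_write` is a placeholder for the memory-mapped BOSS_write store described above; the loop body is purely illustrative.

```python
def boss_write(iteration, outcome):
    """Placeholder for the memory-mapped BOSS_write store (Section 2.2)."""
    pass

def target_loop(table, keys, threshold):
    # Pre-execute loop generated by the pass: same range/increment as the
    # target loop, but it only evaluates the branch backslice (loads + compare).
    for it, k in enumerate(keys):
        boss_write(it, table[k] > threshold)   # T/NT outcome for iteration `it`

    # Original (target) loop: the hard-to-predict, load-dependent branch below
    # is now pre-resolved by the outcomes streamed to the BPU above.
    total = 0
    for it, k in enumerate(keys):
        if table[k] > threshold:               # the LDB
            total += k
    return total

print(target_loop([5, 1, 9, 3], [0, 1, 2, 3], threshold=4))   # prints 2
```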
The branch outcomes are only useful if passed to the BPU before the frontend reaches that instance of the branch. Thus, the pre-execute loop should be placed as early in the code as possible. The compiler checks data dependencies for this placement, but usually limits itself to the function boundary. Cross-module analysis can be employed to go even beyond
Figure 2: BOSS pre-execute loop for the target branch LDB in Fig. 1. Note that the BR label corresponds to the conditional branch instruction of the designated if-statement, and that the End label designates an instruction after the inner loop exits.
the function boundary.
### Microarchitecture
The branch outcomes are stored in a lookup table addressed by a _<channel#, generation#, iteration#>_ tuple (Fig. 3); an outcome bit and a valid bit are stored per entry. On the outcome-production side (i.e., the branch-outcome values coming from the software), the channel# and iteration# are already implied in the address used by the **BOSS_write** operation. The generation# is incremented under the hood when the End instruction (which represents an arbitrary instruction right after the loop ends, and whose relative PC was passed during the **BOSS_open** operation -- see Fig. 2) commits.
On the consumption side (i.e., the branch outcomes fed to the BPU), the fetch/squash/commit of the target branch as well as of the Loop-End instruction are monitored. For the target branch, a fetch/squash increments/decrements the consumer iteration#, respectively, and a commit removes the corresponding branch outcome from the BOSS outcome storage if present. For the Loop-End instruction, a fetch/squash increments/decrements the consumer generation#, and also pushes-and-resets/pops the current consumer iteration# on the iter#-stack table in the microarchitecture; note that this stack is required to properly handle a squash of the Loop-End instruction. The commit of the Loop-End instruction increments the producer-side generation#.
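To make this bookkeeping concrete, the following Python model mimics one BOSS channel as a behavioral sketch of the tables in Fig. 3; it is not RTL, and simplifications such as keeping entries in a dictionary and tracking separate speculative and architectural consumer counters are our own.

```python
class BossChannel:
    """Behavioral sketch of one BOSS channel: outcomes keyed by
    (generation#, iteration#), consumed by monitoring branch and
    Loop-End fetch/squash/commit events."""

    def __init__(self):
        self.outcomes = {}                  # (gen, it) -> taken?
        self.prod_gen = 0                   # producer-side generation#
        self.spec_gen, self.spec_it = 0, 0  # consumer side, follows fetch/squash
        self.it_stack = []                  # saved iteration# across loop exits
        self.arch_gen, self.arch_it = 0, 0  # consumer side, follows commits

    # Producer side: a BOSS_write store fills one entry.
    def write(self, iteration, taken):
        self.outcomes[(self.prod_gen, iteration)] = taken

    # Target-branch events seen by the frontend / ROB.
    def branch_fetched(self):
        hit = self.outcomes.get((self.spec_gen, self.spec_it))
        self.spec_it += 1
        return hit                          # None -> fall back to the normal BPU

    def branch_squashed(self):
        self.spec_it -= 1

    def branch_committed(self):
        self.outcomes.pop((self.arch_gen, self.arch_it), None)
        self.arch_it += 1

    # Loop-End events: moving on to the next generation of the inner loop.
    def loop_end_fetched(self):
        self.it_stack.append(self.spec_it)
        self.spec_it, self.spec_gen = 0, self.spec_gen + 1

    def loop_end_squashed(self):
        self.spec_gen -= 1
        self.spec_it = self.it_stack.pop()

    def loop_end_committed(self):
        self.arch_gen, self.arch_it = self.arch_gen + 1, 0
        self.prod_gen += 1                  # discard leftovers of the old generation
        self.outcomes = {k: v for k, v in self.outcomes.items() if k[0] >= self.prod_gen}

ch = BossChannel()
ch.write(0, True); ch.write(1, False)       # outcomes produced by the pre-execute loop
print(ch.branch_fetched(), ch.branch_fetched(), ch.branch_fetched())  # True False None
```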
The outcomes coming from BOSS take priority over ordinary BPU predictions since the BOSS outcomes are absolutely correct ones. Thus if a branch instance hits the BOSS storage, the stored outcome is passed as the prediction. The multiplexer on the top-left of Fig. 3 does this.
As in Fig. 3, two tables store target branch PCs and End-instruction PCs, two tables keep track of proper iteration# (one level depth for the stack was enough in all our experiments), and two other tables store producer-side and consumer-side generation#. For a typical case of supporting 4 simultaneous BOSS channels, 256 iterations and 2 generations per channel, a tiny amount of 256B (BOSS outcomes: 4 channels x 256x2 bits per channel) + 64B (2 PC tables: 4 channels x 8B PC each) + 8B (2 iter# tables: 4 channels x 8b iter# each) + 1 B (2 gen# tables: 4 channels x 1 bit each) = 329B of storage is enough.
Note that the entire BOSS system can be turned off until a BOSS_open operation is executed. An additional BOSS_close operation can signal to turn it off again after the outer loop.
## 3 Experimental Results
We implemented BOSS in gem5 and evaluated a number of applications from the SPEC, GAP, and cBench suites. Target branches and Regions-of-Interest (ROIs) were selected from prior studies [3, 5] as well as from our own perf analysis, and were evaluated with the processor settings in Table I. Each ROI covers a continuous execution path that spans several calls to the function enclosing the target branch.
Fig. 4 shows the overhead instructions normalized to the baseline. Fig. 5 and Fig. 6 show the speedup on each ROI and the IPC gains, respectively. For each experiment, the _loop_, _unrolled_, and _vectorized_ bars correspond to applying no change, unrolling, and vectorization to the pre-execute loop. Although IPC is consistently and significantly improved, the overhead decides whether that translates to speedup or not.
Experiments show that although unrolling and vectorization reduce the instruction overhead, they also reduce the number of instructions between the production and consumption sites, and are therefore not always successful in getting better gains (see leela-kn, for instance). Note that since this is a software-based optimization, it can be avoided upon slowdowns. Thus, we focus only on the sped-up cases from now on.
The gains come from removing mispredictions from the target loop. Fig. 7 shows the obtained reductions in the misprediction rate of the target branch. If applied early enough, BOSS should remove all mispredictions of the target branch, but since we did not go back beyond the function starting point to inject the pre-execute loop, this was not realized in many cases. Furthermore, when the trip-count of the loop is low, e.g. 4 and 8 in the case of the leela functions, even the pre-execute loop instructions themselves are not numerous enough to provide the necessary gap between the commit of the BOSS-write instructions and the fetch of the target branch.
BOSS addresses specifically chosen branches. The reduction in total MPKI therefore depends on the contribution of the chosen branch to the total MPKI. Fig. 8 shows the MPKI of all branches and the obtained gains. Note that branches in the pre-execute loop are also counted in these results for the _Loop_, _Unrolled_ and _Vectorized_ cases.
## 4 Related Work
Various helper thread mechanisms [6, 7, 8] run a (potentially stripped down) thread ahead of the main thread, resolve the branches and inform the main thread. The obvious overhead and the synchronization difficulty between the two threads are the main concerns.
Runahead techniques [3, 9, 10] let the processor run original code earlier, or actively deduce loops and the backslices of delinquent branches or the loads in them [13], and run them earlier, so as to resolve future branches. The learning latency, especially for low-tripcount loops, remains the limiting factor.
\begin{table}
\begin{tabular}{|l|l|} \hline Processor core & ARM v8-a, 8-issue, 192-entry ROB \\ \hline Branch predictor & TAGE-SC-L, 64KB \\ \hline L1 Instruction/Data caches & 32KB / 64KB, 64B lines, 2-cycle hit latency, write-back \\ \hline L2 unified cache & 2MB, 8-way, 20-cycle latency, write-back \\ \hline HW prefetcher & Stride prefetcher for I/D L1/L2 \\ \hline \end{tabular}
\end{table} TABLE I: Experimental Setup
Fig. 3: Micro-architecture of the BOSS mechanism.
Customized branch predictors [2, 11] use either longer branch histories [2] or custom logic [11] for hard-to-predict branches. Longer histories do not help on load-dependent branches, and custom logic needs substantial change in the hardware. Correlating the branch with an earlier load-address or store instruction [12] is another customization mechanism. Complementarily, BOSS extends the HW predictor by SW.
### _Comparison with branch runahead_
Branch Runahead [3] reports higher speedups, but note that it runs the additional instructions essentially on an extra, albeit stripped-down, core (the so-called DCE: Dependence Chain Engine), and thus the overheads are not reflected in the reported gains. BOSS, on the other hand, runs everything on the same core; consequently, BOSS does not require an additional core to work, and the gains reported here take the overhead into account as well.
As is inherent in every full-hardware mechanism, there is a delay to detect the backslice instruction sequence with enough confidence; this is inherent because a full-hardware mechanism needs to see the same instruction sequence repeated at least a few times to build enough confidence. For the same reason, Branch Runahead reports that on average roughly 20% (up to 40%) of the time the mechanism is inactive (meaning that "at the time the core needed the prediction, no dependence chains had been activated to produce that prediction" [3]). A co-designed approach such as BOSS resolves this issue since the compiler has statically identified and formed the backslice; thus, no runtime overhead applies to detect it, nor is it missed by the detection mechanism.
Another problem inherent in all-hardware mechanisms is late initiation points: after the backslice is identified, the hardware mechanism needs to identify an initiation point to copy/synchronize the input values of the backslice from the architectural registers of the main core to the extra core, to let it start executing the backslices. Branch Runahead does that upon detecting a branch misprediction on the registered branch; this is a good choice to get the latest live-in values for the backslice input registers, but may be too late to initiate their execution. Branch Runahead [3] reports that on average another \(\sim\)30% (up to over 60%) of the time the mechanism is late ("The late category refers to predictions which have active chains, but are generated too late to be useful for the core" [3]). In BOSS, since the compiler statically generates the backslices and has full visibility into all the code around them, the pre-execute loop can be brought maximally forward to allow the maximum available time between production and consumption of the branch outcomes, so as to minimize the lateness.
A downside of a co-designed mechanism such as BOSS is that the software/compiler does not know whether the misprediction actually happens at runtime. Thus the overhead is always paid, whereas the in-hardware mechanisms avoid it.
Branch runahead has in-hardware mechanisms to handle chained/nested branches. This is inexpensive in hardware because the branch-outcomes are nearby and can be used under the hood to trigger other backslices, but doing it in software requires adding new condition-checks in the pre-execute loop which may in turn get mispredicted. Thus, BOSS would not suit such cases of nested branches.
### _Comparison with control-flow decoupling_
Control-flow decoupling (CFD) [5] changes the code structure to produce branch outcomes earlier and explicitly consume them in a second loop.
Fig. 4: Instruction overhead; the additional Committed Instructions.
Fig. 5: ROI Speedup values.
Fig. 8: Branch MPKI numbers for all branches in the ROI.
Fig. 6: IPC Gain on ROIs.
Fig. 7: Branch Misprediction rates on the Target Branch only.
While CFD has its advantage in avoiding a redundant compute of the branch condition, it is not practical in many cases for 3 important reasons:
1. CFD extends the visible architecture state: the contents of the architectural branch-queues that hold the branch-direction outcomes; if these values are lost during a context-switch, the program functionality fails. The same is true for function calls, where the callee may use the same CFD mechanism as the caller and thus may overwrite those branch-direction outcomes. To address these issues, [5] provides Save_BQ/Restore_BQ interfaces, but this actually means that library developers should also be aware of, and adhere to, this new architecture-state expansion, and similarly the OS should also be aware of it and save/restore the newly added architecture state beyond the ISA documented in the architecture reference manual of the original processor. This is not trivial at the system-design level.
2. It is not always safe to apply the CFD compiler transformation. Legality of a transformation is a must; thus, for CFD to be applied, the compiler needs to prove the safety of the transformation, which is not possible in many cases due to potential aliasing among the pointers/arrays in the loop, as well as loop-carried dependences.
3. The CFD transformations may prevent/impact other transformations by the compiler resulting in a net loss.
It is important to note that BOSS, on the other hand, works as a branch-hint, and thus resolves the above issues as we describe further below. To appreciate the difference between the _CFD-transformation_ and _BOSS's hint-provisioning_, it may help to look at an analogy in the data-access case. Imagine a loop has long-latency misses on a 'load' instruction whose backslice is loop-separable. One can create a corresponding pre-execute loop to issue those 'load' instructions earlier and push the data to a newly introduced in-hardware queue, so that they are then popped and consumed by the main loop; this is analogous to the CFD-transform. Alternatively, one can instead keep the original main loop intact, but put software-prefetch instructions in the pre-execute loop with the same data addresses as those 'load' instructions; this is analogous to BOSS's hint-provisioning. In the latter case, even if all the executed software-prefetch instructions are dropped by the processor (which is still a totally correct implementation of an ARM processor as per the ARM ISA architecture reference manual), or a context-switch happens, or a function is called in between the software-prefetch and the target 'load' instruction, the program functionality of the latter approach is still correct because all those software-prefetch operations were _hopeful hints_ or _assists_. Of course, the performance may be hindered in such cases, but that is tolerable when one notes that the former approach totally breaks in these cases if the save/restore is not done.
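The analogy can be made concrete with a small sketch; the following Python stand-ins (hw_queue, sw_prefetch) are hypothetical and only illustrate the correctness argument, not any real interface.

```python
def cfd_style(addresses, memory, hw_queue):
    # CFD-like transform: the pre-execute loop pushes loaded values into a
    # new architectural queue; the main loop must pop them, so losing the
    # queue contents (context switch, nested use) breaks correctness.
    for a in addresses:
        hw_queue.append(memory[a])
    return [hw_queue.pop(0) for _ in addresses]

def boss_style(addresses, memory):
    # BOSS-like hint: the pre-execute loop only issues prefetch hints; the
    # main loop still performs the real loads, so dropped hints only cost
    # performance, never correctness.
    def sw_prefetch(addr):
        pass  # hint only; may be ignored by the core
    for a in addresses:
        sw_prefetch(a)
    return [memory[a] for a in addresses]
```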
Thus, compared to CFD,
1. No library change or OS support is necessary by BOSS for program correctness. The branch-direction hints that it provides may even be totally ignored by a less capable processor core, or get lost upon a call or a context-switch; the functionality of the program remains intact.
2. It is always safe to apply BOSS; the functionality of the program always remains correct. Obviously, the downside is that BOSS has to compute the branch condition twice: once in the pre-execute loop and once more in the main loop. In this paper, we showed that despite this additional overhead, reasonable gains can still be obtained.
3. Since the original main loop remains intact, any prior compiler transformation is still applicable.
We showed that despite the overhead, BOSS is still beneficial. On top of that, note the additional advantages below, enabled by the hint nature of BOSS, which CFD by nature cannot offer:
**a. Partial coverage of the iteration-space of the target loop** In many cases, there are early exits from the loop (e.g. a break or return under a condition). Thus the _typical_ trip-count of the loop may differ from the maximal trip-count checked (e.g. by the loop-end check in a for-loop). Profile-guided optimization (PGO), or even simple manual profiling, can reveal that typical trip-count. In such cases, the pre-execute loop can be run only up to that typical trip-count instead of covering the entire iteration-space of the main loop.
Taking this one step further, profiling can show which iterations of the loop more often experience a branch-misprediction by the in-HW predictors, and the pre-execute loop can cover only that subset.
Thus, partial coverage applies to loops in which there is an early exit or there are specific loop iteration ranges in which traditional branch predictors have a harder time guessing correctly for the target branch within the loop's body.
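As a sketch of how such a subset could be chosen and covered (the profiling interface and the threshold are our own illustrative assumptions, not part of BOSS itself):

```python
def profitable_range(miss_rate_per_iter, threshold=0.2):
    """Pick the iteration range worth covering: iterations where, per profiling,
    the in-HW predictor mispredicts at least `threshold` of the time."""
    hot = [i for i, r in enumerate(miss_rate_per_iter) if r >= threshold]
    return (min(hot), max(hot) + 1) if hot else None

def pre_execute_partial(values, cond_threshold, lo, hi, boss_write, channel=0):
    # Pre-execute loop restricted to [lo, hi) instead of the full iteration space.
    for i in range(lo, min(hi, len(values))):
        boss_write(channel, i, values[i] > cond_threshold)
```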
**Usecase demonstration:** Fig. 9 shows the kill_or_connect() function of the leela benchmark of SPEC 2017, which demonstrates the early-exit use case of partial coverage. To demonstrate the case, we created different micro-benchmarks each covering a subset of the iteration-space; these are represented by n-to-m in the results in Fig. 10, where n and m represent the lower bound and the upper bound of the pre-execute loop in Fig. 9, respectively. The results in Fig. 10, including IPC, instruction overhead, and finally cycle count (speedup), are normalized to the full-coverage case where the pre-execute loop covers all 4 iterations from 0 to 3.
As Fig. 10 shows, partial coverage can make a big difference on overheads and gains, and is an additional angle to tune BOSS for the highest gain.
**b. Record-and-replay of branch-outcomes:** Since BOSS is a hint-passing mechanism, it can be used to replay a statically or dynamically recorded sequence of Taken/Not-taken branch outcomes that are known to be _mostly_ (but not always) correct through profiling or analysis.
Dynamic instances of the branch in the loop may show an irregular, hard-to-predict pattern of Taken/Not-taken (e.g. 101100010, where each 1 shows a Taken and every 0 a Not-taken instance of the branch), but then repeat (even partially) that pattern subsequently when the loop is observed again. In such cases, BOSS can be employed without the pre-execute loop: record-and-replay focuses on the idea that a branch within an inner loop (BIL) can repeat (fully or partially) its branching behavior across multiple outer loop iterations. In that case, the BOSS hint functionality can record BIL's behavior across one outer loop iteration and replay it for a subsequent outer loop iteration.
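A minimal sketch of the record-and-replay usage is given below; boss_write is the same hypothetical stand-in used earlier, and the recorded outcomes are treated purely as hints that may be partially wrong.

```python
def record(inner_conditions):
    # Record the Taken/Not-taken outcomes observed in one outer-loop
    # iteration (generation), e.g. [True, False, True, True, False, ...].
    return [bool(c) for c in inner_conditions]

def replay(recorded, boss_write, channel=0):
    # Feed the recorded outcomes as hints for the next generation; since
    # these are hints, imperfect matches only cost some performance.
    for i, taken in enumerate(recorded):
        boss_write(channel, i, taken)
```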
**Usecase demonstration:** The microbenchmark in Fig. 11 comes from leela's save_critical_neighbours() function, with the array data values coming from the original benchmark with ref.sgf as input. Through experiments, it can be seen that, on average, 93% of a generation's target-branch outcomes are repeated in the subsequent generation, demonstrating an effective case for the record-and-replay functionality. Note how the imperfection of the matching prevents a mechanism such as CFD from being used here.
Another use-case for record-and-replay is serverless functions in datacenters: the repeat-cycle of a branch may be so long that the micro-architectural context is lost when the branch is revisited. This is already observed in server workloads and serverless functions in datacenters [14], as well as in Android on mobile phones; a save-and-restore mechanism such as the one BOSS provides can be used here as well.
**c. Correlation among different branches:** This case is based on the idea that a branch's behavior could tell us about the behavior of another branch within the same or a different loop. The BOSS hint can make use of this correlation, and thus can help predict the behavior of that branch within the separate loop. Techniques such as Whisper [15] try to establish a correlation among branches, and then pass that correlation via a formula to the hardware to allow it to predict the branch outcome. BOSS can achieve the same in software.
**Usecase demonstration:** The microbenchmark in Fig. 12 shows a case where one branch's behavior within leela's kill_or_connect() directly relates to the behavior of another branch in a separate loop within kill_neighbours() in Fig. 13. The root cause of this correlation is that both functions have a common caller and are called closely next to each other by that caller; the differentiating factor for them is the vertex vs. pos values in the two functions, which are often equal in our experiments. The BOSS hint is able to take advantage of this relationship, and forward the (mostly) correct branching behavior to the first generation of the target branch within kill_neighbours(). Yet again, note how this correlation is statistical, but not mathematically provable; as a result, CFD cannot provide this functionality, but BOSS's hint nature does.
Figure 11: Record-and-replay mechanism for partially-repeated patterns, demonstrated on a case from Leela benchmark of SPEC 2017.
Figure 12: BOSS allows the use of the correlation among branches in different loops to resolve branches in a target loop. The designated source branch is correlated to the other branch in Fig. 13.
Figure 10: Various ranges of partial-coverage make a big difference in final gain obtained by BOSS.
## 5 Summary and Conclusion
BOSS can be best described as the _branch counterpart of software data prefetching_. Accordingly, it by nature bears all the pros and cons of its data-prefetching counterpart. BOSS is not a replacement for hardware branch prediction, in the same way that software prefetching is not a replacement for hardware prefetchers. Again similar to software prefetching, BOSS is a custom solution for certain niche cases where more general solutions do not fit perfectly, such as low-tripcount loops or partial coverage of the loop iteration-space. Extending BOSS to also cover the loop-end branch is trivial. We quantified the achievable gains of BOSS on a number of SPEC and other benchmarks, and explored the limits of its applicability.
**Discussion of limitations:** In the pre-execute mode, we limit BOSS usecases to non-nested branches to avoid having to insert branches in the pre-execute loop. If caller and callee functions both use BOSS at the same time (e.g. imagine the caller does the call in between the pre-execute loop and the main loop, or within the main loop), they should use different BOSS channels, or otherwise the callee would overwrite the branch-outcome values in the channel, and hence the caller loses the ability to use any remaining elements that were available in the channel. Obviously, save/restore is one solution, which is also relatively cheap and can be applied in the caller whenever calling a certain/potential BOSS-user function such as a library function. Saving of an already-open channel can also be done under the hood by the hardware when a new BOSS_open() is called on the same already-open channel by the callee; it can then be restored upon return from the callee. Note that each additional channel is quite inexpensive; each channel imposes only 83 bytes of overhead (see §2.4). Finally, even if by mistake or otherwise the same channel is used by the caller and callee (e.g. imagine the case of a call to a library function whose source is not available to the programmer who is willing to use BOSS), the program functionality remains intact because BOSS is only a hint and is safe to be wrong. Of course the performance may suffer in such an erroneous case, and hence this should be avoided by proper save/restore, but the point is that even if it is not, the program does not break.
Early exits from the target loop are supported by the generation# mechanism (see §2.2). Skip (e.g. a continue statement in C) is supported only after the target branch in the loop, because otherwise the target branch would be a nested branch (which is not intended to be supported, as said above) under the condition checked for that skip.
To reduce BOSS overhead in the pre-execute mode, we are evaluating selective runs of the pre-execute loop only when the target branch is heavily mispredicted. PGO-based selection of ranges of the iteration-space is another method toward the same goal. As also mentioned in §4.2, record-and-replay and correlation-based usecases are other promising avenues, where the additional overhead is much lower since they do not need a redundant compute of the branch condition.
|
2305.07024 | SparseGNV: Generating Novel Views of Indoor Scenes with Sparse Input
Views | We study to generate novel views of indoor scenes given sparse input views.
The challenge is to achieve both photorealism and view consistency. We present
SparseGNV: a learning framework that incorporates 3D structures and image
generative models to generate novel views with three modules. The first module
builds a neural point cloud as underlying geometry, providing contextual
information and guidance for the target novel view. The second module utilizes
a transformer-based network to map the scene context and the guidance into a
shared latent space and autoregressively decodes the target view in the form of
discrete image tokens. The third module reconstructs the tokens into the image
of the target view. SparseGNV is trained across a large indoor scene dataset to
learn generalizable priors. Once trained, it can efficiently generate novel
views of an unseen indoor scene in a feed-forward manner. We evaluate SparseGNV
on both real-world and synthetic indoor scenes and demonstrate that it
outperforms state-of-the-art methods based on either neural radiance fields or
conditional image generation. | Weihao Cheng, Yan-Pei Cao, Ying Shan | 2023-05-11T17:58:37Z | http://arxiv.org/abs/2305.07024v1 | # SparseGNV: Generating Novel Views of Indoor Scenes with Sparse Input Views
###### Abstract
We study the generation of novel views of indoor scenes given sparse input views. The challenge is to achieve both photorealism and view consistency. We present SparseGNV: a learning framework that incorporates 3D structures and image generative models to generate novel views with three modules. The first module builds a neural point cloud as the underlying geometry, providing contextual information and guidance for the target novel view. The second module utilizes a transformer-based network to map the scene context and the guidance into a shared latent space and autoregressively decodes the target view in the form of discrete image tokens. The third module reconstructs the tokens into the image of the target view. SparseGNV is trained across a large indoor scene dataset to learn generalizable priors. Once trained, it can efficiently generate novel views of an unseen indoor scene in a feed-forward manner. We evaluate SparseGNV on both real-world and synthetic indoor scenes and demonstrate that it outperforms state-of-the-art methods based on either neural radiance fields or conditional image generation. [https://github.com/xt4d/SparseGNV](https://github.com/xt4d/SparseGNV)
## 1 Introduction
Synthesizing high-quality novel views of 3D indoor scenes is a long-standing and challenging task in computer vision [9, 26, 15]. Typically, this task requires dense scans from various viewpoints as input. However, indoor scenes are often spatially complex, and capturing every region of a scene can be expensive and even intractable. To overcome this challenge, we aim to synthesize novel views with sparse input observations, which reduces the data capture burden. An ideal approach should be capable of generating views by hallucinating unobserved regions with view consistency.
Sparse view synthesis methods have gained significant attention recently, particularly those based on neural radiance fields (NeRFs) [32, 23], which rely on a certain level of view coverage as input. However, due to the lack of image generation ability, the above methods are intractable for hallucinating largely unobserved areas. Transformer-based methods [33, 14] learn latent scene representations from 2D observations and conditionally generate images given new viewpoints. However, the lack of explicit 3D representation makes it challenging for these methods to synthesize visual details from unstructured latent space. Another line of work [7, 31, 30] focuses on generating novel views or long-term videos starting from a single image, using generative networks to paint the "outside" of a view autoregressively, but they face limitations in synthesizing consistent views between multiple frames. This leads to the core motivation of our approach: marrying explicit 3D scene structures with image generative models for a joint capability of generating views with a limited visual clue and maintaining scene consistency.
We propose SparseGNV: a framework that learns generalizable scene priors to generate novel views conditioned on sparse input views. SparseGNV is first trained on a large indoor scene dataset to obtain priors that are generalizable across scenes. Once trained, SparseGNV can efficiently generate novel views in a forward pass given observed views of a new scene and target viewpoints, without the need for per-scene optimization. To generate 2D novel views grounded in 3D scene structures, we design SparseGNV with three modules: a neural geometry module,
Figure 1: The proposed SparseGNV generates novel view images of unseen indoor scenes based on 4 observed views.
a view generator module, and an image converter module.
_The neural geometry module_ reconstructs a set of input views into a 3D neural point cloud where each point is associated with an embedding vector. The neural point cloud can be rendered to 2D color and mask images from arbitrary viewpoints using volume rendering following Point-NeRF [39]. Although the point cloud can be scattered and incomplete due to input sparsity, the rendered images still provide structural and texture clues for hallucinating unobserved regions and maintaining consistency. _The view generator module_ generates a novel view conditioned on a _scene context_ and a _query_. The _scene context_ is an overview of the given scene, which consists of the observed images and images rendered by the neural geometry module from multiple sampled viewpoints. It provides a global context that benefits inferring missing regions and maintaining consistency. The _query_ specifies the view that is required to generate. It consists of the rendered image from the target viewpoint. The _query_ provides guidance to retrieve information from the _scene context_ for generating the target novel view. The module uses a joint convolution and transformer-based encoder-decoder network that maps the _scene context_ and the _query_ to a shared latent space, and then autoregressively generates the novel view in the form of discrete tokens [36, 24]. _The image converter module_ is a convolutional decoder network that can reconstruct the discrete tokens back to photorealistic 2D images.
We evaluate SparseGNV on both real-world and synthetic indoor datasets, and the results outperform recent baselines using either neural radiance fields or conditional image generation. We show example generations of SparseGNV in Figure 1.
#### Contributions
* We propose SparseGNV: a learning framework to synthesize consistent novel views of indoor scenes with sparse input views, which combines neural 3D geometry and image generation models to enable photorealistic view synthesis with consistent structure faithful to the observations.
* We design a joint convolution and transformer-based image generation network that effectively incorporates contextual information from 3D scene structures.
* Evaluation results on real-world and synthetic indoor scene datasets demonstrate that SparseGNV achieves state-of-the-art performance for novel view synthesis with only a few observations.
## 2 Related Work
#### Novel View Synthesis

Novel view synthesis is a task to produce images of scenes from arbitrary viewpoints given a number of input views. Early work achieves photorealistic synthesis by capturing a dense set of views [16, 8]. Recently, neural network-based methods have made significant progress in enabling better synthesis quality, wider ranges of novel viewpoints, and more compact model representations. Neural radiance fields (NeRF) [22] is a milestone work that trains a multi-layer perceptron (MLP) to encode radiance and density for producing novel views via volume rendering. Follow-up work based on NeRF extends novel view synthesis in various aspects: relaxing image constraints [20], improving quality [1], dynamic view synthesis [18, 28], pose estimation [19, 21], rendering in real-time [41], and object / scene generation [27, 10].
High-quality synthesis of scene views generally requires iterative per-scene optimization with a large number of observations. As dense inputs are unavailable in many scenarios, the study of few-view synthesis is growing rapidly [34, 11, 23, 12, 3], and one direction is to learn priors across scenes and predict novel views [42, 2, 37, 33, 14]. Pixel-NeRF [42] is a learning framework that conditions NeRF on one or a few input images to predict a continuous scene representation. MVSNeRF [2] learns a generic deep neural network that combines plane-swept cost volumes with volume rendering for constructing radiance fields. IBRNet [37] is a network of MLPs and a ray transformer that estimates radiance and volume density from multiple source views. Scene Representation Transformer [33] combines convolutional networks and transformers to encode input images into latent scene representations and decode novel views. ViewFormer [14] is another transformer-based approach with two stages, where images are encoded into tokens via a codebook network in the first stage, and the tokens of novel views are generated autoregressively conditioned on the inputs in the second stage. Depth priors can also be useful for novel view synthesis [32], where a dense depth map is completed first to guide the optimization of NeRF. However, these methods can have poor performance with inputs of large sparsity.
#### Indoor Scene Synthesis from Sparse Views
Synthesizing novel views of indoor scenes is a practical task naturally challenged by data sparsity. With incomplete RGB-D scans, SPSG [5] generates high-quality colored reconstructions of 3D scenes in the form of TSDF. It uses a self-supervised approach to learn geometry and color inpainting with adversarial and perceptual supervision on the 2D renderings of the reconstructions. CompNVS [17] is a framework to synthesize novel views from RGB-D scans with largely incomplete scene coverage. It first encodes scans into neural voxel grids, and then uses a geometry predictor with a texture inpainter to complete the grids with embeddings. A neural renderer decodes the grids into images, refined via adversarial training. These geometry-based methods require depth scans and strong 3D completion modeling, which are hardly adaptable to various scenes. PixelSynth [31] synthesizes novel views of a single image by outpainting unobserved areas projected via 3D reasoning. LookOutsideRoom [30] synthesizes long-term videos from a single scene image based on an autoregressive transformer modeling consecutive frames. These single-image-based methods are unable to maintain consistency between observations. Pathdreamer [13] targets generating panorama images at novel positions given one or a few observations. It consists of a structure generator and an image generator. The structure generator projects observations into 3D semantic geometry. The image generator uses the SPADE network [25] to generate photorealistic views from panorama semantic maps. Pathdreamer focuses on panorama images and requires semantic labeling of indoor scenes, which is not commonly available.
## 3 Methodology
In this section, we first briefly introduce the notation and the problem statement. We then propose SparseGNV with designs of the three modules. Lastly, we introduce the procedures of training and inference.
### Notation & Problem
Let \(\mathcal{V}\) = \(\{(I_{i},\pi_{i})\,|\,i=1,2,...,N\}\) be a set of views of indoor scenes, where \(I_{i}\in\mathbb{R}^{W\times H\times 3}\) is the \(i\)-th color image and \(\pi_{i}\) is the camera pose of \(I_{i}\). \(\mathcal{V}\) can be divided into an input observed view set \(\mathcal{O}\) and a novel view set \(\mathcal{X}\). Given an input sparse set of \(\mathcal{O}\), our problem is to generate a view image at a target novel viewpoint. As unobserved regions can be large, hallucinating novel views exactly matching ground truth is not easy. We therefore focus on high-fidelity generations while maintaining the view consistency.
### The SparseGNV framework
We propose SparseGNV: a learning framework incorporating 3D scene structures and image generative models to generate consistent novel views of indoor scenes given only sparse input views. SparseGNV is trained on a large indoor scene dataset to achieve generalization ability. Given sparse input views of an unseen scene, SparseGNV can efficiently generate novel views in a feed-forward manner. SparseGNV is designed with three modules: the neural geometry module, the view generator module, and the image converter module. The neural geometry module takes the input views to build a 3D neural point cloud [39] that can provide rendered guidance images from arbitrary viewpoints. The view generator module generates a novel view conditioned on a scene context of global information and a query regarding the target pose. The scene context and the query pack the information provided by the rendered guidance images, which are fed to a convolution and transformer-based network to generate the novel view in the form of discrete image tokens [36, 24]. The image converter module reconstructs the tokens back to the final images through a decoder network. We show an overview of SparseGNV in Figure 2. The detailed description of the three modules is as follows.
**Neural Geometry Module.** Given an input sparse set of observations \(\mathcal{O}\), the neural geometry module builds an underlying 3D neural point cloud that produces rendered guidance images from arbitrary poses. Those rendered guidance images provide structural and color clues that can complement scene representation and guide the generation of target novel views.
The module builds a neural point cloud following PointNeRF [39] with two steps: 1) reconstructs a 3D point cloud using the input \(\mathcal{O}\), which requires depths of the views that can be estimated via pre-trained Multi-View Stereo (MVS); 2) assigns each point of the cloud an embedding vector, which is computed by MVSNet [40] given the corresponding pixel of the observed image.
With the neural point cloud, the module can produce rendered color images \(F_{i}\in\mathbb{R}^{W\times H\times 3}\)[39]. In detail, given an arbitrary camera pose \(\pi_{i}\), ray marching is performed to query a number of points on each ray. The embedding vectors of all the queried points are mapped to radiance and density via multi-layer perceptrons (MLPs). Through volume rendering, a ray color is obtained and assigned to the corresponding pixel of the image \(F_{i}\). If a ray hits no neural point, the ray is marked as invalid. All the rays form a validation mask \(M_{i}\in\{0,1\}^{W\times H}\) indicating which part of \(F_{i}\) is geometrically valid. The module output is formally expressed as:
\[F_{i},M_{i}=\texttt{NeuralGeometry}(\pi_{i},\mathcal{O}\,;\,\theta), \tag{1}\]
where \(\theta\) denotes the parameters of the module networks, including the MVSNet and the MLPs. The mask \(M_{i}\) can be used to filter out the invalid part of \(F_{i}\) for a clear signal.
The module networks are jointly trained to produce a visually reasonable \(F_{i}\) with structure and color information. The objective is regressing \(F_{i}\) to the ground truth color image \(I_{i}\) on the valid rays:
\[\min_{\theta}\sum_{i}||(F_{i}-I_{i})\odot M_{i}||_{2}^{2}. \tag{2}\]
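For concreteness, a minimal PyTorch-style sketch of the per-view term of Equation (2) is shown below; the tensor shapes and the fact that the terms are summed over views afterwards are our own assumptions for illustration and may differ from the actual implementation.

```python
import torch

def neural_geometry_loss(F_i, I_i, M_i):
    """Per-view term of Eq. (2): squared error between the rendered image F_i
    and the ground-truth image I_i, restricted to rays marked valid by M_i.
    Assumed shapes: F_i, I_i -> (H, W, 3); M_i -> (H, W) with values in {0, 1}."""
    diff = (F_i - I_i) * M_i.unsqueeze(-1)   # zero out invalid rays
    return (diff ** 2).sum()
```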
**View Generator Module.** The view generator module uses a joint convolution and transformer-based network that takes a scene context and a query as input to generate a target novel view. The scene context is the global information that includes two types of _"previews"_: reference previews and probed previews. The reference previews are from the input observed poses, and the probed previews are from several _sampled_ novel poses interpolated between the observed poses. The query is a preview from the target novel
pose, which specifies the target viewpoint to be generated. Each preview of these three types is composed of four items: 1) an observed image \(I_{i}\) (using an "N/A" image if unavailable); 2) a rendered color image \(F_{i}\); 3) a mask image \(M_{i}\); 4) a ray map \(D_{i}\) of origins and directions derived from the camera pose \(\pi_{i}\)[33]. For each preview, we concatenate the corresponding \(I_{i}\), \(F_{i}\), \(M_{i}\), and \(D_{i}\) into one multi-channel image, which is then fed into a convolutional network with the output spatially divided into a group of local patches \(B_{i}\):
\[B_{i}=\texttt{ConvNet}(I_{i}\oplus F_{i}\oplus M_{i}\oplus D_{i}). \tag{3}\]
Each patch group \(B_{i}\) is additionally labeled by adding a learnable segment embedding [6] regarding one of the three preview categories: reference, probed, and query. This allows the model to distinguish them and utilize information properly. We concatenate all the patches into one sequence, and pass it into a transformer encoder network to obtain a latent representation:
\[h=\texttt{TransformerEncoder}\left(\bigcup_{i}B_{i}\right). \tag{4}\]
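A minimal sketch of how a preview could be packed and encoded (Eqs. 3-4) is given below; the channel layout of the ray map, the patch size, and the embedding width are illustrative assumptions rather than the exact configuration used in SparseGNV.

```python
import torch
import torch.nn as nn

class PreviewEncoder(nn.Module):
    def __init__(self, dim=256, patch=16):
        super().__init__()
        # Assumed channel layout: I (3) + F (3) + M (1) + ray map D (6) = 13 channels.
        self.conv = nn.Conv2d(13, dim, kernel_size=patch, stride=patch)
        self.segment = nn.Embedding(3, dim)   # reference / probed / query labels

    def forward(self, I, F, M, D, segment_id):
        x = torch.cat([I, F, M, D], dim=1)                   # (B, 13, H, W)
        patches = self.conv(x).flatten(2).transpose(1, 2)    # (B, P, dim) patch group B_i
        return patches + self.segment(segment_id)[:, None, :]
```

The patch groups of all previews would then be concatenated into one sequence and passed through a standard transformer encoder to obtain the latent representation \(h\), as in Equation (4).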
The latent representation \(h\) is a set of hidden vectors that encodes both the scene context and the query information. The target novel view can then be generated conditioned on \(h\). Due to the recent success of Vector Quantization (VQ) in image synthesis [24, 29, 30], we represent the target image as VQ codebook tokens \(\mathcal{S}=\{s_{1},s_{2},...,s_{T}\}\). The distribution of \(\mathcal{S}\) is formulated as a probability \(p(\mathcal{S}|h)\) which can be factorized as:
\[p(\mathcal{S}|h)=\prod_{t=1}^{T}p(s_{t}|\mathcal{S}_{<t},h), \tag{5}\]
where \(\mathcal{S}_{<t}\) = \(\{s_{1},s_{2},...,s_{t-1}\}\), and \(p(s_{t}|\mathcal{S}_{<t},h)\) is the probability of the \(t\)-th image token. We use a transformer decoder to model \(p(\mathcal{S}|h)\) by autoregressively estimating \(p(s_{t}|\mathcal{S}_{<t},h)\). In detail, the last layer of the decoder generates hidden states \(z\), and a linear layer \(f(z)\) maps \(z\) into a vector with the dimension of the codebook size. The probability \(p(s_{t}|\mathcal{S}_{<t},h)\) is computed as \(\texttt{softmax}(f(z))\). We train the entire network by minimizing the objective of negative log-likelihood loss on the probability estimation:
\[\mathcal{L}=\sum_{s_{t}\in\mathcal{S}}-\log p(s_{t}|\mathcal{S}_{<t},h). \tag{6}\]
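The token-level objective and the autoregressive sampling used at inference can be sketched as follows; the `decode_step` interface shown is an assumption made only for illustration.

```python
import torch
import torch.nn.functional as F

def token_nll(logits, tokens):
    """Eq. (6): summed negative log-likelihood over the VQ tokens of the target
    view. logits: (T, vocab) outputs of the linear head f(z); tokens: (T,)
    ground-truth token ids produced by the image converter."""
    return F.cross_entropy(logits, tokens, reduction="sum")

@torch.no_grad()
def sample_tokens(decode_step, h, num_tokens):
    """Autoregressive multinomial sampling at inference. `decode_step` is assumed
    to map (token ids generated so far, latent h) -> logits for the next token."""
    seq = []
    for _ in range(num_tokens):
        logits = decode_step(seq, h)              # shape: (vocab,)
        probs = torch.softmax(logits, dim=-1)
        seq.append(int(torch.multinomial(probs, 1)))
    return seq
```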
**Image Converter Module.** The image converter module is structurally based on a convolutional autoencoder network that encodes an image into discrete representation and decodes it back to the image. In SparseGNV, the image converter module plays two roles: 1) encoding a ground truth color image \(I\) into VQ codebook tokens \(\mathcal{S}\) for training the view generation module; 2) decoding a generated \(\mathcal{S}\) back to the image at inference. The architecture of the converter network follows VQ-GAN [24].
### Training & Inference
The training of SparseGNV requires two stages. In the first stage, the neural geometry module and the image converter module are trained separately. Given a scan of indoor scene views \(\mathcal{V}=\{I_{i},\pi_{i}\}\), we randomly sample a set of views as observations to build a neural point cloud, and iterate \(I_{i}\) over \(\mathcal{V}\) to supervise the rendered images for training the MVSNet and the MLPs jointly, as shown in Equation (2). The image converter module is trained by reconstructing a collection of wild images including indoor scene views. In the second stage, we use the neural geometry module to produce the scene context and query, and supervise the view generator module to generate the VQ tokens of novel views obtained by the image converter module, as shown in Equation (6). The inference of SparseGNV is straightforward. Taking a number of observed views, we use the neural geometry module to build a neural point cloud. We then produce the scene context and query with rendered images from the neural point cloud, and pass them to the view generator network. With the output latent representation \(h\), we autoregressively draw out the VQ tokens using multinomial sampling. Lastly, we use the image converter module to reconstruct the VQ tokens into the final image.
Figure 2: An overview of SparseGNV which consists of three modules: 1) Neural geometry module; 2) View generator Module; 3) Image Converter Module.
## 4 Experiments
### Experimental Settings
**Data Preparation.** We use the ScanNet dataset [4], following the original train/test split for training the proposed and baseline methods in the experiments. For each scan in the dataset, we randomly capture sub-scans of 256 consecutive frames. We then downsample the sub-scans to 32 frames (1/8 ratio) as a sample of a view set. For a training sample, we randomly pick 4 out of 32 frames as observed views and the rest as novel views. For a testing sample, we create 3 evaluation groups with observation number \(|\mathcal{O}|\) equal to 2, 4, and 8, respectively. Following the settings of [32], we hold out "scene0708_00", "scene0710_00", "scene0738_00", "scene0758_00", "scene0781_00" as test scenes and randomly select one sample for each scene. The comparison resolution is set to 624 x 468 after scaling and cropping dark borders. In the experiments, we assume accurate camera poses and depths so that the ground truths are provided to all the comparison methods for training and testing if necessary. We also test our trained model on the Replica dataset [35] to demonstrate the generalization ability. Due to the page limit, we include more details in the supplementary material.
**Evaluation Metrics.** We compute the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM) [38] and the learned perceptual image patch similarity (LPIPS) [43]. We report the averaged metric results of comparing predicted novel views to ground truth images.
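A typical way to compute these metrics is sketched below, assuming the scikit-image (0.19+) and lpips packages; whether the paper uses these exact implementations is not stated.

```python
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")

def evaluate(pred, gt):
    """pred, gt: numpy float arrays in [0, 1] with shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
    # lpips expects NCHW tensors scaled to [-1, 1]
    to_t = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None].float() * 2 - 1
    lp = lpips_fn(to_t(pred), to_t(gt)).item()
    return psnr, ssim, lp
```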
**Method Setting & Baselines.** Given a sample of \(|\mathcal{O}|\) input views and 32 - \(|\mathcal{O}|\) novel views, we use the \(|\mathcal{O}|\) input views to build a neural point cloud for the rendered color and mask images of all 32 viewpoints. The scene context is formed by reference previews produced from the \(|\mathcal{O}|\) input views and probed previews from the 32-\(|\mathcal{O}|\)-1 novel views. The query is produced from the remaining 1 novel view. The neural geometry module follows the Point-NeRF settings [39]. The image converter module follows the settings in [30]. The network of the view generator module includes an encoder and a decoder. The encoder architecture mainly follows [33] with slight modifications. The decoder is a stack of 6 vanilla transformer decoder layers. We train the model with a learning rate of 1e-4 and a batch size of 16 using the Adam optimizer. We set up five baseline methods for comparison: Point-NeRF [39], PixelSynth [31], IBRNet [37], ViewFormer [14], and NeRF with dense depth priors (DDP-NeRF) [32]. For Point-NeRF, we train its MVSNet and ray marching MLPs across scenes, and test it in a feed-forward manner. For PixelSynth, we take the pre-trained model provided by the authors, and predict novel views by choosing the nearest observation as the start image to outpaint its reprojection of the novel pose. For IBRNet and ViewFormer, we train and test models with the same setting as our method (IBRNet is tested in a feed-forward manner). For DDP-NeRF, we optimize the model with ground truth depths and camera poses for each scene.
### Primary Results & Analyses
We compare the quantitative results on 3 groups of sparse input views with observation number \(|\mathcal{O}|\) = 2, 4, 8, respectively. As the results presented in Table 1 show, our method outperforms all the baselines on PSNR, SSIM, and LPIPS. Note that, without "Geometry" (i.e., rendered color images and masks from the neural geometry module), the performance of our method drops significantly. This proves the importance of 3D structure for generating high-quality novel views. We show the generations of novel views with ground truths in Figures 3, 4, and 5 (more results in the supplementary). The results of Point-NeRF are often corrupted and scattered, which is caused by the incomplete underlying point clouds. Without generalization ability, Point-NeRF can hardly perform well with sparse inputs. PixelSynth produces distorted views, as the poses of the novel views are largely shifted from the referenced observations; therefore, the reasoned 3D surfaces cannot be projected correctly, which causes the distortion. The results of IBRNet are often blurred and show black areas where rays hit no clue from the sparse observations. ViewFormer generates vague shapes but lacks details as it only depends on compressed codes where information is lost. DDP-NeRF performs the best among all the baselines, but due to the large sparsity, its renderings unavoidably overfit to the input views, which causes blurs even with depth priors. Our method generally outperforms the others in terms of fidelity and details. With an increasing observation number, our method generates novel views of better visual quality overall (as confirmed by the metrics), sufficiently leveraging the given information and demonstrating strong applicability.
### View Consistency
To demonstrate the view consistency of SparseGNV, we show continuous novel view generations between two observations in Figure 6 (first two rows). The quality and consistency are fairly maintained without significant perturbation. The neural geometry module provides a strong scene context of 3D structure, which ensures a stable generation ability by the downstream modules. We further show a sequence of generated novel views that moves away from the two observations in Figure 6 (last two rows). The office desk is fairly maintained until it moves out of view, as there are enough clues to its shape and appearance. Unfortunately, the cabinet appears with only its surface, and the books on top of the cabinet are completely missing from the generation. Since there is no clue of their occurrence, the model tends to generate a white wall to maintain consistency.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c} \hline & \multicolumn{3}{c|}{\(|\mathcal{O}|=2\)} & \multicolumn{3}{c|}{\(|\mathcal{O}|=4\)} & \multicolumn{3}{c}{\(|\mathcal{O}|=8\)} \\ \hline Method & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\ \hline Point-NeRF [39] & 9.606 & 0.375 & 0.689 & 11.004 & 0.364 & 0.680 & 13.495 & 0.435 & 0.617 \\ PixelSynth [31] & 11.503 & 0.412 & 0.750 & 12.261 & 0.443 & 0.716 & 12.880 & 0.459 & 0.684 \\ IBRNet [37] & 11.739 & 0.400 & 0.725 & 12.823 & 0.450 & 0.717 & 14.099 & 0.524 & 0.702 \\ ViewFormer [14] & 14.365 & 0.541 & 0.674 & 14.927 & 0.549 & 0.649 & 15.420 & 0.553 & 0.633 \\ DDP-NeRF [32] & 14.281 & 0.451 & 0.712 & 15.799 & 0.495 & 0.630 & 17.491 & 0.567 & 0.554 \\ Ours w/o “Geometry” & 13.157 & 0.426 & 0.699 & 15.124 & 0.553 & 0.617 & 16.010 & 0.537 & 0.582 \\ Ours & **15.248** & **0.555** & **0.530** & **16.240** & **0.563** & **0.495** & **17.894** & **0.585** & **0.451** \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative results on the ScanNet test scenes.
Figure 3: Synthesized novel views given input views of \(|\mathcal{O}|=2\).
### Time & Memory & Model Size
We conduct experiments on NVIDIA V100 GPUs. The inference speed of the trained model is 0.83s per batch of 24 images. The training of the neural geometry module takes about 1 day using 1 GPU (batch size 1, memory \(\leq\) 20G depending on scene size). The training of the view generator module takes about 1 week using 2 GPUs (batch size 16, 20.9G). The VQ decoder uses the pre-trained checkpoint from [30], thus no re-training is required. The total parameter count of the three modules is: 0.724M (Point-NeRF) + 88M (convolution and transformer network) + 76M (VQ decoder). Please note that the training needs to be performed only once, and the trained framework can generalize to unseen indoor scenes without further fine-tuning.
## 5 Conclusions & Limitations
In this paper, we study the problem of novel view synthesis of indoor scenes given sparse input views. To generate both photorealistic and consistent novel views, we propose SparseGNV: a learning framework that incorporates 3D structure into image generative models. The framework is designed with three network-based modules. The neural geometry module builds a 3D neural point cloud to produce rendered images from arbitrary viewpoints. The view generator module takes the rendered images to form the scene context and query, which are fed into a convolution and transformer-based network to generate the target novel view represented as VQ codebook tokens. The image converter module finally reconstructs the tokens back to the view image. SparseGNV is trained across scenes to learn priors, and infers novel views of unseen scenes in a feed-forward manner. The evaluation results on real-world and synthetic indoor scenes demonstrate that the method outperforms recent baselines.
**Limitations.** SparseGNV synthesizes novel views using an image generation model based on the VQ codebook. The output is therefore less stable compared to the volume rendering-based methods. For example, the object details and lighting could be altered. The framework also requires camera poses and depths which can be unavailable when the observed views are extremely sparse.
Figure 4: Synthesized novel views given input views of \(|\mathcal{O}|\) = 4.
Figure 5: Synthesized novel views given input views of \(|\mathcal{O}|=8\).
Figure 6: A continuous generation between only two observations (red box) and moving away. The 1st and 3rd rows are the ground truth. The 2nd and 4th rows are the generated novel views of “in between” and “moving away”, respectively. |
2304.10671 | Point-supervised Single-cell Segmentation via Collaborative Knowledge
Sharing | Despite their superior performance, deep-learning methods often suffer from
the disadvantage of needing large-scale well-annotated training data. In
response, recent literature has seen a proliferation of efforts aimed at
reducing the annotation burden. This paper focuses on a weakly-supervised
training setting for single-cell segmentation models, where the only available
training label is the rough locations of individual cells. The specific problem
is of practical interest due to the widely available nuclei counter-stain data
in biomedical literature, from which the cell locations can be derived
programmatically. Of more general interest is a proposed self-learning method
called collaborative knowledge sharing, which is related to but distinct from
the more well-known consistency learning methods. This strategy achieves
self-learning by sharing knowledge between a principal model and a very
light-weight collaborator model. Importantly, the two models are entirely
different in their architectures, capacities, and model outputs: In our case,
the principal model approaches the segmentation problem from an
object-detection perspective, whereas the collaborator model a sematic
segmentation perspective. We assessed the effectiveness of this strategy by
conducting experiments on LIVECell, a large single-cell segmentation dataset of
bright-field images, and on A431 dataset, a fluorescence image dataset in which
the location labels are generated automatically from nuclei counter-stain data.
Implementing code is available at https://github.com/jiyuuchc/lacss | Ji Yu | 2023-04-20T23:22:41Z | http://arxiv.org/abs/2304.10671v2 | # Point-supervised Single-cell Segmentation via Collaborative Knowledge Sharing
###### Abstract
Despite their superior performance, deep-learning methods often suffer from the disadvantage of needing large-scale well-annotated training data. In response, recent literature has seen a proliferation of efforts aimed at reducing the annotation burden. This paper focuses on a weakly-supervised training setting for single-cell segmentation models, where the only available training label is the rough locations of individual cells. The specific problem is of practical interest due to the widely available nuclei counter-stain data in biomedical literature, from which the cell locations can be derived programmatically. Of more general interest is a proposed self-learning method called collaborative knowledge sharing, which is related to but distinct from the more well-known consistency learning methods. This strategy achieves self-learning by sharing knowledge between a principal model and a very light-weight collaborator model. Importantly, the two models are entirely different in their architectures, capacities, and model outputs: In our case, the principal model approaches the segmentation problem from an object-detection perspective, whereas the collaborator model approaches it from a semantic segmentation perspective. We assessed the effectiveness of this strategy by conducting experiments on LIVECell, a large single-cell segmentation dataset of bright-field images, and on the A431 dataset, a fluorescence image dataset in which the location labels are generated automatically from nuclei counter-stain data. Implementation code is available at [https://github.com/jjyuuchc/lacs_jax](https://github.com/jjyuuchc/lacs_jax)
instance segmentation, single cell segmentation, knowledge distillation, weakly-supervised learning.
## I Introduction
Automated cell segmentation from microscopy images is the first step in the pipeline of single-cell analysis. In recent years, segmentation methods based on deep learning have demonstrated unparalleled performance and are increasingly being adopted by biologists as the method of choice [1, 2, 3]. Currently, there are two general types of single-cell segmentation models in the literature. The first type treats the problem as a pixel-level classification/regression task [2, 3, 4, 5, 6, 7, 8, 9], which has its roots in the semantic segmentation field. In the simplest case, the model classifies each pixel of the input microscopy image as either the foreground, the background, or the cell border. A simple post-processing step can then be employed to create the segmentation masks of individual cells. Unfortunately, cell border classification often suffers from high error rates. More recent models in this category typically perform more sophisticated pixel mappings, e.g., to the Euclidean distance to the nearest background pixel. Nevertheless, the general idea is to convert a microscopy image to an intermediate pseudo-image that is more algorithmically manageable, and to use a hand-crafted post-processing algorithm to convert the pseudo-image into instance segmentations of single cells. The second general approach is to use an object detection/segmentation model, e.g. MaskRCNN [10]. These models have an object detection branch that is trained to predict the bounding-boxes of the objects (i.e., cells) in the image, as well as a relatively light-weight segmentation branch to produce the exact segmentation mask within each bounding-box. Even though these models were not designed specifically with biomedical data in mind, their applications to biomedical imaging data [11, 1] appear to be straightforward.
While both types of models perform well for the single-cell segmentation task, they also suffer from the disadvantage of needing large-scale training data, which can be very expensive to produce. In response, recent studies have focused on two avenues in attempts to mitigate this problem. The first line of research involves domain adaptation [12, 13, 14, 15, 16, 17]. The goal is to adapt a target domain dataset to a labeled source domain dataset. This allows for producing new models on datasets that are unlabeled or mostly unlabeled, assuming a well-annotated source dataset is available. A second line of research aims to train models with weak supervision [18, 19, 20, 21, 11], using approximate labels instead of full segmentation masks, which would significantly reduce labeling cost.
This paper focuses on a specific weakly-supervised training setting where the only available label is the rough locations of individual cells. This specific setting is of practical relevance, particularly in the fluorescence imaging domain, due to the common practice of acquiring a nuclei counter-stain image when collecting cellular microscopy data, in which case the point labels can usually be computed from the nuclei image directly without human input. Many authors have proposed methods in attempts to utilize point labels [22, 23, 24, 25], but a robust and generalizable method for training single-cell segmentation models has yet to emerge. Several of the published methods [22, 23, 24] rely on the assumption that there is color consistency within the instance, which is rarely true for cellular imaging data; and methods [18] that do not assume color consistency produce image-level segmentations instead of instance-level single-cell segmentations.
Here we propose a new method to train a point-supervised instance segmentation model, \(\mathcal{X}_{P}\), based on a general architecture outlined in Fig. 1:
\[\mathcal{X}_{P}\colon\boldsymbol{x};\boldsymbol{\theta}_{p}\mapsto\{\boldsymbol{r}_{i},\boldsymbol{m}_{i}\mid i=1\,...\,n\,\} \tag{1}\]
where \(\boldsymbol{\theta}_{p}\) denotes the model weights, \(\boldsymbol{x}\in\mathbb{R}^{H\times W\times C}\) represents the input image, \(\boldsymbol{r}_{i}=\{y_{i},x_{i}\}\) is the prediction of a single cell's location by the detection head (Fig. 1), and \(\boldsymbol{m}_{i}\in[0,1]^{H\times W}\) is the probability map of the cell's segmentation mask, outputted by the segmentation head of the model (Fig. 1).
Since training the detector branch of the model is straightforward with point supervision, the main challenge is to find a way to train the segmentation branch in a self-supervised manner. One of the contributions of this paper is to demonstrate a novel self-supervised training technique we call collaborative knowledge sharing (CKS). In this training scheme, we utilize a pair of what we call principal/collaborator models. The principal model is defined as (1), which can be viewed as a subtype of object detection model. The collaborator model, on the other hand, performs the instance segmentation task from a pixel-level classification perspective:
\[\mathcal{X}_{C}\colon\boldsymbol{x};\boldsymbol{\theta}_{c}\mapsto\{ \boldsymbol{M}_{c},\boldsymbol{B}_{c}\} \tag{2}\]
where \(\boldsymbol{M}_{c}\in[0,1]^{H\times W}\) is the prediction of the binary segmentation for the whole input image, i.e., for all cells combined, and \(\boldsymbol{B}_{c}\in[0,1]^{H\times W}\) represents the predictions of cell-border pixels. In addition, the collaborator model is of significantly lower capacity than the principal model. Self-learning is achieved by training the two models to produce consistent outputs from the same input.
As discussed in the beginning of this section, both the object-detection type and pixel-mapping type models can be effective in performing the single-cell segmentation task. However, it was not known previously that they can be combined in this way to form an effective strategy for self-learning. This training scheme can be viewed as a subtype of the consistency regularization method [26], which has gained popularity recently in both semi-supervised [27] and unsupervised [28] training settings. The goal of consistency regularization is to minimize the differences between outputs of multiple computational paths despite added noise. The noise can be at the sample level by data augmentation, or at the computational path level via stochastic operations, e.g., dropout. However, different paths typically have similar structure and computational complexity. The key distinguishing feature of CKS is that the principal model and the collaborator model have entirely different architectures. In addition, the collaborator model is not intended to become "competent" at its task. Instead, we adopted the principal-collaborator metaphor to highlight that the key contribution of the collaborator model is to bring in a different perspective to the problem at hand, resembling a human collaborator in the academic setting. In addition, here we also report characteristics of CKS that are unexpected for traditional consistency regularization. For example, we found that it is preferable to keep the collaborator model at a significantly lower capacity in comparison to the principal model, suggesting that there are divergent mechanisms at play between CKS and consistency regularization.
## 2 Related Work
### Weakly-supervised instance segmentation
Weakly-supervised learning reduces the annotation cost by employing incomplete or noisy labels to train models. In the segmentation literature, many weak labels have been proposed in place of segmentation masks, e.g., scribbles [29, 30], bounding boxes [20, 31], and points [23, 32]. For instance-level segmentations, bounding boxes in particular fit neatly into the training pipeline of the object detection/segmentation models. In addition, [31] shows that a bounding box can be viewed as a noisy segmentation mask, and recursive training, i.e., using earlier training results as the updated labels for later training, is an effective denoising strategy. A similar strategy was employed in [18] with point supervision. The authors obtained good results with image segmentation after combining recursive training with co-training of model pairs, although the method was not generalized to instance segmentations.
An alternative line of research focuses on performing training on simple auxiliary tasks [25, 33, 34, 35], such as image classification, and tries to obtain segmentation results by examining the model structure. For example, the class activation mapping (CAM) [36] is a popular proxy for segmentation masks. This strategy allows very simple labeling, e.g., image-level labeling. However, the segmentation results are usually approximate.
### Point supervision.
In the segmentation literature, the term point supervision has been used to refer to two distinct types of labels. The first type requires randomness in location and should in fact be viewed as an extremely under-sampled segmentation mask [19, 32]. For this reason, this type of point label is usually directly used as a replacement for segmentation masks without additional changes in the model or the training pipeline. In addition, these works employed additional bounding-box labels to learn object detection in order to achieve instance segmentation.
Most relevant to our work are models trained only on non-randomized point labels. For example, [22, 23] both proposed methods to train nuclei segmentation models by deriving
Figure 1: Outline of the principal model architecture.
pseudo-segmentation labels from point labels. However, the methods rely on color consistency within the nucleus, making it difficult to apply them to other cell segmentation problems. In [18] the authors proposed a method based on consistency regularization and recursive training. But the model produced image-level segmentation instead of single-cell segmentation. [25] trained a model using only point labels and used CAM to approximate cell segmentation. We ourselves have published a method [21] to train a single-cell segmentation model by combining point labels with an image-level segmentation label, which is a direct precursor to the work discussed here.
Finally, point labels have also been used for delineation purposes during inference [37].
### Consistency regularization and collaborative learning
Our CKS training method should be viewed as a subtype of the consistency regularization method [26, 27]. These methods use consistency loss, which can be defined on both labeled and unlabeled inputs, to regularize model training. A typical setup involves the co-training of a pair of teacher/student models with the same architecture. CKS deviates from this typical approach by using two models of intentionally different architectures. In addition, it is more natural to view the entire collaborator model as a deep-regularization term, which controls the search space of the principal model. Among the many variants of consistency regularization methods, the one we resemble the most is probably cross-task consistency learning [38], which has found various applications in biomedical segmentation [39, 40, 41]. This method trains a multi-task model with both shared and divergent computational paths and is based on inference path invariance. The underlying logic is that the predictions made for different tasks out of one input image should be consistent. Like CKS, cross-task consistency compares divergent output types. Different from CKS, different computational paths in cross-task consistency learning are still relatively balanced and usually share weights.
The concept of knowledge sharing has its roots in knowledge distillation [42], which was originally designed to transfer knowledge from a large teacher model to a smaller student model. The more recent extension of this concept to online model training [43] is a form of collaborative learning, for which there is no clear distinction between the teacher and the student. In that sense, CKS is closer to the original form of knowledge distillation, albeit for an entirely different purpose.
## 3 Method
### Principal model
The architecture of our principal model (Fig. 1) resembles that of an object detection/segmentation model, except the detection branch produces predictions of cell locations instead of bounding boxes. We chose ConvNeXt [44] as the CNN encoder backbone due to its good performance in various machine vision tasks and use a feature pyramid network (FPN) to integrate the ConvNeXt output, forming the multi-scale image features as inputs for the two branches of the decoders. Different from standard instance segmentation models, our model does not expand the range of feature scales at the FPN stage. We reasoned that this is not necessary because the typical instance sizes in a single-cell segmentation problem would not be very large.
The detection head is a standard multi-layer CNN, which outputs \(\mathbf{D}\in[0,1]^{H/s\times W/s}\) representing the probability of finding a cell at any of the \(\frac{H}{s}\times\frac{W}{s}\) grid locations, as well as \(\mathbf{D_{s}}\in\mathbb{R}^{H/s\times W/s}\), representing the relative offset of the exact cell location within each grid. Here \(s\) is the scaling factor of the feature input relative to the source image, and is either 4, 8, 16 or 32 in our model setup. The model performs detections on all scales of the multi-scale feature map, although the detectors share weights at different scales. Note that unlike the standard bounding-box-based object detection scheme, in which different scales are used for the detections of instances of different sizes, here we detect all instances at all scales, simply because we do not have any labeling information regarding the instance sizes. At inference time, redundant detections are removed by non-max-suppression, using \(\mathbf{D}\) for ranking.
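To make the inference-time decoding concrete, the following is a minimal sketch (not the actual implementation) of how the grid probabilities \(\mathbf{D}\) and offsets \(\mathbf{D_{s}}\) at one scale could be turned into cell-centre predictions with a simple distance-based non-max-suppression; the (dy, dx) offset convention, the probability threshold and the suppression radius are illustrative assumptions.

```python
import torch

def decode_detections(prob, offset, scale, threshold=0.5, min_dist=6.0):
    """Convert grid probabilities and per-grid offsets into cell centres.

    prob:   (H/s, W/s) tensor of cell probabilities (D in the text).
    offset: (H/s, W/s, 2) tensor of (dy, dx) offsets within each grid cell.
    scale:  feature-map stride s of this level (4, 8, 16 or 32).
    """
    ys, xs = torch.nonzero(prob > threshold, as_tuple=True)
    scores = prob[ys, xs]
    # absolute (y, x) cell-centre positions in image coordinates
    centres = torch.stack([
        (ys.float() + offset[ys, xs, 0]) * scale,
        (xs.float() + offset[ys, xs, 1]) * scale,
    ], dim=1)

    # greedy distance-based non-max-suppression, ranked by the D score
    keep = []
    for idx in torch.argsort(scores, descending=True).tolist():
        if all(torch.dist(centres[idx], centres[k]) > min_dist for k in keep):
            keep.append(idx)
    return centres[keep], scores[keep]
```

In a multi-scale setup, this routine would simply be applied to every feature level before the final suppression step.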
Training the detector branch is straightforward, since the ground truth values, \(\mathbf{D^{GT}}\) and \(\mathbf{D_{s}^{GT}}\), can be computed from the point labels. We use a focal binary cross-entropy loss for \(\mathbf{D}\) and an \(L2\) loss for \(\mathbf{D_{s}}\), i.e.:
\[\mathcal{L}_{det}=\sum_{s}\mathcal{L}_{fce}(\mathbf{D},\mathbf{D^{GT}})+\|\mathbf{D_{s}}- \mathbf{D_{s}^{GT}}\| \tag{3}\]
where \(\mathcal{L}_{fce}(\cdot)\) is the focal binary cross-entropy loss function. We also perform a minor label smoothing by considering grid locations within a small threshold distance to the true location as positive, even if the cell location is not exactly within the grid.
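As a rough illustration, the detection loss in (3) could be implemented as below; this is a sketch rather than the authors' code, and the focal exponent as well as the restriction of the offset loss to positive grid cells are our assumptions.

```python
import torch
import torch.nn.functional as F

def focal_bce(pred, target, gamma=2.0):
    """Focal binary cross-entropy between probability maps (gamma is illustrative)."""
    bce = F.binary_cross_entropy(pred, target, reduction="none")
    p_t = target * pred + (1.0 - target) * (1.0 - pred)   # probability of the true class
    return ((1.0 - p_t) ** gamma * bce).mean()

def detection_loss(probs, offsets, gt_probs, gt_offsets):
    """Eq. (3): focal BCE on D plus an L2 loss on D_s, summed over feature scales."""
    loss = 0.0
    for D, Ds, D_gt, Ds_gt in zip(probs, offsets, gt_probs, gt_offsets):
        loss = loss + focal_bce(D, D_gt)
        pos = D_gt > 0.5          # supervise offsets only at positive grid cells (assumption)
        if pos.any():
            loss = loss + F.mse_loss(Ds[pos], Ds_gt[pos])
    return loss
```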
The segmentation head computes \(\{\mathbf{m_{i}}\mid i=1\,...\,n\,\}\) directly, using the image features at the highest resolution (s=4) as the input. Here \(n\) is the number of cells. Therefore, the head performs \(n\) parallel segmentation computations for each input image. For the datasets we worked with in this paper, the \(n\) value can be as large as 3000-4000. Therefore, computational efficiency is important here. Since we do not know the instance sizes, we cannot use algorithms such as Roi-Align to define the regions of segmentation analyses. On the other hand, performing segmentation on the whole image is costly and unnecessary. Instead, we use a model hyperparameter to define the maximum area surrounding the cell location for segmentation computation. The \(\mathbf{m_{i}}\) values outside the region are assumed to be 0. Because the analyzed area almost always encompasses more than one cell, we need to incorporate the cell location information into the feature inputs to break the translation invariance of the CNN, and thus output the segmentation for a specific cell. The model does this by creating a position encoding tensor derived from the feature vector at the location of the cell, feeding it through a multi-layer perceptron (MLP) and reshaping and resizing the resulting vector to match
the size of the segmentation window. The position encoding tensor is concatenated to the image feature tensor to form the full input for segmentation prediction (Fig. 2).
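A simplified, single-cell version of this segmentation head is sketched below; the layer widths, the encoding channel count and the window handling are placeholders rather than the paper's exact configuration, and boundary clipping is omitted for brevity.

```python
import torch
import torch.nn as nn

class SegmentationHead(nn.Module):
    """Predict one cell's mask inside a fixed window around its location."""

    def __init__(self, feat_ch=64, win=96, enc_ch=8):
        super().__init__()
        self.win, self.enc_ch = win, enc_ch
        # MLP that turns the feature vector at the cell location into a
        # positional-encoding map of shape (enc_ch, win/4, win/4)
        self.mlp = nn.Sequential(
            nn.Linear(feat_ch, 256), nn.ReLU(),
            nn.Linear(256, enc_ch * (win // 4) ** 2),
        )
        self.conv = nn.Sequential(
            nn.Conv2d(feat_ch + enc_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 1),
        )

    def forward(self, feat, cy, cx):
        # feat: (C, H/4, W/4) stride-4 feature map; (cy, cx) is the cell location in it
        half = self.win // 8                                     # half window in feature pixels
        patch = feat[:, cy - half:cy + half, cx - half:cx + half]
        pos = self.mlp(feat[:, cy, cx])                          # encode which cell to segment
        pos = pos.view(self.enc_ch, self.win // 4, self.win // 4)
        x = torch.cat([patch, pos], dim=0).unsqueeze(0)          # (1, C+enc_ch, win/4, win/4)
        return torch.sigmoid(self.conv(x))[0, 0]                 # per-pixel mask probability
```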
It is easy to see that if the training data were labeled with segmentation masks, the principal model could easily be trained in a fully supervised manner by applying a cross-entropy loss to the segmentation output.
### Collaborator model
Our collaborator model is a very light-weight CNN performing the pixel level mappings. To facilitate experimentation, here we use two separate nets to compute \(\mathbf{M_{c}}\) and \(\mathbf{B_{c}}\) with no shared weights, although this is not necessary in real applications. We use a U-Net-like CNN to compute \(\mathbf{M_{c}}\), following other works that employed models of this type, although our default net is much shallower (two down-scale operations total) and narrower (16, 32, and 64 feature channels at the three respective scales). The default net for computing \(\mathbf{B_{c}}\) is even simpler: it is a three-layer CNN with 32 channels each. The design choices reflect our belief that the contribution of the collaborator model is not to be _good_ at the segmentation task, but to offer a different perspective to the problem.
In addition, the collaborator model is discarded after training and not used during inference.
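For orientation, a collaborator of the described capacity might look roughly like the sketch below; only the depths and channel counts (16/32/64 for the shallow U-Net, three 32-channel layers for the border net) follow the text, while the remaining layer choices are our guesses.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class ForegroundNet(nn.Module):
    """Shallow U-Net (two down-scales, 16/32/64 channels) predicting M_c."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = conv_block(in_ch, 16), conv_block(16, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up2, self.dec2 = nn.ConvTranspose2d(64, 32, 2, stride=2), conv_block(64, 32)
        self.up1, self.dec1 = nn.ConvTranspose2d(32, 16, 2, stride=2), conv_block(32, 16)
        self.out = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.out(d1))

class BorderNet(nn.Module):
    """Three-layer CNN with 32 channels predicting the border map B_c."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)
```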
### Knowledge Sharing - Segmentation
To entice the collaborator model to learn segmentation from the principal model, we first need to construct an image-level segmentation prediction from the instance-level outputs. We do this by finding at each pixel the maximum of the instance segmentation logits, at the same time shifting the prediction by an offset representing the prior knowledge:
\[\mathbf{M_{p}}=\max_{i}\mathbf{\sigma}[\ell(\mathbf{s_{i}})+\pi\ell_{prior}(d,y_{i},x_{i})] \tag{4}\]
Here \(\mathbf{\sigma}(\cdot)\) is the sigmoid function, \(\ell(\mathbf{s_{i}})\) represents the logits of \(\mathbf{s_{i}}\), \(\ell_{prior}\) is a 2D Gaussian shape centered at \(y_{i},x_{i}\) with a variance \(d^{2}\), and \(\pi\) is a small scaling factor reflecting the confidence of the prior. This offset term slightly increases the logits near the cell center. This is particularly helpful during the early stage of the training, when the predictions of \(\mathbf{s_{i}}\) are noisy and \(\ell_{prior}\) is often the main contributor to \(\mathbf{M_{p}}\), allowing a more stable training of the collaborator model. In later stages of the training, the \(\ell(\mathbf{s_{i}})\) values are larger and the effect of \(\ell_{prior}\) becomes minimal.
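In code, the construction in (4) might look roughly as follows (a sketch; `instance_logits` are the per-cell logit maps \(\ell(\mathbf{s_{i}})\) from the principal model, and the sigmoid and the max are interchanged, which is equivalent since the sigmoid is monotone).

```python
import torch

def gaussian_prior(h, w, cy, cx, d):
    """2D Gaussian bump with variance d**2 centred at the cell location (cy, cx)."""
    ys = torch.arange(h, dtype=torch.float32).view(-1, 1)
    xs = torch.arange(w, dtype=torch.float32).view(1, -1)
    return torch.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * d ** 2))

def build_Mp(instance_logits, centres, d, pi=2.0):
    """Eq. (4): image-level pseudo-foreground M_p from per-instance logits."""
    h, w = instance_logits[0].shape
    shifted = [logit + pi * gaussian_prior(h, w, cy, cx, d)
               for logit, (cy, cx) in zip(instance_logits, centres)]
    # pixel-wise max over instances, then squash to a probability
    return torch.sigmoid(torch.stack(shifted).max(dim=0).values)
```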
We train the collaborator model to follow \(\mathbf{M_{p}}\):
\[\mathcal{L}_{M}=\frac{1}{H\times W}\left[(\mathbf{1}-\mathbf{M_{p}})\cdot\mathbf{M_{c}}+ \mathbf{M_{p}}\cdot(\mathbf{1}-\mathbf{M_{c}})\right] \tag{5}\]
While not a commonly used loss function, its effect can be intuitively seen by examining its gradient:
\[\frac{\partial\mathcal{L}_{M}}{\partial\mathbf{M_{c}}}=\frac{1-2\mathbf{M_{p}}}{H \times W} \tag{6}\]
Thus \(\mathbf{M_{c}}\) will move towards 1 or 0 depending on whether \(\mathbf{M_{p}}\) is larger or smaller than 0.5.
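A direct translation of (5) is shown below (a sketch; detaching \(\mathbf{M_{p}}\) reflects our reading that this loss only updates the collaborator).

```python
def collaborator_seg_loss(Mc, Mp):
    """Eq. (5): push M_c toward 1 where M_p > 0.5 and toward 0 elsewhere."""
    Mp = Mp.detach()   # M_p acts as a fixed target for the collaborator (assumption)
    return ((1.0 - Mp) * Mc + Mp * (1.0 - Mc)).mean()
```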
Conversely, to transfer knowledge from the collaborator to the principal model, we use:
\[\mathcal{L}_{S}=\frac{1}{K}\Biggl\{\sum_{i}\Bigl[(\mathbf{1}-\mathbf{M_{c}})\cdot\mathbf{s_{i}}+\mathbf{M_{c}}\cdot(\mathbf{1}-\mathbf{s_{i}})\Bigr]+\sum_{i}\mathbf{s_{i}}\cdot\sum_{j\neq i}\mathbf{s_{j}}\Biggr\} \tag{7}\]
The first part of \(\mathcal{L}_{S}\) resembles (5), but is computed at the instance level, and has the effect of driving \(\mathbf{s_{i}}\) toward \(\mathbf{M_{c}}\). This term alone, however, will lead to over-segmentation, because \(\mathbf{M_{c}}\) is the segmentation of all cells. To compensate, the second half of \(\mathcal{L}_{S}\) incurs a penalty whenever two instances assign non-zero values at the same pixel. \(K\) is a normalization factor.
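Equation (7) can be sketched as below; treating \(\mathbf{M_{c}}\) as a fixed target (detached) and normalising by the number of pixels are our assumptions.

```python
import torch

def principal_seg_loss(instance_probs, Mc, K=None):
    """Eq. (7): drive each s_i toward M_c while penalising overlap between instances."""
    s = torch.stack(list(instance_probs))        # (n, H, W) per-cell probability maps
    Mc = Mc.detach()
    follow = ((1.0 - Mc) * s + Mc * (1.0 - s)).sum()
    total = s.sum(dim=0)
    overlap = (s * (total - s)).sum()            # equals sum_i s_i * sum_{j != i} s_j
    K = K if K is not None else s.numel()
    return (follow + overlap) / K
```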
### Knowledge Sharing - Cell border
Similarly to the last section, we first need to reconstruct an image-level prediction from the instance-level segmentations:
\[\mathbf{B_{p}}=\tanh\sum_{i}\varphi(\mathbf{s_{i}}) \tag{8}\]
where \(\varphi(\cdot)\) is the Sobel filter, which converts the segmentation foreground into segmentation edges, and the hyperbolic tangent function (tanh) is applied to ensure that the results remain bounded between 0 and 1. We use an L2 loss to train both the collaborator model and the principal model:
\[\mathcal{L}_{B}=\left\|\mathbf{B_{c}}-\mathbf{B_{p}}\right\| \tag{9}\]
Fig. 2: Design of the segmentation head of the principal model. The inputs are the image feature map (from the CNN encoder) and a cell location (from the ground truth label during training and the detection head during inference). The location encoding is computed from the feature vector at the cell location and is concatenated to the feature map to define the specific instance that needs to be segmented. MLP: multi-layer perceptron. Conv: a 2D convolution layer with activation.
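Equations (8) and (9) could be realised roughly as follows; using the magnitude of a horizontal/vertical Sobel pair and a mean-squared stand-in for the L2 norm are concrete choices made for this sketch.

```python
import torch
import torch.nn.functional as F

_SOBEL_X = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
_SOBEL_Y = _SOBEL_X.t()

def sobel_edges(mask):
    """Edge magnitude of a single-channel map via Sobel filtering."""
    m = mask.view(1, 1, *mask.shape)
    gx = F.conv2d(m, _SOBEL_X.view(1, 1, 3, 3), padding=1)
    gy = F.conv2d(m, _SOBEL_Y.view(1, 1, 3, 3), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2)[0, 0]

def build_Bp(instance_probs):
    """Eq. (8): image-level border map from per-instance segmentations."""
    edges = sum(sobel_edges(s) for s in instance_probs)
    return torch.tanh(edges)                      # keeps values in [0, 1)

def border_loss(Bc, Bp):
    """Eq. (9): squared-error loss between collaborator and principal border maps."""
    return ((Bc - Bp) ** 2).mean()
```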
### Preventing Model Collapse
Model collapse is a common issue in self-supervised learning schemes, which we need to prevent. The most likely collapse mode in our setting is that all segmentation predictions produce zero everywhere. We incorporate a simple penalty term to prevent this from happening:
\[\mathcal{L}_{MC}=\frac{\delta}{n}\sum_{i}(\overline{s_{i}})^{-1} \tag{10}\]
where \(\overline{s_{i}}\) denotes the average of \(s_{i}\), and \(\delta\) is a small scaling factor to ensure that the term is usually much smaller than the other losses, except when the predictions for all pixels move towards zero.
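A sketch of this penalty (the value of \(\delta\) and the numerical guard are illustrative):

```python
import torch

def collapse_penalty(instance_probs, delta=1e-3, eps=1e-6):
    """Eq. (10): penalise predictions whose mean activation approaches zero."""
    means = torch.stack([s.mean() for s in instance_probs])
    return delta * (1.0 / (means + eps)).mean()
```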
### Model Updating
Both the principal model and the collaborator model are updated using the standard gradient descent method every batch. But the two models learn with different loss functions. For the principal model:
\[\mathcal{L}_{pr}=\lambda_{det}\mathcal{L}_{det}\,+\lambda_{S}\mathcal{L}_{S}+\lambda_{B}\mathcal{L}_{B}+\lambda_{MC}\mathcal{L}_{MC} \tag{11}\]
And for the collaborator model:
\[\mathcal{L}_{co}=\lambda_{M}\mathcal{L}_{M}\,+\lambda_{B}\mathcal{L}_{B} \tag{12}\]
Here various \(\lambda\) values are relative loss weights. In this study we do not tune these values and set all \(\lambda\) weights to be 1.
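Putting the pieces together, one batch of the alternating update could look like the sketch below; it reuses the loss sketches above, the principal model's output interface and the `points_to_grids` helper are hypothetical, and `d=20` is just one of the values used in the experiments.

```python
import torch

def training_step(image, gt_points, principal, collaborator, opt_p, opt_c, lam):
    """One batch update of both models with their respective losses (Eqs. 11-12)."""
    # --- principal model update (Eq. 11) ---
    D, Ds, inst_logits, inst_probs = principal(image, gt_points)   # hypothetical interface
    gt_D, gt_Ds = points_to_grids(gt_points, image.shape)          # hypothetical helper
    with torch.no_grad():
        Mc, Bc = collaborator(image)
    loss_p = (lam["det"] * detection_loss(D, Ds, gt_D, gt_Ds)
              + lam["S"] * principal_seg_loss(inst_probs, Mc)
              + lam["B"] * border_loss(Bc, build_Bp(inst_probs))
              + lam["MC"] * collapse_penalty(inst_probs))
    opt_p.zero_grad(); loss_p.backward(); opt_p.step()

    # --- collaborator model update (Eq. 12) ---
    with torch.no_grad():
        _, _, inst_logits, inst_probs = principal(image, gt_points)
        Mp = build_Mp(inst_logits, gt_points, d=20)
        Bp = build_Bp(inst_probs)
    Mc, Bc = collaborator(image)
    loss_c = lam["M"] * collaborator_seg_loss(Mc, Mp) + lam["B"] * border_loss(Bc, Bp)
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    return loss_p.item(), loss_c.item()
```

Since all \(\lambda\) weights are set to 1 in this study, `lam` could simply map every key to 1.0.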
## 4 Experiments
### Datasets
We conducted experiments on LIVECell [1] and A431 [21] datasets.
The LIVECell dataset is a large-scale microscopy image dataset aimed at testing single-cell segmentation models and is currently the largest dataset available of this type. The dataset consists of bright field images of 520 \(\times\) 704 pixels from eight different cell lines exhibiting variable cell morphology and imaging contrasts. The data were also collected on samples with a very wide range of cell density. The original authors had split the dataset into training (3253 images), validation (570 images), and testing (1564 images) splits.
The A431 dataset is a fluorescence microscopy dataset. The main interest in this dataset is that it contains point labels that had been generated in an unsupervised fashion from the nuclei counter-stain data. Therefore, the entire training pipeline on this dataset can be considered a fully _unsupervised_ one. The training set contains 500 images of human squamous-cell carcinoma A431 cells of 512 \(\times\) 512 pixels. In addition to the point labels, the training set was also labeled with manually produced image-level segmentations, which we did not use. The test set contains 25 images that had been manually segmented.
### Implementation Details
#### 4.2.1 Networks and training configurations
All experiments employed a ConvNeXt backbone at the small configuration with a patch size of 4. We set the maximum segmentation area to be relatively small (96 \(\times\) 96 pixels) to allow quick experimentation.
For the LIVECell dataset, we trained models under both the weakly-supervised setting (using point labels) as well as the fully-supervised setting (using segmentation masks). The supervised model presumably sets the upper bound of model performances. To assign the hyperparameter \(d\) in (4), we grouped the eight cell lines into three groups according to their average size (large, medium, and small) and used three different values (20, 15 and 10) respectively. The value of \(\pi\) is fixed at 2.0 for all experiments. We train for 9\(\times\)10\({}^{4}\) steps with one image per step, using ADAM optimization with an initial learning rate of 10\({}^{-3}\) and a finetune rate of 2\(\times\)10\({}^{-4}\). We use two different model initialization schemes for the weakly-supervised model: a) We randomly initialized the model using the He method [45], except for the ConvNeXt backbone, which used the ImageNet weights. b) We pretrained the principal model on the TissueNet dataset [2], which is a fluorescence microscopy dataset on tissue cryo-sections. We will use CKS-1 and CKS-2 to denote models trained under these two different initialization schemes, respectively.
For the A431 dataset, we trained only the weakly-supervised model. Models were initialized with ImageNet weights. Since the data are from a single cell line, we set \(d\) to be a constant 20. All other training settings are the same as for LIVECell.
#### 4.2.2 Preprocessing and augmentation
The LIVECell dataset was labeled with instance segmentations. We first computed the centroid of each segmentation and used these values as the point labels. The dataset included cell lines of very different sizes. We pre-scaled all images so that the largest cell lines (SKOV3 and Huh7) were scaled down by 30% and the smallest lines (BV2 and MCF7) were scaled up two-fold. This allows us to set the maximum area for segmentation to a smaller value (96 pixels), which speeds up the experiments. For both the LIVECell and A431 datasets, we use a simple augmentation protocol that includes image rotation, flipping and resizing (\(\pm\)15%). Additionally, the augmentations also include random adjustments of brightness and contrast.
#### 4.2.3 Baseline
We used the CAM-based WSISPR method in [25] as a baseline. The method trains a U-Net to predict cell locations and uses an unsupervised algorithm to convert CAM into approximate segmentation masks.
#### 4.2.4 Evaluation
We evaluate the model performance using the Average Precision (\(AP\)) metric:
\[AP_{IoU}=\frac{\sum_{k}T_{k}P_{k}}{C} \tag{13}\]
where \(C\) is the total number of ground truth cells, \(T_{k}\) is an indicator function of whether the k-th detection is positive or negative according to the specified segmentation mask _IOU_ (intersection-over-union) threshold, and \(P_{k}\) is the precision of
the first k detections. We do not perform smoothing of the precision-recall curve when computing _AP_. We also computed mAP, which is the average of the APs at a series of ten _IoU_ thresholds ranging from 0.5 to 0.95 at equal spacing.
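A straightforward (unoptimised) reference implementation of this metric, with a simple greedy matching of detections to ground-truth masks, might look as follows; the matching details are our assumptions.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def average_precision(pred_masks, pred_scores, gt_masks, iou_thr=0.5):
    """Eq. (13): AP at one IoU threshold, without precision-recall smoothing."""
    matched, tp = set(), []
    for k in np.argsort(pred_scores)[::-1]:           # rank detections by score
        ious = [mask_iou(pred_masks[k], g) for g in gt_masks]
        best = int(np.argmax(ious)) if ious else -1
        hit = best >= 0 and ious[best] >= iou_thr and best not in matched
        if hit:
            matched.add(best)
        tp.append(1 if hit else 0)
    tp = np.array(tp)
    precision = np.cumsum(tp) / (np.arange(len(tp)) + 1)    # P_k, precision of first k detections
    return float((tp * precision).sum() / len(gt_masks))    # sum_k T_k P_k / C

def mean_ap(pred_masks, pred_scores, gt_masks):
    """mAP: mean of the APs at IoU thresholds 0.50, 0.55, ..., 0.95."""
    return float(np.mean([average_precision(pred_masks, pred_scores, gt_masks, t)
                          for t in np.arange(0.5, 1.0, 0.05)]))
```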
### Comparison of LIVECell Models
Fig. 3 presents segmentations examples from various LIVECell models. Because different cell lines exhibit very different morphological characteristics, we provided one example for each cell line. These results qualitatively show that
Fig. 3: Visualization of cell segmentation results on LIVECell dataset. Cell line names are shown at top. Scale bar: 20 μm. GT: ground truth.
our method can establish single-cell segmentation models, despite using only the very weak point labels.
The quantitative comparison of model performances is shown in Table I. The CKS models outperform the baseline by a significant margin. However, the performances were uneven across different cell lines. Both the supervised model and the CKS models struggle with non-convex shaped cells (e.g., SHSY5Y), indicating that the issue is likely related to the general approach of CNNs, although the weak supervision exacerbates the difficulties. In addition, CKS models also have weaknesses on images with very low contrast (e.g., Huh7).
Pre-training on the TissueNet dataset was clearly beneficial, and for most cell lines the model obtained more than 90% of the performance of the supervised model, measured by AP50. However, for cells with relatively simple shapes (BV2 and SkBr3), pre-training led to very little difference.
### Sensitivity to hyperparameter \(d\)
Point labels have an inherent problem: considering a case in which a cell organelle (e.g., the nucleus) is also generally located at the cell center, there is no inherent reason for the model to produce cell segmentation instead of organelle segmentation, as they would have been labeled the same. We got around this problem by using \(d\) to specify prior knowledge regarding the expected cell sizes. Here we test how sensitive the model results are to \(d\). Besides the original model, we tested two additional conditions: (1) we set the same \(d\) value (20) for all cell lines, and (2) we removed the prior component completely by setting \(\pi=0\). Table II presents the results. The main finding here is that the exact choices of \(d\) values have a minor impact on model accuracy, but removing the prior specification completely resulted in a significant decrease in model accuracy.
### Sensitivity to Collaborator Model Capacity
One distinct feature of CKS is that the consistency is evaluated between two highly unbalanced models. The collaborator model is designed to be very lightweight. Table III presents comparisons of some other collaborator model designs with various degrees of complexity (Fig. 4). Specifically, we tested a deeper U-Net for image foreground prediction and a more complex CNN for cell border prediction. The results shown here demonstrate that, in general, increasing the collaborator complexity did not help with improving the principal model accuracy. In fact, increasing the model complexity for cell border detection had a clear adverse effect. These are characteristics generally unexpected for collaborative learning [43], suggesting that the mechanism of CKS may be different from other related methods.
### Extending to Nuclei Image Derived Point Labels
For all experiments on the LIVECell dataset, we generated point labels from the centroids of segmentation masks. It is unclear whether the model took advantage of this hidden correlation. Therefore, we also test our method on the A431 dataset, for which the point labels were generated using an unsupervised blob detection algorithm based on nuclei counter-stain data, and are thus not at the exact center of the cells. In addition, this training scheme can be considered as completely unsupervised, which has practical advantages.
We present examples of segmentation in Fig. 5 and quantitative results in Table IV. These results show that the model significantly outperformed the baseline. Therefore, the CKS method can be used for "inexact" point labels, such as those indicated by the nuclei locations.
## V Conclusions and Discussions
In conclusion, we proposed in this paper a novel method to train single-cell segmentation models using only point labels. The proposed method is, to our knowledge, the first one of this type that performs segmentation in an end-to-end fashion. Below we briefly discuss the pros and cons of the proposed method:
* Pros:
* The collaborator model is very lightweight and adds very little in terms of computational complexity. This allows training on relatively cheap hardware.
* Model performance can be very close to fully supervised ones if the shape complexity of the cell is low.
* Cons:
* The method relies on a fixed hyperparameter to decide how big of an area to perform the segmentation for, which is not computationally efficient if the input data contain both very large and very small cells, because the hyperparameter needs to be set according to the largest cells.
* The model performance is uneven and is dependent on the cell shape complexity. It is possible that this problem can be alleviated in semi-supervised training settings where part of the data is labeled with segmentations.
|
2307.09670 | JAZZVAR: A Dataset of Variations found within Solo Piano Performances of
Jazz Standards for Music Overpainting | Jazz pianists often uniquely interpret jazz standards. Passages from these
interpretations can be viewed as sections of variation. We manually extracted
such variations from solo jazz piano performances. The JAZZVAR dataset is a
collection of 502 pairs of Variation and Original MIDI segments. Each Variation
in the dataset is accompanied by a corresponding Original segment containing
the melody and chords from the original jazz standard. Our approach differs
from many existing jazz datasets in the music information retrieval (MIR)
community, which often focus on improvisation sections within jazz
performances. In this paper, we outline the curation process for obtaining and
sorting the repertoire, the pipeline for creating the Original and Variation
pairs, and our analysis of the dataset. We also introduce a new generative
music task, Music Overpainting, and present a baseline Transformer model
trained on the JAZZVAR dataset for this task. Other potential applications of
our dataset include expressive performance analysis and performer
identification. | Eleanor Row, Jingjing Tang, George Fazekas | 2023-07-18T22:48:54Z | http://arxiv.org/abs/2307.09670v1 | JAZZVAR: A Dataset of Variations found within Solo Piano Performances of Jazz Standards for Music Overpainting
###### Abstract
Jazz pianists often uniquely interpret jazz standards. Passages from these interpretations can be viewed as sections of variation. We manually extracted such variations from solo jazz piano performances. The JAZZVAR dataset is a collection of 502 pairs of '_Original_' and '_Variation_' MIDI segments. Each _Variation_ in the dataset is accompanied by a corresponding _Original_ segment containing the melody and chords from the original jazz standard. Our approach differs from many existing jazz datasets in the music information retrieval (MIR) community, which often focus on improvisation sections within jazz performances. In this paper, we outline the curation process for obtaining and sorting the repertoire, the pipeline for creating the _Original_ and _Variation_ pairs, and our analysis of the dataset. We also introduce a new generative music task, Music Overpainting, and present a baseline Transformer model trained on the JAZZVAR dataset for this task. Other potential applications of our dataset include expressive performance analysis and performer identification.
Keywords:Jazz piano dataset, music generation, transformer model
## 1 Introduction
The growing interest in generative music models has led to the exploration of their potential in specialised music composition tasks. As current trends often focus on generating complete songs or music continuation tasks [2, 3], there is a lack of datasets designed for specialised music tasks. However, these specialised music tasks, such as music infilling [16, 19] and composition style transfer [15, 21], could contribute to the development of artificial intelligence (AI) tools in music composition.
We introduce Music Overpainting as a novel specialised generative music task, inspired by the concept of overpainting in fine art and Liszt's compositional approaches to rearrangement in his piano transcriptions from classical music. Music Overpainting generates variations by providing a rearrangement of a music segment. While the task aims to reframe the musical context by changing elements such as rhythmic, harmonic, and melodic complexity and ornamentation, the core melodic and harmonic structure of the music segment is preserved. Compared to related music generation tasks such as compositional style transfer [4] and music infilling [16; 19], Music Overpainting creates small variations within the same style and retains perceptible similarities in the underlying melodic contour and harmonic structure of the music segment. Outputs from Music Overpainting could be used in AI tools for music composition, to add variation and novelty to desired sections of music.
Our motivation for creating this dataset stems from the lack of available datasets for novel and specialised generative music tasks. Not only did we find that there was a lack of clean and high-quality MIDI data for investigating tasks such as Music Overpainting, but also in the context of solo jazz piano music in general. Most existing jazz datasets consist of transcriptions of improvised "solo" sections within a jazz performance or feature multiple instruments. Few datasets feature interpretations of the "head" section, containing the main musical theme, for solo piano only. Additionally, we found that many jazz datasets do not include performances from female musicians, so we are proud to include several extracts of performances from female jazz pianists within our dataset. Our dataset helps to fill this gap, while also providing insights into how jazz pianists rearrange standards for solo piano from a music information retrieval (MIR) perspective.
The JAZZVAR dataset comprises of 502 pairs of _Original_ and _Variation_ MIDI segments from 22 jazz standards, 47 performances, and 35 pianists. An _Original_ segment is 4-bars long and manually transcribed from a lead sheet of a jazz standard. A _Variation_ segment is manually found from an automatically transcribed piano performance of the same jazz standard. We find _Variation_ segments by searching for passages that are melodically and harmonically similar to _Original_ segments. Figure 1 shows more details of the data curation pipeline. Table 1 provides more information about the _Original_ and _Variation_ segments. The jazz standards and the piano performances in our dataset
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Feature** & **Original** & **Variation** \\ \hline Segment length & 4 bars & misc. \\ Location & “head” section & “head” section \\ File format & Manually-transcribed MIDI & Automatically-transcribed MIDI \\ Musical format & Melody and chords & Two-handed solo piano \\ Type & Lead sheet of jazz standard & Piano performance of jazz standard \\ Source & MuseScore & Youtube \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview of _Original_ and _Variation_ Segments.
are under copyright, therefore the JAZZVAR dataset cannot currently be made available for direct download. However, researchers will be allowed to access the dataset on request.
The JAZZVAR dataset serves as a foundation for exploring the Music Overpainting task across genres. What we refer to as _Variations_ are passages of music from a jazz standard that have been reinterpreted or rearranged by jazz pianists. However, we can view these reinterpretations as variations on the melody and chords of the jazz standards. We use the _Original_ and _Variation_ pairs in the dataset to train a Music Transformer model to generate novel passages of variation from a simple MIDI primer. By presenting this novel dataset and introducing the Music Overpainting task, we aim to contribute to the field of generative music research and encourage further exploration of the relationship between composers and AI tools in various music genres.
The remainder of this paper is organised as follows: Section 2 provides an overview of related datasets in the field of generative music and MIR, Sections 3 and 4 present an in-depth description and analysis of the JAZZVAR dataset, Section 5 introduces Music Overpainting as a generative music task and uses the JAZZVAR dataset to train the Music Transformer model for generation.
## 2 Related Works
Existing jazz datasets that can be used for MIR and Generative Music tasks often feature only the improvisation or solo section of the jazz performance. The Weimar Jazz Database (WJD) [17] consists of 456 manually transcribed solos by 78 performers and contains no solo piano performances. The DTL1000 dataset [5] from the "Dig That
Figure 1: The process of creating _Original_ and _Variation_ pairs. _Original_ sections are MIDI segments from a lead sheet transcription of a jazz standard. Audio of a piano performance playing the same jazz standard is transcribed automatically into MIDI. A _Variation_ is found by manually searching for passages that are melodically and harmonically similar to the _Original_ in the “head” section of the piano performance.
Lick" project is a set of 1750 automatically transcribed solos from 1060 tracks. However, it is not clear how many of these tracks are piano solo tracks.
The Million Song Dataset (MSD) [1] is a collection of audio features and metadata for one million contemporary popular music tracks. While the MSD does not specifically focus on jazz, it does include a substantial number of jazz recordings that could be used for comparative analysis. The Lakh MIDI Dataset (LMD) [13] is a collection of 176,581 unique MIDI files that are matched to songs within the Million Song Dataset using Dynamic Time Warping-based alignment methods [18]. Similarly to the DTL1000 dataset, the MSD and the LMD have no specific focus on solo jazz piano performances.
## 3 Jazzvar Dataset
### Data Collection
#### 3.1.1 Repertoire
A jazz standard is a well-known, and commonly played song in the jazz repertoire. Many popular songs composed in the early to mid-twentieth century for film, television, and musical theatre are now prominent jazz standards. Some of the more famous jazz standards include Gershwin's "Summertime" for the opera _Porgy and Bess_ (1935) and "All the Things You Are" by Jerome Kern and Oscar Hammerstein II for the musical _Very Warm for May_ (1939). These popular songs have been continually played and rearranged by jazz musicians for decades. Popular songs originating from these times contain a "refrain" section, which was the main theme of the song. In jazz music, the "head" section is often synonymous with these "refrain" sections. Many jazz musicians would learn the songs by ear, or through unofficial lead sheets, such as the ones circulated within the _Fake Real Book_. Some jazz musicians, such as the trumpeter Miles Davis (1926-1991) and Thelonious Monk (1917-1982), composed music themselves and these pieces have also become famous jazz standards.
Within this context, our goal was to find lead sheets of jazz standards and audio recordings of solo piano performances of jazz standards. The first publication dates of the jazz standards in our dataset range between 1918 and 1966, while the performances span from the mid-twentieth to the beginning of the twenty-first century.
#### 3.1.2 Jazz Standard Lead Sheets
Lead sheets are condensed versions of song compositions that musicians have transcribed and passed through the community. They are presented as a single melodic line with accompanying chords.
We sourced MIDI and MusicXML lead sheets from MuseScore, created by users who often referenced the _Fake Real Book_. Candidate pieces were found using the following criteria:
1. entirely in 4/4 timing,
2. jazz standards mostly consisting of popular songs from the early to mid-twentieth century.
The lead sheets were cleaned and corrected by removing introductions and verses, to retain only the refrain section. Songs with repeated refrains were further edited to
include only the final repeat. We converted any MusicXML files to MIDI and made corrections by referencing the chords in the lead sheets. In some cases, we transcribed the chords and melody by ear from early recordings of popular songs or completely rewrote the MIDI, as many of the source files were corrupt. In total, we collected and cleaned 234 jazz standards, of which a subset of 22 appear within the JAZZVAR dataset.
#### 3.1.3 Audio of Jazz Solo Piano Performances
To compile a list of solo piano performances of jazz standards, we manually searched for well-known jazz pianists' solo performances on Spotify and Youtube that matched the list of 234 MIDI lead sheets we had collected. We also used the _Solo piano jazz albums1_ category on Wikipedia to help find performances. We gathered Spotify Metadata for these performances, which we used to collect the respective audio data. This approach allowed us to compile a diverse set of performances, including some by female pianists, and to capture the rich history of jazz piano performance.
Footnote 1: See Wikipedia: [https://en.wikipedia.org/wiki/Category:Solo_piano_jazz_albums](https://en.wikipedia.org/wiki/Category:Solo_piano_jazz_albums)
### Automatic Music Transcription of Jazz Audio
Automatic Music Transcription (AMT) algorithms such as [11, 8] enable us to transcribe audio recordings into MIDI representations. According to results from a listening test conducted by Zhang et al. [22], the High-Resolution transcription system proposed by Kong et al. [11] is preferred over the other two systems by participants in terms of conserving the expressiveness of the performances. We used the Spotify metadata to download the jazz audio from Youtube and applied the High-Resolution model [11] to transcribe the downloaded jazz audio into MIDI. In total, we collected and transcribed 760 audio recordings covering a wide range of performances from 148 albums by 101 jazz pianists, of which a subset of 47 performances appear within the JAZZVAR dataset.
### Pair Matching Process
We segmented 4 bar sections from the MIDI lead sheets by taking into consideration the phrases in the main melody. As the jazz standards that we chose were all in 4/4 time, most of the phrases were contained within a 4-bar structure. We labeled these four bar sections as _Original_ segments. We segmented 22 jazz standards and collected an average of 6 segments per standard. In order to create our _Variation_ segments to form a data pair, we manually searched through the AMT solo jazz piano performances of the jazz standards and found segments that were melodically and harmonically similar to the _Original_ segment for each jazz standard. To facilitate the matching process for finding _Original_ and _Variation_ pairs, we created a Python application with a graphical user interface (GUI), which allowed us to view and listen to individual _Original_ segments. 2 We then searched through the AMT jazz performances and saved passages that closely corresponded to the _Original_ segments melodically and harmonically.
Footnote 2: We plan to release the GUI for reproducing our dataset. A GitHub page will be released by the publication of the paper.
## 4 Analysis
### Experimental Dataset Analysis
We calculated several musical statistics across the dataset to provide insights into the dataset's musical content and structure according to [6]. We compared the differences between the _Original_ and the _Variation_ sections and summarise several characteristic features in Table 2.
#### 4.1.1 Pitch Class Entropy
The higher mean pitch class entropy in the _Variation_ segments (3.13) compared to the _Original_ segments (2.94) suggests that jazz pianists tend to introduce more diversity in pitch distribution when interpreting jazz standards. This increased complexity and unpredictability in the variations reflect the improvisational and creative nature of jazz music.
#### 4.1.2 Pitch Range
The mean pitch range in the _Variation_ segments (47.20) is considerably larger than in the _Original_ segments (36.44), indicating that jazz pianists often expand beyond the range of pitches used within a jazz standard. This expanded pitch range could contribute to a richer and more expressive musical experience in the variations.
#### 4.1.3 Polyphony
Polyphony is defined as the mean number of pitches played simultaneously, evaluated only at time steps where at least one pitch is played. The mean polyphony is slightly lower in the _Variation_ segments (5.01) compared to the _Original_ segments (5.30). This suggests that jazz pianists may use fewer simultaneous pitches on average in their reinterpretations. However, the higher standard deviation in the _Variation_ segments (2.08) indicates that the polyphonic structures in these reinterpretations can be quite diverse.
#### 4.1.4 Number of Pitches
The higher mean number of pitches in the _Variation_ segments (29.42) compared to the _Original_ segments (16.08) implies that jazz pianists tend to incorporate more distinct pitches when rearranging jazz standards. This increase in the number of pitches adds to the complexity and expressiveness of the variations.
#### 4.1.5 Pitch in Scale
Pitch-in-scale rate is defined as the ratio of the number of notes in a certain scale to the total number of notes [6]. The slightly lower mean value of pitch in scale in the _Variation_ segments (0.83) compared to the _Original_ segments (0.89) indicates that jazz pianists may be more inclined to use pitches outside the underlying
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Feature** & \multicolumn{2}{c}{**Originals**} & \multicolumn{2}{c}{**Variations**} \\ & Mean & SD & Mean & SD \\ \hline Pitch Class Entropy & 2.94 & 0.24 & 3.13 & 0.24 \\ Pitch Range & 36.44 & 3.60 & 47.20 & 10.91 \\ Polyphony & 5.30 & 0.28 & 5.01 & 2.08 \\ Number of Pitches & 16.08 & 0.28 & 29.42 & 8.05 \\ Pitch in Scale & 0.89 & 0.24 & 0.83 & 0.08 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Means and standard deviations for various statistics for combined segments in _Original_ and _Variation_ sections.
scale in their reinterpretations. This tendency could contribute to a more adventurous and explorative musical experience in the variations.
In summary, the analysis of the JAZZVAR dataset reveals that jazz pianists often introduce greater complexity, diversity, and expressiveness when rearranging jazz standards for solo piano. Our findings highlight the dataset's potential for application in tasks such as Music Overpainting. Not only are these insights valuable for the development of specialised generative music models, but they also provide a better understanding of the creative process in jazz music.
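For reference, statistics of this kind can be computed from a MIDI segment along the following lines; this sketch uses `pretty_midi`, approximates polyphony at note onsets, and takes the scale (here C major) as a parameter, whereas the numbers in Table 2 follow the definitions of [6].

```python
import math
import pretty_midi

MAJOR_SCALE = (0, 2, 4, 5, 7, 9, 11)   # pitch classes of C major (illustrative choice)

def segment_statistics(midi_path, scale=MAJOR_SCALE):
    """Pitch-based statistics of a single Original or Variation MIDI segment."""
    midi = pretty_midi.PrettyMIDI(midi_path)
    notes = [n for inst in midi.instruments for n in inst.notes]
    pitches = [n.pitch for n in notes]

    # pitch class entropy (base 2) over the 12 pitch classes
    counts = [0] * 12
    for p in pitches:
        counts[p % 12] += 1
    total = sum(counts)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts if c)

    # polyphony approximated as the number of sounding notes at each note onset
    onsets = sorted({n.start for n in notes})
    poly = [sum(n.start <= t < n.end for n in notes) for t in onsets]

    return {
        "pitch_class_entropy": entropy,
        "pitch_range": max(pitches) - min(pitches),
        "polyphony": sum(poly) / len(poly),
        "n_pitches": len(set(pitches)),
        "pitch_in_scale": sum(p % 12 in scale for p in pitches) / len(pitches),
    }
```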
### Comparison of Multiple Pianists
Some of the jazz standards featured within the dataset are performed by multiple pianists. Therefore, there are some _Original_ segments that are matched to multiple _Variation_ segments from different pianists. To further highlight the diversity of variations within the dataset, we present a musical analysis of multiple pianists' interpretations of the same _Original_ segment, from the jazz standard "All the Things You Are".
#### 4.2.1 Melody
The melody from the _Original_ segment was found and isolated within each _Variation_ segment. To obtain accurate representations of the melodies, we manually extracted the melody lines from the _Variation_ segments. This manual extraction process involved listening closely to the melody in the _Original_ in order to carefully isolate the melody line within the performances note by note, ensuring higher accuracy and fidelity of melodic extraction in comparison to an automatic approach. We then compared the isolated melodies to find their pitch and duration deviation from the ground truth, the melody from the _Original_ segment. We applied the Needleman-Wunsch [7, 12] alignment algorithm which aligns melodies by minimizing the differences in pitch class and duration between the corresponding notes. Based on the alignment results, we calculate the average deviation score using the following equation:
\[Average\ Deviation=\frac{1}{n}\sum_{i=1}^{n}(PC_{i}+D_{i}), \tag{1}\]
where \(PC_{i}\) denotes the deviation of pitch class, \(D_{i}\) denotes the deviation of note duration, and \(i\) refers to the \(i\)-th note in the melody. We excluded the missing notes in the summation over the note sequences.
This average deviation score provides a measure of how similar the two melodies are, with lower scores indicating higher similarity. The deviation scores of the pianists' _Variation_ from the _Original_ melody can be found in Table 3. Our results show that different pianists have unique and individual approaches to interpreting the _Original_ melody. Some pianists, such as Leslie North, have a closer adherence to the _Original_ melody, while others, like Bill Evans, exhibit greater differences.
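A sketch of the alignment-based deviation score is given below; each note is reduced to a (pitch class, duration) pair, the gap penalty is an illustrative value, and unmatched notes are excluded from the average as described above.

```python
def average_deviation(original, variation, gap_cost=2.0):
    """Needleman-Wunsch alignment of two note sequences, returning Eq. (1)."""
    def cost(a, b):
        pc = min((a[0] - b[0]) % 12, (b[0] - a[0]) % 12)   # circular pitch-class distance
        return pc + abs(a[1] - b[1])                        # plus duration deviation

    n, m = len(original), len(variation)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap_cost
    for j in range(1, m + 1):
        dp[0][j] = j * gap_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(dp[i - 1][j - 1] + cost(original[i - 1], variation[j - 1]),
                           dp[i - 1][j] + gap_cost,
                           dp[i][j - 1] + gap_cost)

    # trace back and average the deviations of matched notes only
    i, j, devs = n, m, []
    while i > 0 and j > 0:
        if dp[i][j] == dp[i - 1][j - 1] + cost(original[i - 1], variation[j - 1]):
            devs.append(cost(original[i - 1], variation[j - 1]))
            i, j = i - 1, j - 1
        elif dp[i][j] == dp[i - 1][j] + gap_cost:
            i -= 1
        else:
            j -= 1
    return sum(devs) / len(devs) if devs else 0.0
```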
We also mapped the melodic contours of the performances to further explore the differences between the interpretations, using the Contourviz3 package as shown in Figure 2. The visual representation of melodic contours allowed us to observe the overall structure and direction of the melody as it evolved throughout the performance. By comparing the melodic contours of different pianists, we found that some tended to be more experimental with their melodic choices, while others adhered more closely to the _Original_ melody. This variation in melodic contours provides additional evidence of the rich diversity present in our dataset.
Footnote 3: Contourviz can be found in: [https://github.com/cjwit/contourviz](https://github.com/cjwit/contourviz)
similar chord progression to the _Original_, but modified a minor chord to major, resulting in a significant shift in the performance's intention and musical direction. We also observed that certain pianists used extended chords more extensively than others who played more closely to the _Original_. Other pianists added more chords to the chord progression, which sped up the harmonic rhythm.
Our analysis shows that the dataset contains a diverse range of interpretations, even when playing the same jazz standard. Within jazz, performers are individualistic and can be creative with their musical choices. The differences in melodic deviations, melodic contours, and harmonic rhythms between performances not only demonstrate the artistic freedom of each pianist but also indicates that the dataset could be a useful resource for those interested in expressive performance analysis or performer identification tasks.
## 5 Music Overpainting
### Problem Definition
As defined in Section 1, Music Overpainting is a generative music task that aims to create variations on pre-existing music sections. Within the context of the JAZZVAR dataset, we can specifically define the task as generating a _Variation_ segment from a given _Original_ segment. Given an _Original_ jazz standard segment \(O\) from the JAZZVAR dataset, and a _Variation_ segment \(V\), the goal of the Music Overpainting task is to find a reinterpretation \(I(O)\) such that:
\[V=I(O) \tag{2}\]
### Generation with Music Transformer
Transformers have been widely applied to generate music in genres such as Pop, Classical, as well as Jazz [10, 9, 20]. Their convincing outputs demonstrate their capability of modeling musical structures and patterns. In this work, we adopted the design of
Figure 3: A line graph comparison of the Harmonic Rhythm of the original melody (in Blue) and pianists’ interpretations of the melody.
Music Transformer [9] which uses music motifs as primers for conditional generation. To train the transformer model, we concatenated the _Variation_ segments to the end of the _Original_ segments for each pair in the JAZZVAR dataset. In total, we obtained 502 concatenations and used 90% for training and 10% for validation. For the inference process, we treated the _Original_ segment as a primer and generated a _Variation_ segment following the probability distribution learned by the transformer model.
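The data preparation and primer-based inference can be summarised roughly as below; `tokenize`, the separator token and `model.generate` stand in for whichever event encoding and sampling routine is used and are not the paper's actual interfaces.

```python
import random

def build_training_sequences(pairs, tokenize, sep_token, split=0.9, seed=0):
    """Concatenate each Original with its Variation into one training sequence."""
    sequences = [tokenize(orig) + [sep_token] + tokenize(var) for orig, var in pairs]
    random.Random(seed).shuffle(sequences)
    cut = int(split * len(sequences))
    return sequences[:cut], sequences[cut:]        # 90% training, 10% validation

def overpaint(model, original, tokenize, sep_token, max_len=1024):
    """Generate a Variation by sampling a continuation of the Original primer."""
    primer = tokenize(original) + [sep_token]
    generated = model.generate(primer, max_length=max_len)   # hypothetical sampling API
    return generated[len(primer):]                            # keep only the new tokens
```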
### Results
We present piano-rolls of two _Original_ segments, referred to as **A** and **B**, and the corresponding generated _Variation_ segments4 with _Original_ segments used as primers to the model in Figure 4. We use the same pitch-related features calculated for the dataset in Table 2 to compare the _Original_ segments and the corresponding generations. According to these results, we observe that the generated _Variation_ segments are more complex and diverse in terms of the music features presented in Table 4, as well as the articulation and dynamics. By listening to the generations, we find that the model's ability to accurately preserve the melody and chord patterns of the _Original_ segment in the generated output can be improved.
Footnote 4: Listening samples of the generations can be found at [https://drive.google.com/drive/folders/13SimiT2AevqP3ma3xWy4LanQwcjyRLLG1?usp=sharing](https://drive.google.com/drive/folders/13SimiT2AevqP3ma3xWy4LanQwcjyRLLG1?usp=sharing)
Figure 4: Piano-rolls of two _Original_ (left in Blue) and the corresponding generated _Variation_ (right in Red) sections. The _Original_ A is from the song “All the Things You Are”, and the _Original_ B is from the song “Alfie”.
## 6 Conclusion
We present the JAZZVAR dataset, a collection of 502 MIDI pairs of _Variation_ and _Original_ segments. We evaluated the dataset with regard to several musical features and compared the melodic and harmonic features of _Variations_ by different pianists performing the same _Original_ jazz standard. Our results indicate the diversity and complexity of the _Variations_ in the dataset, which is one important component for successfully training a specialised generative music model. We introduced the Music Overpainting task, and trained a Music Transformer using the JAZZVAR dataset to generate _Variation_ segments with the _Original_ segments as primers.
Having a collection of _Variations_ performed by different pianists on the same jazz standard allows us to apply the dataset to explore tasks such as performer identification and expressive performance analysis. We aim to expand the JAZZVAR dataset in the future, using our collection of AMT MIDI data of jazz performances and corresponding jazz standards. This could either be achieved through the manual matching method as shown in Section 3.3, or through an automatic method, which would allow for a greater number of _Original_ and _Variation_ pairs to be produced. We believe that the deep generative models for the Music Overpainting task will greatly benefit from the increment of dataset size.
|
2302.11811 | Ordered normed spaces of functions of bounded variation | In this paper, we define and study the space of all the functions of bounded
variation $f:[x,y]\to \mathbb{Y}$ denoted by $\mathcal{BV}[x,y],$ where $[x,y]$
is an ordered interval and $\mathbb{Y}$ is an absolute order unit space having
vector lattice structure. By default, under the order structure of
$\mathbb{Y},$ the space $\mathcal{BV}[x,y]$ forms a nearer absolute order unit
space structure and in some cases it turns out to be an absolute order unit
space (in fact, a unital $AM$-space). By help of variation function, we also
define a different kind of order structure on the space $\mathcal{BV}[x,y]$
that also makes $\mathcal{BV}[x,y]$ a nearer absolute order unit space
structure. Later, we also show that under certain conditions this ordering
induces a complete norm on $\mathcal{BV}[x,y].$ | Amit Kumar | 2023-02-23T06:43:33Z | http://arxiv.org/abs/2302.11811v1 | # Ordered normed spaces of functions of bounded variation
###### Abstract.
In this paper, we define and study the space of all the functions of bounded variation \(f:[x,y]\to\mathbb{Y}\) denoted by \(\mathcal{BV}[x,y]\), where \([x,y]\) is an ordered interval and \(\mathbb{Y}\) is an absolute order unit space having vector lattice structure. By default, under the order structure of \(\mathbb{Y}\), the space \(\mathcal{BV}[x,y]\) forms a nearer absolute order unit space structure and in some cases it turns out to be an absolute order unit space (in fact, a unital \(AM\)-space). By help of variation function, we also define a different kind of order structure on the space \(\mathcal{BV}[x,y]\) that also makes \(\mathcal{BV}[x,y]\) a nearer absolute order unit space structure. Later, we also show that under certain conditions this ordering induces a complete norm on \(\mathcal{BV}[x,y]\).
Key words and phrases:Vector lattice, dedekind property, absolutely ordered space, functions of bounded variation, norm, absolute order unit space, \(AM\)-space, order completeness and completeness 2010 Mathematics Subject Classification: Primary 46B40; Secondary 46L05, 46L30 The author was financially supported by the Institute Post-doctoral Fellowship of IIT Bhubaneswar, India.
## 1. Introduction
The theory of functions of bounded variation is well known in Mathematical Analysis. Functions of bounded variation are also called \(\mathcal{BV}\)-functions. In Complex Analysis, \(\mathcal{BV}\)-functions are used to defined arc-length of smooth curves. In other words, if \(\gamma:[0,1]\to\mathbb{C}\) is a continuously differentiable function, then \(\gamma\) is a \(\mathcal{BV}\)-function and the total variation of \(\gamma\) is given by \(\mathcal{V}(\gamma)=\int_{0}^{1}|\gamma^{\prime}(t)|dt\).
In 1881, Camille Jordan initiated the theory of \(\mathcal{BV}\)-functions of a single variable to deal with convergence in Fourier series [9]. On the other hand, the theory of \(\mathcal{BV}\)-functions of several variables was initiated by Leonida Tonelli in 1926 (see [5]). However, \(\mathcal{BV}\)-functions of several variables were formally defined and studied by Lamberto Cesari in 1936 [4]. The \(\mathcal{BV}\)-functions form an algebra of discontinuous functions having first order derivatives almost everywhere. This is of major importance for \(\mathcal{BV}\)-functions in Mathematics, Physics and Engineering, as it helps to define generalized solutions of non-linear problems that involve functionals, and ordinary and partial differential equations. It is worth noticing that the triangle inequality plays a crucial role in the study of \(\mathcal{BV}\)-functions. The triangle inequality holds in \(\mathbb{R}\) and \(\mathbb{C}\), which is why it is possible to study \(\mathcal{BV}\)-functions in these spaces. For more information about \(\mathcal{BV}\)-functions, we refer to [6, 21] and references therein.
Order structure is one of the important aspects of \(C^{*}\)-algebras; indeed, it characterizes them. Its fundamental importance can be seen in [3, 10, 11, 12, 13, 19] and the references therein. A parallel theory of order structure has also been developed for vector spaces; for details see [1, 2, 8, 20, 22]. Inspired by the richness of order structure, Karn also started working on the order-theoretic aspects of \(C^{*}\)-algebras. Some of his related works can be seen in [14, 15, 16, 17, 18].
In [17], Karn introduced and studied the notions of absolutely ordered spaces and absolute order unit spaces. Under the condition of [16, Theorem 4.12] (the triangle inequality), absolutely ordered spaces turn out to be vector lattices, and under the same condition absolute order unit spaces turn out to be unital \(AM\)-spaces. For this reason, Karn referred to "absolutely ordered spaces" as "non-commutative vector lattice models". It is therefore a natural question to study \(\mathcal{BV}\)-functions in absolutely ordered spaces. In this paper, we define and study the notion of \(\mathcal{BV}\)-functions in absolutely ordered spaces. Finally, our aim is to show that the \(\mathcal{BV}\)-functions form ordered normed spaces.
The development of the paper is as follows. In the second section, we recall the preliminaries which are essential for this paper. In the third section, we define \(\mathcal{BV}\)-functions and study their basic properties (Theorems 3.4 and 3.7, and Lemma 3.8). We investigate when the space of \(\mathcal{BV}\)-functions forms an absolute order unit space and an \(AM\)-space (Theorem 4.5). In the fourth (and last) section, we define the variation function of a \(\mathcal{BV}\)-function and study its basic properties (Theorem 4.3). We also construct some norms on the space of \(\mathcal{BV}\)-functions under which it turns out to be an ordered normed space (Theorem 4.5, Corollaries 4.7 and 4.9). Under order completeness, one of these norms turns out to be a complete norm (Theorem 4.8).
## 2. Preliminaries
Let \(\mathbb{X}\) be a real vector space. A non-empty subset \(\mathbb{X}^{+}\) of \(\mathbb{X}\) is said to be a cone if \(x+y\) and \(\alpha x\in\mathbb{X}^{+}\) for all \(x,y\in\mathbb{X}^{+}\) and \(\alpha\in\mathbb{R}^{+}\cup\{0\}.\) Then \((\mathbb{X},\mathbb{X}^{+})\) is said to be a _real ordered vector space_. Given a partially ordered vector space \((\mathbb{X},\leq),\) put \(\mathbb{X}^{+}=\{x\in\mathbb{X}:x\geq 0\};\) then \(x\leq y\) if and only if \(y-x\in\mathbb{X}^{+}.\) In this way, \(\leq\) is the unique relation with the following properties: \(x\leq x\) for all \(x\in\mathbb{X};\) \(x\leq z\) provided \(x\leq y\) and \(y\leq z;\) and \(x+z\leq y+z\) and \(\alpha x\leq\alpha y\) provided \(x\leq y,\) \(z\in\mathbb{X}\) and \(\alpha\in\mathbb{R}^{+}.\) If \(\mathbb{X}^{+}\cap-\mathbb{X}^{+}=\{0\},\) then the cone \(\mathbb{X}^{+}\) is called _proper_, and if \(\mathbb{X}=\mathbb{X}^{+}-\mathbb{X}^{+},\) then it is called _generating_. It is worth noting that \(\mathbb{X}^{+}\) is proper if and only if \(\leq\) is anti-symmetric.
An element \(e\in\mathbb{X}^{+}\) is called an order unit for \(\mathbb{X}\) provided for every \(x\in\mathbb{X},\) we have \(\epsilon e\pm x\in\mathbb{X}^{+}\) for some \(\epsilon>0.\) The cone \(\mathbb{X}^{+}\) is called _Archimedean_ provided for \(x\in\mathbb{X}\) and a fixed \(y\in\mathbb{X}^{+}\) such that \(\epsilon y+x\in\mathbb{X}^{+}\) for all \(\epsilon>0,\) it turns out that \(x\in\mathbb{X}^{+}.\)
In a real ordered vector space \((\mathbb{X},\mathbb{X}^{+})\) with order unit \(e\) and such that \(\mathbb{X}^{+}\) is proper and Archimedean, we can always define a norm on \(\mathbb{X}\) in the following way:
\[\|x\|:=\inf\{\epsilon>0:\epsilon e\pm x\in\mathbb{X}^{+}\}.\]
This is called the norm determined by \(e.\) Moreover, \(\mathbb{X}^{+}\) is norm-closed as well as \(\|x\|e\pm x\in\mathbb{X}^{+}\) for every \(x\in\mathbb{X}.\) In this case, \(\mathbb{X}\) is called an _order unit space_ and we denote it by \((\mathbb{X},e).\)
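For orientation, here is a standard example (added only for illustration): take \(\mathbb{X}=\mathbb{R}^{2}\) with the componentwise order and \(e=(1,1).\) Then \(\epsilon e\pm x\in\mathbb{X}^{+}\) precisely when \(\epsilon\geq|x_{1}|\) and \(\epsilon\geq|x_{2}|,\) so that
\[\|x\|=\inf\{\epsilon>0:\epsilon e\pm x\in\mathbb{X}^{+}\}=\max\{|x_{1}|,|x_{2}|\},\]
the usual supremum norm on \(\mathbb{R}^{2}.\)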
Let \(\mathbb{X}\) be a real ordered vector space and \(\mathbb{S}\) be a non-empty subset of \(\mathbb{X}.\) Then \(\mathbb{S}\) is called bounded above in \(\mathbb{X}\) if there exists \(z\in\mathbb{X}\) such that \(x\leq z\) for all \(x\in\mathbb{S}.\) In this case, we say that \(\mathbb{S}\) is bounded above by \(z\) and \(z\) is called an upper bound of \(\mathbb{S}.\) Similarly, \(\mathbb{S}\) is called bounded below in \(\mathbb{X}\) if there exists \(w\in\mathbb{X}\) such that \(w\leq x\) for all \(x\in\mathbb{S}.\) In this case, we say that \(\mathbb{S}\) is bounded below by \(w\) and \(w\) is called a lower bound of \(\mathbb{S}.\) We say that \(z\in\mathbb{X}\) is the supremum of \(\mathbb{S}\) if \(z\) is an upper bound of \(\mathbb{S}\) and whenever \(w\in\mathbb{X}\) is any other upper bound of \(\mathbb{S},\) it turns out that \(z\leq w.\) In this case, we write: \(\sup\{x:x\in\mathbb{S}\}=z.\) Similarly, we say that \(w\in\mathbb{X}\) is the infimum of \(\mathbb{S}\) if \(w\) is a lower bound of \(\mathbb{S}\) and whenever \(z\) is any other lower bound of \(\mathbb{S},\) it turns out that \(z\leq w.\) In this case, we write: \(\inf\{x:x\in\mathbb{S}\}=w.\) Note that \(\sup\{x:x\in\mathbb{S}\}\) exists in \(\mathbb{X}\) if and only if \(\inf\{-x:x\in\mathbb{S}\}\) exists in \(\mathbb{X}.\) In this case, \(\sup\{x:x\in\mathbb{S}\}=-\inf\{-x:x\in\mathbb{S}\}.\)
A real ordered vector space \(\mathbb{X}\) is called _vector lattice_ provided \(\sup\{x,y\}\) exists in \(\mathbb{X}\) for every pair \(x\) and \(y\in\mathbb{X}.\) In a vector lattice, we write: \(x\lor y=\sup\{x,y\},x\wedge y=\inf\{x,y\}\) and \(|x|=x\vee(-x).\)
A vector lattice \(\mathbb{X}\) is called Dedekind complete if supremum of every non-empty bounded above subset of \(\mathbb{X}\) exists in \(\mathbb{X}.\)
Let \((\mathbb{X},\mathbb{X}^{+})\) be a vector lattice with a norm \(\|\cdot\|\) such that \((\mathbb{X},\|\cdot\|)\) forms a Banach space. Then \((\mathbb{X},\mathbb{X}^{+})\) is called an _\(AM\)-space_ provided the following two conditions hold:
1. \(|x|\leq|y|\) implies \(\|x\|\leq\|y\|\) for every pair \(x,y\in\mathbb{X}.\)
2. For \(x,y\in\mathbb{X}^{+},\) we have \(\|x\lor y\|=\max\{\|x\|,\|y\|\}.\)
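The prototypical example (recalled here only for illustration) is \(C(K,\mathbb{R}),\) the space of continuous real-valued functions on a compact Hausdorff space \(K,\) with the pointwise order and the supremum norm: if \(|f|\leq|g|\) pointwise, then \(\sup_{K}|f|\leq\sup_{K}|g|,\) and for \(f,g\geq 0\) we have \(\sup_{K}(f\lor g)=\max\{\sup_{K}f,\sup_{K}g\},\) so both conditions hold.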
Let's recall the notion of absolutely ordered spaces introduced by Karn as a possible non-commutative model for vector lattices [17].
**Definition 2.1**.: _[_17_, Definition 3.4]_ _Let \((\mathbb{X},\mathbb{X}^{+})\) be a real ordered vector space and let \(|\cdot|:\mathbb{X}\to\mathbb{X}^{+}\) be a mapping satisfying the following conditions:_
1. \(|x|=x\) _if_ \(x\in\mathbb{X}^{+}.\)__
2. \(|x|\pm x\in\mathbb{X}^{+}\) _for all_ \(x\in\mathbb{X}.\)__
3. \(|\alpha\cdot x|=|\alpha|\cdot|x|\) _for all_ \(x\in\mathbb{X}\) _and_ \(\alpha\in\mathbb{R}.\)__
4. _If_ \(x,y\) _and_ \(z\in\mathbb{X}\) _with_ \(|x-y|=x+y\) _and_ \(0\leq z\leq y,\) _then_ \(|x-z|=x+z.\)__
5. _If_ \(x,y\) _and_ \(z\in\mathbb{X}\) _with_ \(|x-y|=x+y\) _and_ \(|x-z|=x+z,\) _then_ \(|x-|y\pm z||=x+|y\pm z|.\)__
_Then \((\mathbb{X},\mathbb{X}^{+},|\cdot|)\) is said to be an absolutely ordered space._
The following result explains that an absolutely ordered space is very close to a lattice structure, which is why Karn called it a possible non-commutative model for vector lattices.
**Theorem 2.2**.: _Let \((\mathbb{X},\mathbb{X}^{+},|\cdot|)\) be an absolutely ordered space. For \(y,z\in\mathbb{X},\) put_
\[y\dot{\vee}z:=\frac{1}{2}(y+z+|y-z|).\]
_Then the following statements are equivalent:_
1. \(y\dot{\vee}z=\sup\{y,z\}\) _for all_ \(y,z\in\mathbb{X}.\)__
2. \(\dot{\vee}\) _is associative in_ \(\mathbb{X}.\)__
3. \(\pm y\leq x\) _implies_ \(|y|\leq x\) _for all_ \(x,y\in\mathbb{X}.\)__
4. \(|y+z|\leq|y|+|z|.\)__
Next, we recall some variants of orthogonalities in absolutely ordered spaces.
**Definition 2.3** ([17], Definition 3.6).: _Let \((\mathbb{X},\mathbb{X}^{+},|\cdot|)\) be an absolutely ordered space and let \(\|\cdot\|\) be a norm on \(\mathbb{X}.\)_
1. _For_ \(x,y\in\mathbb{X}^{+}\)_, we say that_ \(x\) _is_ orthogonal _to_ \(y\) _(_\(x\perp y\)_) if,_ \(|x-y|=x+y.\) _Put_ \(x^{+}:=\frac{1}{2}(|x|+x)\) _and_ \(x^{-}:=\frac{1}{2}(|x|-x).\) _In this case,_ \(x=x^{+}-x^{-}\) _and_ \(|x|=x^{+}+x^{-}\) _so that_ \(x^{+}\perp x^{-}.\) _This decomposition turns out to be unique in the sense:_ \(x=x_{1}-x_{2}\) _such that_ \(x_{1}\perp x_{2}\) _implies_ \(x_{1}=x^{+}\) _and_ \(x_{2}=x^{-}.\) _Therefore each element in_ \(\mathbb{X}\) _owns a unique orthogonal decomposition in_ \(\mathbb{X}^{+}.\)__
2. _For_ \(x,y\in\mathbb{X}^{+}\)_, we say that_ \(x\) _is_ \(\infty\)-orthogonal _to_ \(y\) _(_\(x\perp_{\infty}y\)_) if,_ \(\|\alpha x+\beta y\|=\max\{\|\alpha x\|,\|\beta y\|\}\) _for all_ \(\alpha,\beta\in\mathbb{R}.\)__
3. _For_ \(x,y\in\mathbb{X}^{+}\)_, we say that_ \(x\) _is_ absolutely \(\infty\)-orthogonal _to_ \(y\) _(_\(x\perp_{\infty}^{a}y\)_) if,_ \(x_{1}\perp_{\infty}y_{1}\) _whenever_ \(0\leq x_{1}\leq x\) _and_ \(0\leq y_{1}\leq y.\)__
Now, we recall absolute order unit spaces.
**Definition 2.4** ([17], Definition 3.8).: _Let \((\mathbb{X},\mathbb{X}^{+},|\cdot|)\) be an absolutely ordered space and let \(\|\cdot\|\) be an order unit norm on \(\mathbb{X}\) determined by the order unit \(e\) such that \(\mathbb{X}^{+}\) is \(\|\cdot\|\)-closed. Then \((\mathbb{X},\mathbb{X}^{+},|\cdot|,e)\) is called an absolute order unit space if \(\bot=\perp_{\infty}^{a}\) on \(\mathbb{X}^{+}.\)_
Note that the self-adjoint part of a unital C\({}^{*}\)-algebra is an absolute order unit space [17, Remark 3.9(1)]. More generally, every unital \(JB\)-algebra is also an absolute order unit space.
## 3. Functions of bounded variation and their properties
Let \(\mathbb{X}\) be a real ordered vector space and let \(x,y\in\mathbb{X}\) be such that \(x\leq y.\) The ordered interval \([x,y]\) in \(\mathbb{X}\) is defined by \([x,y]=\{z\in\mathbb{X}:x\leq z\leq y\}.\)
A partition \(\mathcal{P}\) of \([x,y]\) is a collection of points in \([x,y]\) such that \(\mathcal{P}=\{x=x_{0}<x_{1}<x_{2}<\cdots<x_{n_{\mathcal{P}}-1}<x_{n_{\mathcal{P}}}=y\}.\)
Let \(\mathbb{Y}\) be an absolutely ordered space and \(f:[x,y]\rightarrow\mathbb{Y}\) be a function. Let \(\mathcal{P}\) be a partition of \([x,y].\) We consider the following summation over \(\mathcal{P}:\)
\[\sum_{i=1}^{n_{\mathcal{P}}}|f(x_{i})-f(x_{i-1})|.\]
We denote it by \(\Sigma_{\mathcal{P}}^{x,y}[f].\) Most of the time, we denote \(\Sigma_{\mathcal{P}}^{x,y}[f]\) by \(\Sigma_{\mathcal{P}}[f]\) if there is no ambiguity.
**Proposition 3.1**.: _Let \(\mathbb{Y}\) be a vector lattice and let \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\) be partitions of \([x,y]\) such that \(\mathcal{P}_{1}\subset\mathcal{P}_{2}.\) Then \(\Sigma_{\mathcal{P}_{1}}[f]\leq\Sigma_{\mathcal{P}_{2}}[f].\) In particular, \(|f(y)-f(x)|\leq\Sigma_{\mathcal{P}}[f]\) for every partition \(\mathcal{P}\) of \([x,y].\)_
Proof.: Let \(\mathcal{P}_{1}=\{x=x_{0}<x_{1}<x_{2}<\cdots<x_{n-1}<x_{n}=y\}.\) Without loss of generality, we assume that \(\mathcal{P}_{2}\) contains exactly one more point than \(\mathcal{P}_{1}.\) In this case, we have \(\mathcal{P}_{2}=\{x=x_{0}<x_{1}<x_{2}<\cdots<x_{i-1}<z<x_{i}<\cdots<x_{n-1}<x_{n}=y\}.\) By Theorem 2.2, we get that
\[|f(x_{i})-f(x_{i-1})| \leq |(f(x_{i})-f(z))+(f(z)-f(x_{i-1}))|\] \[\leq |f(x_{i})-f(z)|+|f(z)-f(x_{i-1})|\]
so that \(\Sigma_{\mathcal{P}_{1}}[f]\leq\Sigma_{\mathcal{P}_{2}}[f].\)
Now, we introduce the notion of functions of bounded variation in absolutely ordered spaces.
**Definition 3.2**.: _Let \(\mathbb{Y}\) be an absolutely ordered space and \(f:[x,y]\rightarrow\mathbb{Y}\) be a function. Then \(f\) is said to be of bounded variation, if_
\[\sup\,\{\sum_{\mathcal{P}}[f]:\mathcal{P}\text{ is a partition of }[x,y]\}\]
_exists in \(\mathbb{Y}.\) If \(f\) is of bounded variation, we call_
\[\mathcal{V}(f,x,y)=\sup\,\{\sum_{\mathcal{P}}[f]:\mathcal{P}\text{ is a partition of }[x,y]\}\]
_the total variation of \(f\) and we also write:_
\[\mathcal{BV}[x,y]=\{f:[x,y]\rightarrow\mathbb{Y}\text{ is a function of bounded variation }\}.\]
_Most of the time, we denote \(\mathcal{V}(f,x,y)\) and \(\mathcal{BV}[x,y]\) by \(\mathcal{V}(f)\) and \(\mathcal{BV}\) if there is no ambiguity._
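To see the definition at work in the simplest setting (an illustrative example with \(\mathbb{Y}=\mathbb{R},\) added here), let \(f(t)=|t-1|\) on \([0,2].\) For the partition \(\mathcal{P}=\{0<1<2\}\) we get
\[\Sigma_{\mathcal{P}}[f]=|f(1)-f(0)|+|f(2)-f(1)|=1+1=2,\]
and every partition \(\mathcal{Q}\) of \([0,2]\) satisfies \(\Sigma_{\mathcal{Q}}[f]\leq 2\) because \(f\) is monotone on \([0,1]\) and on \([1,2].\) Hence \(f\in\mathcal{BV}[0,2]\) with \(\mathcal{V}(f)=2.\)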
The following result is immediate from Proposition 3.1.
**Corollary 3.3**.: _Let \(f\in\mathcal{BV}[x,y].\) Then \(|f(y)-f(x)|\leq\mathcal{V}(f).\)_
Next, we study the algebra of functions of bounded variation.
**Theorem 3.4**.: _Let \(\mathbb{Y}\) be a vector lattice and \(f,g\in\mathcal{BV}[x,y].\) Then_
1. \(f\) _is bounded._
2. \(\alpha f\) _is also of bounded variation with_ \(\mathcal{V}(\alpha f)=|\alpha|\mathcal{V}(f)\) _for any_ \(\alpha\in\mathbb{R}\)_. In particular,_ \(-f\) _is also of bounded variation with_ \(\mathcal{V}(-f)=\mathcal{V}(f).\)__
_Moreover, if \(\mathbb{Y}\) is Dedekind complete, then_
3. \(f\pm g\) _is also of bounded variation with_ \(\mathcal{V}(f\pm g)\leq\mathcal{V}(f)+\mathcal{V}(g).\)__
4. \(|f|\) _is also of bounded variation and_ \(\mathcal{V}(|f|)\leq\mathcal{V}(f).\)__
Proof.:
1. Let \(f\in\mathcal{BV}[x,y].\) For \(z\in[x,y],\) we have \[|f(z)| = |(f(z)-f(x))+f(x)|\] \[\leq |f(z)-f(x)|+|f(x)|\] \[\leq |f(y)-f(z)|+|f(z)-f(x)|+|f(x)|\] \[\leq \mathcal{V}(f)+|f(x)|.\] Hence \(f\) is bounded, as \(\mathcal{V}(f)+|f(x)|\) is a fixed element in \(\mathbb{Y}.\)
2. For any \(\alpha\in\mathbb{R}\) and any partition \(\mathcal{P}=\{x=x_{0}<x_{1}<x_{2}<\cdots<x_{n_{p}-1}<x_{n_{p}}=y\}\), we have \[\sum_{i=1}^{n_{p}}|\alpha f(x_{i})-\alpha f(x_{i-1})|=|\alpha|\sum_{i=1}^{n_{p}}|f(x_{i})-f(x_{i-1})|.\] Thus \(\alpha f\) is also of bounded variation with \(\mathcal{V}(\alpha f)=|\alpha|\mathcal{V}(f)\).
3. Let \(g\in\mathcal{BV}[x,y]\). By Theorem 2.2, we get that \[\sum_{i=1}^{n_{p}}|(f+g)(x_{i})-(f+g)(x_{i-1})| = \sum_{i=1}^{n_{p}}|(f(x_{i})-f(x_{i-1}))+(g(x_{i})-g(x_{i-1}))|\] \[\leq \sum_{i=1}^{n_{p}}(|f(x_{i})-f(x_{i-1})|+|g(x_{i})-g(x_{i-1})|)\] \[= \sum_{i=1}^{n_{p}}|f(x_{i})-f(x_{i-1})|+\sum_{i=1}^{n_{p}}|g(x_{i })-g(x_{i-1})|\] \[\leq \mathcal{V}(f)+\mathcal{V}(g).\] Thus \(f+g\) is of bounded variation with \(\mathcal{V}(f+g)\leq\mathcal{V}(f)+\mathcal{V}(g)\). Next, \(g\) is of bounded variation, by (2), we get that \(-g\) is also of bounded variation with \(\mathcal{V}(-g)=\mathcal{V}(g)\). Since \(f\) and \(-g\) are of bounded variation, we get that \(f-g=f+(-g)\) is also of bounded variation with \(\mathcal{V}(f-g)\leq\mathcal{V}(f)+\mathcal{V}(-g)=\mathcal{V}(f)+\mathcal{V}(g)\).
4. For any \(i\), we have \[|f(x_{i})| = |f(x_{i})-f(x_{i-1})+f(x_{i-1})|\] \[\leq |f(x_{i})-f(x_{i-1})|+|f(x_{i-1})|\] so that \[|f(x_{i})|-|f(x_{i-1})|\leq|f(x_{i})-f(x_{i-1})|.\] Interchanging \(x_{i}\) and \(x_{i-1}\), we also get that \(|f(x_{i-1})|-|f(x_{i})|\leq|f(x_{i-1})-f(x_{i})|\). Finally, we get that \(\pm(|f(x_{i})|-|f(x_{i-1})|)\leq|f(x_{i})-f(x_{i-1})|\). By Theorem 2.2, we have \(||f(x_{i})|-|f(x_{i-1})||\leq|f(x_{i})-f(x_{i-1})|\). Since \(f\) is a function of bounded variation and \(\sum_{i=1}^{n_{p}}||f(x_{i})|-|f(x_{i-1})||\leq\sum_{i=1}^{n_{p}}|f(x_{i})-f(x_{i-1})|\) for any partition \(\mathcal{P}\) of \([x,y]\), we conclude that \(|f|\) is also of bounded variation and \(\mathcal{V}(|f|)\leq\mathcal{V}(f)\).
The following result shows that every monotone function is a function of bounded variation.
**Proposition 3.5**.: _Let \(\mathbb{Y}\) be an absolutely ordered space and let \(f:[x,y]\rightarrow\mathbb{Y}\) be monotone. Then \(f\in\mathcal{BV}\) with \(\mathcal{V}(f)=|f(y)-f(x)|\)._
Proof.: Let \(f:[x,y]\rightarrow\mathbb{Y}\) be monotonically increasing and let \(\mathcal{P}=\{x=x_{0}<x_{1}<x_{2}<\cdots<x_{n_{P}-1}<x_{n_{P}}=y\}\) be a partition of \([x,y].\) Then
\[f(x_{i})-f(x_{i-1})\geq 0\text{ for all }i\]
so that
\[\sum_{i=1}^{n_{P}}|f(x_{i})-f(x_{i-1})| = \sum_{i=1}^{n_{P}}(f(x_{i})-f(x_{i-1}))\] \[= f(x_{n_{P}})-f(x_{0})\] \[= f(y)-f(x).\]
For any partition \(\mathcal{P},\) we get that \(\sum_{\mathcal{P}}[f]=f(y)-f(x).\) Hence \(f\in\mathcal{BV}\) and \(\mathcal{V}(f)=f(y)-f(x).\)
Next, if \(f\) is monotonically decreasing, then \(-f\) is monotonically increasing and so \(-f\in\mathcal{BV}[x,y]\) with \(\mathcal{V}(-f)=f(x)-f(y).\) Consequently, by Theorem 3.4(2), \(f\in\mathcal{BV}[x,y]\) with \(\mathcal{V}(f)=\mathcal{V}(-f)=f(x)-f(y)=-(f(y)-f(x)).\) Finally, we conclude that every monotone function \(f\) is of bounded variation with \(\mathcal{V}(f)=|f(y)-f(x)|.\)
The notion of \(|\cdot|\)-preserving maps between absolutely ordered spaces has been introduced and studied by Karn and the author in [18]. The next result shows that every \(|\cdot|\)-preserving map is a function of bounded variation.
**Corollary 3.6**.: _Let \(\mathbb{X}\) and \(\mathbb{Y}\) be absolutely ordered spaces, and \(f:\mathbb{X}\rightarrow\mathbb{Y}\) be an \(|\cdot|\)-preserving map. Then \(f:[x,y]\rightarrow\mathbb{Y}\) is of bounded variation with \(\mathcal{V}(f)=f(y)-f(x)\) for any \(x,y\in\mathbb{X}\) with \(x<y.\)_
Proof.: Assume that \(f:\mathbb{X}\rightarrow\mathbb{Y}\) is an \(|\cdot|\)-preserving map. Let \(z,w\in\mathbb{X}\) with \(z<w.\) Then \(f(w)-f(z)=f(w-z)=f(|w-z|)=|f(w-z)|\geq 0\) so that \(f(w)\geq f(z).\) Thus \(f:[x,y]\rightarrow\mathbb{Y}\) is monotonically increasing for any \(x,y\in\mathbb{X}\) with \(x<y.\) By Proposition 3.5, we get that \(f\in\mathcal{BV}[x,y]\) with \(\mathcal{V}(f)=|f(y)-f(x)|=f(y)-f(x).\)
Now, we prove one of the main theorems of this paper, which shows that every function of bounded variation remains a function of bounded variation on sub-intervals.
**Theorem 3.7**.: _Let \(\mathbb{Y}\) be a vector lattice which is Dedekind complete and \(x\leq z\leq y.\) Then \(f\in\mathcal{BV}[x,y]\) if and only if \(f\in\mathcal{BV}[x,z]\) and \(f\in\mathcal{BV}[z,y].\) In this case, \(\mathcal{V}(f,x,y)=\mathcal{V}(f,x,z)+\mathcal{V}(f,z,y).\)_
Proof.: First assume that \(f:[x,y]\rightarrow\mathbb{Y}\) is a function of bounded variation. Let \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\) be partitions of \([x,z]\) and \([z,y]\) respectively. Then \(\mathcal{P}=\mathcal{P}_{1}\cup\mathcal{P}_{2}\) is a partition of \([x,y].\) We have
\[\sum_{\mathcal{P}_{1}}[f]+\sum_{\mathcal{P}_{2}}[f]=\sum_{\mathcal{P}}[f]\leq\mathcal{V}(f,x,y)\]
so that
\[\sum_{\mathcal{P}_{1}}[f]\leq\mathcal{V}(f,x,y)\text{ and }\sum_{\mathcal{P}_{2}}[f]\leq\mathcal{V}(f,x,y).\]
Thus \(f\in\mathcal{BV}[x,z]\) and \(f\in\mathcal{BV}[z,y]\) with \(\mathcal{V}(f,x,z)+\mathcal{V}(f,z,y)\leq\mathcal{V}(f,x,y)\).
Conversely assume that \(f\in\mathcal{BV}[x,z]\) and \(f\in\mathcal{BV}[z,y].\) Let \(\mathcal{P}\) be a partition of \([x,y].\) Put \(\mathcal{P}^{*}=\mathcal{P}\cup\{z\}.\) Then \(\mathcal{P}^{*}\) is also a partition of \([x,y]\) and there exist \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\) partitions of \([x,z]\) and \([z,y]\) respectively such that \(\mathcal{P}^{*}=\mathcal{P}_{1}\cup\mathcal{P}_{2}.\) By Proposition 3.1, we have
\[\sum_{\mathcal{P}}[f]\leq\sum_{\mathcal{P}^{*}}[f]=\sum_{\mathcal{P}_{1}}[f]+ \sum_{\mathcal{P}_{2}}[f]\leq\mathcal{V}(f,x,z)+\mathcal{V}(f,z,y).\]
Thus \(f\in\mathcal{BV}[x,y]\) with \(\mathcal{V}(f,x,y)\leq\mathcal{V}(f,x,z)+\mathcal{V}(f,z,y).\) Hence, in this case, we get that \(\mathcal{V}(f,x,y)=\mathcal{V}(f,x,z)+\mathcal{V}(f,z,y).\)
The next result characterizes all functions of bounded variation with zero total variation.
**Lemma 3.8**.: _Let \(f\in\mathcal{BV}[x,y].\) Then \(f\) is constant if and only if \(\mathcal{V}(f)=0.\)_
Proof.: Assume that \(f\) is constant. For any partition \(\mathcal{P}\) of \([x,y],\) we get that \(\sum_{i=1}^{n_{p}}|f(x_{i})-f(x_{i-1})|=0\) so that \(\mathcal{V}(f)=0.\) Conversely assume that \(\mathcal{V}(f)=0.\) By Proposition 3.1, we have \(0\leq|f(z)-f(x)|\leq\mathcal{V}(f)=0\) for all \(z\in[x,y].\) Then \(|f(z)-f(x)|=0\) for all \(z\in[x,y].\) In this case, \(f(z)=f(x)\) for all \(z\in[x,y].\) Thus \(f\) is a constant function.
## 4. Norms on functions of bounded variations
In this section, we show that the collection of functions of bounded variation forms ordered normed spaces.
Let \(\mathbb{Y}\) be an absolutely ordered space and \(f:[x,y]\rightarrow\mathbb{Y}\) be a function. For any partition \(\mathcal{P}\) of \([x,y],\) we write: \(\Sigma_{\mathcal{P}}^{+}[f]=\sum_{i=1}^{n_{p}}[f(x_{i})-f(x_{i-1})]^{+}\) and \(\Sigma_{\mathcal{P}}^{-}[f]=\sum_{i=1}^{n_{p}}[f(x_{i})-f(x_{i-1})]^{-}.\) If \(\sup\left\{\sum_{\mathcal{P}}^{+}[f]:\mathcal{P}\text{ is a partition of }[x,y]\right\}\) and \(\sup\left\{\sum_{\mathcal{P}}^{-}[f]:\mathcal{P}\text{ is a partition of }[x,y]\right\}\) exist, we write
\[\mathcal{V}^{+}(f)=\sup\left\{\sum_{\mathcal{P}}^{+}[f]:\mathcal{P}\text{ is a partition of }[x,y]\right\}\]
and
\[\mathcal{V}^{-}(f)=\sup\left\{\sum_{\mathcal{P}}^{-}[f]:\mathcal{P}\text{ is a partition of }[x,y]\right\}.\]
**Proposition 4.1**.: _Let \(\mathbb{Y}\) be an absolutely ordered space and \(f:[x,y]\rightarrow\mathbb{Y}\) be a function. Then_
1. _For any partition_ \(\mathcal{P}\) _of_ \([x,y],\) _we have:_ \(\Sigma_{\mathcal{P}}[f]=\Sigma_{\mathcal{P}}^{+}[f]+\Sigma_{\mathcal{P}}^{-}[f]\) _and_ \(f(y)-f(x)=\Sigma_{\mathcal{P}}^{+}[f]-\Sigma_{\mathcal{P}}^{-}[f].\)__
_If \(\mathbb{Y}\) is a vector lattice, then_
2. _For any partitions_ \(\mathcal{P}_{1}\) _and_ \(\mathcal{P}_{2}\) _of_ \([x,y]\) _such that_ \(\mathcal{P}_{1}\subset\mathcal{P}_{2},\) _we have:_ \(\sum_{\mathcal{P}_{1}}^{\pm}[f]\leq\sum_{\mathcal{P}_{2}}^{\pm}[f].\)__
_Moreover, if \(\mathbb{Y}\) is a vector lattice which is Dedekind complete and \(f\in\mathcal{BV}[x,y],\) then_
3. \(\mathcal{V}(f)=\mathcal{V}^{+}(f)+\mathcal{V}^{-}(f).\)
Proof.:
1. For any partition \(\mathcal{P}\) of \([x,y]\), let \(f(x_{i})-f(x_{i-1})=[f(x_{i})-f(x_{i-1})]^{+}-[f(x_{i})-f(x_{i-1})]^{-}\) be the orthogonal decomposition of \(f\) for the sub-interval \([x_{i-1},x_{i}]\) for each \(i.\) Then \(|f(x_{i})-f(x_{i-1})|=[f(x_{i})-f(x_{i-1})]^{+}+[f(x_{i})-f(x_{i-1})]^{-}\) and consequently, we have: \[\Sigma_{\mathcal{P}}^{+}[f]+\Sigma_{\mathcal{P}}^{-}[f] = \sum_{i=1}^{n_{\mathcal{P}}}[f(x_{i})-f(x_{i-1})]^{+}+\sum_{i=1}^{n_{\mathcal{P}}}[f(x_{i})-f(x_{i-1})]^{-}\] \[= \sum_{i=1}^{n_{\mathcal{P}}}\{[f(x_{i})-f(x_{i-1})]^{+}+[f(x_{i})-f(x_{i-1})]^{-}\}\] \[= \sum_{i=1}^{n_{\mathcal{P}}}|f(x_{i})-f(x_{i-1})|\] \[= \Sigma_{\mathcal{P}}[f]\] and \[\Sigma_{\mathcal{P}}^{+}[f]-\Sigma_{\mathcal{P}}^{-}[f] = \sum_{i=1}^{n_{\mathcal{P}}}[f(x_{i})-f(x_{i-1})]^{+}-\sum_{i=1}^{n_{\mathcal{P}}}[f(x_{i})-f(x_{i-1})]^{-}\] \[= \sum_{i=1}^{n_{\mathcal{P}}}\{[f(x_{i})-f(x_{i-1})]^{+}-[f(x_{i})-f(x_{i-1})]^{-}\}\] \[= \sum_{i=1}^{n_{\mathcal{P}}}(f(x_{i})-f(x_{i-1}))\] \[= f(y)-f(x).\]
2. Let \(\mathcal{P}_{1}=\{x=x_{0}<x_{1}<x_{2}<\cdots<x_{n-1}<x_{n}=y\}.\) Without loss of generality, we assume that \(\mathcal{P}_{2}\) contains exactly one more point than \(\mathcal{P}_{1}.\) In this case, we have \(\mathcal{P}_{2}=\{x=x_{0}<x_{1}<x_{2}<\cdots<x_{i-1}<z<x_{i}<\cdots<x_{n-1}<x_{n}=y\}.\) Then \[(f(x_{i})-f(x_{i-1}))^{\pm} = \frac{1}{2}(|f(x_{i})-f(x_{i-1})|\pm(f(x_{i})-f(x_{i-1})))\] \[\leq \frac{1}{2}(|f(z)-f(x_{i-1})|\pm(f(z)-f(x_{i-1})))+\frac{1}{2}(|f(x_{i})-f(z)|\pm(f(x_{i})-f(z)))\] \[= (f(z)-f(x_{i-1}))^{\pm}+(f(x_{i})-f(z))^{\pm}\] so that \(\sum_{\mathcal{P}_{1}}^{\pm}[f]\leq\sum_{\mathcal{P}_{2}}^{\pm}[f].\)
3. By (1), we have \(\Sigma_{\mathcal{P}}[f]=\Sigma_{\mathcal{P}}^{+}[f]+\Sigma_{\mathcal{P}}^{-}[f]\) for every partition \(\mathcal{P}\) of \([x,y].\) Thus \(\Sigma_{\mathcal{P}}[f]\leq\mathcal{V}^{+}(f)+\mathcal{V}^{-}(f)\) so that \(\mathcal{V}(f)\leq\mathcal{V}^{+}(f)+\mathcal{V}^{-}(f).\) Let \(\mathcal{P}_{1}\) and \(\mathcal{P}_{2}\) be partitions of \([x,y].\) Put \(\mathcal{P}=\mathcal{P}_{1}\cup\mathcal{P}_{2}.\) By (1) and (2), we get that \(\Sigma_{\mathcal{P}_{1}}^{+}[f]+\Sigma_{\mathcal{P}_{2}}^{-}[f]\leq\Sigma_{ \mathcal{P}}^{+}[f]+\Sigma_{\mathcal{P}}^{-}[f]=\Sigma_{\mathcal{P}}[f].\) Then \(\Sigma_{\mathcal{P}_{1}}^{+}[f]+\Sigma_{\mathcal{P}_{2}}^{-}[f]\leq\mathcal{V} (f)\) so that \(\mathcal{V}^{+}(f)+\mathcal{V}^{-}(f)\leq\mathcal{V}(f).\) Hence \(\mathcal{V}(f)=\mathcal{V}^{+}(f)+\mathcal{V}^{-}(f).\)
Now, we define the notion of variation function corresponding to a function of bounded variation.
**Definition 4.2**.: _Let \(\mathbb{Y}\) be a vector lattice which is Dedekind complete and \(f\in\mathcal{BV}.\) By Theorem 3.7, we define a function \(\mathcal{V}_{f}:[x,y]\rightarrow\mathbb{Y}\) such that \(\mathcal{V}_{f}(z)=\mathcal{V}(f,x,z)\) for each \(z\in[x,y].\) We call this function the variation function of \(f.\)_
Let's study some properties of the variation function.
**Theorem 4.3**.: _Let \(\mathbb{Y}\) be a vector lattice which is Dedekind complete and \(f\in\mathcal{BV}.\) Then the following statements hold:_
1. \(\mathcal{V}_{f}\) _is monotonically increasing such that_ \(\mathcal{V}_{f}(x)=0.\)__
2. \(\mathcal{V}_{f}(z)\geq|f(z)-f(x)|.\)__
3. \(\mathcal{V}_{f}=f\) _for every monotonically increasing function_ \(f\) _such that_ \(f(x)=0.\)__
_For \(\mathcal{V}_{f}^{\pm}(z)=\frac{1}{2}(\mathcal{V}_{f}(z)\pm(f(z)-f(x))),\) we also have:_
4. \(\mathcal{V}_{f}^{\pm}(z)\geq 0.\)__
5. \(\mathcal{V}_{f}^{\pm}\) _are monotonically increasing._
6. \(\mathcal{V}_{f}=\mathcal{V}_{f}^{+}+\mathcal{V}_{f}^{-}.\)__
7. \(f=f(x)+\mathcal{V}_{f}^{+}-\mathcal{V}_{f}^{-}.\)__
Proof.: It is routine to verify (6) and (7). Next, we prove the other statements.
1. By Theorem 3.7, we have \(\mathcal{V}(f,x,z_{2})=\mathcal{V}(f,x,z_{1})+\mathcal{V}(f,z_{1},z_{2})\) for \(z_{2}\geq z_{1}.\) Since \(\mathcal{V}(f,z_{1},z_{2})\geq 0,\) we get that \(\mathcal{V}_{f}(z_{2})\geq\mathcal{V}_{f}(z_{1})\) for \(z_{2}\geq z_{1}.\) Thus \(\mathcal{V}_{f}\) is monotonically increasing.
2. By Theorem 3.7, we have \(f\in\mathcal{BV}[x,z]\) for every \(z\in[x,y].\) Now by Corollary 3.3, we get that \(\mathcal{V}_{f}(z)\geq|f(z)-f(x)|.\)
3. Let \(f\) be monotonically increasing and \(f(x)=0.\) By Proposition 3.5, we get that \(\mathcal{V}_{f}(z)=f(z)-f(x)=f(z)\) for all \(z\in[x,y].\) Thus \(\mathcal{V}_{f}=f.\)
4. For each \(z\in[x,y],\) we have \(\pm(f(z)-f(x))\leq|f(z)-f(x)|\leq\mathcal{V}_{f}(z).\) Thus \(0\leq\mathcal{V}_{f}^{\pm}(z).\)
5. Let \(z_{1},z_{2}\in[x,y]\) such that \(z_{2}\geq z_{1}.\) Then \(2[\mathcal{V}_{f}^{\pm}(z_{2})-\mathcal{V}_{f}^{\pm}(z_{1})]=\mathcal{V}_{f}( z_{2})-\mathcal{V}_{f}(z_{1})\pm[f(z_{2})-f(z_{1})]=\mathcal{V}(f,z_{1},z_{2})\pm[ f(z_{2})-f(z_{1})]\geq 0.\) Thus \(\mathcal{V}_{f}^{\pm}\) are monotonically increasing.
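Continuing the illustrative scalar example \(f(t)=|t-1|\) on \([0,2]\) (with \(\mathbb{Y}=\mathbb{R}\)), one computes \(\mathcal{V}_{f}(z)=z\) for every \(z\in[0,2],\) so that
\[\mathcal{V}_{f}^{+}(z)=\tfrac{1}{2}\big(z+|z-1|-1\big)\quad\text{and}\quad\mathcal{V}_{f}^{-}(z)=\tfrac{1}{2}\big(z-|z-1|+1\big).\]
Both are monotonically increasing, \(\mathcal{V}_{f}^{+}(z)+\mathcal{V}_{f}^{-}(z)=\mathcal{V}_{f}(z),\) and \(f(z)=f(0)+\mathcal{V}_{f}^{+}(z)-\mathcal{V}_{f}^{-}(z)\) for every \(z,\) in accordance with the theorem.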
**Corollary 4.4**.: _Let \(\mathbb{Y}\) be a vector lattice which is dedekind complete and \(f,g\in\mathcal{BV}.\) Then \(\pm(\mathcal{V}_{f}-\mathcal{V}_{g})\leq|\mathcal{V}_{f}-\mathcal{V}_{g}|\leq \mathcal{V}_{f\pm g}\leq\mathcal{V}_{f}+\mathcal{V}_{g}.\) In particular, we have:_
1. \(\pm(\mathcal{V}(f)-\mathcal{V}(g))\leq|\mathcal{V}(f)-\mathcal{V}(g)|\leq \mathcal{V}(f\pm g).\)__
2. \(\pm(\mathcal{V}_{f}-\mathcal{V}_{g})\leq|\mathcal{V}_{f}-\mathcal{V}_{g}|\leq \mathcal{V}_{\mathcal{V}_{f}\pm\mathcal{V}_{g}}\leq\mathcal{V}_{f}+\mathcal{V}_ {g}.\)__
Proof.: By Theorems 3.4 and 3.7, we have \(\mathcal{V}_{f}(z)=\mathcal{V}(f,x,z)\leq\mathcal{V}(f\pm g,x,z)+\mathcal{V}(\mp g,x,z)=\mathcal{V}_{f\pm g}(z)+\mathcal{V}_{g}(z)\leq\mathcal{V}_{f}(z)+\mathcal{V}_{g}(z).\) Then \(\mathcal{V}_{f}(z)-\mathcal{V}_{g}(z)\leq\mathcal{V}_{f\pm g}(z)\leq\mathcal{V}_{f}(z)+\mathcal{V}_{g}(z).\) Interchanging \(f\) and \(g,\) we also have \(\mathcal{V}_{g}(z)-\mathcal{V}_{f}(z)\leq\mathcal{V}_{f\pm g}(z)\leq\mathcal{V}_{f}(z)+\mathcal{V}_{g}(z).\) Thus \(\pm(\mathcal{V}_{f}(z)-\mathcal{V}_{g}(z))\leq\mathcal{V}_{f\pm g}(z)\leq\mathcal{V}_{f}(z)+\mathcal{V}_{g}(z).\) Then \(\pm(\mathcal{V}_{f}(z)-\mathcal{V}_{g}(z))\leq|\mathcal{V}_{f}(z)-\mathcal{V}_{g}(z)|\leq\mathcal{V}_{f\pm g}(z)\leq\mathcal{V}_{f}(z)+\mathcal{V}_{g}(z)\) so that \(\pm(\mathcal{V}_{f}-\mathcal{V}_{g})\leq|\mathcal{V}_{f}-\mathcal{V}_{g}|\leq\mathcal{V}_{f\pm g}\leq\mathcal{V}_{f}+\mathcal{V}_{g}.\) Putting \(z=y,\) we immediately get (1). By Theorem 4.3(1), \(\mathcal{V}_{f}\) and \(\mathcal{V}_{g}\) are monotonically increasing functions such
that \(\mathcal{V}_{f}(x)=0\) and \(\mathcal{V}_{g}(x)=0.\) Using Theorem 4.3(3) and replacing \(f\) and \(g\) by \(\mathcal{V}_{f}\) and \(\mathcal{V}_{g}\) respectively, we also get (2).
Next, we show that \(\mathcal{BV}\) forms an ordered normed space under the order structure of \(\mathbb{Y}.\)
**Theorem 4.5**.: _Let \(\mathbb{Y}\) be a vector lattice which is Dedekind complete and \(\mathcal{BV}^{+}=\{f\in\mathcal{BV}:f([x,y])\subseteq\mathbb{Y}^{+}\}.\) Put \(f_{0}=e,\) the constant function on \([x,y]\) whose value is the order unit \(e\) of \(\mathbb{Y}.\) Then_
1. \((\mathcal{BV},\mathcal{BV}^{+})\) _forms an ordered space._
2. \((\mathcal{BV},\mathcal{BV}^{+},f_{0})\) _forms an order unit space._
_Moreover, for each \(z\in[x,y]\) and \(f\in\mathcal{BV},\) we write \(|f|(z)=|f(z)|\) and define \(|\cdot|:\mathcal{BV}\to\mathcal{BV}^{+}\) given by \(f\mapsto|f|.\) Then_
3. \((\mathcal{BV},\mathcal{BV}^{+},|\cdot|)\) _forms an absolutely ordered space._
_Moreover, if \(\mathbb{Y}\) also forms an \(AM\)-space, then_
4. \((\mathcal{BV},\mathcal{BV}^{+},f_{0},|\cdot|)\) _also forms an_ \(AM\)_-space. In fact, in this case_ \(\perp=\perp_{\infty}^{a}\) _holds on_ \(\mathcal{BV}^{+}\) _so that it becomes an absolute order unit space._
Proof.:
1. By Theorem 3.4(2) and (3), we get that \(\mathcal{BV}\) is a vector space. Next, let \(f,g\in\mathcal{BV}^{+}.\) Then \((f+g)[x,y]=f([x,y])+g([x,y])\subseteq\mathbb{Y}^{+}\) so that \(f+g\in\mathcal{BV}^{+}.\) Thus \((\mathcal{BV},\mathcal{BV}^{+})\) forms an ordered space.
2. let \(f\in\mathcal{BV}.\) By Theorem 3.4(1), there exists \(w\in\mathbb{Y}^{+}\) such that \(\pm f(z)\leq|f(z)|\leq w.\) Since \(w\leq\|w\|e,\) we get that \(\|w\|e\pm f(z)\geq 0\) for all \(z\in[x,y].\) Then \((\|w\|f_{0}\pm f)([x,y])\subseteq\mathbb{Y}^{+}.\) Thus \(f_{0}=e\) is order unit for \((\mathcal{BV},\mathcal{BV}^{+}).\) Next, assume that \(\pm f\in\mathcal{BV}^{+}.\) Then \(\pm f(z)\in\mathbb{Y}^{+}\) for all \(z\in[x,y].\) As \(\mathbb{Y}^{+}\) is proper, we get that \(f(z)=0\) for every \(z\in[x,y].\) Thus \(f=0\) so that \(\mathcal{BV}^{+}\) is proper. Finally, assume that \(\epsilon f_{0}+f\in\mathcal{BV}^{+}\) for all \(\epsilon>0.\) For each fixed \(z\in[x,y],\) we have \(\epsilon e+f(z)\in\mathbb{Y}^{+}\) for all \(\epsilon>0.\) Since \(\mathbb{Y}^{+}\) is Archimedean, we get that \(f(z)\in\mathbb{Y}^{+}\) for every \(z\in[x,y].\) Thus \(f\in\mathcal{BV}^{+}\) so that \(\mathcal{BV}^{+}\) is Archimedean. Hence \((\mathcal{BV},\mathcal{BV}^{+},f_{0})\) forms an order unit space. Since \(\mathcal{BV}^{+}\) is proper and Archimedean, we get that the order unit \(f_{0}\) defines a norm on \(\mathcal{BV}\) in the following way: \[\|f\|_{0} = \inf\{\epsilon>0:\epsilon f_{0}\pm f\in\mathcal{BV}^{+}\}\] \[= \inf\{\epsilon>0:\epsilon e\pm f\in\mathcal{BV}^{+}\}\] \[= \inf\{\epsilon>0:\epsilon e\pm f(z)\in\mathbb{Y}^{+}\text{ for all }z \in[x,y]\}\] \[= \inf\{\epsilon>0:\epsilon\geq\|f(z)\|\text{ for all }z\in[x,y]\}\] \[= \inf\{\epsilon>0:\epsilon\geq\max_{z\in[x,y]}\|f(z)\|\}\] \[= \inf\{\epsilon>0:\epsilon\geq\|f\|_{\infty}\}\] \[= \|f\|_{\infty}.\]
3. By Theorem 3.4(4), the map \(f\mapsto|f|\) is well defined from \(\mathcal{BV}\) to \(\mathcal{BV}^{+}.\) Let \(f,g,h\in\mathcal{BV}\) and \(z\in[x,y].\) Then 1. Let \(f\in\mathcal{BV}^{+}.\) Then \(f(z)\in\mathbb{Y}^{+}\) so that \(|f(z)|=f(z).\) Thus \(|f|=f.\) 2. \(|f|(z)\pm f(z)=|f(z)|\pm f(z)\in\mathbb{Y}^{+},\) we get that \(|f|\pm f\in\mathcal{BV}^{+}.\) 3. \(|\alpha f|(z)=|\alpha f(z)|=|\alpha||f(z)|=|\alpha||f|(z),\) we get that \(|\alpha f|=|\alpha||f|.\)
* Let \(f,g,h\in\mathcal{BV}^{+}\) such that \(|f-g|=f+g\) and \(g-h\in\mathcal{BV}^{+}.\) Then \(|f(z)-g(z)|=f(z)+g(z)\) and \(0\leq h(z)\leq g(z).\) Thus \(|f(z)-h(z)|=f(z)+h(z)\) so that \(|f-h|=f+h.\)
* Let \(f,g,h\in\mathcal{BV}^{+}\) such that \(|f-g|=f+g\) and \(|f-h|=f+h.\) Then \(|f(z)-g(z)|=f(z)+g(z)\) and \(|f(z)-h(z)|=f(z)+h(z).\) Thus \(|f(z)-|g(z)\pm h(z)||=f(z)+|g(z)\pm h(z)|\) so that \(|f-|g\pm h||=f+|g\pm h|.\) Thus \((\mathcal{BV},\mathcal{BV}^{+},|\cdot|)\) forms an absolutely ordered space.
* Finally assume that \(\mathbb{Y}\) is an \(AM\)-space. Let \(f\) and \(g\in\mathcal{BV}.\) Observe that \(\||f|\|_{0}=\|f\|_{0}.\) Without loss of generality, assume that \(f\) and \(g\in\mathcal{BV}^{+}.\) If \(f\leq g,\) then \(f(z)\leq g(z)\leq\|g\|_{\infty}e\) so that \(\|f\|_{0}\leq\|g\|_{0}.\) Next, we also have the following: \[\|f\dot{\vee}g\|_{0} = \|f\dot{\vee}g\|_{\infty}\] \[= \max_{z\in[x,y]}\|(f\dot{\vee}g)(z)\|\] \[= \max_{z\in[x,y]}\|f(z)\dot{\vee}g(z)\|\] \[\leq \max_{z\in[x,y]}\{\max\left\{\|f(z)\|,\|g(z)\|\right\}\}\] \[= \max\{\|f\|_{\infty},\|g\|_{\infty}\}\] \[= \max\{\|f\|_{0},\|g\|_{0}\}.\] Since \(f,g\leq f\dot{\vee}g,\) the reverse inequality also holds, so that \(\|f\dot{\vee}g\|_{0}=\max\{\|f\|_{0},\|g\|_{0}\}.\) Thus \(\mathcal{BV}\) also forms an \(AM\)-space. By the Kakutani Theorem (see [13]), we conclude that \(\mathcal{BV}\cong C(K,\mathbb{R})\) for some compact Hausdorff space \(K.\) Hence \(\mathcal{BV}\) forms an absolute order unit space.
In the next result, we induce a new order structure on \(\mathcal{BV}.\)
**Theorem 4.6**.: _Let \(\mathbb{Y}\) be a vector lattice which is Dedekind complete. Put \(\mathcal{BV}_{0}^{+}=\{f\in\mathcal{BV}^{+}:f\text{ is monotonically increasing}\}.\) Then_
1. \((\mathcal{BV},\mathcal{BV}_{0}^{+})\) _forms an ordered space._
_For \(f\) and \(g\in\mathcal{BV},\) by \(f\leq_{0}g,\) we mean that \(g-f\in\mathcal{BV}_{0}^{+}\) and we also define \(|\cdot|_{\mathcal{V}}:\mathcal{BV}\rightarrow\mathcal{BV}_{0}^{+}\) given by \(f\mapsto|f(x)|+\mathcal{V}_{f}.\) Then_
2. \(|f|_{\mathcal{V}}=f\) _for every_ \(f\in\mathcal{BV}_{0}^{+}.\)__
3. \(|f|_{\mathcal{V}}\pm f\in\mathcal{BV}_{0}^{+}\) _for every_ \(f\in\mathcal{BV}.\)__
4. \(|\alpha f|_{\mathcal{V}}=|\alpha||f|_{\mathcal{V}}\) _for every_ \(f\in\mathcal{BV}\) _and_ \(\alpha\in\mathbb{R}.\)__
5. _For_ \(f,g\) _and_ \(h\in\mathcal{BV}_{0}^{+}\) _such that_ \(|f-g|_{\mathcal{V}}=f+g\) _and_ \(g-h\in\mathcal{BV}_{0}^{+},\) _we have_ \(|f-h|_{\mathcal{V}}=f+h.\)__
6. \(|f\pm g|_{\mathcal{V}}\leq|f|_{\mathcal{V}}+|g|_{\mathcal{V}}\) _for every pair_ \(f\) _and_ \(g\in\mathcal{BV}.\)__
_For every pair \(f\) and \(g\in\mathcal{BV},\) we write: \(f\dot{\vee}g:=\frac{1}{2}(f+g+|f-g|_{\mathcal{V}})\) and \(f\dot{\wedge}g:=-\{(-f)\dot{\vee}(-g)\}=\frac{1}{2}(f+g-|f-g|_{\mathcal{V}}).\) Then_
7. \(f\dot{\wedge}g\leq_{0}f,g\leq_{0}f\dot{\vee}g.\)__
Proof.:
* Let \(f,g\in\mathcal{BV}_{0}^{+}.\) Then \(f+g\) is also monotonically increasing and \(f+g\in\mathcal{BV}^{+}\) so that \(f+g\in\mathcal{BV}_{0}^{+}.\) Thus \((\mathcal{BV},\mathcal{BV}_{0}^{+})\) forms an ordered space.
By Theorem 4.3(1), the map \(f\mapsto|f(x)|+\mathcal{V}_{f}\) is well defined from \(\mathcal{BV}\) to \(\mathcal{BV}_{0}^{+}\).
1. Let \(f\in\mathcal{BV}_{0}^{+}\). Then \(f\) is monotonically increasing and \(|f(x)|=f(x)\). By Proposition 3.5, we get that \(\mathcal{V}_{f}(z)=\mathcal{V}(f,x,z)=f(z)-f(x)\). Thus \(|f|_{\mathcal{V}}=|f(x)|+\mathcal{V}_{f}=f\).
2. By Theorem 4.3(2) and (4), we get that \[|f|_{\mathcal{V}}(z)\pm f(z) = |f(x)|+\mathcal{V}_{f}(z)\pm f(z)\] \[= \mathcal{V}_{f}(z)\pm f(z)\mp f(x)+|f(x)|\pm f(x)\] \[= (\mathcal{V}_{f}(z)\pm(f(z)-f(x)))+(|f(x)|\pm f(x))\] \[= 2\mathcal{V}_{f}^{\pm}(z)+(|f(x)|\pm f(x))\] \[\geq 0\] for every \(f\in\mathcal{BV}\) and \(z\in[x,y]\). By Theorem 4.3(5), we conclude that \(|f|_{\mathcal{V}}\pm f\in\mathcal{BV}_{0}^{+}\) for all \(f\in\mathcal{BV}\).
3. By Theorem 3.4(2), we have \[|\alpha f|_{\mathcal{V}} = |\alpha f(x)|+\mathcal{V}(\alpha f)\] \[= |\alpha||f(x)|+|\alpha|\mathcal{V}(f)\] \[= |\alpha|(|f(x)|+\mathcal{V}(f))\] \[= |\alpha||f|_{\mathcal{V}}\] for every \(f\in\mathcal{BV}\) and \(\alpha\in\mathbb{R}\).
Next, let \(f,g\) and \(h\in\mathcal{BV}_{0}^{+}\).
1. Assume that \(g-h\in\mathcal{BV}_{0}^{+}\) and \(|f-g|_{\mathcal{V}}=f+g\). Then \(|f(x)-g(x)|+\mathcal{V}_{f-g}=f+g\). Since \(\mathcal{V}_{f}(x)=0\), we have \(|f(x)-g(x)|=f(x)+g(x)\). By Proposition 3.5, we get that \(\mathcal{V}_{f-g}=\mathcal{V}_{f}+\mathcal{V}_{g}\). As \(g-h\in\mathcal{BV}_{0}^{+}\), again by Proposition 3.5, we also get that \(\mathcal{V}_{g-h}=(g-h)-(g(x)-h(x))=\mathcal{V}_{g}-\mathcal{V}_{h}\). Now, by Corollary 4.4, it turns out that \(\mathcal{V}_{f}+\mathcal{V}_{h}\geq\mathcal{V}_{f-h}\geq\mathcal{V}_{f-g}- \mathcal{V}_{g-h}=\mathcal{V}_{f}+\mathcal{V}_{h}\). Thus \(\mathcal{V}_{f-h}=\mathcal{V}_{f}+\mathcal{V}_{h}\). For \(0\leq h(x)\leq g(x)\), we also have \(|f(x)-h(x)|=f(x)+h(x)\). Finally, we conclude that \(|f-h|_{\mathcal{V}}=f+h\).
2. Again by Corollary 4.4, we get that \[|f\pm g|_{\mathcal{V}} = |f(x)\pm g(x)|+\mathcal{V}_{f\pm g}\] \[\leq |f(x)|+|g(x)|+\mathcal{V}_{f}+\mathcal{V}_{g}\] \[= |f|_{\mathcal{V}}+|g|_{\mathcal{V}}.\]
3. By (3), we have \(f\dot{\vee}g-f=\frac{1}{2}(|f-g|_{\mathcal{V}}-(f-g))\in\mathcal{BV}_{0}^{+}\) and \(f-f\dot{\wedge}g=\frac{1}{2}(|f-g|_{\mathcal{V}}+(f-g))\in\mathcal{BV}_{0}^{+}\) so that \(f\dot{\wedge}g\leq_{0}f\leq_{0}f\dot{\vee}g\). Since \(f\dot{\vee}g=g\dot{\vee}f\) and \(f\dot{\wedge}g=g\dot{\wedge}f\), we also get that \(f\dot{\wedge}g\leq_{0}g\leq_{0}f\dot{\vee}g\).
The following result states that under the new ordering \(\mathcal{BV}\) also forms an ordered normed space.
**Corollary 4.7**.: _Given \(f\in\mathcal{BV},\) we write: \(\|f\|=\inf_{g\in\mathcal{BV}_{0}^{+}}\{\|g\|_{\infty}:g\pm f\in\mathcal{BV}_{0}^{+}\}.\) Then \((\mathcal{BV},\|\cdot\|)\) is an ordered normed space._
Finally, we induce another norm on \(\mathcal{BV}\) via the new ordering. Under a certain condition, this norm turns out to be a complete norm.
**Theorem 4.8**.: _Let \(\mathbb{Y}\) be an absolute order unit space having vector lattice structure which is Dedekind complete. For each \(f\in\mathcal{BV},\) we write: \(\|f\|_{\mathcal{BV}}=\||f(x)|+\mathcal{V}(f)\|.\) Then \((\mathcal{BV},\|\cdot\|_{\mathcal{BV}})\) forms a normed space. Moreover, we have_
1. \(\|f\|_{\mathcal{BV}}=\||f(x)|+\mathcal{V}_{f}\|_{0}.\)__
2. \(\|f\|_{0}\leq\|f\|_{\mathcal{BV}}.\)__
3. _If_ \(\mathbb{Y}\) _is order complete, then_ \((\mathcal{BV},\|\cdot\|_{\mathcal{BV}})\) _is complete._
4. _If_ \(|f|_{\mathcal{V}}\leq|g|_{\mathcal{V}},\) _then_ \(\|f\|_{\mathcal{BV}}\leq\|g\|_{\mathcal{BV}}.\)__
5. \(\|f\dot{\vee}g\|_{0}\leq\|f\|_{\mathcal{BV}}+\|g\|_{\mathcal{BV}}\) _for all_ \(f,g\in\mathcal{BV}^{+}.\)__
_In this case, \((\mathcal{BV},\|\cdot\|_{\mathcal{BV}})\) forms an ordered normed space._
Proof.: Let \(f,g\in\mathcal{BV}\) and \(\alpha\in\mathbb{R}.\) By Lemma 3.8, we have \(f=0\) if and only if \(f(x)=0\) and \(\mathcal{V}(f)=0,\) which holds if and only if \(\|f\|_{\mathcal{BV}}=0.\) Next, by Theorems 2.2 and 3.4(3), we get that \(|f(x)+g(x)|+\mathcal{V}(f+g)\leq|f(x)|+|g(x)|+\mathcal{V}(f)+\mathcal{V}(g).\) Then \(\|f+g\|_{\mathcal{BV}}\leq\||f(x)|+|g(x)|+\mathcal{V}(f)+\mathcal{V}(g)\|\leq\||f(x)|+\mathcal{V}(f)\|+\||g(x)|+\mathcal{V}(g)\|=\|f\|_{\mathcal{BV}}+\|g\|_{\mathcal{BV}}.\) Finally, by Theorem 3.4(2), we conclude that \(|\alpha f(x)|+\mathcal{V}(\alpha f)=|\alpha||f(x)|+|\alpha|\mathcal{V}(f)=|\alpha|(|f(x)|+\mathcal{V}(f))\) so that \(\|\alpha f\|_{\mathcal{BV}}=|\alpha|\|f\|_{\mathcal{BV}}.\) Thus \((\mathcal{BV},\|\cdot\|_{\mathcal{BV}})\) forms a normed space. Now, we prove the other properties.
1. By Theorem 4.3(1), \(\mathcal{V}_{f}\) is monotonically increasing function. For \(z_{1},z_{2}\in[x,y]\) such that \(z_{1}\leq z_{2},\) we have \(0\leq|f(x)|+\mathcal{V}_{f}(z_{1})\leq|f(x)|+\mathcal{V}_{f}(z_{2})\leq|f(x)| +\mathcal{V}_{f}(y)=|f(x)|+\mathcal{V}(f)\) so that \(\||f(x)|+\mathcal{V}_{f}(z_{1})\|\leq\||f(x)|+\mathcal{V}_{f}(z_{2})\|\leq\||f (x)|+\mathcal{V}_{f}(y)\|=\||f(x)|+\mathcal{V}(f)\|.\) Thus \(\|f\|_{\mathcal{BV}}=\||f(x)|+\mathcal{V}_{f}\|_{0}.\)
2. By Theorem 4.3(2), we have \(|f(z)|\leq|f(z)-f(x)|+|f(x)|\leq\mathcal{V}_{f}(z)+|f(x)|.\) By (1), we get that \(\|f(z)\|\leq\|\mathcal{V}_{f}(z)+|f(x)|\|\leq\|f\|_{\mathcal{BV}}.\) Thus \(\|f\|_{0}\leq\|f\|_{\mathcal{BV}}.\)
3. Let \(\{f_{n}\}\) be a Cauchy sequence in \(\mathcal{BV}.\) Then \(|f_{n}(x)-f_{m}(x)|\to 0\) and \(\mathcal{V}(f_{n}-f_{m})\to 0.\) In this case, \(|f_{n}(z)-f_{m}(z)|\to 0\) for every \(z\in[x,y].\) Define \(f(z)=\lim_{n\to\infty}f_{n}(z).\) For any partition \(\mathcal{P}\) of \([x,y],\) we have \(\sum_{i=1}^{n_{\mathcal{P}}}|f_{n}(x_{i})-f_{n}(x_{i-1})|\leq\mathcal{V}(f_{n}-f_{m})+\mathcal{V}(f_{m}).\) Since \(\{f_{n}\}\) is a Cauchy sequence, given any \(c\in\mathbb{Y}^{+},\) we can find \(m_{0}\in\mathbb{N}\) such that \(\mathcal{V}(f_{n}-f_{m})\leq c\) and \(\mathcal{V}(f_{m})\leq c\) for all \(n,m\geq m_{0}.\) In this case, we have \(\sum_{i=1}^{n_{\mathcal{P}}}|f_{n}(x_{i})-f_{n}(x_{i-1})|\leq 2c\) for all \(n\geq m_{0}.\) Letting \(n\to\infty,\) we get that \(\sum_{i=1}^{n_{\mathcal{P}}}|f(x_{i})-f(x_{i-1})|\leq 2c\) so that \(f\in\mathcal{BV}.\) Similarly, letting \(n\to\infty\) and keeping \(m\) fixed in \(\sum_{i=1}^{n_{\mathcal{P}}}|(f_{n}(x_{i})-f_{m}(x_{i}))-(f_{n}(x_{i-1})-f_{m}(x_{i-1}))|\leq\mathcal{V}(f_{n}-f_{m})\leq c,\) we
conclude that \(\sum_{i=1}^{n_{p}}|(f(x_{i})-f_{m}(x_{i}))-(f(x_{i-1})-f_{m}(x_{i-1}))|\leq c\) for all \(m\geq m_{0}.\) Thus \(f_{n}\to f\) so that \((\mathcal{BV},\|\cdot\|_{\mathcal{BV}})\) forms a complete space.
4. It is trivial to verify.
5. By Corollary 4.4, for \(f,g\in\mathcal{BV}^{+}\) and \(z\in[x,y],\) we have \[\|(f\dot{\vee}g)(z)\| = \frac{1}{2}\|f(z)+g(z)+|f(x)-g(x)|+\mathcal{V}_{f-g}(z)\|\] \[\leq \frac{1}{2}\|f(z)+g(z)+|f(x)|+|g(x)|+\mathcal{V}_{f}(z)+\mathcal{V}_{g}(z)\|\] \[\leq \frac{1}{2}(\|f(z)\|+\|g(z)\|+\||f(x)|+\mathcal{V}_{f}(z)\|+\||g(x)|+\mathcal{V}_{g}(z)\|)\] \[\leq \frac{1}{2}(\|f\|_{0}+\|g\|_{0}+\|f\|_{\mathcal{BV}}+\|g\|_{\mathcal{BV}})\] \[\leq \frac{1}{2}(\|f\|_{\mathcal{BV}}+\|g\|_{\mathcal{BV}}+\|f\|_{\mathcal{BV}}+\|g\|_{\mathcal{BV}})\] \[= \|f\|_{\mathcal{BV}}+\|g\|_{\mathcal{BV}}\] so that \(\|f\dot{\vee}g\|_{0}\leq\|f\|_{\mathcal{BV}}+\|g\|_{\mathcal{BV}}.\)
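For the illustrative scalar example \(f(t)=|t-1|\) on \([0,2]\) used earlier (with \(\mathbb{Y}=\mathbb{R}\)), we get \(\|f\|_{\mathcal{BV}}=\||f(0)|+\mathcal{V}(f)\|=|1+2|=3,\) while \(\|f\|_{0}=\sup_{z\in[0,2]}|f(z)|=1,\) which illustrates the inequality \(\|f\|_{0}\leq\|f\|_{\mathcal{BV}}\) in (2).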
**Corollary 4.9**.: _Given \(f\in\mathcal{BV},\) we write: \(\|f\|=\inf\limits_{g\in\mathcal{BV}_{0}^{+}}\{\|g\|_{\mathcal{BV}}:g\pm f\in \mathcal{BV}_{0}^{+}\}.\) Then \((\mathcal{BV},\|\cdot\|)\) is an ordered normed space._
|
2305.04505 | Target-Side Augmentation for Document-Level Machine Translation | Document-level machine translation faces the challenge of data sparsity due
to its long input length and a small amount of training data, increasing the
risk of learning spurious patterns. To address this challenge, we propose a
target-side augmentation method, introducing a data augmentation (DA) model to
generate many potential translations for each source document. Learning on
this wider range of translations, an MT model can learn a smoothed distribution,
thereby reducing the risk of data sparsity. We demonstrate that the DA model,
which estimates the posterior distribution, largely improves the MT
performance, outperforming the previous best system by 2.30 s-BLEU on News and
achieving new state-of-the-art on News and Europarl benchmarks. Our code is
available at https://github.com/baoguangsheng/target-side-augmentation. | Guangsheng Bao, Zhiyang Teng, Yue Zhang | 2023-05-08T07:01:18Z | http://arxiv.org/abs/2305.04505v2 | # Target-Side Augmentation for Document-Level Machine Translation
###### Abstract
Document-level machine translation faces the challenge of data sparsity due to its long input length and a small amount of training data, increasing the risk of learning spurious patterns. To address this challenge, we propose a target-side augmentation method, introducing a data augmentation (DA) model to generate many potential translations for each source document. Learning on this wider range of translations, an MT model can learn a smoothed distribution, thereby reducing the risk of data sparsity. We demonstrate that the DA model, which estimates the posterior distribution, largely improves the MT performance, outperforming the previous best system by 2.30 s-BLEU on News and achieving new state-of-the-art on News and Europarl benchmarks. Our code is available at [https://github.com/baoguangsheng/target-side-augmentation](https://github.com/baoguangsheng/target-side-augmentation).
## 1 Introduction
Document-level machine translation Gong et al. (2011); Hardmeier et al. (2013); Werlen et al. (2018); Maruf et al. (2019); Bao et al. (2021); Feng et al. (2022) has received increasing research attention. It addresses the limitations of sentence-level MT by considering cross-sentence co-references and discourse information, and therefore can be more useful in the practical setting. Document-level MT presents several unique technical challenges, including significantly longer inputs Bao et al. (2021) and relatively smaller training data compared to sentence-level MT Junczys-Dowmunt (2019); Liu et al. (2020); Sun et al. (2022). The combination of these challenges leads to increased data sparsity Gao et al. (2014); Koehn and Knowles (2017); Liu et al. (2020), which raises the risk of learning spurious patterns in the training data Belkin et al. (2019); Savoldi et al. (2021) and hinders generalization Li et al. (2021); Dankers et al. (2022).
To address these issues, we propose a target-side data augmentation method that aims to reduce sparsity by automatically smoothing the training distribution. The main idea is to train the document MT model with many plausible potential translations, rather than forcing it to fit a single human translation for each source document. This allows the model to learn more robust and generalizable patterns, rather than being overly reliant on features of particular training samples. Specifically, we introduce a data augmentation (DA) model to generate possible translations to guide MT model training. As shown in Figure 1, the DA model is trained to understand the relationship between the source and possible translations based on one observed translation (Step 1), and then used to sample a set of potentially plausible translations (Step 2). These translations are fed to the MT model for training, smoothing the distribution of target translations (Step 3).
We use standard document-level MT models including Transformer Vaswani et al. (2017) and G-Transformer Bao et al. (2021) for both our DA and MT models. For the DA model, in order to effectively capture a _posterior_ target distribution given a reference target, we concatenate each source sentence with a latent token sequence as the new input, where the latent tokens are sampled from the observed translation. A challenge to the DA model is that having the reference translation in the input can potentially decrease diversity. To address this issue, we introduce the intermediate latent variable on the encoder side by using rules to generate n-gram samples, so that posterior sampling Wang and Park (2020) can be leveraged to yield diverse translations.
Results on three document-level MT benchmarks demonstrate that our method significantly outperforms Transformer and G-Transformer baselines, achieving an improvement of 1.33 and 1.75 s-BLEU on average, respectively, and the state
of-the-art results on News and Europarl. Further analysis shows that high diversity among generated translations and their low deviation from the gold translation are the keys to improved performance. To our knowledge, we are the first to do _target-side_ augmentation to enrich _output_ variety for document-level machine translation.
## 2 Related Work
**Data augmentation (DA)** increases training data by synthesizing new data (Van Dyk and Meng, 2001; Shorten and Khoshgoftaar, 2019; Shorten et al., 2021; Li et al., 2022). In neural machine translation (NMT), the most commonly used data augmentation techniques are **source-side augmentations**, including easy data augmentation (EDA) (Wei and Zou, 2019), subword regularization (Kudo, 2018), and back-translation (Sennrich et al., 2016), which generates pseudo sources for monolingual targets enabling the usage of widely available monolingual data. These methods generate more source-target pairs with different silver source sentences for the same gold-target translation. On the contrary, **target-side augmentation** is more challenging, as approaches like EDA are not effective for the target side because they corrupt the target sequence, degrading the autoregressive modeling of the target language.
Previous approaches on target-side data augmentation in NMT fall into three categories. The first is based on _self-training_(Bogoychev and Sennrich, 2019; He et al., 2019; Zoph et al., 2020), which generates pseudo translations for monolingual source text using a trained model. The second category uses either a pre-trained language model (Fadaee et al., 2017; Wu et al., 2019) or a pre-trained generative model (Raffel et al., 2020; Khayrallah et al., 2020) to generate _synonyms_ for words or _paraphrases_ of the target text. The third category relies on reinforcement learning (Norouzi et al., 2016; Wang et al., 2018), introducing a reward function to evaluate the quality of translation candidates and to regularize the likelihood objective. In order to explore possible candidates, a sampling from the model distribution or random noise is used. Unlike these approaches, our method is a target-side data augmentation technique that is trained using supervised learning and does not rely on external data or large-scale pretraining. More importantly, we generate document-level instead of word, phrase, or sentence-level alternatives.
Previous target-side input augmentation (Xie et al., 2022) appears to be similar to our target-side augmentation. However, besides the literal similarity, they are quite different. Consider the token prediction \(P(y_{i}|x,y_{<i})\). The target-side input augmentation augments the condition \(y_{<i}\) to increase the model's robustness to the conditions,
Figure 1: Illustration of target-side data augmentation (DA) using a very simple example. A DA model is trained to estimate the distribution of possible translations \(y\) given a source \(x_{i}\) and an observed target \(y_{i}\), and the MT model is trained on the sampled translations \(\hat{y}_{j}\) from the DA model for each source \(x_{i}\). Effectively training the DA model with the target \(y_{i}\), which is also a conditional input, can be challenging, but it is achievable after introducing an intermediate latent variable between the translation \(y\) and the condition \(y_{i}\).
which is more like source-side augmentation on condition \(x\). In comparison, target-side augmentation augments the target \(y_{i}\), providing the model with completely new training targets.
**Paraphrase models.** Our approach generates various translations for each source text, each of which can be viewed as a paraphrase of the target. Unlike previous methods that leverage paraphrase models for improving MT (Madnani et al., 2007; Hu et al., 2019; Khayrallah et al., 2020), our DA model exploits parallel corpus and does not depend on external paraphrase data, similar to Thompson and Post (2020). Instead, it takes into account the source text when modeling the target distribution. More importantly, while most paraphrase models operate at the sentence level, our DA model can generate translations at the document level.
**Conditional auto-encoder.** The DA model can also be seen as a conditional denoising auto-encoder (c-DAE), where the latent variable is a noised version of the ground-truth target, and the model is trained to reconstruct the ground-truth target from a noisy latent sequence. c-DAE is similar to the conditional variational autoencoder (c-VAE) (Zhang et al., 2016; Pagnoni et al., 2018), which learns a latent variable and generates diverse translations by sampling from it. However, there are two key differences between c-VAE and our DA model. First, c-VAE learns both the prior and posterior distributions of the latent variable, while the DA model directly uses predefined rules to generate the latent variable. Second, c-VAE models the prior distribution of the target, while the DA model estimates the posterior distribution.
**Sequence-level knowledge distillation.** Our DA-MT process is also remotely similar in form to sequence-level knowledge distillation (SKD) (Ba and Caruana, 2014; Hinton et al., 2016; Kim and Rush, 2016; Gordon and Duh, 2019; Lin et al., 2020), which learns the data distribution using a large teacher and distills the knowledge into a small student by training the student using sequences generated by the teacher. However, our method differs from SKD in three aspects. First, SKD aims to compress knowledge from a large teacher to a small student, while we use the same or smaller size model as the DA model, where the knowledge source is the training data rather than the big teacher. Second, the teacher in SKD estimates the prior distribution of the target given source, while our DA model estimates the posterior distribution of the target given source and an observed target. Third, SKD generates one sequence for each source, while we generate multiple diverse translations with controlled latent variables.
## 3 Target-Side Augmentation
The overall framework is shown in Figure 1. Formally, denote a set of training data as \(D=\{(x_{i},y_{i})\}_{i=1}^{N}\), where \((x_{i},y_{i})\) is the \(i\)-th source-target pair and \(N\) is the number of pairs. We train a data augmentation (DA) model (Section 3.1) to generate samples with new target translations (Section 3.2), which are used to train an MT model (Section 3.3).
### The Data Augmentation Model
We learn the posterior distribution \(P_{da}(y|x_{i},y_{i})\) from the parallel corpus by introducing a latent variable
\[P_{da}(y|x_{i},y_{i})=\sum_{z\in\mathcal{Z}_{i}}P_{\varphi}(y|x_{i},z)P_{\alpha }(z|y_{i}), \tag{1}\]
where \(z\) is the latent variable to control the translation output and \(\mathcal{Z}_{i}\) denotes the possible space of \(z\), \(\varphi\) denotes the parameters of the DA model, and \(\alpha\) denotes the hyper-parameters for determining the distribution of \(z\) given \(y_{i}\).
The space \(\mathcal{Z}_{i}\) of possible \(z\) is exponentially large compared to the number of tokens of the target, making it intractable to sum over \(\mathcal{Z}_{i}\) in Eq. 1. We thus consider a Monte Carlo approximation, sample a group of instances from \(p_{\alpha}(z|y_{i})\), and calculate the sample mean
\[P_{da}(y|x_{i},y_{i})\approx\frac{1}{|\hat{\mathcal{Z}}_{i}|}\sum_{z\in\hat{ \mathcal{Z}}_{i}}P_{\varphi}(y|x_{i},z), \tag{2}\]
where \(\hat{\mathcal{Z}}_{i}\) denotes the sampled instances.
There are many possible choices for the latent variable, such as a continuous vector or a categorical discrete variable, which also could be either learned by the model or predefined by rules. Here, we simply represent the latent variable as a sequence of tokens and use predefined rules to generate the sequence, so that the latent variable can be easily incorporated into the input of a seq2seq model without the need for additional parameters.
Specifically, we set the value of the latent variable \(z\) to be a group of sampled n-grams from the observed translation \(y_{i}\) and concatenate \(x_{i}\) and \(z\) into a sequence of tokens. We assume that the generated translations \(y\) can be consistent with the
observed translation \(y_{i}\) on these n-grams. To this end, we define \(\alpha\) as the ratio of tokens in \(y_{i}\) that are observable through \(z\), which we call the _observed ratio_. For a target with \(|y_{i}|\) tokens, we uniformly sample n-grams from \(y_{i}\) to cover \(\alpha\times|y_{i}|\) tokens, where each n-gram has a random length among \(\{1,2,3\}\). For example, given that \(\alpha=0.1\) and a target \(y_{i}\) with \(20\) tokens, we can sample one 2-gram or two uni-grams from the target to reach 2 (\(0.1\times 20\)) tokens.
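A minimal sketch of this sampling rule in Python is given below. This is our own illustrative code, not the authors' released implementation; the non-overlapping constraint, the retry limit, the list-of-tokens representation, and the separator token are assumptions made for the sketch.

```python
import random

def sample_latent(target_tokens, alpha, max_n=3, max_tries=100):
    """Sample n-grams (length 1..max_n) from the observed target y_i (a list of
    tokens) until roughly alpha * |y_i| tokens are covered; they form the latent z."""
    budget = max(1, round(alpha * len(target_tokens)))
    covered = set()   # token positions already used by a sampled n-gram
    spans = []        # (start, end) index pairs of the sampled n-grams
    tries = 0
    while len(covered) < budget and tries < max_tries:
        tries += 1
        n = random.randint(1, max_n)
        start = random.randrange(max(1, len(target_tokens) - n + 1))
        span = set(range(start, start + n))
        if span & covered:          # keep the sampled n-grams disjoint
            continue
        covered |= span
        spans.append((start, start + n))
    spans.sort()
    return [target_tokens[s:e] for s, e in spans]

def extend_input(source_tokens, ngrams, sep="<sep>"):
    """Concatenate the source x_i with the sampled n-grams to build the DA model's
    extended input (x_i, z); the separator token is an assumption of this sketch."""
    z_tokens = [tok for gram in ngrams for tok in gram + [sep]]
    return source_tokens + [sep] + z_tokens
```

For the example above, with \(\alpha=0.1\) and a 20-token target, `sample_latent` would return n-grams covering roughly 2 tokens, matching the described budget.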
**Training.** Given a sample \((x_{i},y_{i})\), the training loss is rewritten as
\[\begin{split}\mathcal{L}_{da}&=-\sum_{i=1}^{N}\log P _{da}(y=y_{i}|x_{i},y_{i})\\ &\approx-\sum_{i=1}^{N}\log\frac{1}{|\hat{\mathcal{Z}}_{i}|}\sum_ {z\in\hat{\mathcal{Z}}_{i}}P_{\varphi}(y=y_{i}|x_{i},z)\\ &\leq-\sum_{i=1}^{N}\frac{1}{|\hat{\mathcal{Z}}_{i}|}\sum_{z\in \hat{\mathcal{Z}}_{i}}\log P_{\varphi}(y=y_{i}|x_{i},z),\end{split} \tag{3}\]
where the upper bound of the loss is provided by Jensen inequality. The upper bound sums log probabilities, which can be seen as sums of the standard negative log-likelihood (NLL) loss of each \((x_{i},z,y_{i})\). As a result, when we optimize this upper bound as an alternative to optimizing \(\mathcal{L}_{da}\), the DA model is trained using standard NLL loss but with \(|\hat{\mathcal{Z}}_{i}|\) times more training instances.
**Discussion.** As shown in Figure 1, given a sample \((x_{i},y_{i})\), we adopt a new estimation method using the posterior distribution \(P_{da}(y|x_{i},y_{i})\) for our DA model. The basic intuition is that by conditioning on both the source \(x_{i}\) and the observed translation \(y_{i}\), the DA model can estimate the data distribution \(P_{data}(y|x_{i})\) more accurately than an MT model. Logically, an MT model learns a prior distribution \(P_{mt}(y|x_{i})\), which estimates the data distribution \(P_{data}(y|x_{i})\) for modeling translation probabilities. This prior distribution works well when the corpus is large. However, when the corpus is sparse in comparison to the data space, the learned distribution overfits the sparsely distributed samples, resulting in poor generalization to unseen targets.
### The Data Augmentation Process
The detailed data augmentation process is shown in Figure 2 and the corresponding algorithm is shown in Algorithm 1. Below we use one training example to illustrate.
**DA model training.** We represent the latent variable \(z\) as a sequence of tokens and concatenate \(z\) to the source, so a general seq2seq model can be used to model the posterior distribution. Compared to general MT models, the only difference is the structure of the input.
Specifically, as step B in the figure shows, for a given sample \((x_{i},y_{i})\) from the parallel data, we sample a number of n-grams from \(y_{i}\) and extend the input to \((x_{i},z)\), where the number is determined according to the length of \(y_{i}\). Take the target sentence "_most free societies accept such limits as reasonable, but the law has recently become more restrictive._" as an example. We sample "_societies_" and "_has recently_" from the target and concatenate them to the end of the source sentence to form the first input sequence. We then sample "_the law_" and "_as reasonable_" to form the second input sequence. These new input sequences are paired with the original target sequence to form new parallel data. By generating different input sequences, we augment the data multiple times.
Figure 2: The detailed data augmentation process, where the parallel data is augmented multiple times.
**Target-side data augmentation.** Using the extended inputs ("C. Extended Input") taken from the extended data built in step B, we generate new translations by running beam search with the trained DA model, obtaining a new translation for each extended input sequence. Here, we reuse the sampled \(z\) from step B. However, we can also sample new \(z\) for inference, which does not show an obvious difference in MT performance. By pairing the new translations with the original source sequence, we obtain "E. Augmented Data". The details are described in Algorithm 1, which takes the original parallel data as input and outputs the augmented data.
```
Input: \(D=\{(x_{i},y_{i})\}_{i=1}^{N}\)  \(\triangleright\) A. Parallel data
Output: \(D^{\prime}=\{(x_{i},y_{i})\}_{i=1}^{N\times(M+1)}\)  \(\triangleright\) Aug \(M\) times
1: function TargetAug(\(D\))
2:   \(D^{\prime}\leftarrow\{\}\)
3:   for \(i\gets 1\) to \(N\) do
4:     \((x_{i},y_{i})\gets D[i]\)  \(\triangleright\) For each sample
5:     \(D^{\prime}\gets D^{\prime}\cup\{(x_{i},y_{i})\}\)  \(\triangleright\) Add the gold pair
6:     for \(j\gets 1\) to \(M\) do
7:       \(\alpha\sim Beta(a,b)\)  \(\triangleright\) Sample an observed ratio
8:       \(z_{j}\sim P_{\alpha}(z|y_{i})\)  \(\triangleright\) Sample a latent value
9:       \(\hat{y}_{j}\sim P_{\varphi}(y|x_{i},z_{j})\)  \(\triangleright\) Sample a translation
10:      \(D^{\prime}\gets D^{\prime}\cup\{(x_{i},\hat{y}_{j})\}\)  \(\triangleright\) Add the new pair
11: return \(D^{\prime}\)  \(\triangleright\) E. Augmented data
```
**Algorithm 1** Target-side data augmentation.
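For readers who prefer an executable form, the following is a compact Python rendering of Algorithm 1. It is a sketch, not the actual implementation: `da_translate` stands in for beam search with the trained DA model, and `sample_latent_z` for sampling \(z\sim P_{\alpha}(z|y_{i})\); both callables are assumptions supplied by the user.

```
import random

def target_side_augment(parallel_data, da_translate, sample_latent_z,
                        M=9, beta_a=2, beta_b=3):
    """Algorithm 1: augment each (x_i, y_i) pair with M generated translations."""
    augmented = []
    for x_i, y_i in parallel_data:                      # A. parallel data
        augmented.append((x_i, y_i))                    # add the gold pair
        for _ in range(M):
            alpha = random.betavariate(beta_a, beta_b)  # sample an observed ratio
            z = sample_latent_z(y_i, alpha)             # sample a latent value
            y_hat = da_translate(x_i, z)                # beam search with the DA model
            augmented.append((x_i, y_hat))              # add the new pair
    return augmented                                    # E. augmented data

# toy run with a stub DA model that simply echoes the sampled n-grams
data = target_side_augment([("src tokens".split(), "tgt tokens here".split())],
                           da_translate=lambda x, z: [t for g in z for t in g],
                           sample_latent_z=lambda y, a: [y[:1]], M=2)
print(data)
```

Since each source keeps its gold pair and gains \(M\) generated pairs, the augmented set is \(M+1\) times the size of the original parallel data, matching the output size stated in Algorithm 1.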
### The MT Model
We use Transformer Vaswani et al. (2017) and G-Transformer Bao et al. (2021) as the baseline MT models. The Transformer baseline models sentence-level translation and translates a document sentence-by-sentence, while G-Transformer models the whole document translation and directly translates a source document into the corresponding target document. G-Transformer, a recent state-of-the-art document MT model, improves the naive self-attention in Transformer with group-attention (Appendix A) for long-document modeling.
**Baseline Training.** The baseline methods are trained on the original training dataset \(D\) by the standard NLL loss
\[\mathcal{L}_{mt}=-\sum_{i=1}^{N}\log P_{mt}(y=y_{i}|x_{i}). \tag{4}\]
**Augmentation Training.** For our target-side augmentation method, we force the MT model to match the posterior distribution estimated by the DA model
\[\mathcal{L}_{mt}=-\sum_{i=1}^{N}\sum_{y\in\mathcal{Y}_{i}}P_{da}(y|x_{i},y_{i} )\log P_{mt}(y|x_{i}), \tag{5}\]
where \(\mathcal{Y}_{i}\) is the possible translations of \(x_{i}\).
We approximate the expectation over \(\mathcal{Y}_{i}\) using a Monte Carlo method. Specifically, for each sample \((x_{i},y_{i})\), we first sample \(z_{j}\) from \(P_{\alpha}(z|y_{i})\) and then run beam search with the DA model by taking \(x_{i}\) and \(z_{j}\) as its input, obtaining a feasible translation. Repeating the process \(M\) times, we obtain a set of possible translations
\[\hat{\mathcal{Y}_{i}}=\{\arg\max_{y}P_{\varphi}(y|x_{i},z_{j})|z_{j}\sim P_{ \alpha}(z|y_{i})\}_{j=1}^{M}, \tag{6}\]
as the step D in Figure 2 and Algorithm 1 in Section 3.2 illustrate.
Subsequently, the loss function for the MT model is rewritten as follows, which approximates the expectation using the average NLL loss of the sampled translations
\[\mathcal{L}_{mt}\approx-\sum_{i=1}^{N}\frac{1}{|\hat{\mathcal{Y}_{i}}|}\sum_ {y\in\hat{\mathcal{Y}_{i}}}\log P_{\theta}(y|x_{i}), \tag{7}\]
where \(\theta\) denotes the parameters of the MT model. The number \(|\hat{\mathcal{Y}_{i}}|\) could be different for each sample, but for simplicity, we choose a fixed number \(M\) in our experiments.
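In code, the approximation in Eqs. 5-7 amounts to averaging the standard NLL of the MT model over the \(M\) translations sampled for each source. The sketch below is illustrative, not the actual training loop; it assumes a user-supplied `mt_nll(x, y)` returning \(-\log P_{\theta}(y|x)\).

```
def mt_loss_on_sample(x_i, sampled_translations, mt_nll):
    """Eq. 7 for one source: average NLL over the sampled translations."""
    return sum(mt_nll(x_i, y) for y in sampled_translations) / len(sampled_translations)

def mt_loss(batch, mt_nll):
    """Eq. 7 summed over the corpus; batch holds (x_i, [y_1, ..., y_M]) pairs built as in Eq. 6."""
    return sum(mt_loss_on_sample(x, ys, mt_nll) for x, ys in batch)
```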
## 4 Experiments
**Datasets.** We experiment on three benchmark datasets - TED, News, and Europarl Maruf et al. (2019), representing different domains and data scales for English-German (En-De) translation. The detailed statistics are displayed in Table 1, and the detailed descriptions are in Appendix B.1.
**Metrics.** We follow Liu et al. (2020) in using the sentence-level BLEU score (s-BLEU) and the document-level BLEU score (d-BLEU) as the major metrics for _performance_. We further define two metrics, Deviation and Diversity, to measure the quality of generated translations from
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Dataset** & **Sentences** & **Documents** \\ & **train/dev/test** & **train/dev/test** \\ \hline TED & 0.21M/9K/2.3K & 1.7K/92/22 \\ News & 0.24M/2K/3K & 6K/80/154 \\ Europarl & 1.67M/3.6K/5.1K & 118K/239/359 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Datasets statistics.
the DA model for _analysis_. The detailed description and definition are in Appendix B.2.
**Baselines.** We apply target-side augmentation to two baselines, including sentence-level Transformer Vaswani et al. (2017) and document-level G-transformer Bao et al. (2021). We further combine back-translation and target-side augmentation, and apply it to the two baselines.
**Training Settings.** For both Transformer and G-Transformer, we generate \(M\) new translations (9 for TED and News, and 3 for Europarl) for each sentence, augmenting the data to \(M+1\) times its original size. For the back-translation baselines, where the training data have already been doubled, we further augment the data 4 times for TED and News, and once for Europarl, so that the total augmentation factors are still 10 for TED and News, and 4 for Europarl.
We obtain the translations by sampling latent \(z\) with an observed ratio from a Beta distribution \(Beta(2,3)\) and running a beam search with a beam size of 5. We run each main experiment three times and report the median. More details are described in Appendix B.3.
### Main Results
As shown in Table 2, target-side augmentation significantly improves all the _baselines_. Particularly, it improves G-Transformer (fnt.) by 1.75 s-BLEU on average over the three benchmarks, where the improvement on News reaches 2.94 s-BLEU. With the augmented data generated by the DA model, the gap between G-Transformer (rnd.) and G-Transformer (fnt.) narrows from 1.26 s-BLEU on average to 0.18, suggesting that fine-tuning on sentence MT model might not be necessary when augmented data is used. For the Transformer baseline, target-side augmentation enhances the performance by 1.33 s-BLEU on average. These results demonstrate that target-side augmentation can significantly improve the baseline models, especially on small datasets.
Comparing with _previous work_, G-Transformer (fnt.)+Target-side augmentation outperforms the best systems SMDT, which references retrieved similar translations, with a margin of 1.40 s-BLEU on average. It outperforms previous competitive RecurrentMem, which gives the best score on TED, with a margin of 1.58 s-BLEU on average. Compared with MultiResolution, which is also a data augmentation approach that increases the training data by splitting the documents into different resolutions (e.g., 1, 2, 4, 8 sentences per training instance), target-side augmentation obtains higher performance with a margin of 1.72 s-BLEU on average. With target-side augmentation, G-Transformer (fnt.) achieves the best-reported s-BLEU on all
\begin{table}
\begin{tabular}{l|c c c c c c|c} \hline \multirow{2}{*}{**Method**} & \multicolumn{3}{c}{**TED**} & \multicolumn{3}{c}{**News**} & \multicolumn{2}{c|}{**Europarl**} & **Average** \\ & s-BLEU & d-BLEU & s-BLEU & d-BLEU & s-BLEU & d-BLEU & s-BLEU \\ \hline HAN Miculicich et al. (2018) & 24.58 & - & 25.03 & - & 28.60 & - & 26.07 \\ SAN Maruf et al. (2019) & 24.42 & - & 24.84 & - & 29.75 & - & 26.34 \\ Hybrid Context Zheng et al. (2020) & 25.10 & - & 24.91 & - & 30.40 & - & 26.80 \\ Flat-Transformer Ma et al. (2020) & 24.87 & - & 23.55 & - & 30.09 & - & 26.17 \\ G-Transformer (rnd.) Bao et al. (2021) & 23.53 & 25.84 & 23.55 & 25.23 & 32.18 & 33.87 & 26.42 \\ G-Transformer (fnt.) Bao et al. (2021) & 25.12 & 27.17 & 25.52 & 27.11 & 32.39 & 34.08 & 27.68 \\ MultiResolution Sun et al. (2022) & 25.24 & 29.27 & 25.00 & 26.71 & 32.11 & 34.48 & 27.45 \\ RecurrentMem Feng et al. (2022) & 25.62 & **29.47** & 25.73 & 27.78 & 31.41 & 33.50 & 27.59 \\ SMDT Zhang et al. (2022) & 25.12 & - & 25.76 & - & 32.42 & - & 27.77 \\ \hline Transformer (sent baseline) \(\diamond\) & 24.91 & - & 24.82 & - & 31.22 & - & 26.98 \\ + Target-side data augmentation (ours) & 26.14* & - & 27.03* & - & 31.75* & - & 28.31 \\ G-Transformer (fnt.) (doc baseline) \(\diamond\) & 25.20 & 27.94 & 25.12 & 27.02 & 31.93 & 33.88 & 27.42 \\ + Target-side augmentation (ours) & **26.59* & 29.20* & 28.06* & 29.83* & **32.85*** & **34.76*** & **29.17** \\ \hline Transformer + Back-translation (sent) \(\diamond\) & 25.03 & - & 26.07 & - & 31.12 & - & 27.41 \\ Target-side augmentation (ours) & 26.13 & - & 28.01 & - & 31.27 & - & 28.47 \\ G-Transformer + Back-translation (doc) \(\diamond\) & 25.45 & 28.06 & 26.25 & 28.21 & 32.60 & 33.94 & 27.90 \\ Target-side augmentation (ours) & 26.21 & 28.58 & **28.69** & **30.41** & 32.52 & 34.50 & 29.14 \\ \hline \multicolumn{7}{c}{**Pre-training Setting for Comparison**} \\ \hline Flat-Transformer+BERT Ma et al. (2020) & 26.61 & - & 24.52 & - & 31.99 & - & 27.71 \\ G-Transformer+BERT Bao et al. (2021) & 26.81 & - & 26.14 & - & 32.46 & - & 28.47 \\ G-Transformer+mBART Bao et al. (2021) & 28.06 & 30.03 & 30.34 & 31.71 & 32.74 & 34.31 & 30.38 \\ \hline \end{tabular}
\end{table}
Table 2: Main results evaluated on English-German document-level translation, where “*” indicates a significant improvement upon the baseline with \(p<0.01\). (rnd.) – parameters are randomly initialized. (fnt.) – parameters are initialized using a trained sentence model. \(\diamond\) – we adjust the hyper-parameters for augmented datasets. \(\diamond\) – we augment the training data by back-translating each target to a new source instead of introducing additional monolingual targets.
three datasets.
Compared to the _pre-training setting_, target-side augmentation with G-Transformer (fnt.) outperforms Flat-Transformer+BERT and G-Transformer+BERT, which are fine-tuned on pretrained BERT, with margins of 1.46 and 0.70 s-BLEU, respectively, on average over the three benchmarks, where the margins on News reach 3.54 and 1.92, respectively. On the larger Europarl dataset, the score even exceeds that of the strong, large-scale pre-trained G-Transformer+mBART, suggesting the effectiveness of target-side augmentation for both small and large datasets.
_Back-translation_ does not enhance the performance on TED and Europarl by a notable margin, but enhances the performance on News significantly, compared to the Transformer and G-Transformer baselines. On top of the enhanced baselines, target-side augmentation further improves the performance on News to a new level, reaching the highest s/d-BLEU scores of 28.69 and 30.41, respectively. The results demonstrate that target-side augmentation complements the back-translation technique, and a combination of the two may be the best choice in practice.
### Posterior vs Prior Distribution
We first compare the MT performance of using a posterior distribution \(P(y|x_{i},y_{i})\) in the DA model (Eq. 5 in Section 3.3) against using the prior distribution \(P(y|x_{i})\). As shown in Table 3, when using a prior-based augmentation, the performance improves by 0.64 s-BLEU on average compared to using the original data. After replacing the DA model with the posterior distribution, the performance improves by 1.75 s-BLEU on average, which is larger than the improvements obtained by the prior distribution. The results suggest that using a DA model (even with a simple prior distribution) to augment the target sequence is effective, and the posterior distribution further gives a significant boost.
**Generated Translations.** We evaluate the distribution of generated translations, as shown in Table 4. Using prior distribution, we obtain translations with higher Diversity than posterior distribution. However, higher Diversity does not necessarily lead to better performance if the generated translations are not consistent with the target distribution. As the Deviation column shows, the translations sampled from the posterior distribution have a much smaller Deviation than that from the prior distribution, which confirms that the DA model estimating posterior distribution can generate translations more similar to the gold target.
**Accuracy of Estimated Distribution.** As more direct evidence to support the DA model with a posterior distribution, we evaluate the perplexity (PPL) of the model on a multiple-reference dataset, where a better model is expected to give a lower PPL on the references (Appendix C.1). As shown in the column PPL in Table 4, we obtain an average PPL (per token) of 7.00 for the posterior and 8.68 for the prior distribution, with the former being 19.4% lower than the latter, confirming our hypothesis that the posterior distribution can estimate the data distribution \(P_{data}(y|x_{i})\) more accurately.
### Sampling of Latent z
**Scale.** The sampling scale \(|\mathcal{\hat{Y}}|\) in Eq. 7 is an important factor influencing model performance. Theoretically, the larger the scale, the more accurate the approximation. Figure 3 shows the performance at different scales of generated translations. The overall trends confirm the theoretical expectation that the performance improves as the scale increases. At the same time, the contribution of the gold translation drops as the scale increases, suggesting that with more generated translations, the gold translation provides
\begin{table}
\begin{tabular}{l|c c|c} \hline
**Method** & **Diversity \(\uparrow\)** & **Deviation \(\downarrow\)** & **PPL \(\downarrow\)** \\ \hline Prior distribution & **78.68** & 76.55 & 8.68 \\ Posterior distribution & 45.42 & **47.14** & **7.00** \\ \hline \end{tabular}
\end{table}
Table 4: Quality of generated translations and accuracy of the estimated distributions from the DA model, evaluated on _News_.
Figure 3: Impact of the sampling scale for \(z\), trained on G-Transformer (fnt.) and evaluated in _s-BLEU_ on _News_. (gen+gold) – trained on both generated and gold translations. (gen only) – trained on generated translations.
less additional information. In addition, the performance at scales \(\times 1\) and \(\times 9\) differs by 0.75 s-BLEU, suggesting that the MT model requires sufficient samples from the DA model to match its distribution. In practice, we need to balance the performance gain and the training cost to decide on a suitable sampling scale.
**Observed Ratio.** Using the observed ratio (\(\alpha\) in Eq. 1), we can control the amount of information provided by the latent variable \(z\). This ratio influences the quality of generated translations. As Figure 4(a) shows, a higher observed ratio produces translations with a lower Deviation from the gold reference, following a monotonically decreasing curve. In comparison, the Diversity of the generated translations shows a convex curve, with low values when the observed ratio is small or large and high values in the middle. The Diversity of the generated translations represents the degree of smoothness of the augmented dataset, which has a direct influence on the model performance.
As Figure 4(b) shows, the MT model obtains the best performance around a ratio of 0.4, where it has a balanced quality of Deviation and Diversity. When the ratio further increases, the performance goes down. Comparing the MT models trained with/without the gold translation, we see that the performance gap between the two settings closes when the observed ratio exceeds 0.6, where the generated translations have low Deviation from the gold translations.
The Diversity can be further enhanced by mixing the generated translations from different observed ratios. Therefore, instead of using a fixed ratio, we sample the ratio from a predefined Beta distribution. As Figure 4(c) shows, we compare the performance under different Beta distributions. The performance on TED peaks at \(Beta(1,1)\) but does not show a significant difference compared to the other two, while the performance on News peaks at \(Beta(2,3)\), which is a unimodal distribution with an extremum between the ratios 0.3 and 0.4 and has a similar shape to the Diversity curve in Figure 4(a). Compared to \(Beta(2,2)\), which is also a unimodal distribution but with an extremum at the ratio 0.5, the performance with \(Beta(2,3)\) is higher by 0.66 s-BLEU.
**Granularity of N-grams.** The granularity of n-grams determines how much order information between tokens is observable through the latent \(z\) (in comparison, the observed ratio determines how many tokens are observed). We evaluate different ranges of n-gram lengths, where the length of each sampled n-gram is drawn uniformly from the range. As Figure 5 shows, the performance peaks at \([1,2]\) for TED and \([1,3]\) for News. However, the differences are relatively small, showing that the performance is not sensitive to the token order of the original reference. A possible reason is that the DA model can reconstruct the order according to the semantic information provided by the source sentence.
### Different Augmentation Methods
**Source-side and Both-side Augmentation.** We compare target-side augmentation with the source-side and both-side augmentations, by applying the DA model to the source and both sides. As Table 5 shows, the source-side augmentation improves the baseline by 1.12 s-BLEU on average of TED and News but is still significantly lower than the target-side augmentation, which improves the baseline by 2.17 s-BLEU on average. Combining the
Figure 4: Impact of the observed ratio for \(z\), trained on G-Transformer (fnt.) and evaluated in _s-BLEU_. Beta(a,b) – The function curves are shown in Appendix B.3.
generated data from both the source-side and target-side augmentations, we obtain an improvement of 2.42 s-BLEU on average, whereas the source-side augmented data further enhance the target-side augmentation by 0.25 s-BLEU on average. These results suggest that the DA model is effective for source-side augmentation but more significantly for target-side augmentation.
**Paraphrasing.** Target-side augmentation augments the parallel data with new translations, which can be seen as paraphrases of the original gold translation. Such paraphrasing can also be achieved by external paraphrasers. We compare target-side augmentation with a pre-trained T5 paraphraser on a sentence-level MT task, using the settings described in Appendix C.3.
As shown in Table 6, the T5 paraphraser performs worse than the Transformer baseline on both the dev and test sets, while target-side augmentation outperforms the baseline by 1.57 and 1.55 on dev and test, respectively. The results demonstrate that a DA model is effective for sentence MT but a paraphraser may not be, possibly because the paraphraser lacks translation information from the source.
In particular, the generated paraphrases from the T5 paraphraser have a Diversity of 40.24, which is close to the Diversity of 37.30 from the DA model. However, when we compare the translations by calculating the perplexity (PPL) on the baseline Transformer, we get a PPL of 3.40 for the T5 paraphraser but 1.89 for the DA model. The results suggest that compared to an external paraphraser, the DA model generates translations more consistent with the distribution of the gold targets.
### Further Analysis
**Size of the DA Model.** Conditioning on an observed translation simplifies the DA model's task of predicting the target. As a result, the generated translations are less sensitive to the capacity of the DA model. Results with different sizes of DA models confirm this hypothesis and suggest that the MT performance improves even with much smaller DA models. The details are in Appendix C.2.
**Case Study.** We list several word, phrase, and sentence cases of German-English translations, and two documents of English-German translations, demonstrating the diversity of the generated translations by the DA model. The details are shown in Appendix C.4.
## 5 Conclusion
We investigated a target-side data augmentation method, which introduces a DA model to generate many possible translations and trains an MT model on these smoothed targets. Experiments show our target-side augmentation method reduces the effect of data sparsity issues, achieving strong improvement upon the baselines and new state-of-the-art results on News and Europarl. Analysis suggests that a balance between high Diversity and low Deviation is the key to the improvements. To our knowledge, we are the first to do target-side augmentation in the context of document-level MT.
### Limitations
Long documents, intuitively, have more possible translations than short documents, so a dynamic number of generated translations may be a better choice when augmenting the data, which balances the training cost and the performance gain. Another potential solution is to sample a few translations and force the MT model to match the dynamic distribution of the DA model using these translations as decoder input, similar to Khayrallah et al. (2020). Such dynamic sampling and matching could potentially be used to increase training efficiency. We do not investigate the solution in this paper and leave the exploration of this topic to future work.
Target-side augmentation can potentially be applied to other seq2seq tasks, where the data sparsity is a problem. Due to the limitation of space in a conference submission, we will leave investigations on other tasks for future work.
## Acknowledgements
We would like to thank the anonymous reviewers for their valuable feedback. This work is funded by the China Strategic Scientific and Technological Innovation Cooperation Project (grant No. SQ2022YFE020038) and the National Natural Science Foundation of China (grant NSFC No. 62161160339). Zhiyang Teng is partially supported by CAAI-Huawei MindSpore Open Fund (CAAIXSJLJJ-2021-046A).
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Method** & **Dev** & **Test** \\ \hline Transformer (base) & 34.85 & 33.87 \\ + T5 paraphraser \(\diamondsuit\) & 34.01 & 33.10 \\ + Target-side augmentation & **36.42** & **35.42** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Target-side augmentation vs paraphraser on sentence-level MT, evaluated on IWSLT14 German-English (De-En). \(\diamondsuit\) – nucleus sampling with \(p=0.95\). |
2304.13350 | Neuro-symbolic Zero-Shot Code Cloning with Cross-Language Intermediate
Representation | In this paper, we define a neuro-symbolic approach to address the task of
finding semantically similar clones for the codes of the legacy programming
language COBOL, without training data. We define a meta-model that is
instantiated to have an Intermediate Representation (IR) in the form of
Abstract Syntax Trees (ASTs) common across codes in C and COBOL. We linearize
the IRs using Structure Based Traversal (SBT) to create sequential inputs. We
further fine-tune UnixCoder, the best-performing model for zero-shot
cross-programming language code search, for the Code Cloning task with the SBT
IRs of C code-pairs, available in the CodeNet dataset. This allows us to learn
latent representations for the IRs of the C codes, which are transferable to
the IRs of the COBOL codes. With this fine-tuned UnixCoder, we get a
performance improvement of 12.85 MAP@2 over the pre-trained UniXCoder model, in
a zero-shot setting, on the COBOL test split synthesized from the CodeNet
dataset. This demonstrates the efficacy of our meta-model based approach to
facilitate cross-programming language transfer. | Krishnam Hasija, Shrishti Pradhan, Manasi Patwardhan, Raveendra Kumar Medicherla, Lovekesh Vig, Ravindra Naik | 2023-04-26T07:41:26Z | http://arxiv.org/abs/2304.13350v1 | # Neuro-symbolic Zero-Shot Code Cloning with Cross-Language Intermediate Representation
###### Abstract
In this paper, we define a neuro-symbolic approach to address the task of finding semantically similar clones for the codes of the legacy programming language COBOL, without training data. We define a meta-model that is instantiated to have an Intermediate Representation (IR) in the form of Abstract Syntax Trees (ASTs) common across codes in C and COBOL. We linearize the IRs using Structure Based Traversal (SBT) to create sequential inputs. We further fine-tune UnixCoder, the best-performing model for zero-shot cross-programming language code search, for the Code Cloning task with the SBT IRs of C code-pairs, available in the CodeNet dataset. This allows us to learn latent representations for the IRs of the C codes, which are transferable to the IRs of the COBOL codes. With this fine-tuned UnixCoder, we get a performance improvement of 12.85 MAP@2 over the pre-trained UniXCoder model, in a zero-shot setting, on the COBOL test split synthesized from the CodeNet dataset. This demonstrates the efficacy of our meta-model based approach to facilitate cross-programming language transfer.
## 1 Introduction
Recent advancements in pre-training Language Models (LMs) for learning code representations have led to improvements in the downstream code understanding and generation tasks Ahmad et al. (2021); Feng et al. (2020); Guo et al. (2021); Lu et al. (2021); Wang et al. (2021); Guo et al. (2022); Wang et al. (2022); Wang et al. (2022); Qi et al. (2021); Jiang et al. (2021); Wang et al. (2022). These LMs are trained with large volumes of monolingual code data (in the order of hundreds of GBs) with pre-training objectives such as Masked Language Modeling (MLM), Replaced Token Detection (RTD), Denoising objectives (DNS), etc.
The focus of this paper is on the task of Code Clone detection, specifically for the legacy low-resource Programming Language (PL) COBOL. There is a very high volume of COBOL application code still in use by organizations and institutions worldwide. Research suggests that there are more than 800 billion lines of COBOL code currently in use, and it is therefore crucial to maintain and enhance this code until it is modernized through digital transformation. Code clone detection is used to measure the _similarity_ of code fragments and has direct applications in code reuse, code compaction (by replacing code snippets with more compact code clones), copyright infringement detection, etc. It aims to detect whether different code fragments have the same behavior (i.e., give similar outputs on similar inputs) irrespective of their surface form, structure, and syntax. The task is typically achieved by learning the _semantic representation_ of the code.
In the literature, there is work done on Code Clone detection Guo et al. (2021); Ahmad et al. (2021); Lu et al. (2021); Wang et al. (2021); Guo et al. (2022); Wang et al. (2022) and code search tasks Feng et al. (2020); Guo et al.
[2021], Lu et al. [2021], Wang et al. [2021a], Guo et al. [2022a], Wang et al. [2022a] for PLs such as Java, Javascript, Python, Ruby, Go, etc. by pre-training and task specific fine-tuning of language models. The pre-training data used here is typically the CodeSearchNet dataset Husain et al. [2019] consisting of 352 GB of Java and 224 GB of python codes. For code-clone detection task specific fine-tuning typically POJ-104 Mou et al. [2016] and BigCloneBench Svajlenko et al. [2014] datasets are used. POJ-104 dataset consists of 104 problems including 500 C/C++ codes for each problem. BigCloneBench dataset includes \(\sim\)900K training examples for Java programming language from 10 different functionalities. CSN dataset Husain et al. [2019], which is typically used for fine-tuning for code search task has \(\sim\)906K training samples for Java, Javascript, PHP, Python, Go and Ruby PLs. Thus, to pre-train and fine-tune these language models, a huge amount of PL-specific data is used. On the other hand, for legacy languages like COBOL the above non-COBOL datasets Mou et al. [2016], Svajlenko et al. [2014], Husain et al. [2019] are not useful and to the best of our knowledge, there are no publicly available datasets other than CodeNet with COBOL codes as part of the dataset. Even in the CodeNet dataset, there are only 727 COBOL codes for 325 problem descriptions. Codes belonging to the same problem descriptions can form clones of each other as they share semantics. However, this tiny COBOL data is insufficient for pre-training as well as fine-tuning LMs for COBOL code representation learning and for downstream tasks such as code cloning.
In this paper, we use an Intermediate Representation (IR) synthesized using a pre-defined meta-model, which is common across C and COBOL PLs. CodeNet dataset contains \(\sim\)300K accepted C code submissions for \(\sim\)3K problem descriptions. We perform the following transforms (i) To avoid any inductive biases created because of meaningful function and variable names and thus to help the model focus on the underlying logic of the code, we perform a semantics preserving transformation, by replacing function and variable names from the C codes with more generic non-meaningful words from the vocabulary such as \(FUNC\) or \(VAR\) (ii) Transform C codes to our pre-defined IR, which is a form of Abstract Syntax Tree (AST) and an instance of a predefined _meta-model_, by a language-specific parser, (iii) Map specific code tokens in C which appear at the leaf nodes of the IR to the equivalent COBOL tokens with the help of pre-defined C-COBOL syntactical token mappings (Table 3), and (iv) Use Structure Based Traversal (SBT) Hu et al. [2018], Ahmad et al. [2020] of the IR to generate a sequence of IR tokens(SBT-IR). On similar lines, we apply transformations (i), (ii) and (iv) to the CodeNet COBOL codes. We further fine-tune UniXCoder Guo et al. [2022a], the best-performing model in the literature for zero-shot code-to-code search, with the transformed C codes (C SBT-IRs) from the CodeNet dataset for the code-cloning task. We test this model on the transformed COBOL codes (COBOL SBT-IRs) for clone detection, in a zero-shot setting.
With a best zero-shot test map@2 score of 48.19 and map@1 score of 82.76 for the Code Cloning task with CodeNet COBOL codes, our approach showcased improvement of (a) 32.79 map@2 (212.92%) and 55.18 map@1 (246.23%) over a vanilla-transformer Vaswani et al. [2017] auto-encoded with C-Code SBT-IRs for structure learning, and subsequently trained via contrastive loss for learning code semantics, (b) 12.85 map@2 (36.36%) and 24.14 map@1 (45.16%) over a pre-trained Unix-coder and (c) 11.32 map@2 (30.70%) and 15.52 map@1 (25.00%) over a pre-trained Unix-coder fine-tuned on the Code Cloning task with the original C-codes. Following are some of the important observations of our study:
1. (a) demonstrates the efficacy of usage of the model pre-trained on a generic set of PLs, over a model trained from scratch, for better performance on the downstream code-cloning task.
2. Though UnixCoder is pre-trained on the code data, it has not seen all the tokens of our IR as well as COBOL tokens. Thus, as depicted in (b), fine-tuning of UniXCoder with C-SBT-IR leads to better performance for the downstream task.
3. (c) demonstrates that fine-tuning with C-SBT-IRs helps more for the zero-shot transfer for low-resource COBOL language as compared to fine-tuning with the original C-codes, proving the efficacy of our approach, which uses a common IR across programming languages.
## 2 Related work
**Code Representation Learning:** Different representations of source code provide distinct perspectives of code understanding. For instance, the Abstract Syntax Tree (AST) provides structural information, Control Flow Graph (CFG) and Data Flow Graph (DFG) provide information about the flow of control and data in the code, and the tokens of the source code itself provide useful syntactic information. Recent studies of neural network-based code learning have tried to learn the semantic latent representation of the code, which are useful for downstream tasks such as code cloning and code-to-code search. CODESCRIBe Guo et al. [2022b] models the hierarchical syntax structure of code by introducing a triplet position for nodes in the AST. GypSum Wang et al. [2022b] generates intermediate representation by introducing control-flow related edges into the AST and then uses graph attention neural networks to generate the encoding. DeepCom Hu et al. [2018] converts the input ASTs into specially formatted sequences using a structure-based
traversal method. TPTrans Peng et al. (2021) encodes the path between tokens of source code and also the path from leaf to root node for each token in the syntax tree and explores the interaction between them. GraphCodeBERT Guo et al. (2021) leverages data flow graph for pre-training since it is less complex and hierarchical compared to ASTs.UniXoder Guo et al. (2022) proposes a one-to-one mapping method to transform AST into a sequence structure while retaining all the structural information of the tree. We notice that all the approaches use huge data for pre-training (in order of \(\sim\)80 to \(\sim\)350 GB) as well as fine-tuning for the downstream code-search tasks. However, with the availability of only tiny data for a large legacy language like COBOL, it is impossible to train language models and avail benefits of the above approaches.
**Code Clone Detection:** Code Clone detection aims to detect whether two pieces of code have the same semantics or not. Many recent works that use pre-trained models for PLs support the task. CodeT5 Wang et al. (2021) learns code semantics through the use of identifier-aware pre-training tasks such as masked identifier prediction (MIP) and identifier-aware denoising, that help the model to distinguish identifiers from other code tokens. PLBART Ahmad et al. (2021) uses the same architecture as BART along with a denoising autoencoding pre-training task. Recently, there have been attempts to improve code semantics through the use of contrastive learning. To generate positive pairs, Contracode Jain et al. (2020) uses transformations such as identifier modifications and code compression (e.g. precomputation of constant expressions) and Corder Bui et al. (2021) uses semantic-preserving transformations such as dead code insertion, statement permutation, identifier renaming etc. To build hard negative pairs, DISCO Ding et al. (2022), a self-supervised pre-training model injects real-world security bugs in programs through the misuse of pointers, variables and data-types, and positive pairs are generated by statement permutations and identifier renaming. Syncobert Wang et al. (2021) and CodeMVP Wang et al. (2022) build positive pairs using the intermediate representations (IR) of programs such as AST,CFG,DFG which are generated through the compilation process (lexical,syntax,semantic analysis) of the programs. UniXcoder Guo et al. (2022) achieves state-of-the-art performance on code cloning task
\begin{table}
\begin{tabular}{|l|l|l|l|}
\hline
\multicolumn{4}{|l|}{**Problem Description**: You will turn on the air conditioner if, and only if, the temperature of the room is 30 degrees Celsius or above. The current temperature of the room is X degrees Celsius. Will you turn on the air conditioner? Print ”Yes” if you will turn on the air conditioner; print ”No” otherwise.} \\ \hline
**C Code** & **Synthesized SBT-IR for C Code** & **COBOL Code** & **Synthesized SBT-IR for COBOL Code** \\ \hline
\multicolumn{4}{|l|}{[The example C and COBOL codes and their synthesized SBT-IR sequences are not recoverable from the extracted text.]} \\ \hline
\end{tabular}
\end{table}
Table 1: An example CodeNet problem description with a C code, a COBOL code, and their synthesized SBT-IRs.
surpassing ROBERTa Liu et al. (2019), CodeBERT Feng et al. (2020), GraphCodeBERT Guo et al. (2021), SynCoBERT Wang et al. (2021), PLBART Ahmad et al. (2021), CodeT5 Wang et al. (2021), DISCO Ding et al. (2022), Corder Bui et al. (2021) and CodeRetriever Li et al. (2022). More importantly, UniXcoder achieves state-of-the-art results for the zero-shot code-to-code search. In this work, as we are interested in a zero-shot Code-Cloning task for a low-resource COBOL programming language, we choose the best-performing UniXcoder as our base model.
**Cross-language Code Learning:** Cross-programming language code learning is a relatively unexplored field and there are some recent works that are aimed at learning language-independent representations for source code. Bui et al. (2019) propose a bilateral neural network (Bi-NN) to learn representations of pieces of code in different languages, which can be then used to identify algorithm classes of the code. Wang et al. (2022c) propose the Unified Abstract Syntax Tree (UAST) neural network for the cross-language program classification task, by unifying AST traversal and the vocabulary across PLs. MISIM (Machine inferred code similarity) Ye et al. (2020) use a context-aware semantic structure (CASS) and a neural-based code semantics similarity scoring algorithm to learn language-independent representations. Code Transformer Zugner et al. (2021) learns language-agnostic features from both, the structure and the context of programs by training with multiple PLs, which leads to larger improvements in performance for languages with lower resources. However, these approaches which learn unified representations across programming languages require some amount of data for each language and thus prove not to be useful for legacy languages like COBOL in a zero-shot setting.
## 3 Dataset
We form our dataset using the C and COBOL codes available in CodeNet Puri et al. (2021). The dataset consists of 4053 Problem Descriptions (PD) (each as a .html file) that have a combined total of 313360 C codes. The codes which belong to the same problem description form clones of each other as they share the same semantics. CodeNet is pre-processed by removing the PDs with: (i) no or empty .html files, (ii) no accepted (correctly executing to provide the right output) C codes (771 PDs), and (iii) only one C code (to detect clones there must be at least two C codes per PD to form a positive pair). Thus, the final dataset consists of 3221 PDs with a total of 303193 C codes.
Generating the IRs and SBTs for the codes (explained in Section 4) using our meta-model is a time-consuming task: it takes on average \(\sim\)3 seconds per code. We generate IRs and SBTs of C codes belonging to 1693 randomly selected PDs (\(\sim\)50% of the 3221 PDs, taking us 10 days) and create the train-val-test splits (Table 2). Though we are using partial data, using all the C codes available in the CodeNet dataset would only improve our overall results. The token length of the generated SBT-IRs is much larger than that of the original C codes. This is because, in addition to the leaf nodes of the AST-based IR, which are the actual code tokens, the SBT-IRs contain the non-leaf tokens as well. Since the maximum sequence length after tokenization accepted by the UniXcoder model is 512, we create a separate dataset of codes whose SBT-IRs fit into this token length. This allows us to evaluate the drop in the performance of our approach due to truncated SBT-IRs for the codes.
We use MAP@R as the metric to evaluate the performance of the Code-cloning task. Each problem statement in the test set consists of a total of 'R + 1' codes which have the same semantics. For each code, the 'R' most semantically similar codes are retrieved from the test set based on the model's predictions and the average precision score is evaluated. The mean of all the average precision scores is then taken to get the final MAP@R score. We form our test splits such that we can compute MAP@R for a certain value of R based on the availability of data.
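The following Python sketch makes this metric concrete. It is illustrative only: it assumes cosine similarity between code embeddings and uses the common formulation of MAP@R in which precision is averaged over the ranks at which correct clones appear among the R retrieved codes.

```
import numpy as np

def map_at_r(embeddings, labels, R):
    """MAP@R: for each code, retrieve its R most similar codes (cosine similarity)
    and average the precision at every rank where a correct clone is retrieved.
    Returned on a 0-100 scale."""
    X = np.asarray(embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)            # never retrieve the query itself
    labels = np.asarray(labels)
    ap_scores = []
    for q in range(len(X)):
        top = np.argsort(-sims[q])[:R]
        rel = (labels[top] == labels[q]).astype(float)
        precisions = np.cumsum(rel) / (np.arange(R) + 1)
        ap_scores.append(float((precisions * rel).sum() / R))
    return 100.0 * float(np.mean(ap_scores))

# toy example: 3 problems with 3 codes each (R + 1 = 3), random embeddings
rng = np.random.default_rng(0)
print(map_at_r(rng.normal(size=(9, 16)), [0, 0, 0, 1, 1, 1, 2, 2, 2], R=2))
```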
### Creating Test Dataset for COBOL Code Cloning
Out of the 727 accepted COBOL submissions for 325 PDs in CodeNet, there are 92 PDs that have 3 or more accepted COBOL codes. We randomly sample 3 codes from each of these 92 PDs to obtain a total of 276 COBOL codes forming our zero-shot COBOL test set. As each of the PDs in this test set has 3 COBOL codes, this allows us to compute MAP@2 for this test set. Henceforth, we refer to this test set as _Test-COBOL-MAP@2_. For generating the second test set of COBOL codes that fit into the 512-token length, we filter out the SBT-IRs of the 727 accepted COBOL codes that exceed this limit. There are 29 PDs for which at least 2 accepted COBOL codes are present that do not exceed this limit of 512 token length. We sample 2 codes randomly from each of these PDs to obtain a total of 58 codes. This allows us to compute MAP@1 for this test set. Henceforth, we term this test split of COBOL as _Test-COBOL-MAP@1_.
### Creating Dataset for C Code Cloning
To create the train-val-test splits for C code IR-SBTs for 1693 PDs (discussed prior in this section), we first remove all the C codes belonging to the PDs used to form the COBOL test sets. This is done to ensure that there is no information leakage in the train-val splits of C with the test splits of COBOL in the zero-shot test setting. Taking the union of the
PDs from the two COBOL test splits, we have 99 PDs. There are 87 PDs that are common between the 1693 PDs for which we have C SBT-IRs and the 99 COBOL test PDs. We split the remaining 1606 PDs in the ratio of 90-10 to form the train-validation splits for the C codes, which we refer to as _Train-C-ALL_ and _Val-C-ALL_, respectively. We use the remaining 87 PDs to form the test split for C. We observe that 29 out of the 87 PDs have at least 300 C codes available. We form the test split of C (_Test-C-MAP@299_) by randomly sampling 300 codes from these 29 PDs. For the second test split for C, we choose 11 PDs out of the 87 PDs that have at least 100 C codes each with a maximum token length of 512. We term this split _Test-C-MAP@99_. The details of the splits are provided in Table 2.
## 4 Approach
**Intermediate Representation**: Figure 1 illustrates key parts of our IR meta-model, which is common for both C and COBOL PLs. The meta-model is designed in such a way that language-specific C and COBOL syntactic constructs having the same semantics are represented using similar IR elements (_objects_). This design is influenced by static program analysis tasks. At a high level, our meta-model captures different kinds of IR elements as _classes_(represented in boxes in the Figure) and relationships among them (represented as arrows). In this Figure, the dotted boxes represent abstract classes and the solid boxes represent concrete classes. The dotted arrows represent the generalization relationship between abstract and concrete classes, whereas solid arrows represent various relationships among the concrete classes. An IR of a program is constructed by instantiating relevant IR elements in the meta-model. A program in a file is represented as _compilation unit_ class that consists of an _Abstract syntax tree_(AST) and a _Symbol table_(ST). AST is a tree of _nodes_ (ASTNode), where each node represents an abstract class in the program. The symbol table stores symbols used in the program. There are several _types_ of nodes such as statements, expressions, functions, variables used in expressions, labels, and constants used in the program. The central elements are _statements_ and _expressions_. They are further classified into different types and the relationship among them is shown in Figure 1. For example, consider a statement \(x=y\). It has a _ExprStmt_ node representing the whole statement, a _Binary_ node with '=' as its operator, with _Ident_AST_ node with node symbol \(x\) as its _LHS_expr_ and another _Ident_AST_ node with symbol \(y\) as its _RHS_expr_. The symbols (variables, labels, and constants) used in the expressions are linked to corresponding symbol table objects. For example, the same variable \(x\) used in two different expressions has two _Ident_AST_ nodes in AST but linked to one symbol \(x\) in the symbol table.
The C and COBOL language front-ends parse the program and transform the concrete syntax tree (CST) into the IR by instantiating relevant objects from the classes and relationships of the meta-model. For our work, we only consider the AST of the IR. For the semantically equivalent COBOL and C codes in Table 1, their corresponding ASTs are illustrated in Figure 2. We further linearize the generated AST using a Structure-Based Traversal (SBT) of the AST. The SBT algorithm starts with the root node of the AST and recursively traverses the child nodes using Depth First Search (DFS). The algorithm emits the node types at the entry and exit of a visiting node. If the node is an intermediate node of the tree, it proceeds to process the child nodes. If the node is a leaf node, the algorithm emits the leaf value after applying the transformations given in Table 3, if required. The algorithm terminates when all the nodes of the AST are visited. Examples of sequences generated by SBT traversal of the IRs of a COBOL and a C code (SBT-IRs) are illustrated in Table 1.
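A minimal Python sketch of this traversal is given below. It is illustrative only: the node-type names follow the meta-model described above, the token mapping includes just two entries from Table 3, and the exact bracketing of the emitted sequence (as well as details such as edge labels and operator handling) is a simplification in the spirit of the SBT of Hu et al. (2018) rather than the exact IR serialization.

```
from dataclasses import dataclass, field
from typing import List, Optional

# a small subset of the C -> COBOL leaf-token mapping of Table 3
TOKEN_MAP = {"scanf": "ACCEPT", "printf": "DISPLAY"}

@dataclass
class ASTNode:
    node_type: str                       # e.g. "ExprStmt", "Binary", "Ident_AST"
    value: Optional[str] = None          # leaf value (symbol, constant, ...)
    children: List["ASTNode"] = field(default_factory=list)

def sbt(node: ASTNode) -> List[str]:
    """Structure-Based Traversal: emit the node type on entry and exit via DFS,
    emitting the (possibly mapped) leaf value for leaf nodes."""
    if not node.children:                # leaf node
        leaf = TOKEN_MAP.get(node.value, node.value)
        return ["(", node.node_type, leaf, ")", node.node_type]
    out = ["(", node.node_type]
    for child in node.children:          # intermediate node: recurse into children
        out += sbt(child)
    out += [")", node.node_type]
    return out

# AST for the statement  x = y  described in the text (operator omitted for brevity)
tree = ASTNode("ExprStmt", children=[
    ASTNode("Binary", children=[
        ASTNode("Ident_AST", value="x"),
        ASTNode("Ident_AST", value="y"),
    ]),
])
print(" ".join(sbt(tree)))
```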
While transforming the C codes, we replace the C code tokens listed in Table 3, which appear as the leaf nodes of the AST (IR), with the semantically equivalent COBOL tokens to facilitate zero-shot transfer. Depending on language syntax, distinct surface forms of the codes written for the same PD may use distinct function and variable names. To retain consistency across distinct surface forms and to avoid the inductive bias created by semantically meaningful function and variable names in both C and COBOL codes, we replace them with generic tokens such as \(Func\) and \(VAR\) (refer to the Table 1 SBT-IR for an example). We ensure that this replacement does not hamper the data and control flow of the code by keeping track of the original names and consistently replacing every instance of each original name with its own distinct generic name. This transformation forces the model to learn the semantic similarity between positive code pairs by understanding the underlying logic of the code, and not by merely exploiting mappings between semantically meaningful function and variable names.
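The following Python sketch illustrates this renaming at the token level. It is a sketch under simplifying assumptions (a toy keyword list, a purely token-level view of the code, and illustrative function names), not the actual transformation pipeline.

```
import re

KEYWORDS = {"int", "return", "if", "else", "while", "for"}   # illustrative subset

def normalize_identifiers(tokens, function_names):
    """Replace function/variable identifiers with generic names, consistently per name."""
    mapping = {}
    out = []
    for tok in tokens:
        if re.fullmatch(r"[A-Za-z_]\w*", tok) and tok not in KEYWORDS:
            if tok not in mapping:
                prefix = "FUNC" if tok in function_names else "VAR"
                count = sum(v.startswith(prefix) for v in mapping.values()) + 1
                mapping[tok] = f"{prefix}{count}"
            out.append(mapping[tok])                          # same name -> same token
        else:
            out.append(tok)
    return out

code = ["int", "add", "(", "int", "a", ",", "int", "b", ")", "{",
        "return", "a", "+", "b", ";", "}"]
print(normalize_identifiers(code, function_names={"add"}))
```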
To fine-tune the UniXCoder Guo et al. (2022) model for Code-Clone detection, we form the positive and negative pairs of the C codes in _Train-C-ALL_ and _Val-C-ALL_. The positive (clones) and negative (not-clones) pairs are the codes belonging to the same and distinct problem descriptions, respectively. The task is treated as a binary classification task, where the class label is 1 if the pair of codes provided are clones of each other or is 0, otherwise. The UniXCoder Guo et al. (2022) model is trained in an encoder-only setting, with the cross-entropy loss, a batch size of 8, and a learning rate of 5e-5. We save the model with the best F1-score on the validation set _Val-C-ALL_. We repeat the experiments with
\begin{table}
\begin{tabular}{l l|l l}
**C Code Tokens** & **COBOL Code Tokens** & **C Code Tokens** & **COBOL Code Tokens** \\ \hline scanf & ACCEPT & printf & DISPLAY \\ strtok & UNSTRING &, & DELIMITED \\ = & INTO & strlen & LENGTH OF \\ strcat & STRING & strlen & STORED-CHAR-LENGTH \\ qsort & SORT & strlen & COUNT \\ fread & READ & stdin/stdout & CONSOLE \\ lsearch/bsearch & SEARCH & statistical & ORD \\ \% & REAM MOD & round & ROUNDED \\ + & SUM & memset & INITIALIZE \\ \end{tabular}
\end{table}
Table 3: C-COBOL Token Mapping
(i) the original C codes, (ii) the C-SBT-IRs, and (iii) the C-SBT-IRs truncated to the maximum token length allowed by UniXcoder (512) (_Train-C-512_ and _Val-C-512_).
We also train a vanilla transformer model in an auto-encoder setting, reconstructing the input C-SBT-IRs with a reconstruction loss so that the model learns the syntax, and then fine-tune the trained encoder of the transformer in a Siamese setting with a contrastive loss on positive and negative pairs of code SBT-IRs (formed as explained above) so that the model learns the code semantics. Thus, we have a total of four trained models: (i) _UniXcoder-C-SBT-IR-ALL_: UniXcoder fine-tuned with all C-SBT-IRs, (ii) _UniXcoder-C-SBT-IR-512_: UniXcoder fine-tuned with C-SBT-IRs which fit into the 512 token length, (iii) _Transformer-C-SBT-IR-ALL_: the vanilla transformer trained with all C-SBT-IRs, and (iv) _UniXcoder-C-Code-ALL_: UniXcoder fine-tuned with all C codes. We perform inference with our four test splits described in the prior section: (i) _Test-COBOL-MAP@2_, (ii) _Test-COBOL-MAP@1_, (iii) _Test-C-MAP@299_, and (iv) _Test-C-MAP@99_.
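As an illustration of the Siamese fine-tuning objective, the following PyTorch sketch uses a stub encoder in place of the trained transformer encoder; it is not the implementation used here, and the pooling, embedding size, and margin value are assumptions made only for the example.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class StubEncoder(nn.Module):
    """Stands in for the trained transformer encoder that maps a token-id
    sequence to a single code embedding; the architecture here is illustrative."""
    def __init__(self, vocab_size=1000, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, ids):                       # ids: (batch, seq_len)
        return self.emb(ids).mean(dim=1)          # mean-pool token embeddings

def contrastive_loss(encoder, ids_a, ids_b, is_clone, margin=1.0):
    """Siamese contrastive loss: pull clone pairs together, push non-clone
    pairs at least `margin` apart in embedding space."""
    za, zb = encoder(ids_a), encoder(ids_b)
    dist = F.pairwise_distance(za, zb)
    pos = is_clone * dist.pow(2)
    neg = (1 - is_clone) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()

encoder = StubEncoder()
ids_a = torch.randint(0, 1000, (4, 32))
ids_b = torch.randint(0, 1000, (4, 32))
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])       # 1 = clone pair, 0 = non-clone
print(contrastive_loss(encoder, ids_a, ids_b, labels).item())
```

For the UniXcoder pair-classification variant described above, the code pair would instead be scored by a classification head and trained with the cross-entropy loss.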
## 5 Results and Discussions
Table 4 shows the performance comparison on all the test datasets. We use the algorithm by Mou et al. (2016) for MAP@R score computations. Testing for COBOL is done in a zero-shot setting as training is performed only using C-SBT-IRs or C codes. We have computed random MAP scores as one of the benchmarks. The random MAP is the MAP score obtained if the selection of similar codes in the test split is done in a random manner. For both COBOL test splits, the average is taken over ten-thousand runs to get final scores of 0.54 for _Test-COBOL-MAP@2_ and 1.72 for _Test-COBOL-MAP@1_. The C test splits being larger than the COBOL ones, the average is taken over a thousand runs to get final scores of 0.19 for _Test-C-MAP@299_ and 1.23 for _Test-C-MAP@99_. As illustrated in Table 4, our results are significantly better than the random MAPs on all test splits.
For both test splits, the performance of the pre-trained UniXcoder is significantly better than that of the vanilla transformer trained using C-SBT-IRs. The MAP scores for _Test-COBOL-MAP@2_ and _Test-COBOL-MAP@1_ increase by 19.94 and 31.04, respectively. This demonstrates the efficacy of using a model pre-trained with a large amount of code data, which yields superior performance compared to a model trained from scratch with a comparatively small amount of data.
There is a very small amount of improvement in the performance (1.53 for _Test-COBOL-MAP@2_ and 8.62 for _Test-COBOL-MAP@1_) after task-specific fine-tuning of UniXcoder with C-codes over the pre-trained model. This shows that the model fine-tuned with C-codes is not generalizable for unseen COBOL codes. UniXcoder trained on C-SBT-IRs yields the best results on the COBOL test splits in the zero-shot setting. There is an increase in MAP score of 12.85 (36.36% rise) for _Test-COBOL-MAP@2_, 24.14 (45.16% rise) for _Test-COBOL-MAP@1_ over the pre-trained UniXcoder without any programming language-specific fine-tuning. This demonstrates that the model trained with the common SBT-IR representations of C code facilitates the transferability of code understanding from C to COBOL, yielding much better zero-shot performance on COBOL. The drop in performance on _Test-COBOL-MAP@2_, when tested on the model trained with _Train-C-SBT-IR-512_ (45.56%) as opposed to the model trained with the complete data (_Train-C-SBT-IR-ALL_) (48.56%), is very less. All the above results showcase that the C-SBT-IRs fitting into maximum token lengths provides better supervision than the ones which don't. This can be because of the ones which exceed the maximum token length, on truncation, might be losing the semantic information and thus may not provide correct supervision for the code-cloning task. Moreover, the results show that the model trained with _Train-C-ALL_ yields the best results for _Test-COBOL-MAP@2_ that is the test set with no restriction on the token lengths. Similarly, when the model is trained with C-IR-SBTs that have lesser than 512 sequence length i.e. _Train-C-SBT-IR-512_ it gives the best results for _Test-COBOL-MAP@1_ that is the test set containing COBOL-IR-SBTs with sequence length lesser than 512. Thus, there is a huge performance gap (MAP@2 of 48.19 vs MAP@1 of 82.76) between the results of the
\begin{table}
\begin{tabular}{l|l|l|l|l} \hline \hline Trained Model & Test Split & MAP & Test Split & MAP \\ \hline Random & _Test-COBOL-MAP@2_ & 0.54 & _Test-COBOL-MAP@1_ & 1.72 \\ _Transformer-C-SBT-IR-ALL_ & _Test-COBOL-MAP@2_ & 15.40 & _Test-COBOL-MAP@1_ & 22.41 \\ _UniXcoder-Pertuation_ & _Test-COBOL-MAP@2_ & 35.34 & _Test-COBOL-MAP@1_ & 53.45 \\ _UniXcoder-C-Code-ALL_ & _Test-COBOL-MAP@2_ & 36.87 & _Test-COBOL-MAP@1_ & 62.07 \\ _UniXcoder-C-SBT-IR-ALL_ & _Test-COBOL-MAP@2_ & **48.19** & _Test-COBOL-MAP@1_ & 77.59 \\ _UniXcoder-C-SBT-IR-512_ & _Test-COBOL-MAP@2_ & 45.56 & _Test-COBOL-MAP@1_ & **82.76** \\ \hline Random & _Test-C-MAP@299_ & 0.197 & _Test-C-MAP@59_ & 1.23 \\ _Transformer-C-SBT-IR-ALL_ & _Test-C-MAP@299_ & 31.05 & _Test-C-MAP@99_ & 40.55 \\ _UniXcoder-Pertuation_ & _Test-C-MAP@299_ & 27.63 & _Test-C-MAP@99_ & 46.78 \\ _UniXcoder-C-Code-ALL_ & _Test-C-MAP@299_ & 32.91 & _Test-C-MAP@99_ & 52.94 \\ _UniXcoder-C-SBT-IR-ALL_ & _Test-C-MAP@299_ & **64.75** & _Test-C-MAP@99_ & 89.12 \\ _UniXcoder-C-SBT-IR-512_ & _Test-C-MAP@299_ & 63.44 & _Test-C-MAP@99_ & **90.82** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results
COBOL codes not fitting into 512 token lengths, versus the COBOL codes fitting into the maximum token length. This performance gap can be attributed to the fact that truncation of the test COBOL codes exceeding the sequence length limit will lead to lesser understanding of the code semantics by the model and also a higher random MAP value for the MAP@1 test split.
Similar observations can be made on the test splits of C-SBT-IRs. Moreover, UniXcoder fine-tuned with C-SBT-IRs gives better results for C-SBT-IRs as compared to COBOL-SBT-IRs. This slight degradation in overall scores for COBOL relative to C is due to the inherent difference in the structure of code in the two programming languages, leading to a model trained with C-SBT-IRs performing better on C-SBT-IRs than on COBOL-SBT-IRs. The random MAP scores for the C test splits are also lower than those for COBOL, which makes the results on C even stronger.
## 6 Conclusion
We define a neuro-symbolic approach to address the Code-Cloning task for legacy COBOL codes. We define a meta-model that is instantiated to generate an Intermediate Representation common across no-resource COBOL and high-resource C codes. We use the C code IRs to fine-tune the UniXCoder model for the Code-Cloning task and perform inference on the IRs of COBOL codes. The fine-tuned model leads to a performance improvement of 12.85 MAP@2 for COBOL over the pre-trained UniXCoder model in a zero-shot setting, and an improvement of 32.79 MAP@2 over a vanilla transformer model trained with C code IRs. This demonstrates the efficacy of our approach of performing zero-shot transfer with the common IR, and also of using a model pre-trained on a large amount of code data, rather than a model trained from scratch, to better facilitate the transfer.
## 7 Future Work
More recently, Large Language Models (LLMs) such as T5 Roberts et al. (2019), GPT3 Brown et al. (2020), Codex Chen et al. (2021), and PaLM Chowdhery et al. (2022), pre-trained with massive volumes (in the order of 50 TB) of data, have shown improved performance for code understanding tasks using in-context learning Brown et al. (2020), Huang and Chang (2022), which works in a zero-shot or few-shot setting, with no or very few available samples for a given task. However, in-context learning does not work well with smaller Language Models (LMs) Qiu et al. (2022), Hosseini et al. (2022), and they require task-specific fine-tuning to yield acceptable performance. As future work, we want to utilize the in-context learning capabilities of LLMs, which is the upcoming paradigm to address code understanding in a low-resource setting. We plan to compare our UniXCoder-based zero-shot performance on the COBOL code-cloning task, facilitated by a common meta-model and IR, with the zero-shot performance for the task using LLMs pre-trained with a large amount of code data.
|
2308.02188 | Kernelization of Counting Problems | We introduce a new framework for the analysis of preprocessing routines for
parameterized counting problems. Existing frameworks that encapsulate
parameterized counting problems permit the usage of exponential (rather than
polynomial) time either explicitly or by implicitly reducing the counting
problems to enumeration problems. Thus, our framework is the only one in the
spirit of classic kernelization (as well as lossy kernelization). Specifically,
we define a compression of a counting problem $P$ into a counting problem $Q$
as a pair of polynomial-time procedures: $\mathsf{reduce}$ and $\mathsf{lift}$.
Given an instance of $P$, $\mathsf{reduce}$ outputs an instance of $Q$ whose
size is bounded by a function $f$ of the parameter, and given the number of
solutions to the instance of $Q$, $\mathsf{lift}$ outputs the number of
solutions to the instance of $P$. When $P=Q$, compression is termed
kernelization, and when $f$ is polynomial, compression is termed polynomial
compression. Our technical (and other conceptual) contributions concern both
upper bounds and lower bounds. | Daniel Lokshtanov, Pranabendu Misra, Saket Saurabh, Meirav Zehavi | 2023-08-04T08:11:23Z | http://arxiv.org/abs/2308.02188v1 | # Kernelization of Counting Problems
###### Abstract
We introduce a new framework for the analysis of preprocessing routines for parameterized counting problems. Existing frameworks that encapsulate parameterized counting problems permit the usage of exponential (rather than polynomial) time either explicitly or by implicitly reducing the counting problems to enumeration problems. Thus, our framework is the only one in the spirit of classic kernelization (as well as lossy kernelization). Specifically, we define a compression of a counting problem \(P\) into a counting problem \(Q\) as a pair of polynomial-time procedures: reduce and lift. Given an instance of \(P\), reduce outputs an instance of \(Q\) whose size is bounded by a function \(f\) of the parameter, and given the number of solutions to the instance of \(Q\), lift outputs the number of solutions to the instance of \(P\). When \(P=Q\), compression is termed kernelization, and when \(f\) is polynomial, compression is termed polynomial compression. Our technical (and other conceptual) contributions can be classified into two categories:
**Upper Bounds.** We prove two theorems: _(i)_ The #Vertex Cover problem parameterized by solution size admits a polynomial kernel; _(ii)_ Every problem in the class of #Planar \(\mathcal{F}\)-Deletion problems parameterized by solution size admits a polynomial compression.
**Lower Bounds.** We introduce two new concepts of cross-compositions: EXACT-cross-composition and SUM-cross-composition. We prove that if a #P-hard counting problem \(P\) EXACT-cross-composes into a parameterized counting problem \(Q\), then \(Q\) does not admit a polynomial compression unless the polynomial hierarchy collapses. We conjecture that the same statement holds for SUM-cross-compositions. Then, we prove that: _(i)_ #Min \((s,t)\)-Cut parameterized by treewidth does not admit a polynomial compression unless the polynomial hierarchy collapses; _(ii)_ #Min \((s,t)\)-Cut parameterized by minimum cut size, #Odd Cycle Transversal parameterized by solution size, and #Vertex Cover parameterized by solution size minus maximum matching size, do not admit polynomial compressions unless our conjecture is false.
## 1 Introduction
Preprocessing is an integral part of almost any application, ranging from lossless data compression to microarray data analysis for the classification of cancer types. Therefore, _kernelization_ (or, more generally, _compression_), the mathematical paradigm to analyze preprocessing procedures, is termed "the lost continent of polynomial time" [22]. Formally, a decision problem \(P\) admits a _compression_ into a decision problem \(Q\) if there exists a polynomial-time algorithm that, given an instance \((I,k)\) of \(P\), translates it into an equivalent\({}^{1}\) instance \((I^{\prime},k^{\prime})\) of \(Q\) of size \(f(k)\) for some computable function \(f\) that depends only on \(k\). When \(P=Q\), a compression is termed _kernelization_. It is known that a (decidable) problem admits a kernel if and only if it is _fixed-parameter tractable (FPT)_ [8].\({}^{2}\) Thus, the most central question in kernelization is: Which problems admit compressions (or kernels) of size \(f(k)\) where \(f\) is polynomial in \(k\), termed _polynomial compressions_? Techniques to show upper bounds on (polynomial or other) kernel sizes have already emerged in the early 1990s [26]. On the other hand, Bodlaender et al. [3] proved that, unless the polynomial hierarchy collapses, there exist problems that do not admit a polynomial compression (and, hence, neither a polynomial kernel).
Footnote 1: That is, \((I,k)\) is a yes-instance if and only if \((I^{\prime},k^{\prime})\) is a yes-instance.
Footnote 2: We refer to Section 3 for basic definitions in parameterized complexity and graph theory.
Due to the centrality and mathematical depth of compression/kernelization, the underlying framework has been extended to capture optimization problems, and, more generally, the computation of approximate (rather than only exact) solutions for optimization problems, by Lokshtanov et al. [38] (building upon [23]). In particular, a compression of an optimization problem \(P\) into an optimization problem \(Q\) is a pair of polynomial-time procedures: reduce and lift. Given an instance of \(P\), reduce outputs an instance of \(Q\) whose size is bounded by a function \(f\) of the parameter, and given an optimal solution to the instance of \(Q\), lift outputs an optimal solution to the instance of \(P\). More generally, to encompass the computation of approximate solutions with a loss of factor \(\alpha\geq 1\), given a \(\beta\)-approximate solution to the instance of \(Q\), for any \(\beta\geq 1\), lift must output an \(\alpha\cdot\beta\)-approximate solution to the instance of \(P\). Since its introduction, this notion of compression/kernelization (termed _lossy compression/kernelization_) has already found a wide range of applications; see, e.g., [40, 28, 21, 36, 35, 1, 47, 20] for just a few illustrative examples.
In this paper, we introduce a new framework for the analysis of preprocessing routines for parameterized counting problems. Existing frameworks that encapsulate parameterized counting problems permit the usage of exponential (rather than polynomial) time either explicitly or by implicitly reducing counting problems to enumeration problems (see Section 1.1). Thus, our framework is the only one in the spirit of classic compression/kernelization in particular, and lossy compression/kernelization in general. Specifically, we define a compression of a counting problem \(P\) into a counting problem \(Q\) as a pair of polynomial-time procedures: reduce and lift. Given an instance of \(P\), reduce outputs an instance of \(Q\) whose size is bounded by a function \(f\) of the parameter, and given the number of solutions to the instance of \(Q\), lift outputs the number of solutions to the instance of \(P\). We demonstrate the depth of our framework by proofs of both positive and negative results (see Section 1.2). In particular, in terms of conceptual contribution, in addition to the framework itself, we also introduce two new types of cross-compositions, termed EXACT- and SUM-cross-compositions, aiming to provide analogs to the classic OR- and AND-cross-compositions used to derive negative results for (classic) kernels.
Over the past two decades, the body of works on parameterized counting problems has grown quite rapidly (see, e.g., [24, 12, 39, 6, 16, 14, 13] for a few illustrative examples of recent developments). In both theory and practice, there are various scenarios where counting the number of solutions might be equally (or more) important than only detecting a single solution (if one exists) [11]. This includes, for example, the computation of graph motifs to observe certain phenomena in social and biological networks [41], and determination of thermodynamic properties of discrete systems by partition functions [30]. However, most natural counting problems are not known to (and unlikely to) admit polynomial-time algorithms: Beyond problems whose decision versions are NP-hard, there also exist numerous problems whose decision versions are solvable in polynomial time, but whose counting versions are unlikely to be (e.g, a prime example of such problems is the Maximum Matching problem
on bipartite graphs [46, 45]). Naturally, this makes the study of the parameterized complexity of counting problems very attractive.
### Related Frameworks
Prior to our work, there existed three frameworks relevant to the analysis of preprocessing routines for parameterized counting problems. However, all of these three frameworks (explicitly or implicitly) correspond to computation in exponential (rather than polynomial) time, as well as to either enumeration (rather than counting) or data reduction other than compression/kernelization. Thus, they serve purposes that are very different than what _compression/kernelization_ of parameterized _counting_ problems should be (though, of course, they are of interest on their own right). Below, we elaborate on each of these three frameworks.
Among the three aforementioned frameworks, the one whose utility is most similar to ours was developed by Thurley [44], yet, even this framework concerns, implicitly, enumeration and computation in exponential time (indeed, it is referred to as a formalization of so-called enumeration compactors in [33], and as a reduction of counting to enumeration in [27]). Roughly speaking, the definition of Thurley [44] can be interpreted as follows when using two polynomial-time procedures (as we do), reduce and lift. Here, given an instance of a _counting_ problem \(P\), reduce outputs an instance of an _enumeration_ problem \(Q\) whose size is bounded by a function \(f\) of the parameter. We suppose that each solution to the instance of \(Q\) corresponds to a set of solutions to the instance of \(P\); then, the collection of sets of solutions to the instance \(P\) corresponding to the different solutions to the instance of \(Q\) should form a partition of the set of solutions to the instance of \(P\). Accordingly, given a particular _solution_\(s\) to the instance of \(Q\), lift outputs the number of solutions to the instance of \(P\) that correspond to \(s\). In particular, given an _enumeration_ of the solutions to the instance of \(Q\), by calling lift for each one of them, we can obtain (in exponential time, depending on the number of solutions) the number of solutions to the instance of \(P\).
The second framework is explicitly designed for enumeration problems. Still, we briefly discuss it here, since it shares some similarity to the framework of Thurley [44]. This framework was introduced by Creignou et al. [10] and refined by Golovach et al. [27]. Roughly speaking, in its latter incarnation, we are also given two polynomial-time procedures, reduce and lift. Here, given an instance of an _enumeration_ problem \(P\), reduce outputs an instance of an _enumeration_ problem \(Q\) whose size is bounded by a function \(f\) of the parameter. Then, lift is defined similarly as before, except that now, given a particular solution \(s\) to the instance of \(Q\), it enumerates (either in polynomial time or with polynomial delay) the solutions to the instance of \(P\) that correspond to \(s\). Like before, to derive the number of solutions to the instance of \(P\), it is required to spend exponential time.
The third framework is designed specifically for counting, but it is less in the spirit of compression/kernelization, and, accordingly, it is termed _compaction_. Additionally and similarly to the two aforementioned frameworks, it corresponds to computation in exponential time. This framework was introduced by Kim et al. [33] (and further surveyed in [43]). Roughly speaking, here we consider a polynomial-time procedure compactor (that can be thought of as reduce) and an _exponential-time (or worse)_ procedure extractor (that is very different in spirit than lift). Here, given an instance of a counting problem \(P\), compactor outputs an instance of a counting problem \(Q\) whose size is bounded by a function \(f\) of the parameter. Having computed the output instance, one can essentially discard all knowledge of the input instance, yet call the procedure extractor to solve the input instance. In a sense, the definition of compaction can be viewed as an "intermediate" concept that lies in between those of a fixed-parameter algorithm and a compression algorithm, which is of interest on its own right. Perhaps the main drawback of this third framework is that, because extractor is allowed (and must be allowed, if we deal with a #P-hard problem) to spend exponential-time (or worse) in the size of the output of compactor, we might often want to employ, in the first place, a fixed-parameter algorithm directly on the instance of \(P\).
We note that we are not aware, with respect to any of the three frameworks discussed above, of the establishment of any non-trivial lower bound--that is, a lower bound that does not simply follow from fixed-parameter intractability.
**Remark.** We were very recently made aware that, independently of our work, Jansen and van der Steenhoven [31] have just presented results that are more in line in spirit with ours: Specifically, they either solve the given instance, or output an instance of size polynomial in the parameter and with the same number of solutions. They also speculate on developing a meaningful theory of counting kernelization. We answer this speculation in this paper, as we develop a framework for counting kernelization, along with a framework for proving lower bounds.
### Our Contribution
Our technical (and other conceptual) contributions can be classified into two categories: upper bounds and lower bounds. Here, we discuss the statements of our results, and the new concepts that we introduce in the context of lower bounds. The technical aspects of our work are overviewed later, in Section 2. (We remark that some additional simple statements concerning our notion of compression/kernelization are proved in Section 4.)
**Upper Bounds.** Let us start with the discussion of our upper bounds. We begin by the analysis of the \(\#k\)-Vertex Cover problem, whose decision version is the most well studied problem in parameterized complexity [15, 18]. The objective is to count the number of vertex covers of size at most \(k\) in a given graph \(G\). Here, it is important to note that we count _all_ vertex covers of size at most \(k\), and not only the minimal ones (which is a significantly easier task; see Section 5). For the \(\#k\)-Vertex Cover problem, we prove the following theorem in Section 5.
**Theorem 1**.: \(\#k\)-Vertex Cover _admits a polynomial kernel._
Next, we turn to consider a wide class of parameterized counting problems, termed the class of \(\#k\)-Planar \(\mathcal{F}\)-Deletion problems. In particular, the class of \(k\)-Planar \(\mathcal{F}\)-Deletion problems encompasses a wide variety of well-known problems that have been extensively studied from the viewpoint of parameterized complexity, such as Vertex Cover, Feedback Vertex Set, Treewidth \(\eta\)-Deletion, and more [25]. While we present a meta-theorem that resolves every problem in this class, we do not generalize our previous theorem--our meta-theorem yields compressions rather than kernelizations. Formally, the class of \(\#k\)-Planar \(\mathcal{F}\)-Deletion problems contains one problem for every (finite) set of connected graphs \(\mathcal{F}\) that contains at least one planar graph--here, given a graph \(G\) and \(k\in\mathbb{N}_{0}\), the objective is to count the number of vertex sets of size at most \(k\) whose removal from \(G\) yields a graph that does not contain any graph from \(\mathcal{F}\) as a minor. For the class of \(\#k\)-Planar \(\mathcal{F}\)-Deletion problems, we prove the following theorem in Section 6.
**Theorem 2**.: \(\#k\)-Planar \(\mathcal{F}\)-Deletion _admits a polynomial compression._
**Lower Bounds.** We present two new types of cross-compositions, which we term EXACT-cross-composition and SUM-cross-composition. To understand the roots of these notions, let us first briefly present the classic notion of OR-cross-composition. Roughly speaking, we say that a decision problem \(P\) OR-cross-composes into a parameterized problem \(Q\) if, given a set of instances \(x_{1},x_{2},\ldots,x_{t}\) of \(P\), we can, in polynomial time, output a single instance \((y,k)\) of \(Q\) with the following properties: _(i)_ the parameter \(k\) is bounded by a polynomial function of \(\max_{i=1}^{t}|x_{i}|\) and \(\log t\), and _(ii)_\((y,k)\) is a yes-instance if and only if at least one \(x_{i}\) is a yes-instance. The importance of the notion of OR-cross-composition to compression/kernelization is rooted at the following theorem: If an NP-hard problem \(P\) OR-cross-composes into a parameterized problem \(Q\), then, \(Q\) does not admit a polynomial compression (and, hence, neither a polynomial kernel), unless \(\mathrm{coNP}\subseteq\mathrm{NP}/\mathrm{poly}\)[4, 5]. The intuition behind the correctness of this theorem is that, if \(Q\) did admit a polynomial compression, then that would have meant that, in polynomial time, we are able to turn \(t\) instances of an NP-hard problem to a single instance whose size depends (roughly) only on that size of a polylogarithmic number of them rather than all of them--intuitively, this means that we were able to resolve instances of an NP-hard problem in polynomial time.
Now, let us first discuss our notion of EXACT-cross-composition.3 Roughly speaking, we say that a counting problem \(P\) EXACT-cross-composes into a parameterized counting problem \(Q\) if, given a set of instances \(x_{1},x_{2},\ldots,x_{t}\) of \(P\), we can, in polynomial time, output a single instance \((y,k)\) of \(Q\) with the following properties: _(i)_ the parameter \(k\) is bounded by a polynomial function of \(\max_{i=1}^{t}|x_{i}|\) and \(\log t\), and _(ii)_ given the number of solutions to \((y,k)\), we can output, in polynomial time, the number of solutions to \(x_{i}\) for every \(i\in\{1,2,\ldots,t\}\). For EXACT-cross-composition, we prove the following theorem in Section 8.
Footnote 3: In the manuscript, we consider SUM-cross-composition first since the reduction we give in the context of EXACT-cross-composition builds upon one of the reductions that we give in the context of SUM-cross-composition.
**Theorem 3**.: _Assume that a #P-hard counting problem \(P\) EXACT-cross-composes into a parameterized counting problem \(Q\). Then, \(Q\) does not admit a polynomial compression, unless #P \(\subseteq\) "NP/poly" (which implies that coNP \(\subseteq\) NP/poly)._
For an application of Theorem 3, we consider the classic #Min \((s,t)\)-Cut problem. Here, given a graph \(G\) and two vertices \(s,t\) in \(G\), the objective is to count the number of minimum \((s,t)\)-cuts in \(G\). Notably, the decision version of this problem is solvable in polynomial time [9] (and, hence, it trivially admits a polynomial, and even constant-size, kernel, with respect to any parameter). Moreover, it is easy to see that #Min \((s,t)\)-Cut parameterized by treewidth is in FPT. So, it is natural to ask whether #Min \((s,t)\)-Cut parameterized by treewidth admits a polynomial kernel (or at least a polynomial compression). We answer this question negatively in Section 8.
**Theorem 4**.: #\(w\)-Min \((s,t)\)-Cut _does not admit a polynomial compression, unless #P \(\subseteq\) "NP/poly" (which implies that coNP \(\subseteq\) NP/poly)._
Lastly, let us discuss our notion of SUM-cross-composition. Roughly speaking, we say that a counting problem \(P\) SUM-cross-composes into a parameterized counting problem \(Q\) if, given a set of instances \(x_{1},x_{2},\ldots,x_{t}\) of \(P\), we can, in polynomial time, output a single instance \((y,k)\) of \(Q\) with the following properties: _(i)_ the parameter \(k\) is bounded by a polynomial function of \(\max_{i=1}^{t}|x_{i}|\) and \(\log t\), and _(ii)_ the number of solutions to \((y,k)\) is equal to the sum of the number of solutions to \(x_{i}\) over every \(i\in\{1,2,\ldots,t\}\). For SUM-cross-composition, we have the following conjecture, termed the SUM-conjecture: If a #P-hard counting problem \(P\) SUM-cross-composes into a parameterized counting problem \(Q\), then \(Q\) does not admit a polynomial compression. The reason why we believe that this conjecture is true is rooted at the exact same intuition mentioned earlier for the correctness of the corresponding theorem for OR-cross-composition.
As applications of our conjecture, we again consider the #Min \((s,t)\)-Cut problem, now parameterized by the size of a minimum \((s,t)\)-cut (which is in FPT [2]). Additionally, we consider the #Odd Cycle Transversal problem parameterized by solution size and the #Vertex Cover problem parameterized by solution size minus either its LP-value or the size of a maximum matching (we refer to Section 3 for formal definitions). We remark that the decision versions of these parameterized counting problems are known to admit polynomial kernels [34]. For the aforementioned parameterized counting problems, we prove the following theorem in Section 7.
**Theorem 5**.: #\(k\)-Min \((s,t)\)-Cut_, #\(k\)-Odd Cycle Transversal, #\(\ell\)-Vertex Cover and #\(m\)-Vertex Cover do not admit polynomial compressions, unless the SUM-conjecture is false._
## 2 Overview of Our Proofs
In what follows, we present an overview for the proofs of our main theorems.
Proof of Theorem 1.: Our reduction consists of two steps. Here, we note that most of our efforts are invested in the second step. The first step yields two graphs: \(G_{1}\) and \(G_{2}\). We begin by an exhaustive application of the classic Buss rule (Definition 5.1) on the input instance \((G,k)\). In particular, unless the answer is \(0\), this yields an instance \((G_{1},k_{1})\) with \(k_{1}\leq k\) and \(|E(G_{1})|\leq k_{1}^{2}\) whose number of solutions
equals the number of solutions to \(G\). At this point, we do not have a kernel (or compression)--\(G_{1}\) can contain arbitrarily many isolated vertices. So, we define \(G_{2}\) as \(G_{1}\) where all isolated vertices are removed. However, the number of solutions (denoted by \(x_{2}\)) to \((G_{2},k_{1})\) can be very different from the number of solutions (denoted by \(x_{1}\)) to \((G_{1},k_{1})\), and it is unclear how to derive the second from the first. Specifically, suppose that \(y_{i}\), \(i\in\{0,1,\ldots,k_{2}\}\), is the number of solutions to \((G_{2},k_{1})\) of size exactly \(i\). It is easy to see that \(x_{1}=\sum_{i=0}^{k_{2}}(y_{i}\cdot\sum_{j=0}^{k_{2}-i}\binom{|V(G_{1})|-|V(G_{2})|}{j})\). However, by knowing \(x_{2}\), we cannot know the individual values of the \(y_{i}\)'s! Although \(x_{2}=\sum_{i=0}^{k_{2}}y_{i}\), there can be more than one choice (in fact, there can be exponentially many choices) for the \(y_{i}\)'s given only the knowledge of \(x_{2}\).
Due to the above difficulty, we perform the second step of our reduction. Roughly speaking, we define \(G_{3}\) (in Definition 5.4) by the replacement of each vertex of \(G_{2}\) by \(d\) copies (false twins) of that vertex, and the addition of \(t\) new isolated vertices. To make later calculations work, we pick \(d=|V(G_{2})|\leq\mathcal{O}((k_{2})^{2})\), and we pick \(t\) to be "large enough" compared to \(d\). Now, our main objective is to prove how, from the number of solutions to \((G_{3},k_{3})\) (denoted by \(x_{3}\)), we can derive the individual values of the \(y_{i}\)'s.
To achieve the above-mentioned objective, we define a mapping from the set of solutions to \((G_{2},k_{1})\) to the power set of the set of solutions to \((G_{3},k_{3})\). Specifically, each vertex subset (in a collection denoted by \(\mathsf{Map}(X)\)) of \(G_{3}\) that is mapped to a solution \(X\) to \((G_{2},k_{1})\) is the union of all "copies" of each vertex in \(X\) as well as at most \(k_{3}-d\cdot|X|\) many other vertices from \(G_{3}\) so that there does not exist a vertex outside \(X\) having all of its copies chosen (Definition 5.7). We first assert that this mapping corresponds to a partition of the set of solutions to \((G_{3},k_{3})\) (Lemma 5.8). Then, we turn to analyze the sizes of the mapped collections. Towards this, we begin with a simple proof that for every \(X\) of size \(i\), for some \(i\in\{0,1,\ldots,k_{2}\}\), the size \(|\mathsf{Map}(X)|\) is the same (denoted by \(w_{i}\)), captured by an explicit formula (Lemma 5.9). In particular, \(x_{3}=\sum_{i=0}^{k_{2}}y_{i}\cdot w_{i}\). Consider this equality as Equation (*).
The main property of the \(w_{i}\)'s is that, for every \(i\in\{0,1,\ldots,k_{2}\}\), \(w_{i}\) is "significantly" larger than the sum of all \(w_{j}\)'s for \(j>i\) (proved in Lemma 5.10). In particular, based on Equation (*) and this property, we can derive, from \(x_{3}\), the individual values of the \(y_{i}\)'s. Specifically, this can be done by the following loop. For \(i=0,1,2,\ldots,k_{2}\), we let \(y_{i}\leftarrow\lfloor x_{3}/w_{i}\rfloor\), and update \(x_{3}\gets x_{3}-y_{i}\cdot w_{i}\). This computation can be performed efficiently (in polynomial time), since the \(w_{i}\)'s can be computed efficiently by dynamic programming (Lemma 5.11). In turn, this computation is the main part of the procedure lift, presented in Section 5.3.
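As an illustration of this decoding step (not the formal lift procedure itself), a minimal Python sketch is given below; it assumes the coefficients \(w_{0}>w_{1}>\cdots>w_{k_{2}}\) are supplied as a list, and the toy coefficients at the end are hypothetical, used only to demonstrate the round trip.

```python
def recover_solution_counts(x3, w):
    """Recover y_0, ..., y_{k_2} from x3 = sum_i y_i * w[i].

    Assumes w[0] > w[1] > ... and that every w[i] dominates the largest
    possible total contribution of all later terms, as guaranteed for the
    coefficients arising from G_3 (Lemma 5.10).
    """
    counts = []
    remainder = x3
    for wi in w:                       # i = 0, 1, ..., k_2
        yi, remainder = divmod(remainder, wi)
        counts.append(yi)
    assert remainder == 0              # x3 is exactly representable
    return counts


# Round trip with hypothetical coefficients satisfying the domination property.
w = [10**6, 10**3, 1]
y_true = [4, 17, 9]
x3 = sum(yi * wi for yi, wi in zip(y_true, w))
assert recover_solution_counts(x3, w) == y_true
```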
Proof of Theorem 2.: At a high level, we follow the approach of [25] who give a polynomial kernel for Planar-\(\mathcal{F}\) Deletion. Given an instance \((G,k)\), we compute a modulator \(X\) using an approximation algorithm [25] (see Proposition 6.3). This modulator has size \(k^{\mathcal{O}(1)}\), assuming that \(G\) has a \(\mathcal{F}\)-deletion set of size at most \(k\). Next, we consider the components of \(G-X\). A component \(C\) is _irrelevant_, if it is disjoint from every minimal \(\mathcal{F}\)-deletion set of size at most \(k\). Using the properties of \(\mathcal{F}\)-free graphs (see Proposition 6.1 and Proposition 6.3), we obtain that all but \(k^{\mathcal{O}(1)}\) components of \(G-X\) are irrelevant (see Lemma 6.6). We delete all irrelevant components in the first phase of the reduction step. Let \(G^{\prime}\) be the resulting graph.
The next reduction step considers each component of \(G^{\prime}-X\). For each such component \(C\), we observe that it is a _near-protrusion_[25], i.e. a subgraph that has constant treewidth and, after the removal of a \(\mathcal{F}\)-deletion set from \(G^{\prime}\), has a constant-sized boundary. We then apply several powerful results on _boundaried graphs_, summarized in Section 6.1 (also see [26] for details), to show that the information required to count the number of \(\mathcal{F}\)-deletion sets of size \(k^{\prime}\) in \(G^{\prime}\), for every \(k^{\prime}\leq k\), can be stored in a compressed form using \(k^{\mathcal{O}(1)}\) space.
Briefly, a boundaried graph is a graph \(H\) where a subset of vertices \(B\) are marked as boundary vertices. These boundary vertices are labeled with integers. Given two boundaried graphs \(H_{1}\) and \(H_{2}\), whose boundary vertices are labeled using the same set of integers, we can "glue" them to obtain a graph \(H_{1}\oplus H_{2}\), which is obtained by first taking a disjoint union of the two graphs and then identifying boundary vertices with the same label. Using the notion of boundaried graphs and gluing, we can define an equivalence relation, \(\equiv_{\mathcal{F}}\) such that \(H_{1}\equiv_{\mathcal{F}}H_{2}\) if and only if for any other boundaried graph \(H_{3}\), \(H_{1}\oplus H_{3}\) is \(\mathcal{F}\)-minor free \(\iff\)\(H_{2}\oplus H_{3}\) is \(\mathcal{F}\)-minor free. It is known that this equivalence relation
has finitely many equivalence classes for any fixed \(\mathcal{F}\) (see Proposition 6.5).
Intuitively, our compression for a connected component \(C\) of \(G^{\prime}-X\), considers the effect of deleting a \(\mathcal{F}\)-deletion set \(S\) from \(G^{\prime}\), and records the number of ways this can happen. Since \(C\) is a near protrusion, it has constant-treewidth and a constant size boundary in \(G-S\) that is a subset of \(X\setminus S\). We treat \(G[(V(C)\cup N(C))\setminus S]\) as a boundaried graph with boundary \(N(C)\setminus S\), and note that \(N(C)\subseteq X\). Note that \(G[(V(C)\cup N(C))\setminus S]\) lies in an equivalence class \(\mathcal{R}\) of \(\equiv_{\mathcal{F}}\). Then, for each choice of \(\mathcal{R}\), \(N[C]\setminus S\) and \(|S\cap V(C)|\) we record the number of subsets \(S_{C}\) of \(V(C)\) such that \(G[(V(C)\cup N(C))\setminus S]\) with boundary \(N[C]\setminus S\) forms a boundaried graph that lies in \(\mathcal{R}\). We compute and store this information in a table \(T_{C}\) for each component \(C\). We show that the number of such choices is bounded by \(k^{\mathcal{O}(1)}\), and each entry of \(T_{C}\) can be computed polynomial time. We then argue that the information stored in the table is sufficient to compute \(\mathsf{count}(k^{\prime})\) which is the number of \(\mathcal{F}\)-deletion sets in \(G^{\prime}\) of size at most \(k^{\prime}\), for every \(k^{\prime}\leq k\). Note that computing \(\mathsf{count}(k^{\prime})\) takes time exponential in \(k\). See Section 6.2.2 for details.
The output of the reduce procedure for #Planar-\(\mathcal{F}\) Deletion, given an instance \((G,k)\), is a modulator \(X\) of size \(k^{\mathcal{O}(1)}\) and a collection of tables \(\{T_{C}\}\), one for each non-irrelevant component of \(G-X\). Note that the size of the output is \(k^{\mathcal{O}(1)}\). Next, the lift procedure is given the instance \((G,k)\), the modulator \(X\), the collection of tables \(T_{C}\) for each component of \(G^{\prime}-X\), and finally the values \(\{\mathsf{count}(k^{\prime})\mid k^{\prime}\leq k\}\). The lift procedure first computes \(\tau_{irr}\) which denotes the total number of vertices in the irrelevant components of \(G-X\). Then, from \(\{\mathsf{count}(k^{\prime})\mid k^{\prime}\leq k\}\) and \(\tau_{irr}\) it is easy to count the total number of solutions of size at most \(k\) in \(G\) in polynomial time. The reduce and lift procedures together prove this theorem. We refer to Section 6 for details.
**Proof of Theorem 5.** We start with the proof that #Min \((s,t)\)-Cut (which is #P-hard [42]) SUM-cross-composes into #\(k\)-Min \((s,t)\)-Cut (Lemma 7.4). Suppose that we are given \(\ell\) instances of #Min \((s,t)\)-Cut, \((G_{1},s_{1},t_{1}),(G_{2},s_{2},t_{2}),\ldots,(G_{\ell},s_{\ell},t_{\ell})\), where the size of a minimum \((s_{i},t_{i})\)-cut in \(G_{i}\) is assumed to be equal to the size of a minimum \((s_{j},t_{j})\)-cut in \(G_{j}\), for every \(i,j\in[\ell]\). (This assumption is justified by the more general definition of cross-compositions that makes use of equivalence relations.) Then, the reduction to a single instance \((G,s,t)\) of #\(k\)-Min \((s,t)\)-Cut is performed as follows: We take the disjoint union of the input graphs, and unify \(t_{i}\) with \(s_{i+1}\), for every \(i\in\{1,2,\ldots,\ell-1\}\); additionally, we let \(s=s_{1}\) and \(t=t_{\ell}\). With this construction at hand, it is easy to see that each minimum \((s,t)\)-cut in \(G\) corresponds to a minimum \((s_{i},t_{i})\)-cut in one of the \(G_{i}\)'s, and vice versa. Thus, we derive that the number of minimum \((s,t)\)-cuts in \(G\) equals the sum of the number of minimum \((s_{i},t_{i})\)-cuts in \(G_{i}\), over every \(i\in\{1,2,\ldots,\ell\}\). Moreover, the parameter \(k\) is trivially bounded from above by \(\max_{i=1}^{\ell}|E(G_{i})|\).
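As a concrete illustration of this chaining step (our own representation, not the formal construction of Lemma 7.4), a small Python sketch might look as follows, with each instance given as a tuple of a vertex set, an edge set of vertex pairs, a source and a sink.

```python
def sum_compose_min_cut_instances(instances):
    """Chain #Min (s,t)-Cut instances into one instance (V, E, s, t).

    `instances` is a list of tuples (V, E, s, t); t_i is identified with
    s_{i+1}, assuming every instance has the same minimum (s_i, t_i)-cut
    size.  Illustrative sketch only.
    """
    vertices, edges = set(), set()
    s_global, prev_t = None, None
    for idx, (V, E, s, t) in enumerate(instances):
        # Make the i-th copy vertex-disjoint by tagging, then glue s_i onto t_{i-1}.
        rename = {v: (idx, v) for v in V}
        if prev_t is not None:
            rename[s] = prev_t
        vertices.update(rename.values())
        edges.update(frozenset((rename[u], rename[v])) for u, v in E)
        if s_global is None:
            s_global = rename[s]
        prev_t = rename[t]
    return vertices, edges, s_global, prev_t


# Two toy instances, each a path s-a-t; the composed graph is a path on 5 vertices.
inst = [({"s", "a", "t"}, {("s", "a"), ("a", "t")}, "s", "t")] * 2
V, E, s, t = sum_compose_min_cut_instances(inst)
assert len(V) == 5 and len(E) == 4
```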
Having asserted that #\(k\)-Min \((s,t)\)-Cut does not admit a polynomial compression under the SUM-conjecture, we transfer its hardness to the #\(k\)-Odd Cycle Transversal problem by the design of a _polynomial parameter transformation_ (Definition 3.11) in Lemma 7.8. Suppose that we are given an instance \((G,s,t)\) of #\(k\)-Min \((s,t)\)-Cut where \(G\) is a connected graph. Then, we first turn \(G\) into a graph \(G_{1}\) by subdividing each edge once. In particular, we thus derive that all paths in \(G_{1}\) between vertices that correspond to vertices (rather than edges) in \(G\) are of even length. Next, we turn \(G_{1}\) into a graph \(G_{2}\) by replacing each vertex of \(G_{1}\) that corresponds to a vertex of \(G\) by \(k+1\) copies (false twins). Intuitively, this will have the effect that no minimal solution of size at most \(k\) to our instance of #\(k\)-Odd Cycle Transversal (defined immediately) will pick any vertex in \(G_{2}\) that corresponds to a vertex in \(G\) (since we deal with edge-cuts, this property must be asserted for our proof of correctness). Complementary to this, we will (implicitly) prove that our instance has no solution of size smaller than \(k\), so every solution of size at most \(k\) is of size exactly \(k\) and a minimal one. The last step of the reduction is to turn \(G_{2}\) into a graph \(G^{\prime}\) by adding two new adjacent vertices, \(x_{i}\) and \(y_{i}\), for every \(i\in\{1,2,\ldots,k+1\}\), and making all the \(x_{i}\)'s adjacent to all the copies of \(s\), and all the \(y_{i}\)'s adjacent to all the copies of \(t\). With this construction of \(G^{\prime}\) at hand (and keeping the parameter \(k\) unchanged), we are able to prove that: _(i)_ every odd cycle in \(G^{\prime}\) contains at least one path from a copy of \(s\) to a copy of \(t\) that corresponds to an \((s,t)\)-path in \(G\), and _(ii)_ every \((s,t)\)-path in \(G\) can be translated to some particular set of odd cycles in \(G^{\prime}\) such that, to hit that set with at most \(k\) vertices, it only "makes sense" to pick vertices in \(G^{\prime}\) that correspond to edges in \(G\). From this, we are able to derive that the number of minimum \((s,t)\)-cuts in \(G\) equals the number of odd cycle
transversals of \(G^{\prime}\) of size at most \(k\).
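For concreteness, a hedged Python sketch of this three-step construction (subdivide, make \(k+1\) false twins of every original vertex, attach the pendant gadgets) is given below; the tagged vertex names are our own and the sketch is an illustration rather than the formal reduction of Lemma 7.8.

```python
def min_cut_to_oct_instance(V, E, s, t, k):
    """Turn a #Min (s,t)-Cut instance (G, s, t) with minimum cut size k into
    the #k-Odd Cycle Transversal instance sketched above."""
    def copies(v):
        return [("copy", v, i) for i in range(k + 1)]

    vertices, edges = set(), set()
    for v in V:
        vertices.update(copies(v))
    # One subdivision vertex per original edge, adjacent to all copies of both endpoints.
    for u, v in E:
        w = ("edge", frozenset((u, v)))
        vertices.add(w)
        for c in copies(u) + copies(v):
            edges.add(frozenset((w, c)))
    # Pendant gadgets x_i -- y_i attached to the copies of s and of t.
    for i in range(k + 1):
        x, y = ("x", i), ("y", i)
        vertices.update((x, y))
        edges.add(frozenset((x, y)))
        for c in copies(s):
            edges.add(frozenset((x, c)))
        for c in copies(t):
            edges.add(frozenset((y, c)))
    return vertices, edges, k          # the parameter stays k
```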
Lastly, having asserted that \(\#k\)-Odd Cycle Transversal does not admit a polynomial compression under the SUM-conjecture, we transfer its hardness to the \(\#\ell\)-Vertex Cover problem (where the parameter is \(k\) minus the LP-value) and the \(\#m\)-Vertex Cover problem (where the parameter is \(k\) minus the maximum size of a matching) by the design of another polynomial parameter transformation. We remark that, since it always holds that \(m\geq\ell\), the hardness for \(\#m\)-Vertex Cover implies the hardness for \(\#\ell\)-Vertex Cover. While the transformation itself is the same as the known reduction from \(k\)-Odd Cycle Transversal to \(m\)-Vertex Cover (Lemma 3.10 in [15]), the analysis somewhat differs. In particular, for the correctness, we actually cannot use \(\#k\)-Odd Cycle Transversal as the source problem, but only restricted instances of it, where for every odd cycle transversal \(S\) of size at most \(k\), the removal of \(S\) from the input graph \(G\) yields a connected graph. Then, we are able to show that the number of odd cycle transversals of \(G\) of size at most \(k\) is exactly half the number of vertex covers of the output graph \(G^{\prime}\) of size at most \(k^{\prime}\). (The parameter of the output instance, \(k^{\prime}-m\), equals \(k\).)
**Proof of Theorems 3 and 4.** The proof of Theorem 3 follows the lines of, yet is not identical to, the proof of the analogous statement for OR-cross-composition (see Appendix C). For example, one notable difference concerns the part of the proof where we need to define a problem whose solution is a function of solutions of another problem. While for OR-cross-compositions, the chosen function is the logical OR of the given solutions, for us the chosen function is a weighted summation of the given solutions with weights chosen so that, from the weighted sum, we can derive each individual solution (that is similar to the spirit of the lift procedure given as part of the proof of Theorem 1).
For the proof of Theorem 4, we prove that \(\#\textsc{Min}\ (s,t)\)-Cut EXACT-cross-composes into \(\#w\)-Min \((s,t)\)-Cut (Lemma C.3). The reduction begins by taking the instance \((G,s,t)\) built in the proof of the SUM-cross-composition discussed earlier. We note that the treewidth of \(G\) equals the maximum treewidth of \(G_{i}\), over every \(i\in\{1,2,\ldots,\ell\}\). However, recall that this construction only yields that the number of solutions to \((G,s,t)\) (say, \(q\)) equals \(\sum_{i=1}^{\ell}q_{i}\) where \(q_{i}\) is the number of solutions to \((G_{i},s_{i},t_{i})\). So, by knowing only \(q\), we are not able to derive the individual \(q_{i}\)'s (there can be exponentially many options for their values). So, we further modify the graph \(G\) to obtain a graph \(G^{\prime}\) as follows. For the copy of each \(G_{i}\) in \(G\), we add \(m(\ell-1)\) internally vertex-disjoint paths from \(s_{i}\) to \(t_{i}\), where \(m=2\max_{j=1}^{\ell}|E(G_{j})|\): \(m(i-1)\) of these paths have three internal vertices, and the rest have one internal vertex. Notice that, to separate \(s_{i}\) and \(t_{i}\) in this "extended" copy of \(G_{i}\), we need to pick at least one edge from each of the newly added paths, and we have two (resp., four) options for which edge to pick from each of the paths with one (resp., three) internal vertices. Having this insight in mind, we are able to show that the number of solutions to \((G^{\prime},s,t)\) (say, \(q^{\prime}\)) equals \(\sum_{i=1}^{\ell}q_{i}\cdot 2^{m(i-1)+m(\ell-1)}\). In particular, the coefficient of each \(q_{i}\) is "significantly" larger than the sum of the coefficients of all \(q_{j}\), \(j<i\). In turn, this allows us to derive, from \(q^{\prime}\), the individual values of the \(q_{i}\)'s (similarly, in this part, to the corresponding parts of the proofs of Theorem 1 and 3). Further, we show that the addition of the aforementioned paths does not increase the treewidth of the graph (unless it was smaller than 2).
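The decoding inside lift can again be illustrated by a small sketch: since each \(q_{i}<2^{m}\), the \(q_{i}\) are exactly the base-\(2^{m}\) digits of \(q^{\prime}/2^{m(\ell-1)}\). The Python code below is an assumption-laden illustration, not the formal procedure; the toy values at the end are hypothetical.

```python
def recover_cut_counts(q_prime, m, ell):
    """Recover q_1, ..., q_ell from q' = sum_i q_i * 2^(m*(i-1) + m*(ell-1)),
    using that each q_i fits into m bits."""
    value = q_prime >> (m * (ell - 1))      # strip the common factor 2^(m*(ell-1))
    counts = []
    for _ in range(ell):
        counts.append(value & ((1 << m) - 1))   # next base-2^m digit
        value >>= m
    return counts


# Round trip with toy values; q[i] here plays the role of q_{i+1}.
m, ell = 8, 3
q = [5, 0, 200]
q_prime = sum(qi << (m * i + m * (ell - 1)) for i, qi in enumerate(q))
assert recover_cut_counts(q_prime, m, ell) == q
```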
## 3 Preliminaries
Let \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\). For \(n\in\mathbb{N}\), let \([n]=\{1,2,\ldots,n\}\). Given a universe \(U\), let \(2^{U}=\{A:A\subseteq U\}\).
**Graph Notation.** Throughout the paper, we consider finite, simple, undirected graphs. Given a graph \(G\), let \(V(G)\) and \(E(G)\) denote its vertex set and edge set, respectively. Given a subset \(U\subseteq E(G)\), let \(G-U\) denote the graph on vertex set \(V(G)\) and edge set \(E(G)\setminus U\). We say that \(S\subseteq V(G)\)_covers_\(U\subseteq E(G)\) if for every edge \(\{u,v\}\in U\), \(S\cap\{u,v\}\neq\emptyset\). A _vertex cover_ of \(G\) is a subset \(S\subseteq V(G)\) that covers \(E(G)\). The set \(S\) is said to be _minimal_ if no proper subset of it is a vertex cover of \(G\). An _independent set_ of \(G\) is a subset \(S\subseteq V(G)\) such that \(E(G)\cap\{\{u,v\}:u,v\in S\}=\emptyset\). A _matching_ in \(G\) is a subset \(S\subseteq E(G)\) such that no two edges in \(S\) share an endpoint. Let \(\mu(G)\) denote the maximum size of a matching in \(G\). Given two distinct vertices \(s,t\in V(G)\), an _\((s,t)\)-cut in \(G\)_ is a subset \(S\subseteq E(G)\)
such that in \(G-S\), the vertices \(s\) and \(t\) belong to different connected components. An \((s,t)\)-cut in \(G\) is _minimum_ if there does not exist an \((s,t)\)-cut in \(G\) of smaller size. Given a subset \(U\subseteq V(G)\), let \(G[U]\) denote the subgraph of \(G\) induced by \(U\), and let \(G-U\) denote \(G[V(G)\setminus U]\). An _odd cycle transversal of \(G\)_ is a subset \(S\subseteq V(G)\) such that \(G-S\) does not contain any odd cycle (i.e., a cycle with an odd number of vertices, or, equivalently, of edges). The subdivision of an edge \(\{u,v\}\in E(G)\) is the operation that removes \(\{u,v\}\) from \(G\), adds a new vertex \(x\) to \(G\), and adds the edges \(\{u,x\}\) and \(\{v,x\}\) to \(G\). Given a graph \(H\), we write \(H\subseteq G\) to indicate that \(H\) is a subgraph of \(G\). A graph \(G\) is _bipartite_ if there exists a partition \((X,Y)\) of \(V(G)\) such that \(E(G)\subseteq\{\{x,y\}:x\in X,y\in Y\}\), that is, \(X\) and \(Y\) are independent sets. Note that a graph \(G\) is bipartite if and only if it does not contain any odd cycle [17]. We say that a graph \(H\) is a _minor_ of a graph \(G\) if there exists a series of vertex deletions, edge deletions and edge contractions in \(G\) that yields \(H\). We say that \(G\) is a _planar graph_ if it can be drawn on the Euclidean plane so that its edges can intersect only at their endpoints.
Treewidth is a structural parameter indicating how much a graph resembles a tree. Formally:
**Definition 3.1**.: _A tree decomposition of a graph \(G\) is a pair \(\mathcal{T}=(T,\beta)\) of a tree \(T\) and \(\beta:V(T)\to 2^{V(G)}\), such that_
1. _for any edge_ \(\{x,y\}\in E(G)\) _there exists a node_ \(v\in V(T)\) _such that_ \(x,y\in\beta(v)\)_, and_
2. _for any vertex_ \(x\in V(G)\)_, the subgraph of_ \(T\) _induced by the set_ \(T_{x}=\{v\in V(T):x\in\beta(v)\}\) _is a non-empty tree._
_The width of \((T,\beta)\) is \(\max_{v\in V(T)}\{|\beta(v)|\}-1\). The treewidth of \(G\), denoted by \(\mathsf{tw}(G)\), is the minimum width over all tree decompositions of \(G\)._
**Problems and Counting Problems.** A _decision problem_ (or _problem_ for short) is a language \(P\subseteq\Sigma^{\star}\). Here, \(\Sigma\) is a finite alphabet, and, without loss of generality, we can assume that \(\Sigma=\{0,1\}\). Often, some strings in \(\Sigma^{\star}\) are "irrelevant" to \(P\) (specifically, they clearly do not belong to \(P\))--e.g., when \(P\) concerns graphs and a given string does not encode a graph; so, the term _instance of \(P\)_ is loosely used for strings that are relevant to \(P\) in some such natural sense. An _algorithm for \(P\)_ is a procedure that, given \(x\in\Sigma^{\star}\), determines whether \(x\in P\). We say that an instance \(x\) of a problem \(P\) is _equivalent_ to an instance \(x^{\prime}\) of a problem \(Q\) if: \(x\in P\) if and only if \(x^{\prime}\in Q\). A _counting problem_ is a mapping \(F\) from \(\Sigma^{\star}\) to \(\mathbb{N}_{0}\). As before, the term _instance of \(F\)_ is loosely used--while one still needs to define the mapping of "irrelevant" strings, the consideration of this mapping will be immaterial to us. An _algorithm for \(F\)_ is a procedure that, given \(x\in\Sigma^{\star}\), outputs \(F(x)\). A counting problem \(F\) is a _counting version_ of a problem \(P\) if, for every \(x\in\Sigma^{\star}\), \(x\in P\) if and only if \(F(x)\geq 1\). When we refer to "the" counting version of a problem \(P\), we consider the counting version of \(P\) whose choice (among all counting versions of \(P\)) is widely regarded the most natural one, and it is denoted by \(\#P\).
**Parameterized Complexity.** We start with the definition of a parameterized problem.
**Definition 3.2** (**Parameterized Problem)**.: _A parameterized problem is a language \(P\subseteq\Sigma^{\star}\times\mathbb{N}_{0}\), where \(\Sigma\) is a fixed, finite alphabet. For an instance \((x,k)\in\Sigma^{\star}\times\mathbb{N}_{0}\), \(k\) is called the parameter._
An _algorithm for \(P\)_ is a procedure that, given \((x,k)\in\Sigma^{\star}\times\mathbb{N}_{0}\), determines whether \((x,k)\in P\). We say that \(P\) is _fixed-parameter tractable (FPT)_ if there exists an algorithm for \(P\) that runs in time \(f(k)\cdot|x|^{\mathcal{O}(1)}\) where \(f\) is some computable function of \(k\). Such an algorithm is called a _fixed-parameter algorithm_. The main tool to assert that one problem is in FPT based on an already known membership of another problem in FPT is the design of a PPT, defined as follows.
**Definition 3.3** (**Ppt)**.: _Let \(P,Q\subseteq\Sigma^{\star}\times\mathbb{N}_{0}\) be two parameterized problems. A polynomial-time algorithm \(A\) is a polynomial parameter transformation (PPT) from \(P\) to \(Q\) if, given an instance \((x,k)\) of \(P\), \(A\) outputs an equivalent instance \((x^{\prime},k^{\prime})\) of \(Q\) (i.e., \((x,k)\in P\) if and only if \((x^{\prime},k^{\prime})\in Q\)) such that \(k^{\prime}\leq p(k)\) for some polynomial function \(p\)._
A companion notion of FPT is that of a compression or a kernelization, defined as follows.
**Definition 3.4** (**Compression and Kernelization)**.: _Let \(P\) and \(Q\) be two parameterized problems. A compression (or compression algorithm) of \(P\) into \(Q\) is a polynomial-time procedure that, given an instance \((x,k)\) of \(P\), outputs an equivalent instance \((x^{\prime},k^{\prime})\) of \(Q\) where \(|x^{\prime}|,k^{\prime}\leq f(k)\) for some computable function \(f\). Then, we say that \(P\) admits a compression of size \(f(k)\). When \(f\) is polynomial, then we say that \(P\) admits a polynomial compression. Further, when \(P=Q\), we refer to compression also as kernelization._
Now, we state two central propositions that concern kernelization.
**Proposition 3.5** ([8]).: _Let \(P\) be a parameterized problem that is decidable. Then, \(P\) is FPT if and only if it admits a kernel._
**Proposition 3.6** (Folklore; See, e.g., Theorem 15.15 in [15]).: _Let \(P,Q\) be two parameterized problems such that there exists a PPT from \(P\) to \(Q\). If \(Q\) admits a polynomial compression, then \(P\) admits a polynomial compression._
Towards the statement of the main tool to refute the existence of polynomial compressions (and, hence, also polynomial kernels) for specific problems, we state the two following definitions.
**Definition 3.7** (**Polynomial Equivalence Relation)**.: _An equivalence relation \(R\) on a set \(\Sigma^{\star}\) is a polynomial equivalence relation if the following conditions are satisfied:_
* _There exists an algorithm that, given strings_ \(x,y\in\Sigma^{\star}\)_, resolves whether_ \(x\equiv_{R}y\) _in time polynomial in_ \(|x|+|y|\)_._
* _The relation_ \(R\) _restricted to the set_ \(\Sigma^{\leq n}\) _has at most_ \(p(n)\) _equivalence classes, for some polynomial function_ \(p\)_._
**Definition 3.8** (**OR-Cross-Composition)**.: _Let \(P\subseteq\Sigma^{\star}\) be a problem and \(Q\subseteq\Sigma^{\star}\times\mathbb{N}_{0}\) be a parameterized problem. We say that \(P\) OR-cross-composes into \(Q\) if there exists a polynomial equivalence relation \(R\) and an algorithm \(A\), called an OR-cross-composition, satisfying the following conditions. The algorithm \(A\) takes as input a sequence of strings \(x_{1},x_{2},\ldots,x_{t}\in\Sigma^{\star}\) that are equivalent with respect to \(R\), runs in time polynomial in \(\sum_{i=1}^{t}|x_{i}|\), and outputs one instance \((y,k)\in\Sigma^{\star}\times\mathbb{N}_{0}\) such that:_
* \(k\leq p(\max_{i=1}^{t}|x_{i}|+\log t)\) _for some polynomial function_ \(p\)_, and_
* \((y,k)\in Q\) _if and only if there exists at least one index_ \(i\in[t]\) _such that_ \(x_{i}\in P\)_._
Now, we state the main tool to refute the existence of polynomial compressions for specific problems.
**Proposition 3.9** ([4, 5]).: _Assume that an NP-hard problem \(P\) OR-cross-composes into a parameterized problem \(Q\). Then, \(Q\) does not admit a polynomial compression, unless coNP \(\subseteq\) NP/poly._
We remark that an analogous proposition, where OR is replaced by AND, has been proved in [19].
We proceed to the definition of a parameterized counting problem.
**Definition 3.10** (**Parameterized Counting Problem)**.: _A parameterized counting problem is a mapping \(F\) from \(\Sigma^{\star}\times\mathbb{N}_{0}\) to \(\mathbb{N}_{0}\)._
An _algorithm for \(F\)_ is a procedure that, given \((x,k)\in\Sigma^{\star}\times\mathbb{N}_{0}\), outputs \(F(x,k)\). As before, we say that \(P\) is _fixed-parameter tractable (FPT)_ if there exists an algorithm for \(P\) that runs in time \(f(k)\cdot|x|^{\mathcal{O}(1)}\) where \(f\) is some computable function of \(k\). A parameterized counting problem \(F\) is a _counting version_ of a parameterized problem \(P\) if, for every \((x,k)\in\Sigma^{\star}\times\mathbb{N}_{0}\), \((x,k)\in P\) if and only if \(F(x,k)\geq 1\). When we refer to "the" counting version of a parameterized problem \(P\), we consider the counting version of \(P\) whose choice (among all counting versions of \(L\)) is widely regarded the most natural one, and it is denoted by \(\#P\).
**Definition 3.11** (**PPT (Counting Version)**).: _Let \(P,Q:\Sigma^{*}\times\mathbb{N}_{0}\to\mathbb{N}_{0}\) be two parameterized counting problems. A pair of polynomial-time procedures \((\mathsf{reduce},\mathsf{lift})\) is a polynomial parameter transformation (PPT) from \(P\) to \(Q\) such that:_
* _Given an instance_ \((x,k)\) _of_ \(P\)_,_ \(\mathsf{reduce}\) _outputs an instance_ \((x^{\prime},k^{\prime})\) _of_ \(Q\) _such that_ \(k^{\prime}\leq p(k)\) _for some polynomial function_ \(p\)_._
* _Given an instance_ \((I,k)\) _of_ \(P\)_, the instance_ \((I^{\prime},k^{\prime})\) _that is the output of_ \(\mathsf{reduce}\) _on_ \((I,k)\)_, and_ \(x^{\prime}\) _such that_ \(Q(I^{\prime},k^{\prime})=x^{\prime}\)_,_ \(\mathsf{lift}\) _outputs_ \(x\) _such that_ \(P(I,k)=x\)_._
The main concept to show that a problem is unlikely to be FPT is the one of parameterized reductions analogous to those employed in classical complexity. Here, the concept of W[1]-hardness replaces the one of NP-hardness, and for reductions we need not only construct an equivalent instance in FPT time, but also ensure that the size of the parameter in the new instance depends only on the size of the parameter in the original one. If there exists such a reduction transforming a parameterized problem known to be W[1]-hard to another parameterized problem \(P\), then the problem \(P\) is W[1]-hard as well. Central W[1]-hard problems include, for example, deciding whether a nondeterministic single-tape Turing machine accepts within \(k\) steps, Clique parameterized by solution size, and Independent Set parameterized by solution size. Naturally, \(\#\)W[1]-hardness is the concept analogous to W[1]-hardness in the realm of parameterized counting problems. For more information on W[1]-hardness and \(\#\)W[1]-hardness, we refer to [15, 11, 18].
**Problem Definitions.** The counting problems studied in this paper are defined as follows.
* \(\#k\)-Vertex Cover (\(\#k\)-Minimal Vertex Cover): Given a graph \(G\) and a non-negative integer \(k\), output the number of vertex covers (minimal vertex covers) of \(G\) of size at most \(k\). Here, the parameter is \(k\).
* \(\#\ell\)-Vertex Cover and \(\#m\)-Vertex Cover: Defined as \(\#k\)-Vertex Cover with the exception that the parameters \(\ell\) and \(m\) are \(k-\mathsf{LP_{VC}}(G)\) and \(k-\mu(G)\), respectively. Here, \(\mathsf{LP_{VC}}(G)\) denotes the optimum of the (standard) linear program that corresponds to Vertex Cover (see [15], Section 3.4); this LP is displayed right after this list.
* \(\#k\)-Planar \(\mathcal{F}\)-Deletion: Let \(\mathcal{F}\) be a finite set of connected graphs that contains at least one planar graph. Given a graph \(G\) and a non-negative integer \(k\), output the number of subsets \(S\subseteq V(G)\) of size at most \(k\) such that \(G-S\) does not contain any graph from \(\mathcal{F}\) as a minor. We remark that the \(\#k\)-Planar \(\mathcal{F}\)-Deletion problem encompasses (based on different choices of \(\mathcal{F}\)) various other problems, such as \(\#k\)-Vertex Cover, \(\#k\)-Feedback Vertex Set and \(\#k\)-Treewidth \(\eta\)-Deletion.
* \(\#k\)-Min \((s,t)\)-Cut: Given a graph \(G\) and two distinct vertices \(s,t\in V(G)\), output the number of minimum \((s,t)\)-cuts in \(G\). Here, the parameter \(k\) is the size of a minimum \((s,t)\)-cut in \(G\).
* \(\#w\)-Min \((s,t)\)-Cut: Defined as \(\#k\)-Min \((s,t)\)-Cut with the exception that the parameter \(w\) is the treewidth of \(G\).
* \(\#k\)-Odd Cycle Transversal: Given a graph \(G\) and a non-negative integer \(k\), output the number of odd cycle transversals of \(G\) of size at most \(k\). Here, the parameter is \(k\).
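For completeness, \(\mathsf{LP_{VC}}(G)\), used in the definition of \(\#\ell\)-Vertex Cover above, is the optimum of the standard linear programming relaxation of Vertex Cover:

\[\mathsf{LP_{VC}}(G)\;=\;\min\ \sum_{v\in V(G)}x_{v}\quad\text{subject to}\quad x_{u}+x_{v}\geq 1\ \text{ for every }\{u,v\}\in E(G),\qquad 0\leq x_{v}\leq 1\ \text{ for every }v\in V(G).\]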
## 4 Kernelization of Counting Problems
We define the notion of kernelization for counting problems as follows.
**Definition 4.1** (**Compression of Counting Problem)**.: _Let \(P\) and \(Q\) be two parameterized counting problems. A compression (or compression algorithm) of \(P\) into \(Q\) is a pair \((\mathsf{reduce},\mathsf{lift})\) of two polynomial-time procedures such that:_
* _Given an instance_ \((x,k)\) _of_ \(P\)_,_ reduce _outputs an instance_ \((x^{\prime},k^{\prime})\) _of_ \(Q\) _where_ \(|x^{\prime}|,k^{\prime}\leq f(k)\) _for some computable function_ \(f\)_._
* _Given an instance_ \((x,k)\) _of_ \(P\)_, the instance_ \((x^{\prime},k^{\prime})\) _that is the output of_ reduce _on_ \((x,k)\)_, and_ \(s^{\prime}\) _such that_ \(Q(x^{\prime},k^{\prime})=s^{\prime}\)_,_ lift _outputs_ \(s\) _such that_ \(P(x,k)=s\)_._
_When \(Q\) is immaterial, we refer to a compression of \(P\) into \(Q\) only as a compression of \(P\)._
When \(P=Q\), a compression is called a _kernel_. The measure \(f(k)\) is termed the _size_ of the compression. When \(f\) is a polynomial function, then the compression (or kernel) is said to be a _polynomial compression_ (_polynomial kernel_). The following observation is immediate.
**Observation 4.2**.: _Let \(P\) be a parameterized (decision) problem that does not admit a polynomial kernel (or compression). Then, no counting version of \(P\) admits a polynomial kernel (or compression)._
Hence, we only consider parameterized counting problems whose decision versions are either in P, or, if they are not, then they at least admit polynomial kernels. Specifically, Min \((s,t)\)-Cut is in P [9], and polynomial kernels for \(k\)-Vertex Cover, \(\ell\)-Vertex Cover (and \(m\)-Vertex Cover), \(k\)-Planar \(\mathcal{F}\)-Deletion, and \(k\)-Odd Cycle Transversal can be found in [7], [34], [25] and [34], respectively.
Throughout the paper, whenever we discuss a compression, we suppose (implicitly) that the compression is into a well-behaved problem, defined as follows:
**Definition 4.3** (**Well-Behaved Problem)**.: _Let \(P:\Sigma^{\star}\times\mathbb{N}_{0}\to\mathbb{N}_{0}\) be a parameterized counting problem. Then, \(P\) is well-behaved if there exists a polynomial-time algorithm that, given \(n\in\mathbb{N}\) in unary, outputs \(N\in\mathbb{N}\) in binary with the following property: for every \((x,k)\in\Sigma^{\star}\times\mathbb{N}_{0}\) of size at most \(n\), \(P(x,k)\leq N\)._
We remark that, essentially, every "natural" parameterized counting problem (that we know of) is well-behaved.
**Lemma 4.4**.: _Let \(P\) be a parameterized counting problem that is solvable in finite time. Then, \(P\) is FPT if and only if it admits a kernel._
Proof.: The proof follows lines similar to that of Proposition 3.5 (see also [37]). For the sake of completeness, we present the details in Appendix A.
Due to Lemma 4.4, every counting problem that is \(\#\mathrm{W}[1]\)-hard (and which is solvable in finite time) does not admit any kernel, not even of exponential (or worse) size, under the standard assumption that \(\#\mathrm{W}[1]\)-hard problems are not FPT. We remark that \(\#k\textsc{-Min}(s,t)\textsc{-Cut}\) is shown to be FPT by Berge et al. [2], and \(\#w\textsc{-Min}(s,t)\textsc{-Cut}\) can be shown to be FPT by the usage of straightforward dynamic programming over tree decompositions (see, e.g. [15]).
**Lemma 4.5**.: _Let \(P,Q\) be two parameterized counting problems such that there exists a PPT from \(P\) to \(Q\). If \(Q\) admits a polynomial compression, then \(P\) admits a polynomial compression._
Proof.: The proof follows lines similar to that of Proposition 3.6. For the sake of completeness, we present the details in Appendix A.
We remark that in Sections 7 and 8 we discuss two new notions of a cross-composition for proofs of the unlikely existence of polynomial compressions for parameterized counting problems.
## 5 Polynomial Kernel for \(\#\)Vertex Cover
The purpose of this section is to prove the following theorem.
**Theorem 1**.: \(\#k\textsc{-Vertex Cover}\) _admits a polynomial kernel._
Towards the proof of this theorem, we first develop the reduction procedure. Then, we discuss properties of the reduced instance. Afterwards, we present the lifting procedure and conclude the correctness of the kernel. For the sake of brevity, throughout this section, we write \(\#\)Vertex Cover instead of \(\#k^{\prime}\textsc{-Vertex Cover}\) (where \(k^{\prime}\) is the current value of the parameter).
### Reduction Procedure and a Corollary for Minimal Vertex Covers
We define the procedure reduce as follows. Given an instance \((G,k)\) of #Vertex Cover, we will first exhaustively apply the following reduction rule, known as Buss Rule [7] (see also [15]):
**Definition 5.1** (**Buss Rule**).: _If \(G\) contains a vertex \(v\) of degree at least \(k+1\), then update \(G\gets G-\{v\}\) and \(k\gets k-1\)._
Let \((G_{1},k_{1})\) be the instance of #Vertex Cover obtained after exhaustive application of Buss Rule. Let \(G_{2}\) be the graph obtained from \(G_{1}\) by the removal of all isolated vertices, and denote \(k_{2}=k_{1}\). Let \(\mathcal{S}\) (resp., \(\mathcal{S}_{1}\), \(\mathcal{S}_{2}\)) denote the set of vertex covers of \(G\) (resp., \(G_{1}\), \(G_{2}\)) of size at most \(k\) (resp., \(k_{1}\), \(k_{2}\)). Let \(n_{1}=|V(G_{1})|\) and \(n_{2}=|V(G_{2})|\). We have the following known proposition:
**Proposition 5.2** ([7, 15]).: _The three following properties hold:_
1. \(\mathcal{S}=\{S_{1}\cup(V(G)\setminus V(G_{1})):S_{1}\in\mathcal{S}_{1}\}\)_._
2. _If_ \(|E(G_{2})|>(k_{2})^{2}\)_, then_ \(G\) _does not contain any vertex cover of size at most_ \(k\)_._
3. _Else (i.e.,_ \(|E(G_{2})|\leq(k_{2})^{2}\)_),_ \(|V(G_{2})|\leq 2(k_{2})^{2}\)_._
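For concreteness, the following Python sketch carries out the first part of the reduce procedure described above (exhaustive application of Buss Rule followed by the removal of isolated vertices); the adjacency-dictionary representation and the function names are illustrative choices rather than part of the formal definition.

```python
def buss_reduce(adj, k):
    """Exhaustively apply Buss Rule to (G, k).

    adj: dict mapping each vertex to the set of its neighbours (undirected graph).
    Returns the reduced instance (G_1, k_1)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy
    changed = True
    while changed and k >= 0:
        changed = False
        for v in list(adj):
            if len(adj[v]) >= k + 1:                  # vertex of degree at least k+1
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]                            # G <- G - {v}
                k -= 1                                # k <- k - 1
                changed = True
                break
    return adj, k


def remove_isolated(adj, k):
    """Drop isolated vertices: (G_1, k_1) -> (G_2, k_2) with k_2 = k_1."""
    return {v: nbrs for v, nbrs in adj.items() if nbrs}, k


# Example: a star with centre c and 4 leaves, k = 2.
G = {'c': {'a', 'b', 'd', 'e'}, 'a': {'c'}, 'b': {'c'}, 'd': {'c'}, 'e': {'c'}}
G1, k1 = buss_reduce(G, 2)         # Buss Rule deletes c, and k becomes 1
G2, k2 = remove_isolated(G1, k1)   # the four leaves become isolated and are dropped
print(len(G2), k2)                 # 0 1
```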
Let \(x\) (resp., \(x^{\prime}\)) be the number of vertex covers (resp., minimal vertex covers) of \(G\) of size at most \(k\), let \(x_{1}\) (resp., \(x^{\prime}_{1}\)) be the number of vertex covers (resp., minimal vertex covers) of \(G_{1}\) of size at most \(k_{1}\), and let \(x_{2}\) (resp., \(x^{\prime}_{2}\)) be the number of vertex covers (resp., minimal vertex covers) of \(G_{2}\) of size at most \(k_{2}\). Then, due to the first item of Proposition 5.2 and since no minimal vertex cover can contain isolated vertices, we have the following corollary.
**Corollary 5.3**.: _The following equalities hold: \(x=x_{1}\) and \(x^{\prime}=x^{\prime}_{1}=x^{\prime}_{2}\)._
Given this corollary, we can already conclude a polynomial kernel for the variant of #Vertex Cover termed #\(k\)-Minimal Vertex Cover. (The challenge, dealt with in the rest of Section 5, would be to derive a polynomial kernel for #Vertex Cover.)
**Theorem 6**.: #\(k\)-Minimal Vertex Cover _admits a kernel of size \(\mathcal{O}(k^{2})\)._
Proof.: Given an instance \((G,k)\) of #\(k\)-Minimal Vertex Cover, the procedure reduce\({}^{\prime}\) outputs: _(i)_\((G_{2},k_{2})\) if \(|E(G_{2})|\leq(k_{2})^{2}\), and _(ii)_\((G^{\prime}=(\{u,v\},\{\{u,v\}\}),k^{\prime}=0)\) otherwise. Observe that the procedure runs in polynomial time, and, due to the third item of Proposition 5.2, the size of the output is bounded by \(\mathcal{O}(k^{2})\).
Given \((G,k)\), the output of reduce\({}^{\prime}\), and the solution \(x^{\prime}_{2}\) to this output, the procedure lift\({}^{\prime}\) returns \(x^{\prime}_{2}\). Observe that the procedure runs in polynomial time, and from the second item of Proposition 5.2 and Corollary 5.3, we know that \(x^{\prime}=x^{\prime}_{2}\) and hence the procedure is correct.
Unfortunately, for #Vertex Cover, we cannot simply output \((G_{2},k_{2})\). In particular, observe that different vertex covers of \(G_{1}\) of size at most \(k_{1}\) might contain different numbers of vertices that are isolated in \(G_{1}\), and hence the knowledge of \(x_{2}\) alone is insufficient in order to deduce \(x_{1}\) (and \(x\)).
We proceed to modify \(G_{2}\) in order to define the graph that will be the output of the reduction procedure.
**Definition 5.4**.: _Let \(d=n_{2}\) and \(t=d+dk_{2}+2(dk_{2})^{2}\). Then, let \(G_{3}\) be the graph whose vertex set is \(\{v_{i}:v\in V(G_{2}),i\in[d]\}\cup T\), where \(T\) is a set of \(t\) new vertices, and whose edge set is \(\{\{u_{i},v_{j}\}:\{u,v\}\in E(G_{2}),i,j\in[d]\}\). Additionally, let \(k_{3}=d\cdot k_{2}\)._
That is, \(G_{3}\) is the result of the replacement of every vertex of \(G_{2}\) by \(d\) copies (false twins) of that vertex and the addition of \(t\) new vertices. We are now ready to define reduce.
**Definition 5.5** (**Procedure reduce**).: _Given an instance \((G,k)\) of #Vertex Cover, the procedure reduce outputs:_ (i)_\((G_{3},k_{3})\) if \(|E(G_{2})|\leq(k_{2})^{2}\), and_ (ii)_\((G^{\prime}=(\{u,v\},\{\{u,v\}\}),k^{\prime}=0)\) otherwise._
Due to the third item of Proposition 5.2, we have the following immediate observation.
**Observation 5.6**.: reduce _runs in polynomial time, and the size of its output is bounded by \(k^{\mathcal{O}(1)}\)._
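The blow-up of Definition 5.4 is easy to implement directly. The following Python sketch (with illustrative vertex names \((v,i)\) for the copies \(v_{i}\) and \((\mathrm{T},j)\) for the new vertices) makes the construction and the bound of Observation 5.6 concrete; the representation is an assumption made only for illustration.

```python
from itertools import product

def blow_up(V2, E2, k2):
    """Construct (G_3, k_3) from (G_2, k_2) as in Definition 5.4.

    V2: list of vertices of G_2; E2: list of edges of G_2 as 2-tuples."""
    d = len(V2)
    t = d + d * k2 + 2 * (d * k2) ** 2
    V3 = [(v, i) for v in V2 for i in range(1, d + 1)] + [('T', j) for j in range(t)]
    E3 = [((u, i), (v, j)) for (u, v) in E2
          for i, j in product(range(1, d + 1), repeat=2)]
    k3 = d * k2
    return V3, E3, k3

# Example: G_2 is a single edge {a, b} with k_2 = 1, so d = 2 and t = 2 + 2 + 8 = 12.
V3, E3, k3 = blow_up(['a', 'b'], [('a', 'b')], 1)
print(len(V3), len(E3), k3)  # 16 4 2
```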
### Properties of the Reduced Instance
For every \(i\in\{0,1,\ldots,k_{2}\}\), let \(\mathcal{S}_{2}^{i}\) be the set of vertex covers of \(G_{2}\) of size exactly \(i\). Then, \(\mathcal{S}_{2}=\bigcup_{i=0}^{k_{2}}\mathcal{S}_{2}^{i}\) is the set of vertex covers of size at most \(k_{2}\) of \(G_{2}\). Let \(\mathcal{S}_{3}\) be the set of vertex covers of \(G_{3}\) of size at most \(k_{3}\). Let \(n_{3}=|V(G_{3})|\). We say that a subset \(U\subseteq V(G_{3})\) is _valid_ if there does not exist \(v\in V(G_{2})\) such that \(\{v_{1},v_{2}\ldots,v_{d}\}\subseteq U\). We proceed to define the following mappings.
**Definition 5.7** (**Mappings map and Map**).: _The mappings \(\mathsf{map}:\mathcal{S}_{2}\to 2^{V(G_{3})}\) and \(\mathsf{Map}:\mathcal{S}_{2}\to 2^{2^{V(G_{3})}}\) are defined as follows._
* _Given_ \(X\in\mathcal{S}_{2}\)_, let_ \(\mathsf{map}(X)=\{v_{j}:v\in X,j\in[d]\}\)_._
* _Given_ \(X\in\mathcal{S}_{2}\)_, let_ \(\mathsf{Map}(X)=\{\mathsf{map}(X)\cup U:U\text{ is valid, }U\cap\mathsf{map}(X)=\emptyset,|U|\leq k_{3}-|\mathsf{map}(X)|\}\)_._
We have the following lemma regarding the vertex covers of \(G_{3}\).
**Lemma 5.8**.: _We have that_ (i)_\(\mathcal{S}_{3}=\bigcup_{X\in\mathcal{S}_{2}}\mathsf{Map}(X)\), and_ (ii) _for distinct \(X,Y\in\mathcal{S}_{2}\), \(\mathsf{Map}(X)\cap\mathsf{Map}(Y)=\emptyset\)._
Proof.: We first prove the correctness of the first item. On the one hand, consider some \(A\in\mathcal{S}_{3}\). Let \(X=\{v\in V(G_{2}):\{v_{1},v_{2},\ldots,v_{d}\}\subseteq A\}\), and \(U=A\setminus\mathsf{map}(X)\). We claim that \(X\in\mathcal{S}_{2}\). Since \(|A|\leq k_{3}\) (because \(A\in\mathcal{S}_{3}\)) and \(k_{3}=d\cdot k_{2}\), it follows that \(|X|\leq k_{2}\). Moreover, consider an edge \(\{u,v\}\in E(G_{2})\), and suppose, for contradiction, that \(\{u,v\}\cap X=\emptyset\). Then, there exist \(u_{i},v_{j}\in V(G_{3})\) such that \(\{u_{i},v_{j}\}\cap A=\emptyset\), which is a contradiction since \(\{u_{i},v_{j}\}\in E(G_{3})\) and \(A\) is a vertex cover of \(G_{3}\). Hence, \(\{u,v\}\cap X\neq\emptyset\). In turn, we derive that \(X\) is a vertex cover of \(G_{2}\), which yields that \(X\in\mathcal{S}_{2}\). Now, notice that \(U\) is valid, disjoint from \(\mathsf{map}(X)\), and of size at most \(k_{3}-|\mathsf{map}(X)|\), so, by Definition 5.7, \(A=\mathsf{map}(X)\cup U\in\mathsf{Map}(X)\). So, \(A\in\bigcup_{X\in\mathcal{S}_{2}}\mathsf{Map}(X)\).
On the other hand, let \(B\in\bigcup_{X\in\mathcal{S}_{2}}\mathsf{Map}(X)\). So, \(B\in\mathsf{Map}(X)\) for some \(X\in\mathcal{S}_{2}\). By Definition 5.7, this implies that \(B=\mathsf{map}(X)\cup U\) for some valid subset \(U\subseteq V(G_{3})\) disjoint from \(\mathsf{map}(X)\), and \(|B|\leq k_{3}\). So, to derive that \(B\in\mathcal{S}_{3}\), it suffices to argue that \(\mathsf{map}(X)\) is a vertex cover of \(G_{3}\). To this end, consider some edge \(\{u_{i},v_{j}\}\in E(G_{3})\). Then, \(\{u,v\}\in E(G_{2})\). Because \(X\) is a vertex cover of \(G_{2}\), we have that \(\{u,v\}\cap X\neq\emptyset\). However, by Definition 5.7, this implies that \(\{u_{i},v_{j}\}\cap\mathsf{map}(X)\neq\emptyset\). Thus, the proof of the first item of the lemma is complete.
For the second item of the lemma, consider some distinct \(X,Y\in\mathcal{S}_{2}\). Without loss of generality, suppose that \(|X|\geq|Y|\). So, there exists \(v\in V(G_{2})\) such that \(v\in X\setminus Y\), and, hence, \(\{v_{1},v_{2},\ldots,v_{d}\}\subseteq\mathsf{map}(X)\) while \(\{v_{1},v_{2},\ldots,v_{d}\}\cap\mathsf{map}(Y)=\emptyset\). So, since a valid set cannot contain \(\{v_{1},v_{2},\ldots,v_{d}\}\), we derive that \(\{v_{1},v_{2},\ldots,v_{d}\}\setminus A\neq\emptyset\) for every \(A\in\mathsf{Map}(Y)\). However, since \(\{v_{1},v_{2},\ldots,v_{d}\}\setminus A=\emptyset\) for every \(A\in\mathsf{Map}(X)\), it follows that \(\mathsf{Map}(X)\cap\mathsf{Map}(Y)=\emptyset\).
We consider the sizes of the sets assigned by \(\mathsf{Map}\) in the following lemma.
**Lemma 5.9**.: _For every \(X\in\mathcal{S}_{2}^{i}\) for \(i\in\{0,1,\ldots,k_{2}\}\), it holds that \(|\mathsf{Map}(X)|=w_{i}\), where_
\[w_{i}=\sum_{(a^{\star},a_{1},a_{2},\ldots,a_{n_{2}-i})\in W_{i}}\binom{t}{a^{ \star}}\prod_{j=1}^{n_{2}-i}\binom{d}{a_{j}},\quad\text{and}\]
\[W_{i}=\{(a^{\star},a_{1},a_{2},\ldots,a_{n_{2}-i}):a^{\star}+\sum_{j=1}^{n_{2} -i}a_{j}\leq k_{3}-d\cdot i,a^{\star}\leq t,\text{and for each }j\in[n_{2}-i],a_{j}\in\{0,1,\ldots,d-1\}\}.\]
Towards the proof of this lemma and a later lemma, for every \(r\in\{0,1,\ldots,k_{3}-i\cdot d\}\), let us denote \(W_{i}^{r}=\{(a^{\star},a_{1},a_{2},\ldots,a_{n_{2}-i}):a^{\star}+\sum_{j=1}^{n_{2}-i}a_{j}=r,a^{\star}\leq t,\text{and for each }j\in[n_{2}-i],a_{j}\in\{0,1,\ldots,d-1\}\}\), and \(w_{i}^{r}=\sum_{(a^{\star},a_{1},a_{2},\ldots,a_{n_{2}-i})\in W_{i}^{r}}\binom{t}{a^{\star}}\prod_{j=1}^{n_{2}-i}\binom{d}{a_{j}}\).
Proof of Lemma 5.9.: Let \(X\in\mathcal{S}_{2}^{i}\). So, we need to count the number of subsets \(U\subseteq V(G_{3})\) such that \(U\) is valid, \(U\cap\mathsf{map}(X)=\emptyset\), and \(|U|\leq k_{3}-|\mathsf{map}(X)|\). Observe that \(|\mathsf{map}(X)|=d\cdot i\). So, because we demand that \(U\cap\mathsf{map}(X)=\emptyset\), every choice of \(U\) corresponds to the choice of some \(r\leq k_{3}-d\cdot i\) vertices
from \(V(G_{3})\setminus\mathsf{map}(X)\) such that the resulting set would be valid. In turn, every such choice, for a specific \(r\), corresponds to the choice of _(a)_ how many vertices to pick from \(T\) and how many vertices (a number between \(0\) and \(d-1\), due to validity) to pick from \(\{v_{1},v_{2},\ldots,v_{d}\}\) for every \(v\notin X\), so that in total we pick \(r\) vertices, and _(b)_ given a choice of type _(a)_, the choice of which specific vertices to pick from \(T\) and which specific vertices to pick from \(\{v_{1},v_{2},\ldots,v_{d}\}\) for every \(v\notin X\). Clearly, we have a natural 1-to-1 correspondence between the choices of type _(a)_ and the vectors in \(W_{i}^{r}\). Then, given a choice of such a vector \((a^{\star},a_{1},a_{2},\ldots,a_{n_{2}-i})\), we have \(\binom{t}{a^{\star}}\prod_{j=1}^{n_{2}-i}\binom{d}{a_{j}}\) choices of type _(b)_. Considering all choices for \(r\), we attain the formula stated in the lemma.
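For a concrete illustration of the formula, take \(n_{2}=2\) and \(k_{2}=1\), so that \(d=2\), \(t=d+dk_{2}+2(dk_{2})^{2}=12\), and \(k_{3}=2\). For \(i=1\), the only vector in \(W_{1}\) is \((0,0)\), so \(w_{1}=1\). For \(i=0\), grouping the vectors \((a^{\star},a_{1},a_{2})\) with \(a^{\star}+a_{1}+a_{2}\leq 2\) and \(a_{1},a_{2}\in\{0,1\}\) by \((a_{1},a_{2})\in\{(0,0),(1,0),(0,1),(1,1)\}\) gives \(w_{0}=79+26+26+4=135\); in particular, \(w_{0}>\binom{n_{2}}{1}\cdot w_{1}=2\), consistent with the next lemma.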
In particular, we prove that the sizes in Lemma 5.9 satisfy the following.
**Lemma 5.10**.: _For every \(i\in\{0,1,\ldots,k_{2}\}\),_
\[w_{i}>\sum_{j=i+1}^{k_{2}}\binom{n_{2}}{j}\cdot w_{j}.\]
Proof.: Fix \(i\in\{0,1,\ldots,k_{2}\}\). The claim is trivial for \(i=k_{2}\) (the right-hand side is an empty sum), so assume \(i<k_{2}\). First, observe that \(w_{p}\geq w_{q}\) for all \(p,q\in\{0,1,\ldots,k_{2}\}\) such that \(p\leq q\), and \(\binom{n_{2}}{j}\leq 2^{n_{2}}\) for all \(j\in\{0,1,\ldots,k_{2}\}\). Hence, it suffices to prove that \(w_{i}>k_{2}\cdot 2^{n_{2}}\cdot w_{i+1}\). For this purpose, notice that \(n_{3}=dn_{2}+t\). Additionally, on the one hand, for all \(i^{\prime}\in\{0,1,\ldots,k_{2}\}\),
\[w_{i^{\prime}}\leq k_{3}\cdot\binom{n_{3}-di^{\prime}}{k_{3}-di^{\prime}}=dk_ {2}\cdot\binom{d(n_{2}-i^{\prime})+t}{d(k_{2}-i^{\prime})}.\]
We refer to this inequality as Inequality (1). To see its correctness, note that \(w_{i^{\prime}}^{r}\) is maximum when \(r\) is maximum (restricted to \(\{0,1,\ldots,k_{3}-di^{\prime}\}\)), i.e., when \(r=k_{3}-di^{\prime}\leq k_{3}\). Hence, \(w_{i^{\prime}}\leq k_{3}\cdot w_{i^{\prime}}^{k_{3}-di^{\prime}}\). Now, observe that \(w_{i^{\prime}}^{k_{3}-di^{\prime}}\) corresponds to the number of choices of \(k_{3}-di^{\prime}\) elements out of a universe of size \(n_{3}-di^{\prime}\) that satisfy particular restrictions. Specifically, we have a partition of the universe into \(n_{2}-i^{\prime}+1\) parts--one of size \(t\) and the others of size \(d\)-- and we can pick at most \(d-1\) elements from each of the parts of size \(d\). In particular, this simply means that \(w_{i^{\prime}}^{k_{3}-di^{\prime}}\) is bounded from above by the number of choices of \(k_{3}-di^{\prime}\) elements out of a universe of \(n_{3}-di^{\prime}\) elements, which is \(\binom{n_{3}-di^{\prime}}{k_{3}-di^{\prime}}\). Thus, Inequality (1) is correct.
On the other hand,
\[w_{i^{\prime}}\geq\binom{n_{3}-di^{\prime}-n_{2}}{k_{3}-di^{\prime}}=\binom{d (n_{2}-i^{\prime})+t-n_{2}}{d(k_{2}-i^{\prime})}.\]
We refer to this inequality as Inequality (2). To see its correctness, note that \(w_{i^{\prime}}\geq w_{i^{\prime}}^{r}\) for all \(r\in\{0,1,\ldots,k_{3}-di^{\prime}\}\). So, in particular, \(w_{i^{\prime}}\geq w_{i^{\prime}}^{k_{3}-di^{\prime}}\). Recall the combinatorial interpretation of \(w_{i^{\prime}}^{k_{3}-di^{\prime}}\) discussed above for the correctness of Inequality (1). Now, out of that universe, suppose that we remove (arbitrarily) one element from each of the parts of size \(d\)--so, in total, we remove \(n_{2}-i^{\prime}\) elements. Then, we remove \(i^{\prime}\) additional elements. Hence, we remain with a universe of size \(n_{3}-di^{\prime}-n_{2}\). However, every choice of \(k_{3}-di^{\prime}\) elements from this universe satisfies the particular restrictions stated in the aforementioned combinatorial interpretation. Hence, \(w_{i^{\prime}}^{k_{3}-di^{\prime}}\) is bounded from below by the number of choices of \(k_{3}-di^{\prime}\) elements out of a universe of \(n_{3}-di^{\prime}-n_{2}\) elements, which is \(\binom{n_{3}-di^{\prime}-n_{2}}{k_{3}-di^{\prime}}\). Thus, Inequality (2) is correct.
Hence, having Inequality (2) and since \(d=n_{2}\),
\[w_{i}\geq\binom{d(n_{2}-i)+t-n_{2}}{d(k_{2}-i)}=\frac{(d(n_{2}\!-\!i)+t-d(k_{2}\!-\!i))(d(n_{2}\!-\!i)+t-d(k_{2}\!-\!i)-1)\cdots(d(n_{2}\!-\!i)+t-d(k_{2}\!-\!i)-n_{2}+1)}{(d(k_{2}-i))(d(k_{2}-i)-1)\cdots(d(k_{2}-i)-n_{2}+1)}\cdot\binom{d(n_{2}-i)+t-d}{d(k_{2}-i)-d}.\]
Recall that \(t=d+dk_{2}+2(dk_{2})^{2}\). So, for all \(j\in[d]\), \(d(n_{2}-i)+t-d(k_{2}-i)-j+1\geq 2(dk_{2})^{2}\geq 2dk_{2}\cdot(d(k_{2}-i)-j+1)\). In particular, we derive that
\[\frac{(d(n_{2}-i)+t-d(k_{2}-i))(d(n_{2}-i)+t-d(k_{2}-i)-1)\cdots(d (n_{2}-i)+t-d(k_{2}-i)-n_{2}+1)}{(d(k_{2}-i))(d(k_{2}-i)-1)\cdots(d(k_{2}-i)-n_{ 2}+1)}\] \[\geq(2dk_{2})^{n_{2}}>d(k_{2})^{2}\cdot 2^{n_{2}}.\]
Hence, the calculation above implies that
\[w_{i}>d(k_{2})^{2}\cdot 2^{n_{2}}\cdot\binom{d(n_{2}-i)+t-d}{d(k_{2}-i)-d}\geq k_{2}\cdot 2^{n_{2}}\cdot w_{i+1},\]
where the last inequality follows from Inequality (1). As discussed earlier, this completes the proof.
### Procedure lift and Proof of Theorem 1
We start with a computation of the values \(w_{i}\), \(i\in\{0,1,\ldots,k_{2}\}\), defined in Lemma 5.9.
**Lemma 5.11**.: _There exists a polynomial-time algorithm that, given \(i\in\{0,1,\ldots,k_{2}\}\) and having \(t,d,k_{2}\) and \(n_{2}\) at hand, outputs \(w_{i}\). Here, the input numbers are encoded in unary, and the output number is encoded in binary._
Proof.: Observe that \(w_{i}=\sum_{r=0}^{k_{3}-id}w_{i}^{r}\). Hence, for the proof, it suffices to fix some \(r\in\{0,1,\ldots,k_{3}-id\}\), and show how to compute \(w_{i}^{r}\) in polynomial time. Now, denote \(\ell=n_{2}-i\), \(q=d-1\), and
\[\widehat{w}_{i}^{p}=\sum_{\begin{subarray}{c}(a_{1},a_{2},\ldots,a_{\ell})\\ \text{s.t.}\ \sum_{j=1}^{\ell}a_{j}=p,\text{ and }\forall j\in[\ell],a_{j}\in\{0,1, \ldots,q\}\end{subarray}}\prod_{j=1}^{\ell}\binom{d}{a_{j}}.\]
Then, \(w_{i}^{r}=\sum_{a^{\star}=0}^{t}\binom{t}{a^{\star}}\widehat{w}_{i}^{r-a^{\star}}\), where we set \(\widehat{w}_{i}^{p}=0\) for \(p<0\). So, for the proof, it suffices to fix some \(a^{\star}\in\{0,1,\ldots,t\}\), and show how to compute \(\widehat{w}_{i}^{p}\), for \(p=r-a^{\star}\), in polynomial time.
In what follows, we employ dynamic programming to compute \(\widehat{w}_{i}^{p}\). To this end, for every \(\ell^{\prime}\in[\ell]\) and \(p^{\prime}\in\{0,1,\ldots,\min(p,\ell^{\prime}\cdot q)\}\), we allocate a table entry \(\mathfrak{M}[\ell^{\prime},p^{\prime}]\). We define (for the analysis):
\[W_{\ell^{\prime},p^{\prime}}=\sum_{\begin{subarray}{c}(a_{1},a_{2},\ldots,a_{ \ell^{\prime}})\\ \text{s.t.}\ \sum_{j=1}^{\ell^{\prime}}a_{j}=p^{\prime},\text{ and }\forall j\in[\ell^{\prime}],a_{j}\in\{0,1, \ldots,q\}\end{subarray}}\prod_{j=1}^{\ell^{\prime}}\binom{d}{a_{j}}.\]
The purpose of \(\mathfrak{M}[\ell^{\prime},p^{\prime}]\) would be to store \(W_{\ell^{\prime},p^{\prime}}\). Then, since \(\widehat{w}_{i}^{p}=W_{\ell,p}\), we would output \(\mathfrak{M}[\ell,p]\).
The basis is when \(\ell^{\prime}=1\). Then, for every \(p^{\prime}\in\{0,1,\ldots,\min(p,q)\}\), we initialize \(\mathfrak{M}[\ell^{\prime},p^{\prime}]=\binom{d}{p^{\prime}}\).
Now, for every \(\ell^{\prime}\in\{2,3,\ldots,\ell\}\) in increasing order, and every \(p^{\prime}\in\{0,1,\ldots,\min(p,\ell^{\prime}\cdot q)\}\) in arbitrary order, we perform the following computation (where we interpret \(\mathfrak{M}[\ell^{\prime}-1,p^{\prime}-s]\) as \(0\) if this entry was not allocated):
\[\mathfrak{M}[\ell^{\prime},p^{\prime}]\leftarrow\sum_{s=0}^{\min(p^{\prime},q) }\binom{d}{s}\cdot\mathfrak{M}[\ell^{\prime}-1,p^{\prime}-s].\]
Clearly, the computation can be performed in polynomial time (since the input numbers are encoded in unary, and the numbers stored in the table are encoded in binary).
Combinatorially, the interpretation of \(W_{\ell^{\prime},p^{\prime}}\) is the number of choices to pick exactly \(p^{\prime}\) elements from a universe that is partitioned into \(\ell^{\prime}\) parts of size \(d\) each, such that we can pick at most \(q\) elements from each part. Equivalently, we can consider the number of choices to pick exactly \(s\leq p^{\prime}\) elements from the last part of the universe, and then, for each such choice, we can consider the number of
choices to pick exactly \(p^{\prime}-s\) additional elements from the remainder of the universe, such that we can pick at most \(q\) elements from each part. This yields the following equality:
\[W_{\ell^{\prime},p^{\prime}}=\sum_{s=0}^{\min(p^{\prime},q)}\binom{d}{s}\cdot W_{ \ell^{\prime}-1,p^{\prime}-s}.\]
In turn, using straightforward induction, this equality yields the correctness of the computation.
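The following Python sketch implements this dynamic program for the inner quantity \(\widehat{w}_{i}^{p}\) (equivalently, \(W_{\ell,p}\)). The basis is shifted to zero parts for brevity, and the interface is an illustrative choice rather than part of the formal algorithm; a brute-force sanity check follows.

```python
from math import comb
from itertools import combinations

def count_restricted(ell, d, q, p):
    """W_{ell,p}: the number of ways to pick exactly p elements from ell disjoint
    parts of size d each, taking at most q elements from every part."""
    # M[l][pp] stores W_{l,pp}; the basis W_{0,0} = 1 encodes the empty choice.
    M = [[0] * (p + 1) for _ in range(ell + 1)]
    M[0][0] = 1
    for l in range(1, ell + 1):
        for pp in range(p + 1):
            # pick s elements from the l-th part, the rest from the first l-1 parts
            M[l][pp] = sum(comb(d, s) * M[l - 1][pp - s]
                           for s in range(min(pp, q) + 1))
    return M[ell][p]

# Sanity check against brute force: 2 parts of size 3, at most 2 per part, 3 picked.
universe = [(part, j) for part in range(2) for j in range(3)]
brute = sum(1 for S in combinations(universe, 3)
            if all(sum(1 for part, _ in S if part == pt) <= 2 for pt in range(2)))
print(count_restricted(2, 3, 2, 3), brute)  # 18 18
```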
Now, we define lift as follows.
**Definition 5.12** (**Procedure lift**).: _Given an instance \((G,k)\) of \(\#\textsc{Vertex Cover}\), the output of \(\mathsf{reduce}\), and the solution \(x^{\star}\) to this output, the procedure lift performs the following operations:_
1. _Initialize_ \(\widehat{x}\gets x^{\star}\)_._
2. _For_ \(i=0,1,\ldots,k_{2}\)_:_ 1. _Use the algorithm in Lemma_ 5.11 _to compute_ \(w_{i}\)_._ 2. _Let_ \(y_{i}\leftarrow\lfloor\widehat{x}/w_{i}\rfloor\)_._ 3. _Update_ \(\widehat{x}\leftarrow\widehat{x}-y_{i}\cdot w_{i}\)_._ 4. _Let_ \(z_{i}\gets y_{i}\cdot\sum_{j=0}^{k_{2}-i}\binom{n_{1}-n_{2}}{j}\)_._
3. _Return_ \(z=\sum_{i=0}^{k_{2}}z_{i}\)_._
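The arithmetic performed by lift amounts to reading off the coefficients \(|\mathcal{S}_{2}^{i}|\) from \(x^{\star}\), much like digits in a mixed-radix numeral system with the weights \(w_{0}>w_{1}>\cdots>w_{k_{2}}\). A minimal Python sketch follows; here compute_w stands for the algorithm of Lemma 5.11, and the weights in the usage example are artificial and only illustrate the decomposition.

```python
from math import comb

def lift(x_star, k2, n1, n2, compute_w):
    """Recover |S| from the solution x_star of the reduced instance (Definition 5.12).

    compute_w(i) must return w_i as defined in Lemma 5.9."""
    x_hat = x_star
    z = 0
    for i in range(k2 + 1):
        w_i = compute_w(i)
        y_i = x_hat // w_i            # y_i = |S_2^i| (Lemma 5.15)
        x_hat -= y_i * w_i
        # each cover of G_2 of size i extends by at most k2 - i isolated vertices of G_1
        z += y_i * sum(comb(n1 - n2, j) for j in range(k2 - i + 1))
    return z

# Toy illustration with artificial weights 100^(2-i) and counts (2, 5, 1):
counts = [2, 5, 1]
x_star = sum(c * 100 ** (2 - i) for i, c in enumerate(counts))
print(lift(x_star, 2, 3, 3, lambda i: 100 ** (2 - i)))  # 8, i.e. 2 + 5 + 1
```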
We start the analysis with the following observation, whose correctness is immediate from Lemma 5.11 and the definition of lift.
**Observation 5.13**.: lift _runs in polynomial time._
For every \(X\in\mathcal{S}_{2}\), define \(\mathsf{Pull}(X)=\{X\cup U:U\subseteq V(G_{1})\setminus V(G_{2}),|X|+|U| \leq k_{1}\}\). For the correctness of lift, we prove the two following lemmas.
**Lemma 5.14**.: _We have that (i) \(\mathcal{S}_{1}=\bigcup_{X\in\mathcal{S}_{2}}\mathsf{Pull}(X)\), and (ii) for distinct \(X,Y\in\mathcal{S}_{2}\), \(\mathsf{Pull}(X)\cap\mathsf{Pull}(Y)=\emptyset\)._
Proof.: Recall that \(G_{2}\) is obtained from \(G_{1}\) by the removal of all isolated vertices, and that \(k_{2}=k_{1}\). Hence, every vertex cover of \(G_{1}\) of size at most \(k_{1}\) is the union of two sets, \(A\) and \(B\), where \(A\) is a vertex cover of \(G_{2}\) of size at most \(k_{2}\), and \(B\subseteq V(G_{1})\setminus V(G_{2})\) is of size at most \(k_{1}-|A|\). So, the first item follows, and the second item is immediate.
**Lemma 5.15**.: _For every \(i\in\{0,1,\ldots,k_{2}\}\), we have that (i) \(y_{i}=|\mathcal{S}_{2}^{i}|\), and (ii) \(z_{i}=|\bigcup_{X\in\mathcal{S}_{2}^{i}}\mathsf{Pull}(X)|\)._
Proof.: From Lemma 5.8, we have that \(x^{\star}=\sum_{X\in\mathcal{S}_{2}}|\mathsf{Map}(X)|=\sum_{i=0}^{k_{2}}\sum_{X\in\mathcal{S}_{2}^{i}}|\mathsf{Map}(X)|\). So, by Lemma 5.9, we derive that \(x^{\star}=\sum_{i=0}^{k_{2}}|\mathcal{S}_{2}^{i}|\cdot w_{i}\). Observe that for every \(i\in\{0,1,\ldots,k_{2}\}\), \(|\mathcal{S}_{2}^{i}|\leq\binom{n_{2}}{i}\). Hence, due to Lemma 5.10, it follows that for every \(i\in\{0,1,\ldots,k_{2}\}\), \(w_{i}>\sum_{j=i+1}^{k_{2}}|\mathcal{S}_{2}^{j}|\cdot w_{j}\). Given the manner in which lift handles the variables \(\widehat{x}\) and \(y_{0},y_{1},\ldots,y_{k_{2}}\), this implies the correctness of the first item of the lemma.
Now, observe that for any \(X\in\mathcal{S}_{2}^{i}\), \(|\mathsf{Pull}(X)|=\sum_{j=0}^{k_{2}-i}\binom{n_{1}-n_{2}}{j}\), and from the second item of Lemma 5.14, it follows that \(|\bigcup_{X\in\mathcal{S}_{2}^{i}}\mathsf{Pull}(X)|=\sum_{X\in\mathcal{S}_{2}^ {i}}|\mathsf{Pull}(X)|\). From these arguments, and since we have already proved the correctness of the first item, we derive the correctness of the second item as well.
Having Corollary 5.3 and Lemmas 5.14 and 5.15 at hand, we prove the following lemma, which implies the correctness of lift.
**Lemma 5.16**.: _We have that \(|\mathcal{S}|=z\)._
Proof.: By Corollary 5.3, \(|\mathcal{S}|=|\mathcal{S}_{1}|\). From Lemma 5.14, we have that \(|\mathcal{S}_{1}|=|\bigcup_{X\in\mathcal{S}_{2}}\mathsf{Pull}(X)|\), which equals \(\sum_{i=0}^{k_{2}}|\bigcup_{X\in\mathcal{S}_{2}^{i}}\mathsf{Pull}(X)|\). Further, from Lemma 5.15, we have that \(\sum_{i=0}^{k_{2}}|\bigcup_{X\in\mathcal{S}_{2}^{i}}\mathsf{Pull}(X)|=\sum_{i=0 }^{k_{2}}z_{i}=z\). So, we conclude that \(|\mathcal{S}|=z\).
Thus, the correctness of Theorem 1 follows from Observations 5.6 and 5.13, and Lemma 5.16.
## 6 Polynomial Compression for #Planar \(\mathcal{F}\)-Deletion
In this section we present a polynomial compression for the #Planar-\(\mathcal{F}\)-Deletion problem, which is a general problem encompassing #Vertex Cover, #Feedback Vertex Set and many others [25]. Let us begin by recalling the Planar-\(\mathcal{F}\)-Deletion problem, where \(\mathcal{F}\) is a finite set of connected graphs with at least one planar graph. The input is a graph \(G\) and an integer \(k\). The objective is to determine if there is a subset \(S\) of at most \(k\) vertices such that \(G-S\) is \(\mathcal{F}\)-minor free. In the counting version of the problem, #Planar-\(\mathcal{F}\)-Deletion, given \(G\) and \(k\) we must output the number of distinct vertex subsets \(S\) such that \(|S|\leq k\) and \(G-S\) is \(\mathcal{F}\)-minor free. We prove the following theorem in this section.
**Theorem 2**.: \(\#k\)-Planar \(\mathcal{F}\)-Deletion _admits a polynomial compression._
At a high level, we follow the approach of [25] which gave a polynomial kernel for #Planar-\(\mathcal{F}\)-Deletion, but we develop additional results that allow us to compress and then recover the number of solutions of size \(k\). We note that we only obtain a compression, and not a kernel, unlike the results for #Vertex Cover presented earlier.
### Preliminaries
We say that \(S\subseteq V(G)\) is an \(\mathcal{F}\)-deletion set of \(G\) if \(G-S\) is \(\mathcal{F}\)-minor free. We enumerate a few properties of \(\mathcal{F}\)-minor free graphs.
**Proposition 6.1** ([25] Proposition 1).: _If a graph \(G\) is \(\mathcal{F}\)-minor free, where \(\mathcal{F}\) is a finite family of graphs containing at least one planar graph, then there is a constant \(\eta\) depending only on \(\mathcal{F}\) such that \(tw(G)\leq\eta\)._
Let \((G,k)\) denote the input instance of #Planar-\(\mathcal{F}\) Deletion. Following [25], the first step of our reduce algorithm is to compute a modulator to \((G,k)\) using an approximation algorithm for Planar-\(\mathcal{F}\)-Deletion.
**Proposition 6.2** ([25]).: _There is a randomized polynomial time algorithm that given an instance of \((G,k)\) of Planar-\(\mathcal{F}\) Deletion either outputs a solution of size at most \(c\cdot k\) for a fixed constant \(c\) that depends only on \(\mathcal{F}\), or correctly reports that no solution of size \(k\) exists for \((G,k)\). This algorithm succeeds with probability at least \(1-1/2^{n}\)._
Having computed the approximate solution \(X\), we first check if \(|X|\leq c(k+1)\). If not, then it follows that \((G,k)\) admits no solution of size \(k\). Otherwise, \((G,k)\) admits a modulator of size at most \(c(k+1)\), which we denote by \(X\). Note that the bound was chosen as \(c(k+1)\) instead of \(ck\) to be consistent with [25]. Observe that the graph \(G-X\) is \(\mathcal{F}\)-minor free. Recall that, by Proposition 6.1, the treewidth of any \(\mathcal{F}\)-minor free graph is upper-bounded by a constant \(\eta\) that depends only on \(\mathcal{F}\). We augment \(X\) with additional vertices to arrive at the following.
**Proposition 6.3** ([25] Lemma 25, 26).: _There is a randomized polynomial time algorithm that given an instance \((G,k)\) of Planar \(\mathcal{F}\)-Deletion, either returns that \((G,k)\) has no solutions of size \(k\), or computes two disjoint vertex subsets \(X\) and \(Z\), with probability at least \(1-1/2^{n}\) such that,_
* \(|X|=\mathcal{O}(k)\) _and_ \(|Z|=\mathcal{O}(k^{3})\)_,_
* \(X\) _is an_ \(\mathcal{F}\)_-deletion set of_ \(G\)_,_
* _For every connected component_ \(C\) _of_ \(G-(X\cup Z)\)_,_ \(|N(C)\cap Z|\leq 2(\eta+1)\)_,_
* _For any two vertices_ \(u,v\in N(C)\cap X\)_, there are at least_ \(k+\eta+3\) _vertex disjoint paths from_ \(u\) _to_ \(v\) _in_ \(G-X\)_._
* _For any_ \(\mathcal{F}\)_-deletion set_ \(S\) _of size_ \(k\)_,_ \(|(N(C)\cap X)\setminus S|\leq\eta+1\)_._
We call \(X\cup Z\) an _enriched modulator_ to \((G,k)\). Our next step is to compress the graph \(G-(X\cup Z)\). This is accomplished in two steps. First we reduce the number of connected components in \(G-(X\cup Z)\) to \(k^{\mathcal{O}(1)}\) and then we store each component in a compressed form that is sufficient to count the number of \(k\)-size solutions of \((G,k)\). Let us introduce some additional notation from [25, 26] that are required for these results.
A **boundaried graph** is a graph \(G\) with a set of distinguished vertices \(B\) and an injective mapping \(\lambda_{G}\) from \(B\) to \(\mathbb{Z}^{+}\). The set \(B\) is called the **boundary** of \(G\), which is also denoted by \(\delta(G)\), and \(\lambda_{G}\) is called the **labelling** of \(G\). The **label-set** of \(G\) is \(\Lambda(G)=\{\lambda_{G}(v):v\in\delta(G)\}\). Given a finite set \(I\subseteq\mathbb{Z}^{+}\), let \(\mathcal{G}_{I}\) denote the set of all boundaried graphs whose label-set is \(I\), and let \(\mathcal{G}_{\subseteq I}\) denote the set of all boundaried graphs whose label-set is a subset of \(I\). Finally, for \(t\in\mathbb{Z}^{+}\), \(G\) is a \(t\)-boundaried graph if \(\Lambda(G)\subseteq\{1,2,\ldots,t\}\).
The **gluing operation**\(\oplus\) on two \(t\)-boundaried graphs \(G\) and \(H\) gives the (non boundaried) graph \(G\oplus H\) obtained by taking the disjoint union of \(G\) and \(H\) and then identifying pairs of vertices in \(\delta(G)\) and \(\delta(H)\) with the same label, and finally forgetting all the labels. The **boundaried gluing operation**\(\oplus_{\delta}\) is similar, but results in a boundaried graph: given two \(t\)-boundaried graphs \(G\) and \(H\), the \(t\)-boundaried graph \(G\oplus_{\delta}H\) is obtained by taking the disjoint union of \(G\) and \(H\) and then identifying pairs of vertices in \(\delta(G)\) and \(\delta(H)\) with the same label; this results in \(t\) new vertices that form the boundary of the new graph.
A \(t\)-boundaried graph \(H\) is a _minor_ of a \(t\)-boundaried graph \(G\) if \(H\) is a minor of \(G\) that is obtained without contracting any edge whose both endpoints are boundary vertices. Note that, if we contract an edge with exactly one boundary vertex as an endpoint, the new vertex is also a boundary vertex with the same label. This relation is denoted by \(H\leq_{m}G\). The **folio** of a \(t\)-boundaried graph \(G\) is \(\mathsf{folio}(G)=\{H:H\leq_{m}G\}\). For two vertex subsets \(P,B\subseteq V(G)\), \(G_{P}^{B}\) denotes the \(|B|\)-boundaried graph \(G[B\cup P]\) with \(B\) as the boundary.
For a parameterized graph problem \(\Pi\), we define an **equivalence relation**\(\equiv_{\Pi}\) on the class of \(t\)-boundaried graphs as follows. Two \(t\)-boundaried graphs \(G_{1}\) and \(G_{2}\) are equivalent if and only if the following holds: for any other \(t\)-boundaried graph \(G_{3}\), \((G_{1}\oplus G_{3},k)\in\Pi\) if and only if \((G_{2}\oplus G_{3},k+c)\in\Pi\), where \(c\) is a constant for \(\Pi\). We say that \(\Pi\) has **Finite Integer Index** if the equivalence relation \(\equiv_{\Pi}\) partitions \(\mathcal{G}_{t}\) into finitely many equivalence classes. We shall require stronger conditions on the constant \(c\) for kernelization. Towards this, we say that \(\Pi^{\prime}\subseteq\Sigma^{*}\times\mathbb{Z}\) is a **(positive) extended parameterized problem** of \(\Pi\) if \((I,k)\in\Pi^{\prime}\) whenever \(k\leq 0\) and \(\Pi^{\prime}\cap(\Sigma^{*}\times\mathbb{Z}^{+})=\Pi\). Note that the extended parameterized problem \(\Pi^{\prime}\) of \(\Pi\) is unique. Next, consider an equivalence class \(\mathcal{R}\) of \(\equiv_{\Pi^{\prime}}\) that is a subset of \(\mathcal{G}_{t}\). We say that \(H\in\mathcal{R}\) is a **progressive representative** of \(\mathcal{R}\) if for any \(G\in\mathcal{R}\) and any \(t\)-boundaried graph \(G^{\prime}\), \((G\oplus G^{\prime},k)\in\Pi^{\prime}\) if and only if \((H\oplus G^{\prime},k+c)\in\Pi^{\prime}\) for some \(c\leq 0\). We have the following proposition, which ensures the existence of progressive representatives for those \(\Pi\) that admit an extension.
**Proposition 6.4** ([26] Lemma 16.11).: _Let \(\Pi\) be an extended parameterized graph problem. Then each equivalence class of \(\equiv_{\Pi}\) has a progressive representative._
From now onwards, let us fix \(\Pi\) to be Planar \(\mathcal{F}\)-deletion, and let \(\equiv_{\mathcal{F}}\) denote the equivalence relation for this problem. We have the following proposition.
**Proposition 6.5** ([25] Proposition 2).: _If \(\mathcal{F}\) is a finite family of connected graphs then \(\mathcal{F}\)-Deletion has finite integer index._
Let \(\mathcal{S}_{t}\) denote the set that contains one progressive representative for each equivalence class of \(\equiv_{\mathcal{F}}\) that is a subset of \(\mathcal{G}_{t}\). Observe that, for each \(t\in\mathbb{Z}^{+}\) the set \(\mathcal{S}_{t}\) has constant cardinality that is equal
to the number of equivalence classes of \(\equiv_{\mathcal{F}}\) in \(\mathcal{G}_{t}\). We define \(\mathcal{S}_{\leq t}=\cup_{t^{\prime}\leq t}\,\mathcal{S}_{t^{\prime}}\). Let \(c_{t,\mathcal{F}}=|\mathcal{S}_{\leq t}|\), which is a constant that depends only on \(\mathcal{F}\) and \(t\). Furthermore, the sizes of the graphs in \(\mathcal{S}_{\leq t}\) are also bounded by a constant that depends only on \(\mathcal{F}\) and \(t\).
Let \(h\) be the maximum number of vertices in a graph in \(\mathcal{F}\). For a component \(C\) of \(G-(X\cup Z)\), the **border collection**\(\mathcal{B}_{C}\) of \(C\) is the collection of all vertex subsets \(B\) such that \(\;(i)\ B\setminus X\subseteq N(C)\setminus X\) and \(\;(ii)\ |B\cap X|\leq\eta+1\). For a set \(B\in\mathcal{B}_{C}\) and a boundaried graph \(H\) with \(B\) as the boundary, we say \(C\)**realizes**\((B,H)\) if \(H\leq_{m}G_{C}^{B}\). Observe that, in this case \(B\subseteq X\cup Z\), and \(|X\cap B|\leq\eta+1\) and hence \(|B\setminus X|\leq|N(C)\cap Z|\leq 2(\eta+1)\) using the bound from Proposition 6.3. Let \(\mathcal{B}=\bigcup_{\text{component }C}\mathcal{B}_{C}\), and note that \(|\mathcal{B}|\leq(|X|+|Z|)^{3(\eta+1)}\) which is an upper-bound on the total number of possible borders over all components of \(G-(X\cup Z)\). For our purposes it is sufficient to consider all graphs \(H\) that contain at most \(h+3(\eta+1)\) vertices; in particular this includes all possible subgraphs of the graphs in \(\mathcal{F}\). The number of such graphs is at most \(2^{\binom{h+3(\eta+1)}{2}}\), which is a constant depending only on \(\mathcal{F}\).
### The reduce Procedure
Let us now turn to the reduce procedure for #Planar \(\mathcal{F}\)-Deletion. As in [25], we start with an enriched modulator \((X\cup Z)\) for the instance \((G,k)\) given by Proposition 6.3. We then compress the remaining graph \(G-(X\cup Z)\) in two parts. First, we bound the number of connected components by identifying and deleting certain irrelevant components that will always have an empty intersection with a minimal \(\mathcal{F}\)-deletion set of \(G\) of size at most \(k\). Let \((G^{\prime},k)\) denote the resulting instance. The second step is to store a compressed representation of each connected component of \(G-(X\cup Z)\) that is sufficient to count the number of solutions of size \(k^{\prime}\) for each \(k^{\prime}\leq k\) for \((G^{\prime},k)\). The lift procedure will then use this information to count the number of solutions of \((G,k)\) in polynomial time; we present it in the next section. Note that we assume that Proposition 6.3 gives \(X\cup Z\) of cardinality \(\mathcal{O}(k^{3})\) in the rest of this section.
#### 6.2.1 Bounding the number of connected components
If we have a large number of components in \(G-(X\cup Z)\), then the following lemma allows us to identify an irrelevant one that contributes no vertices to a minimal \(\mathcal{F}\)-deletion set of size at most \(k\). Consider a pair \((B,H)\) where \(B\in\mathcal{B}\) and \(H\) is a boundaried graph on at most \(h+3(\eta+1)\) vertices with \(B\) as its boundary. We say that a pair \((B,H)\) is _rich_ if there are at least \(\tau_{rich}=|X|+|Z|+k+(h+3(\eta+1))^{2}+2\) components of \(G-(X\cup Z)\) realizing it. The following lemma allows us to identify certain components of \(G-(X\cup Z)\) as irrelevant. Note that \(\tau_{rich}=\mathcal{O}(k^{3})\).
**Lemma 6.6** ([25] Lemma 36).: _Let \(C\) be a component of \(G-(X\cup Z)\) such that every pair \((B,H)\) that \(C\) realizes is rich. Then \(G\) has an \(\mathcal{F}\)-deletion set of size \(k\) if and only if \(G-V(C)\) does._
It is immediate from Lemma 6.6 that if \(S\) is a minimal \(\mathcal{F}\)-deletion set of \(G\) of size at most \(k\) and \(C\) is a component of \(G-(X\cup Z)\) such that every pair realized by \(C\) is rich, then \(S\cap V(C)=\emptyset\). Recall that the number of choices for \(B\) is at most \((|X|+|Z|)^{3(\eta+1)}\), while the number of choices of graph \(H\) for each \(B\) is at most \(2^{\binom{h+3(\eta+1)}{2}}\). For each pair \((B,H)\) and a component \(C\), we can encode in MSOL whether \(H\leq_{m}G_{C}^{B}\), and test whether it holds in linear time [25]. Hence, in polynomial time, we can test if every pair realized by a component \(C\) is rich. We then arrive at the following reduction rule from [25].
**Reduction Rule 6.7**.: _If every pair \((B,H)\) realized by a component \(C\) of \(G-(X\cup Z)\) is rich, then delete \(V(C)\) from \(G\)._
The correctness of the above reduction rule is immediate from Lemma 6.6. When the above reduction rule is not applicable, the total number of components in \(G-(X\cup Z)\) is bounded by \(\tau_{\#comp}=\tau_{rich}\cdot(|X|+|Z|)^{3(\eta+1)}\cdot 2^{\binom{h+3(\eta+1)}{2}}\) ([25] Lemma 36). Note that \(\tau_{\#comp}=\mathcal{O}(k^{3+9(\eta+1)})\), as \(h\) and \(\eta\) are constants depending only on \(\mathcal{F}\).
#### 6.2.2 Compressing the connected components
In this section we show how we can store a compressed representation of each connected component \(C\) of \(G-(X\cup Z)\) that is sufficient to count the number of \(\mathcal{F}\)-deletion sets of \(G\) of size at most \(k^{\prime}\) for any \(k^{\prime}\leq k\). Throughout this section, we assume that \(k^{\tau_{n}}\geq\log n\), where \(\tau_{n}\) is a constant depending on \(\mathcal{F}\) that will be specified later. We will justify this assumption in the description of the \(\mathsf{lift}\) procedure, where we will argue that if \(n\) is too large then we can count all \(\mathcal{F}\)-deletion sets of size \(k\) in polynomial time.
Consider a component \(C\) of \(G-(X\cup Z)\), and some \(\mathcal{F}\)-deletion set \(S\) of size at most \(k\) in \(G\). It follows from Proposition 6.3 that \(tw(C)\leq\eta\), and \(|N(C)\cap((X\cup Z)\setminus S)|\leq 3(\eta+1)\). In essence, \(C\) is a _near-protrusion_ of \(G\) as defined in [25]. For normal kernelization it is sufficient to identify an irrelevant vertex or edge in this component if it were too large. For counting kernelization (compression), we must store information about all possible ways that \(S\) and \(C\) intersect. Therefore we need more detailed information about what the "structure" of \(C-S\) could be, and in how many ways it is possible to attain this structure by deleting vertices in \(N[C]\).
More precisely, consider a subset \(S\) of size at most \(k\) and, in \(G-S\), consider the boundaried graph \(G[N[C]\setminus S]\) with boundary \(N(C)\setminus S\). Note the following associated properties:
* The number \(i_{C}=|C\cap S|\), which is one of \(\{0,1,\ldots,k\}\). This denotes the number of vertices from \(C\) that are picked into \(S\).
* The boundary \(B_{S,C}\) of \(G[N[C]\setminus S]\), i.e. \(B_{S,C}=N(C)\setminus S\), which is a subset of \(X\cup Z\) of size at most \(3(\eta+1)\). Recall that \(|X\cup Z|\leq\mathcal{O}(k^{3})\) and hence the number of possibilities for \(N(C)\setminus S\) is at most \(k^{9(\eta+1)}\).
* Finally, the equivalence class \(\mathcal{R}\) of \(\equiv_{\mathcal{F}}\) that contains the boundaried graph \(G[N[C]\setminus S]\) with boundary \(N(C)\setminus S\). Observe that the size of the boundary is at most \(t=3(\eta+1)\), and there are at most \(c_{t,\mathcal{F}}\) choices of \(\mathcal{R}\), which is a constant dependent only on \(\mathcal{F}\).
Let \(S\) be a subset of at most \(k\) vertices. We say that the **signature of \(S\) with respect to \(C\)**, denoted \(\sigma(S,C)\), is the tuple \((i_{C},B_{S,C},\mathcal{R})\) of the terms defined above. The **signature of \(S\)**, \(\sigma(S)\), is the collection \(\{\sigma(S,C)\;\;\forall\text{ component }C\}\) along with \(S\cap(X\cup Z)\). For each component \(C\), we store a table \(T_{C}\) that, for each possible choice of the tuple \(\sigma(S,C)\), stores the number of subsets \(S_{C}\subseteq V(C)\) that satisfy: \(|S_{C}|=i_{C}\) and the graph \(G[(V(C)\setminus S_{C})\cup B_{S,C}]\) lies in the equivalence class \(\mathcal{R}\). Note that \(T_{C}\) has at most \(\tau_{table}=k\cdot k^{9(\eta+1)}\cdot c_{t,\mathcal{F}}\) entries, which is upper-bounded by a polynomial function of \(k\). Further, in each entry of \(T_{C}\) we store a number whose value is at most \(n^{k}\). As \(\log n\leq k^{\tau_{n}}\), we need at most \(k^{\tau_{n}+1}\) bits to store this number. Overall, we can store each table in \(k^{\tau_{n}+1}\cdot\tau_{table}\) bits of space. To compute the table \(T_{C}\) for a component \(C\), we have the following lemma, which intuitively applies a variant of Courcelle's theorem [15] with the treewidth as the parameter. Since the treewidth of \(C\) is a constant (\(\eta\)), it runs in polynomial time.
**Lemma 6.8**.: _The table \(T_{C}\) corresponding to the component \(C\) can be computed in polynomial time._
Proof.: To compute the table \(T_{C}\), for each tuple \((i_{C},B_{S,C},\mathcal{R})\) we need to compute and store the number of subsets \(S_{C}\subseteq V(C)\) such that \(|S_{C}|=i_{C}\) and \(G[(V(C)\setminus S_{C})\cup B_{S,C}]\) lies in the equivalence class \(\mathcal{R}\) of \(\equiv_{\mathcal{F}}\). The second condition can be expressed as a CMSO-formula \(\psi\)[25]. Then, using a dynamic programming algorithm, we can count the number of subsets \(S_{C}\subseteq V(C)\) that satisfy \(\psi\) and \(|S_{C}|=i_{C}\). This dynamic programming algorithm implements an optimization version of Courcelle's Theorem [25], except that it counts the number of solutions of size exactly \(i_{C}\). This runs in time exponential in \(tw(G[V(C)\cup B_{S,C}])\leq 4(\eta+1)+|\psi|\), but polynomial in \(|V(C)|+|B_{S,C}|\). Hence, for a fixed family \(\mathcal{F}\), the algorithm runs in polynomial time.
Let us next argue that the collection of tables \(\{T_{C}\}\) is sufficient to count the number of solutions of size at most \(k^{\prime}\) for each \(k^{\prime}\leq k\). Let us start with the following observation.
**Observation 6.9**.: _Let \(S\) be an \(\mathcal{F}\)-deletion set of size at most \(k\) in \(G\). Let \(C\) be a component of \(G-(X\cup Z)\) and \(S_{C}=S\cap C\). Let \(i_{C}=|S_{C}|\), \(B_{S,C}=N(C)\setminus S\), and let \(G_{S,C}\) be the boundaried graph \(G[N[C]\setminus S]\) with boundary \(B_{S,C}\). Let \(G_{S,C}\) lie in the equivalence class \(\mathcal{R}_{C}\) of \(\equiv_{\mathcal{F}}\), and let \(H_{C}\) be the progressive representative of \(\mathcal{R}_{C}\). Let \(\widehat{G_{S,C}}\) denote the boundaried graph \(G-(S\cup V(C))\) with boundary \(B_{S,C}\). Then \(\widehat{G_{S,C}}\oplus H_{C}\) is also \(\mathcal{F}\)-minor free._
Proof.: Since \(G-S=G_{S,C}\oplus\widehat{G_{S,C}}\) is \(\mathcal{F}\)-minor free, we have \((G_{S,C}\oplus\widehat{G_{S,C}},0)\in\Pi\), where \(\Pi\) denotes the parameterized graph problem \(\mathcal{F}\)-Deletion. Then, as \(H_{C}\) is a progressive representative of the class of \(G_{S,C}\), we also have \((H_{C}\oplus\widehat{G_{S,C}},0)\in\Pi\).
Next we attempt to characterize \(\mathcal{F}\)-deletion sets of size \(k\) using progressive representatives. Towards this, let \(S\) be a subset of at most \(k\) vertices of \(G\). For each component \(C\) of \(G-(X\cup Z)\), let \(S_{C}=S\cap C\), \(B_{S,C}=N(C)\setminus S\), and let \(G_{S,C}\) be the boundaried graph \(G[N[C]\setminus S]\) with boundary \(B_{S,C}\). Let \(G_{S,C}\) lie in the equivalence class \(\mathcal{R}_{C}\) of \(\equiv_{\mathcal{F}}\), and let \(H_{C}\) be a progressive representative of \(\mathcal{R}_{C}\). We then define the following graph,
\[\bigoplus_{\text{component }C}G[(X\cup Z)\setminus S]\oplus_{\delta}H_{C}\]
where the gluing operation treats the graphs as \(|X\cup Z|\)-boundaried, with each vertex in \(X\cup Z\) being labeled consistently across these graphs. This requires that we first fix a labeling \(\lambda_{XZ}\) of the vertices in \(X\cup Z\), and then, for each \(H_{C}\), we update its labeling function \(\lambda_{H_{C}}\) to be a restriction of \(\lambda_{XZ}\) to \(\delta(H_{C})\), using the labels of \(G_{S,C}\) as a guide. The overall effect is that each \(G_{S,C}\) is replaced with its progressive representative \(H_{C}\) for all components \(C\) of \(G-(X\cup Z)\).
**Observation 6.10**.: _Let \(S\) be a subset of at most \(k\) vertices of \(G\). For each component \(C\) of \(G-(X\cup Z)\), let \(S_{C}=S\cap C\), \(B_{S,C}=N(C)\setminus S\) and \(G_{S,C}\) be the boundaried graph \(G[N[C]\setminus S]\) with boundary \(B_{S,C}\). Let \(G_{S,C}\) lie in the equivalence class \(\mathcal{R}_{C}\) of \(\equiv_{\mathcal{F}}\), and let \(H_{C}\) be a progressive representative of \(\mathcal{R}_{C}\). Then, \(S\) is an \(\mathcal{F}\)-deletion set of \(G\) if and only if the following graph is \(\mathcal{F}\)-minor free._
\[\bigoplus_{\text{component }C}G[(X\cup Z)\setminus S]\oplus H_{C}\]
Proof.: This observation follows easily by iteratively applying Observation 6.9, until every \(G_{S,C}\) has been replaced with \(H_{C}\).
We are now ready to show that the tables \(\{T_{C}\}\) are sufficient to count the number of \(\mathcal{F}\)-deletion sets of size at most \(k^{\prime}\) in \(G\) for any \(k^{\prime}\leq k\). Consider the following algebraic expression; we will prove that it computes the number of solutions of size at most \(k^{\prime}\) for the graph \(G\). For a logical statement \(\phi\), let \([\phi]\) be the \(0,1\)-indicator function which is \(1\) if and only if \(\phi\) is true.
\[\mathsf{count}(k^{\prime})=\sum_{\substack{S_{U}\subseteq X\cup Z\\ |S_{U}|\leq k^{\prime}}}\ \sum_{\{\mathcal{R}_{C}\ \forall\text{ component }C\}}\ \sum_{\substack{\{i_{C}\ \forall\text{ component }C\}\\ |S_{U}|+\sum_{C}i_{C}\leq k^{\prime}}}\left[\bigoplus_{\text{component }C}G[(X\cup Z)\setminus S_{U}]\oplus H_{C}\ \text{ is }\mathcal{F}\text{-minor free}\right]\cdot\prod_{\text{component }C}\Big[|N(C)\setminus S_{U}|\leq 3(\eta+1)\Big]\cdot T_{C}[\mathcal{R}_{C},i_{C},N(C)\setminus S_{U}],\]

where, for each component \(C\), \(H_{C}\) denotes the progressive representative of the equivalence class \(\mathcal{R}_{C}\).
**Lemma 6.11**.: \(\mathsf{count}(k^{\prime})\) _counts the number of \(\mathcal{F}\)-deletion sets of \(G\) of size at most \(k^{\prime}\)._
Proof.: Consider the expression \(\mathsf{count}(k^{\prime})\) in its fully expanded form, as a summation over all choices of \(S_{U}\), \(\{\mathcal{R}_{C}\ \forall C\}\) and \(\{i_{C}\ \forall C\}\). We need to verify that only the \(\mathcal{F}\)-deletion sets of size at most \(k^{\prime}\) contribute to this summation, and that each such set contributes exactly \(1\). Towards this, for a subset \(S\), let \(S_{U}=S\cap(X\cup Z)\). For each component \(C\), we have the equivalence class \(\mathcal{R}_{C}\) of the boundaried graph \(G[N[C]\setminus S]\) with boundary \(N(C)\setminus S\) and \(i_{C}=|S\cap C|\), and we let \(H_{C}\) be the progressive representative of \(\mathcal{R}_{C}\). If \(S\) is an \(\mathcal{F}\)-deletion set of size at most \(k\), then \(|N(C)\setminus S_{U}|\leq 3(\eta+1)\) by Proposition 6.3, and, by Observation 6.10, \(\bigoplus_{\text{component }C}G[(X\cup Z)\setminus S]\oplus H_{C}\) is \(\mathcal{F}\)-minor free. Further, if \(|S|\leq k^{\prime}\), then it also satisfies \(|S_{U}|+\sum_{C}i_{C}\leq k^{\prime}\). It is thus straightforward to verify that each \(\mathcal{F}\)-deletion set of size at most \(k^{\prime}\) contributes exactly \(1\) to this sum. Conversely, if a set \(S\) is not an \(\mathcal{F}\)-deletion set of size at most \(k^{\prime}\), then one of the above three statements is false, and it contributes \(0\) to this sum. Hence, the lemma holds.
**Observation 6.12**.: \(\mathsf{count}(k^{\prime})\) _can be evaluated in \(2^{\mathcal{O}(k^{3+9(\eta+1)}\log k)}\cdot n^{\mathcal{O}(1)}\) time._
Proof.: The time required is more precisely expressed as \(k^{3k}\cdot(c_{3(\eta+1),\mathcal{F}}\cdot k)^{\tau_{\#comp}}\cdot n^{\mathcal{O}(1)}\). The time is calculated by simply considering all possible choices of \(S_{U}\), the collections \(\{\mathcal{R}_{C}\}\) and \(\{i_{C}\}\). For each choice we can test the required conditions in polynomial time, and then take a product of the values picked from the tables \(\{T_{C}\}\) in polynomial time.
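The following Python sketch is a direct, brute-force reading of the expression \(\mathsf{count}(k^{\prime})\), enumerating the choices exactly as in the proof of Observation 6.12. The equivalence-class indices, the oracle glue_is_F_free, and the table and neighbourhood interfaces are assumptions made purely for illustration; they are not specified by the construction above.

```python
from itertools import combinations, product

def count_solutions(k_prime, XZ, components, num_classes, tables, neighbours,
                    glue_is_F_free, eta):
    """Brute-force evaluation of count(k').

    Assumed, illustrative interfaces:
      XZ                  -- list of the vertices of the enriched modulator,
      components          -- list of component identifiers,
      num_classes         -- number of equivalence classes (a constant for F),
      tables[C]           -- the table T_C, keyed by (class, i_C, frozenset(boundary)),
      neighbours[C]       -- N(C), a subset of XZ,
      glue_is_F_free(S_U, classes) -- oracle for F-minor freeness of the glued graph,
      eta                 -- the treewidth bound of Proposition 6.1."""
    total = 0
    for r in range(k_prime + 1):
        for S_U in map(frozenset, combinations(XZ, r)):
            for classes in product(range(num_classes), repeat=len(components)):
                for budgets in product(range(k_prime - r + 1), repeat=len(components)):
                    if r + sum(budgets) > k_prime:
                        continue
                    if not glue_is_F_free(S_U, classes):
                        continue
                    term = 1
                    for C, cls, i_C in zip(components, classes, budgets):
                        boundary = frozenset(neighbours[C]) - S_U
                        if len(boundary) > 3 * (eta + 1):   # boundary-size indicator
                            term = 0
                            break
                        term *= tables[C].get((cls, i_C, boundary), 0)
                    total += term
    return total
```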
#### 6.2.3 The reduce procedure and compression
Let us now describe the reduce procedure. The input is \((G,k)\), where \(G\) is a graph on \(n\) vertices. We fix \(\tau_{n}=20(\eta+1)\), and note that it is a constant that depends only on \(\mathcal{F}\). We first check if \(\log n\leq k^{\tau_{n}}\). If not, then we simply output the empty set \(\emptyset\). Otherwise, \(\log n\leq k^{\tau_{n}}\), and we apply Proposition 6.3: either it returns that \(G\) has no \(\mathcal{F}\)-deletion set of size \(k\), in which case we output \(\emptyset\), or we obtain an enriched modulator \(X\cup Z\) of size \(\mathcal{O}(k^{3})\). Then we apply Reduction Rule 6.7 exhaustively to find and delete components \(C\) of \(G-(X\cup Z)\) such that every pair \((B,H)\) realized by \(C\) is rich. Recall that, by Lemma 6.6, each such component is disjoint from any minimal solution of \(G\) of size at most \(k\). Let \(G^{\prime}\) be the resulting graph, and note that each component of \(G^{\prime}-(X\cup Z)\) is also a component of \(G-(X\cup Z)\). The next step is to compute the tables \(T_{C}\) for each component \(C\) of \(G^{\prime}-(X\cup Z)\). Here we apply Lemma 6.8 for each \(C\) and obtain the table \(T_{C}\) in polynomial time. The output of the reduce procedure is \((X\cup Z)\) and the table collection \(\{T_{C}\}\). Observe that each table requires at most \(k^{\tau_{n}+1}\cdot\tau_{table}\) bits of space. Since there are at most \(\tau_{\#comp}\) components after an exhaustive application of Reduction Rule 6.7, we have the following lemma.
**Lemma 6.13**.: _Given instance \((G,k)\) of Planar \(\mathcal{F}\)-Deletion, the reduce procedure runs in polynomial time and outputs a data-structure of size \(k^{\tau_{n}+1}\cdot\tau_{table}\cdot\tau_{\#comp}\) which is a polynomial in \(k\)._
### The lift procedure
The lift procedure is given the instance \((G,k)\), the output of the reduce procedure, and, for each \(k^{\prime}\leq k\), the value \(\mathsf{count}(k^{\prime})\) if the output of the reduce procedure is not \(\emptyset\). Note that, when the output of the reduce procedure is not \(\emptyset\), it consists of an enriched modulator \(X\cup Z\) and a collection of tables \(T_{C}\), one for each component of \(G^{\prime}-(X\cup Z)\). The objective is to compute the total number of \(\mathcal{F}\)-deletion sets in \(G\) of size at most \(k\).
The lift procedure begins by applying Proposition 6.3 to \((G,k)\). If it returns that \(G\) has no \(\mathcal{F}\)-deletion set of size \(k\), then we set \(\tau_{total}=0\) and output this value. Otherwise we have two cases, depending on whether \(k^{\tau_{n}}\geq\log n\) or not.
First consider the case \(k^{\tau_{n}}\geq\log n\). In this case, the reduce procedure has output an enriched modulator \((X\cup Z)\) and a collection of tables \(\{T_{C}\}\), one for each non-irrelevant component of \(G-(X\cup Z)\). We compute the total number of vertices in all the irrelevant components of \(G-(X\cup Z)\) that were deleted by Reduction Rule 6.7; it is denoted by \(\tau_{irr}\), and it can be computed in polynomial time by simulating the application of Reduction Rule 6.7. Recall that \(G^{\prime}\) denotes the graph obtained from \(G\) after eliminating all irrelevant components. Now, as the output of reduce is not \(\emptyset\), we are also given the values of \(\mathsf{count}(k^{\prime})\) for each \(k^{\prime}\leq k\) as a part of the input. Recall that \(\mathsf{count}(k^{\prime})\) denotes the total number of \(\mathcal{F}\)-deletion sets of size at most \(k^{\prime}\) in the graph \(G^{\prime}\). Let \(\tau^{\prime}_{count}(k^{\prime})=\mathsf{count}(k^{\prime})-\mathsf{count}(k^{\prime}-1)\) (with \(\mathsf{count}(-1)=0\)) denote the number of \(\mathcal{F}\)-deletion sets of size exactly \(k^{\prime}\) in \(G^{\prime}\). Then we output \(\tau_{total}=\sum_{k^{\prime}\leq k}\tau^{\prime}_{count}(k^{\prime})\cdot\sum_{j=0}^{k-k^{\prime}}\binom{\tau_{irr}}{j}\) as the total number of solutions of size at most \(k\).
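A minimal Python sketch of this combination step, assuming the size-at-most-\(k\) convention used throughout; counts[k_prime] stands for \(\mathsf{count}(k^{\prime})\) and tau_irr for \(\tau_{irr}\), and the interface is an illustrative choice.

```python
from math import comb

def combine_counts(counts, tau_irr, k):
    """tau_total from count(0..k) for G' and the tau_irr irrelevant-component vertices.

    counts[k_prime] is count(k_prime), the number of solutions of size at most
    k_prime in the reduced graph G'."""
    total = 0
    for k_prime in range(k + 1):
        exact = counts[k_prime] - (counts[k_prime - 1] if k_prime > 0 else 0)
        # extend by any choice of at most k - k_prime vertices of irrelevant components
        total += exact * sum(comb(tau_irr, j) for j in range(k - k_prime + 1))
    return total
```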
The other case is when \(\log n>k^{\tau_{n}}\). In this case we proceed as follows. As in the previous case, we first apply Reduction Rule 6.7 to eliminate all the components of \(G-(X\cup Z)\) that have no intersection with any minimal \(\mathcal{F}\)-deletion set of size at most \(k\) in \(G\). Let \(\tau_{irr}\) denote the total number of vertices in the irrelevant components, and let \(G^{\prime}\) denote the remaining graph. The main difference from the above case is that, here we compute the value of \(\mathsf{count}(k^{\prime})\), for each \(k^{\prime}\leq k\), using the formula stated earlier. In this computation, instead of using the value of \(T_{C}[\mathcal{R}_{C},i_{C},N[C]\setminus S_{U}]\) from the table \(T_{C}\), we directly apply Lemma 6.8 to compute the value. Recall that each application of Lemma 6.8 takes polynomial time. Finally, as in the previous case we compute \(\tau_{total}\), the number of \(\mathcal{F}\)-deletion sets of size at most \(k\) in \(G\), and output it.
**Lemma 6.14**.: \(\tau_{total}\) _is the total number of all \(\mathcal{F}\)-deletion sets of size at most \(k\) in \(G\), and it is computed in polynomial time._
Proof.: If Proposition 6.3 returns that \(G\) has no \(\mathcal{F}\)-deletion set of size \(k\), then clearly \(\tau_{total}=0\) is the correct answer. Otherwise, we compute \(\tau_{total}\) using the values of \(\mathsf{count}(k^{\prime})\) for \(k^{\prime}\leq k\). It is easy to verify that every \(\mathcal{F}\)-deletion set of \(G\) of size at most \(k\) is counted in the formula for \(\tau_{total}\). To account for the running time, in the case in which \(k^{\tau_{n}}\geq\log n\), the time taken is clearly polynomial.

In the case \(k^{\tau_{n}}<\log n\), that is, \(2^{k^{20(\eta+1)}}<n\), the only change is that the values of \(\mathsf{count}(k^{\prime})\) are computed by the lift procedure directly, rather than being supplied externally as in the other case. We also apply Lemma 6.8 to compute the values \(T_{C}[\mathcal{R}_{C},i_{C},N(C)\setminus S_{U}]\) for each choice of \(\mathcal{R}_{C}\), \(i_{C}\) and \(N(C)\setminus S_{U}\). Recall that the total number of calls made to Lemma 6.8 is at most \(\tau_{\#comp}\cdot\tau_{table}\), which is a polynomial in \(k\), and each application takes time polynomial in \(n\). Next, we consider the computation of \(\mathsf{count}(k^{\prime})\) for some \(k^{\prime}\leq k\). By Observation 6.12, we need \(2^{\mathcal{O}(k^{3+9(\eta+1)}\log k)}\cdot n^{\mathcal{O}(1)}\) time for each evaluation of \(\mathsf{count}(k^{\prime})\). But as \(n>2^{k^{20(\eta+1)}}\), it follows that the time required for each evaluation is \(\mathcal{O}(n^{2})\). Thus in this case the lift procedure requires polynomial time.
The reduce and lift procedures described above prove Theorem 2.
## 7 Lower Bounds Based on SUM-Cross-Composition
We define two new notions of cross-compositions, which are suitable for parameterized counting problems. The first notion is defined as follows, and the second notion is defined in Section 8.
**Definition 7.1** (**SUM-Cross-Composition**).: _Let \(P:\Sigma^{\star}\to\mathbb{N}_{0}\) be a counting problem and \(Q:\Sigma^{\star}\times\mathbb{N}_{0}\to\mathbb{N}_{0}\) be a parameterized counting problem. We say that \(P\) SUM-cross-composes into \(Q\) if there exists a polynomial equivalence relation \(R\) and an algorithm \(A\), called a SUM-cross-composition, satisfying the following conditions. The algorithm \(A\) takes as input a sequence of strings \(x_{1},x_{2},\ldots,x_{t}\in\Sigma^{\star}\) that are equivalent with respect to \(R\), runs in time polynomial in \(\sum_{i=1}^{t}|x_{i}|\), and outputs one instance \((y,k)\in\Sigma^{\star}\times\mathbb{N}_{0}\) such that:_
* \(k\leq p(\max_{i=1}^{t}|x_{i}|+\log t)\) _for some polynomial function_ \(p\)_, and_
* \(Q(y,k)=\sum_{i=1}^{t}P(x_{i})\)_._
We pose the following conjecture, which will be the basis of the lower bounds presented in this section.
**Conjecture 7.2** (**SUM-Conjecture**).: _Assume that a #P-hard counting problem \(P\) SUM-cross-composes into a well-behaved parameterized counting problem \(Q\). Then, \(Q\) does not admit a polynomial compression._
We first analyze the #\(k\)-Min \((s,t)\)-Cut problem, whose unparameterized version is #P-hard:
**Proposition 7.3** ([42]).: #Min \((s,t)\)-Cut _is #P-hard._
Since #Min \((s,t)\)-Cut is #P-hard, we will derive the hardness of the kernelization of #\(k\)-Min \((s,t)\)-Cut from the following lemma.
**Lemma 7.4**.: \(\#\textsc{Min}\ (s,t)\)-Cut _SUM-cross-composes into \(\#k\textsc{-Min}\ (s,t)\)-Cut._
Proof.: First, we specify the equivalence relation \(R\): Two strings \(x\) and \(x^{\prime}\) satisfy \(x\equiv_{R}x^{\prime}\) if and only if they do not encode instances of \(\#\textsc{Min}\ (s,t)\)-Cut, or they encode instances \(x=(G,s,t)\) and \(x^{\prime}=(G^{\prime},s^{\prime},t^{\prime})\) of \(\#\textsc{Min}\ (s,t)\)-Cut and the size of a minimum \((s,t)\)-cut in \(G\) is equal to the size of a minimum \((s^{\prime},t^{\prime})\)-cut in \(G^{\prime}\). Because the Min \((s,t)\)-Cut problem is solvable in polynomial time [9], it follows that \(R\) is polynomial.
Now, we describe the SUM-cross-composition. For this purpose, consider a sequence of strings \(x_{1},x_{2},\ldots,x_{\ell}\) that are equivalent with respect to \(R\). If they do not encode instances of \(\#\textsc{Min}\ (s,t)\)-Cut, then we can simply output a string that does not encode an instance of \(\#k\textsc{-Min}\ (s,t)\)-Cut. Hence, we suppose that for every \(i\in[\ell]\), \(x_{i}=(G_{i},s_{i},t_{i})\); then, by the definition of \(R\), there is a common value \(k\) such that, for every \(i\in[\ell]\), the size of a minimum \((s_{i},t_{i})\)-cut in \(G_{i}\) is \(k\). Without loss of generality, we suppose that the vertex sets of these graphs are pairwise disjoint (otherwise, we can rename them). Now, we construct an instance \((G,s,t)\) of \(\#k^{\prime}\textsc{-Min}\ (s,t)\)-Cut. (We will argue that \(k^{\prime}=k\).) Let:
\[V(G)=\bigcup_{i=1}^{\ell}(V(G_{i})\setminus\{t_{i}\})\cup\{t_{\ell}\},\ \text{and}\]
\[E(G)=\bigcup_{i=1}^{\ell-1}(E(G_{i}-\{t_{i}\})\cup\{\{v,s_{i+1}\}:\{v,t_{i}\} \in E(G_{i})\})\cup E(G_{\ell}).\]
Let \(s=s_{1}\) and \(t=t_{\ell}\). We refer to Fig. 1 for an illustration. Clearly, the construction can be done in polynomial time. For the sake of simplicity of the presentation, for every \(i\in[\ell-1]\), we abuse notation and refer to the vertex \(s_{i+1}\) in \(G\) also as \(t_{i}\); thus, for example, we refer to an edge \(\{u,s_{i+1}\}\) in \(G\) also as the edge \(\{u,t_{i}\}\) (which belongs to \(G_{i}\)). Observe that, under this notation abuse, we simply have that \(E(G)=\bigcup_{i=1}^{\ell}E(G_{i})\).
For the correctness of the composition, we state the two following claims. The correctness of these two claims is immediate from the construction of \(G\).
**Claim 7.5**.: _Let \(S\) be a minimum \((s,t)\)-cut in \(G\). Then, there exists \(i\in[\ell]\) such that \(S\subseteq E(G_{i})\) and \(S\) is a minimum \((s_{i},t_{i})\)-cut in \(G_{i}\)._
**Claim 7.6**.: _Let \(S\) be a minimum \((s_{i},t_{i})\)-cut in \(G_{i}\), for some \(i\in[\ell]\). Then, \(S\) is a minimum \((s,t)\)-cut in \(G\)._
Observe that, from Claims 7.5 and 7.6, it follows that \(k^{\prime}=k\), and that the number of minimum \((s,t)\)-cuts in \(G\) is equal to the sum, over all \(i\in[\ell]\), of the number of minimum \((s_{i},t_{i})\)-cuts in \(G_{i}\). Thus, the proof is complete.
Having Lemma 7.4 at hand, we proceed to consider PPTs that transfer the hardness to \(\#k\textsc{-Odd Cycle Transversal}\) and \(\#\ell\textsc{-Vertex Cover}\) (and \(\#m\textsc{-Vertex Cover}\)). First, we present a PPT from \(\#k\textsc{-Min}\ (s,t)\)-Cut to \(\#k\textsc{-Odd Cycle Transversal}\). For the PPT from \(\#k\textsc{-Odd Cycle Transversal}\) to \(\#\ell\textsc{-Vertex Cover}\) (and \(\#m\textsc{-Vertex Cover}\)), we will suppose that the instances of \(\#k\textsc{-Odd Cycle Transversal}\) satisfy a particular property, hence we already define it now, and prove that our PPT from \(\#k\textsc{-Min}\ (s,t)\)-Cut to \(\#k\textsc{-Odd Cycle Transversal}\) only produces instances with this property.
Figure 1: The construction in the proof of Lemma 7.4.
**Definition 7.7** (**Nice Instances of \(\#k\)-Odd Cycle Transversal**).: _An instance \((G,k)\) of \(\#k\)-Odd Cycle Transversal is nice if for every odd cycle transversal \(S\) of \(G\) of size at most \(k\), \(G-S\) is a connected graph._
We now present our PPT from \(\#k\)-Min \((s,t)\)-Cut to \(\#k\)-Odd Cycle Transversal.
**Lemma 7.8**.: _There exists a PPT from \(\#k\)-Min \((s,t)\)-Cut to \(\#k\)-Odd Cycle Transversal. Moreover, the PPT only produces nice instances of \(\#k\)-Odd Cycle Transversal._
Proof.: For the description of the PPT, let \((G,s,t)\) be an instance of \(\#k\)-Min \((s,t)\)-Cut. Without loss of generality, we suppose that \(G\) is a connected graph, else we can discard all connected components that do not contain \(s\) or \(t\), and, if \(s\) and \(t\) are not in the same connected component, then we already know that the solution is \(1\) (the only minimum \((s,t)\)-cut is the empty set), and hence the PPT is trivial. Then, we construct an instance \((G^{\prime},k)\) of \(\#k\)-Odd Cycle Transversal. Here, the parameter \(k\) is the size of a minimum \((s,t)\)-cut in \(G\).
First, let \(G_{1}\) be the graph obtained from \(G\) by subdividing each edge (once). For an edge \(\{u,v\}\in E(G)\), we denote the corresponding vertex in \(G_{1}\) by \(a_{\{u,v\}}\). Let \(G_{2}\) be the graph whose vertex set is \(\{v_{i}:v\in V(G),i\in[k+1]\}\cup(V(G_{1})\setminus V(G))\), and whose edge set is \(\{\{u_{i},a_{\{u,v\}}\}:\{u,v\}\in E(G),i\in[k+1]\}\). That is, \(G_{2}\) is the result of the replacement of every vertex of \(G_{1}\) that belongs to \(G\) by \(k+1\) copies (false twins) of that vertex. Lastly, we define \(G^{\prime}\):
\[V(G^{\prime})=V(G_{2})\cup\{x_{i}:i\in[k+1]\}\cup\{y_{i}:i\in[k+1]\},\text{ and}\]
\[E(G^{\prime})=E(G_{2})\cup\{\{x_{i},y_{i}\}:i\in[k+1]\}\cup\{\{s_{i},x_{j}\}:i,j\in[k+1]\}\cup\{\{t_{i},x_{j}\}:i,j\in[k+1]\}.\]
Clearly, the construction (performed by the reduction procedure of the PPT) can be done in polynomial time.
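The following sketch (ours, for illustration only; `oct_instance` is a hypothetical helper and networkx is used only as a convenient graph data structure) mirrors the construction of \(G^{\prime}\): every edge of \(G\) is subdivided, every original vertex is replaced by \(k+1\) false twins, and the pendant edges \(\{x_{j},y_{j}\}\) are attached to all copies of \(s\) and \(t\).

```python
import networkx as nx

def oct_instance(G, s, t, k):
    """Build (G', k) from (G, s, t); k is the (polynomial-time computable)
    size of a minimum (s,t)-cut in G, taken here as given."""
    Gp = nx.Graph()
    for u, v in G.edges():
        a_e = ("a", frozenset((u, v)))           # subdivision vertex a_{u,v}
        for i in range(k + 1):                   # k+1 false twins per vertex
            Gp.add_edge((u, i), a_e)
            Gp.add_edge((v, i), a_e)
    for j in range(k + 1):
        Gp.add_edge(("x", j), ("y", j))          # odd-cycle-closing edges
        for i in range(k + 1):
            Gp.add_edge((s, i), ("x", j))
            Gp.add_edge((t, i), ("x", j))
    return Gp, k
```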
For correctness, we have the following claims.
**Claim 7.9**.: _Let \(G^{\prime\prime}=G^{\prime}-\{\{x_{i},y_{i}\}:i\in[k+1]\}\). Then, \(G^{\prime\prime}\) is bipartite._
Proof.: Consider the following partition \((X,Y)\) of \(V(G^{\prime\prime})\):
\[X=\{a_{\{u,v\}}:\{u,v\}\in E(G)\}\cup\{x_{i}:i\in[k+1]\}\cup\{y_{i}:i\in[k+1] \},\text{ and}\]
\[Y=V(G^{\prime\prime})\setminus X=\{v_{i}:v\in V(G),i\in[k+1]\}.\]
From the construction of \(E(G^{\prime\prime})\), it should be clear that \(E(G^{\prime\prime})\subseteq\{\{x,y\}:x\in X,y\in Y\}\).
**Claim 7.10**.: _Let \(C\) be an odd cycle in \(G^{\prime}\). Then, there exists a path \(P=v^{1}-v^{2}-\ldots-v^{\ell}\) where \(v^{1}=s\) and \(v^{\ell}=t\) in \(G\), and \(i_{1},i_{2},\ldots,i_{\ell}\in[k+1]\) such that:_
\[C\supseteq v^{1}_{i_{1}}-a_{\{v^{1},v^{2}\}}-v^{2}_{i_{2}}-a_{\{v^{2},v^{3}\}} -v^{3}_{i_{3}}-\cdots-v^{\ell-1}_{i_{\ell-1}}-a_{\{v^{\ell-1},v^{\ell}\}}-v^ {\ell}_{i_{\ell}}.\]
Proof.: Due to Claim 7.9 and since a graph is bipartite if and only if it does not contain any odd cycle, there exists \(j\in[k+1]\) such that \(\{x_{j},y_{j}\}\in E(C)\). Targeting a contradiction, suppose that \(C\) does not contain a path of the form stated in the claim. Thus, the definition of \(G^{\prime}\) implies that there exist \(r_{0},r_{1},\ldots,r_{s}\) such that
\[C=x_{r_{0}}-P_{r_{1}}-x_{r_{1}}-y_{r_{1}}-P_{r_{2}}-y_{r_{2}}-x_{r_{2}}-P_{r_{3 }}-x_{r_{3}}-y_{r_{3}}-\cdots-P_{r_{s-1}}-x_{r_{s-1}}-y_{r_{s-1}}-P_{r_{s}}-y_ {r_{s}}-x_{r_{s}},\]
where \(x_{j}=x_{r_{0}}=x_{r_{s}}\) and \(y_{j}=y_{r_{s}}\), and, for every \(i\in[s]\), \(P_{r_{i}}\) is a path in \(G^{\prime\prime}\) (defined in Claim 7.9) whose endpoints satisfy that they are adjacent to the vertices specified above (\(x_{r_{i-1}}\) and \(x_{r_{i}}\) if \(i\) is odd, and \(y_{r_{i-1}}\) and \(y_{r_{i}}\) if \(i\) is even). Observe that, necessarily, \(s\) is even. From Claim 7.9 (specifically, consider the bipartition defined in the proof), we know that each \(P_{r_{i}}\), \(i\in[s]\), along with the edge before it and the edge after it, has an even number of edges. Besides this, all other edges of \(C\) are \(\{x_{r_{i}},y_{r_{i}}\}\), \(i\in[s]\). However, since \(s\) is even, this means that their number is even as well. Overall, we derive that \(C\) contains an even number of edges, which is a contradiction (since \(C\) is an odd cycle).
**Claim 7.11**.: _Let \(P=v^{1}-v^{2}-\ldots-v^{\ell}\), where \(v^{1}=s\) and \(v^{\ell}=t\), be a path in \(G\), and let \(i_{1},i_{2},\ldots,i_{\ell},j\in[k+1]\). Additionally, let:_
\[C=v_{i_{1}}^{1}-a_{\{v^{1},v^{2}\}}-v_{i_{2}}^{2}-a_{\{v^{2},v^{3}\}}-v_{i_{3}} ^{3}-\cdots-v_{i_{\ell-1}}^{\ell-1}-a_{\{v^{\ell-1},v^{\ell}\}}-v_{i_{\ell}}^{ \ell}-x_{j}-y_{j}-v_{i_{1}}^{1}.\]
_Then, \(C\) is an odd cycle in \(G^{\prime}\)._
Proof.: From the definition of \(G^{\prime}\), it is immediate that \(C\) is a cycle in \(G^{\prime}\), and, clearly, \(C\) contains an odd number (being \(2(\ell-1)+3=2\ell+1\)) of edges.
**Claim 7.12**.: _Let \(S\) be a minimum \((s,t)\)-cut in \(G\). Then, \(S^{\prime}=\{a_{e}:e\in S\}\) is an odd cycle transversal of size at most \(k\) in \(G^{\prime}\)._
Proof.: Since \(|S|=k\), it follows that \(|S^{\prime}|\leq k\). Now, targeting a contradiction, suppose that \(S^{\prime}\) is not an odd cycle transversal of \(G^{\prime}\). So, \(G^{\prime}-S^{\prime}\) contains some odd cycle \(C\). By Claim 7.10, there exists a path \(P=v^{1}-v^{2}-\ldots-v^{\ell}\) where \(v^{1}=s\) and \(v^{\ell}=t\) in \(G\), and \(i_{1},i_{2},\ldots,i_{\ell}\in[k+1]\) such that \(C\supseteq v_{i_{1}}^{1}-a_{\{v^{1},v^{2}\}}-v_{i_{2}}^{2}-a_{\{v^{2},v^{3}\}}-v_{i_{3}}^{3}-\cdots-v_{i_{\ell-1}}^{\ell-1}-a_{\{v^{\ell-1},v^{\ell}\}}-v_{i_{\ell}}^{\ell}\). Since \(V(C)\cap S^{\prime}=\emptyset\), we have that \(a_{\{v^{1},v^{2}\}},a_{\{v^{2},v^{3}\}},\ldots,a_{\{v^{\ell-1},v^{\ell}\}}\notin S^{\prime}\). In turn, this implies that \(P\) exists in \(G-S\). However, since \(S\) is an \((s,t)\)-cut in \(G\), we have thus reached a contradiction.
**Claim 7.13**.: _Let \(S^{\prime}\) be an odd cycle transversal of \(G^{\prime}\) of size at most \(k\). Then, \(S=\{e\in E(G):a_{e}\in S^{\prime}\}\) is a minimum \((s,t)\)-cut in \(G\). Moreover, \(G^{\prime}-S^{\prime}\) is a connected graph._
Proof.: We first show that \(S\) is a minimum \((s,t)\)-cut in \(G\). Since \(|S^{\prime}|\leq k\), it follows that \(|S|\leq k\). Now, targeting a contradiction, suppose that \(S\) is not an \((s,t)\)-cut in \(G\). So, \(G-S\) contains some path \(P=v^{1}-v^{2}-\ldots-v^{\ell}\) where \(v^{1}=s\) and \(v^{\ell}=t\). Because \(|S^{\prime}|\leq k\), there exist \(i_{1},i_{2},\ldots,i_{\ell},j\in[k+1]\) such that, for every \(r\in[\ell]\), \(v_{i_{r}}^{r}\notin S^{\prime}\), and \(x_{j},y_{j}\notin S^{\prime}\). By Claim 7.11, \(C=v_{i_{1}}^{1}-a_{\{v^{1},v^{2}\}}-v_{i_{2}}^{2}-a_{\{v^{2},v^{3}\}}-v_{i_{3}}^{3}-\cdots-v_{i_{\ell-1}}^{\ell-1}-a_{\{v^{\ell-1},v^{\ell}\}}-v_{i_{\ell}}^{\ell}-x_{j}-y_{j}-v_{i_{1}}^{1}\) is an odd cycle in \(G^{\prime}\). Since \(P\) belongs to \(G-S\), we have that \(a_{\{v^{1},v^{2}\}},a_{\{v^{2},v^{3}\}},\ldots,a_{\{v^{\ell-1},v^{\ell}\}}\notin S^{\prime}\). Hence, from our choice of \(i_{1},i_{2},\ldots,i_{\ell},j\), it follows that \(C\) belongs to \(G^{\prime}-S^{\prime}\). However, since \(S^{\prime}\) is an odd cycle transversal of \(G^{\prime}\), we have thus reached a contradiction.
Because \(S\) is a minimum \((s,t)\)-cut in \(G\), it follows that \(|S|=k\). So, \(S^{\prime}=\{a_{e}:e\in S\}\). Moreover, because \(G\) is a connected graph and \(S\) is a minimum \((s,t)\)-cut in \(G\), \(G-S\) consists of exactly two connected components: one component that contains \(s\), and the other component that contains \(t\). However, by the definition of \(G_{2}\), this implies that in \(G_{2}-S^{\prime}\), every connected component must contain \(s_{i}\) or \(t_{i}\) for some \(i\in[k+1]\). In turn, by the definition of \(G^{\prime}\), this implies that \(G^{\prime}-S^{\prime}\) is a connected graph.
Observe that, from Claims 7.12 and 7.13, it follows that the number of minimum \((s,t)\)-cuts in \(G\) is equal to the number of odd cycle transversals of \(G^{\prime}\) of size at most \(k\). (In fact, every odd cycle transversal of \(G^{\prime}\) of size at most \(k\) is of size exactly \(k\).) Moreover, the second part of Claim 7.13 shows that \((G^{\prime},k)\) is nice. So, given the number of odd cycle transversals of \(G^{\prime}\) of size at most \(k\), the lifting procedure of the PPT simply outputs this number. Thus, the proof is complete.
**Lemma 7.14**.: _There exists a PPT from \(\#k\)-Odd Cycle Transversal restricted to nice instances to \(\#\ell\)-Vertex Cover and \(\#m\)-Vertex Cover._
Proof.: We refer to the PPT in [15] (see Lemma 3.10) from \(k\)-Odd Cycle Transversal to \(\ell\)-Vertex Cover and \(m\)-Vertex Cover. From the construction (and the proof of Lemma 3.10), _and because we only deal with nice instances_, we can see that, given an instance \((G,k)\) of \(k\)-Odd Cycle Transversal, and the produced instance \((G^{\prime},k^{\prime})\) of \(\ell\)-Vertex Cover (or \(m\)-Vertex Cover), the number of odd cycle transversals of \(G\) of size at most \(k\) is exactly half the number of vertex covers of \(G^{\prime}\) of size at most \(k^{\prime}\). Thus, the correctness of the lemma follows. For the sake of completeness, we present the details in Appendix B.
Observe that all problems considered in this section are well-behaved: for a graph with \(n\) vertices and \(m\) edges, \(2^{n+m}\) is a trivial upper bound on the number of solutions for all of these problems. So, from Lemmas 4.5, 7.4, 7.8 and 7.14, we directly conclude the following theorem.
**Theorem 5**.: \(\#k\)-Min \((s,t)\)-Cut_, \(\#k\)-Odd Cycle Transversal, \(\#\ell\)-Vertex Cover and \(\#m\)-Vertex Cover do not admit polynomial compressions, unless the SUM-conjecture is false._
## 8 Lower Bound Based on EXACT-Cross-Composition
Our second new notion of a cross-composition is defined as follows.
**Definition 8.1** (**Exact-Cross-Composition**).: _Let \(P:\Sigma^{\star}\rightarrow\mathbb{N}_{0}\) be a counting problem and \(Q:\Sigma^{\star}\times\mathbb{N}_{0}\rightarrow\mathbb{N}_{0}\) be a parameterized counting problem. We say that \(P\) EXACT-cross-composes into \(Q\) if there exists a polynomial equivalence relation \(R\) and an algorithm \(A\), called an EXACT-cross-composition, satisfying the following conditions. The algorithm \(A\) takes as input a sequence of strings \(x_{1},x_{2},\ldots,x_{t}\in\Sigma^{\star}\) that are equivalent with respect to \(R\), runs in time polynomial in \(\sum_{i=1}^{t}|x_{i}|\), and outputs one instance \((y,k)\in\Sigma^{\star}\times\mathbb{N}_{0}\) such that:_
* \(k\leq p(\max_{i=1}^{t}|x_{i}|+\log t)\) _for some polynomial function_ \(p\)_, and_
* _there exists a polynomial-time procedure that, given_ \(x_{1},x_{2},\ldots,x_{t},(y,k)\) _and_ \(Q(y,k)\)_, outputs_ \(P(x_{1}),P(x_{2}),\ldots,P(x_{t})\)_._
We remark that EXACT-cross-compositions seem to be harder to devise than SUM-cross-compositions. In particular, for EXACT-cross-compositions, but not for SUM-cross-compositions, we are able to prove the following theorem.
**Theorem 3**.: _Assume that a #P-hard counting problem \(P\) EXACT-cross-composes into a parameterized counting problem \(Q\). Then, \(Q\) does not admit a polynomial compression, unless #P \(\subseteq\) "NP/poly" (which implies that coNP \(\subseteq\) NP/poly)._
By #P \(\subseteq\) "NP/poly", we mean that, for any #P-complete problem \(P\), there exists a nondeterministic polynomial-time algorithm \(A\) and a sequence of strings \((\alpha_{n})_{n=0,1,2,\ldots}\), called _advice_, such that:
1. Given an instance \(x\) of \(P\) of size \(n\), \(A\) has access to \(\alpha_{n}\), and:
   1. For every computation path of \(A\), the output is either \(P(x)\) or "Do Not Know".
   2. There exists a computation path of \(A\) whose output is \(P(x)\).
2. There exists a polynomial \(p:\mathbb{N}\rightarrow\mathbb{N}\) such that \(|\alpha_{i}|\leq p(i)\) for every \(i\in\mathbb{N}\).
Another way to think of the phrase "NP/poly" is as follows. Observe that we can "force" a counting problem \(P\) to be a decision problem \(P^{\prime}\) by the addition, to each of its instances, of another argument \(s\), and, accordingly, modifying its task to that of deciding whether \(P(x)\geq s\). Further, if we let \(N_{n}\) be the maximum bitsize of the encoding of \(P(x)\) for any instance \(x\) of \(P\) of size \(n\) (that is polynomially bounded, since \(P\) is well-behaved), then we can solve \(P\) itself by making polynomially many calls to an algorithm for \(P^{\prime}\): we use this algorithm to perform a binary search over the at most \(2^{N_{n}}\) possible values of \(P(x)\). Under this interpretation, our proof implies the following statement. If a #P-hard problem \(P\) EXACT-cross-composes into a parameterized counting problem \(Q\), then \(Q\) does not admit a polynomial compression unless the "forced" decision version \(P^{\prime}\) of \(P\) can be solved by a (standard) NP/poly algorithm, or, alternatively, \(P\) can be solved by making polynomially many calls to a (standard) NP/poly algorithm.
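The binary search alluded to above is the standard one; here is a minimal sketch (ours, for illustration only), where `decide(s)` stands for the assumed oracle answering whether \(P(x)\geq s\).

```python
def count_via_decision(decide, n_bits):
    """Recover P(x) using O(n_bits) calls to decide(s) = [P(x) >= s]."""
    lo, hi = 0, 2 ** n_bits - 1        # P(x) fits in n_bits = N_n bits
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if decide(mid):                # P(x) >= mid: move the lower end up
            lo = mid
        else:                          # P(x) <  mid: move the upper end down
            hi = mid - 1
    return lo                          # lo == hi == P(x)
```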
Additionally, we would like to point out that the supposition that coNP is not contained in NP/poly (which is widely believed to be true, and is the standard supposition on which hardness results for kernelization algorithms are based [26]) implies the supposition that #P is not contained in "NP/poly". To see this, suppose that #P \(\subseteq\) "NP/poly". For example, this implies that #\(k\)-Vertex Cover, which is #P-hard [29], belongs to "NP/poly". Now, consider the complement of \(k\)-Vertex Cover, denoted by
\(Q\): Given a graph \(G\) and a non-negative integer \(k\), decide whether all vertex covers of \(G\) are of size larger than \(k\). Since \(k\)-Vertex Cover is NP-hard [32], \(Q\) is coNP-hard. However, because we suppose that \(\#k\)-Vertex Cover belongs to "NP/poly", the above discussion implies that we can determine whether its solution is at least one by making a single call to a (standard) NP/poly algorithm. This solves \(Q\), and, hence, we derive that coNP \(\subseteq\) NP/poly.
Proof of Theorem 3.: The proof follows lines similar to that of Proposition 3.9. For the sake of completeness, we present the details in Appendix C.
We proceed to present an EXACT-cross-composition for \(\#w\)-Min \((s,t)\)-Cut. We remark that we do not know how to present EXACT-cross-compositions for the problems in Section 7.
**Lemma 8.2**.: \(\#\textsc{Min}\ (s,t)\)-Cut _EXACT-cross-composes into \(\#w\)-Min \((s,t)\)-Cut._
Proof.: The equivalence relation \(R\) is the same as the one in the proof of Lemma 7.4. Now, we describe the EXACT-cross-composition. For this purpose, consider a sequence of strings \(x_{1},x_{2},\ldots,x_{\ell}\) that are equivalent with respect to \(R\). Similarly to the proof of Lemma 7.4, we suppose that for every \(i\in[\ell]\), \(x_{i}=(G_{i},s_{i},t_{i})\); then, the size of a minimum \((s_{i},t_{i})\)-cut in \(G_{i}\) is \(k\). Let \(m^{\prime}=\max_{i=1}^{\ell}|E(G_{i})|\). For every \(i\in[\ell]\), let \(q_{i}\) denote the (unknown) number of minimum \((s_{i},t_{i})\)-cuts in \(G_{i}\) (which are of size \(k\)). Observe that, if \(\ell\geq 2^{m^{\prime}}\), then, in polynomial time, we can iterate over every subset of edges of each of the graphs \(G_{i},i\in[\ell]\), and thereby compute \(q_{1},q_{2},\ldots,q_{\ell}\). In this case, the design of an EXACT-cross-composition is trivial, and hence we suppose that \(\ell<2^{m^{\prime}}\).
Now, we consider the construction of \((G,s,t)\) given in the proof of Lemma 7.4. However, here, we modify \(G\) further in order to attain the output instance. Let \(m=2m^{\prime}\). Then, the output instance of \(\#w\)-Min \((s,t)\)-Cut is \((G^{\prime},s,t)\), where \(G^{\prime}\) is defined as follows:
\[V(G^{\prime})=V(G)\cup\left(\bigcup_{i=1}^{\ell}\{x_{i}^{j}:j\in[m(\ell-1)] \}\cup\{y_{i}^{j},z_{i}^{j}:j\in[m(i-1)]\}\right),\text{ and }\]
\[E(G^{\prime})=E(G)\cup\left(\bigcup_{i=1}^{\ell}E_{i}\right),\text{ where }\]
\[E_{i}=\{\{s_{i},x_{i}^{j}\},\{x_{i}^{j},y_{i}^{j}\},\{y_{i}^{j},z_{i}^{j}\},\{ z_{i}^{j},t_{i}\}:j\in[m(i-1)]\}\cup\{\{s_{i},x_{i}^{j}\},\{x_{i}^{j},t_{i}\}:j \in[m(\ell-1)]\setminus[m(i-1)]\}.\]
We refer to Fig. 2 for an illustration. Clearly, the construction can be done in polynomial time.
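As an illustration only (ours; the helper name `attach_padding` and the use of networkx are not part of the proof), the gadget \(E_{i}\) can be sketched as follows. Each of its \(m(\ell-1)\) internally disjoint \((s_{i},t_{i})\)-paths contributes exactly one edge to a minimum \((s,t)\)-cut that separates \(s_{i}\) from \(t_{i}\), which is what produces the weights appearing in Claim 8.6 below.

```python
def attach_padding(Gp, s_i, t_i, i, ell, m):
    """Attach E_i: m*(i-1) paths of length 4 and m*(ell-1) - m*(i-1)
    paths of length 2 between s_i and t_i."""
    for j in range(m * (ell - 1)):
        x = ("x", i, j)
        if j < m * (i - 1):                      # long paths s_i-x-y-z-t_i
            y, z = ("y", i, j), ("z", i, j)
            Gp.add_edge(s_i, x)
            Gp.add_edge(x, y)
            Gp.add_edge(y, z)
            Gp.add_edge(z, t_i)
        else:                                    # short paths s_i-x-t_i
            Gp.add_edge(s_i, x)
            Gp.add_edge(x, t_i)
    return Gp
```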
For the correctness of the composition, we first present an upper bound on the treewidth of \(G^{\prime}\), which is the parameter \(w\) associated with \((G^{\prime},s,t)\).
**Claim 8.3**.: \(w=\mathsf{tw}(G^{\prime})\leq\max\{2,\max_{i=1}^{\ell}\mathsf{tw}(G_{i})+1\}\)_._
Proof.: Let \(w^{\star}=\max_{i=1}^{\ell}\mathsf{tw}(G_{i})\). For every \(i\in[\ell]\), let \(\mathcal{T}_{i}=(T_{i},\beta_{i})\) be a tree decomposition of \(G_{i}\) of width \(\mathsf{tw}(G_{i})\leq w^{\star}\), and define \(\mathcal{T}_{i}^{\prime}=(T_{i}^{\prime},\beta_{i}^{\prime})\) as follows:
* Choose \(v_{i}\in V(T_{i})\) such that \(s_{i}\in\beta_{i}(v_{i})\).
* \(V(T_{i}^{\prime})=V(T_{i})\cup\{a_{i}^{j}:j\in[m(\ell-1)]\}\cup\{b_{i}^{j},c_ {i}^{j}:j\in[m(i-1)]\}\).
* \(E(T_{i}^{\prime})=E(T_{i})\cup\{\{v_{i},a_{i}^{j}\}:j\in[m(\ell-1)]\}\cup\{\{a_ {i}^{j},b_{i}^{j}\},\{b_{i}^{j},c_{i}^{j}\}:j\in[m(i-1)]\}\).
* For every \(u\in V(T_{i})\), \(\beta_{i}^{\prime}(u)=\beta_{i}(u)\cup\{t_{i}\}\).
* For every \(j\in[m(\ell-1)]\),\(\beta_{i}^{\prime}(a_{i}^{j})=\{s_{i},x_{i}^{j},t_{i}\}\).
* For every \(j\in[m(i-1)]\), \(\beta_{i}^{\prime}(b_{i}^{j})=\{x_{i}^{j},y_{i}^{j},t_{i}\}\) and \(\beta_{i}^{\prime}(c_{i}^{j})=\{y_{i}^{j},z_{i}^{j},t_{i}\}\).
It is straightforward to verify that \(\mathcal{T}_{i}^{\prime}\) is a tree decomposition of \(G^{\prime}[V(G_{i})\cup\{x_{i}^{j}:j\in[m(\ell-1)]\}\cup\{y_{i}^{j},z_{i}^{j}:j \in[m(i-1)]\}]\), and its width is \(w_{i}^{\prime}=\max\{2,\mathsf{tw}(G_{i})+1\}\leq\max\{2,w^{\star}+1\}\).
Now, we define \(\mathcal{T}^{\prime}=(T^{\prime},\beta^{\prime})\) as follows:
* \(V(T^{\prime})=\bigcup_{i=1}^{\ell}V(T^{\prime}_{i})\).
* \(E(T^{\prime})=\bigcup_{i=1}^{\ell-1}(E(T^{\prime}_{i})\cup\{\{v_{i},v_{i+1}\}\}) \cup E(T^{\prime}_{\ell})\).
* For every \(i\in[\ell]\) and \(u\in V(T^{\prime}_{i})\), \(\beta^{\prime}(u)=\beta^{\prime}_{i}(u)\).
It is straightforward to verify that \(\mathcal{T}^{\prime}\) is a tree decomposition of \(G^{\prime}\), and its width is bounded from above by \(\max\{2,w^{\star}+1\}\). This completes the proof of the claim.
Second, we present two immediate claims about the correspondence between the cuts of \(G_{i}\), \(i\in[\ell]\), and the cuts of \(G^{\prime}\). Towards this, for all \(i\in[\ell]\), let \(C_{i}=\{(\{a_{1},b_{1}\},\{a_{2},b_{2}\},\ldots,\{a_{m(\ell-1)},b_{m(\ell-1)} \}):\) for all \(j\in[m(i-1)]\), \(\{a_{j},b_{j}\}\in\{\{s_{i},x^{j}_{i}\},\{x^{j}_{i},y^{j}_{i}\},\{y^{j}_{i},z^ {j}_{i}\},\{z^{j}_{i},t_{i}\}\}\), and for all \(j\in[m(\ell-1)]\setminus[m(i-1)],\{a_{j},b_{j}\}\in\{\{s_{i},x^{j}_{i}\},\{x^{ j}_{i},t_{i}\}\}\}\); observe that \(|C_{i}|=2^{m(i-1)}\cdot 2^{m(\ell-1)}\).
**Claim 8.4**.: _Let \(S\) be a minimum \((s,t)\)-cut in \(G^{\prime}\). Then, there exists \(i\in[\ell]\) such that \(S=A\cup B\) where \(A\) is a minimum \((s_{i},t_{i})\)-cut in \(G_{i}\) and \(B\in C_{i}\)._

**Claim 8.5**.: _Let \(A\) be a minimum \((s_{i},t_{i})\)-cut in \(G_{i}\), for some \(i\in[\ell]\). Then, for all \(B\in C_{i}\), \(S=A\cup B\) is a minimum \((s,t)\)-cut in \(G^{\prime}\)._
Additionally, let \(q\) denote the number of minimum \((s,t)\)-cuts in \(G^{\prime}\). So, from Claims 8.4 and 8.5, we arrive at the following conclusion.
**Claim 8.6**.: \(q=\sum_{i=1}^{\ell}q_{i}\cdot 2^{m(i-1)}\cdot 2^{m(\ell-1)}\)_._
We are now ready to show how to extract each \(q_{i}\), \(i\in[\ell]\), given \(q\).
**Claim 8.7**.: _There exists a polynomial-time procedure that, given \((G_{1},s_{1},t_{1}),(G_{2},s_{2},t_{2}),\ldots,(G_{\ell},s_{\ell},t_{\ell})\), \((G^{\prime},s,t)\) and \(q\), outputs \(q_{1},q_{2},\ldots,q_{\ell}\)._
Proof.: The procedure performs the following operations:
1. Initialize \(\widehat{q}\gets q\).
Figure 2: The construction in the proof of Lemma 8.2.
2. For \(i=\ell,\ell-1,\ldots,1\):
   1. Let \(q_{i}\leftarrow\lfloor\widehat{q}/(2^{m(i-1)}\cdot 2^{m(\ell-1)})\rfloor\).
   2. Update \(\widehat{q}\leftarrow\widehat{q}-q_{i}\cdot 2^{m(i-1)}\cdot 2^{m(\ell-1)}\).
3. Return \(q_{1},q_{2},\ldots,q_{\ell}\).
Clearly, the procedure runs in polynomial time. Additionally, recall that \(\ell<2^{m^{\prime}}\) and \(m=2m^{\prime}\), and observe that for every \(i\in[\ell]\), we have that \(q_{i}\in[2^{m^{\prime}}]\). Thus, for every \(i\in[\ell]\), we have that
\[\begin{array}{ll}2^{m(i-1)}\cdot 2^{m(\ell-1)}&>\ell\cdot 2^{m^{\prime}}\cdot 2 ^{m((i-1)-1)}\cdot 2^{m(\ell-1)}\\ &\geq\sum_{j=1}^{i-1}2^{m^{\prime}}\cdot 2^{m((i-1)-1)}\cdot 2^{m(\ell-1)}\\ &\geq\sum_{j=1}^{i-1}q_{j}\cdot 2^{m(j-1)}\cdot 2^{m(\ell-1)}.\end{array}\]
Due to this inequality, the correctness of the procedure follows from Claim 8.6.
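The extraction step is a simple positional decoding; a minimal sketch (ours, illustration only) follows.

```python
def extract_counts(q, ell, m):
    """Recover q_1, ..., q_ell from q = sum_i q_i * 2**(m*(i-1) + m*(ell-1)),
    assuming q_i <= 2**(m//2) and ell < 2**(m//2) as in the proof."""
    q_hat, counts = q, [0] * ell
    for i in range(ell, 0, -1):                          # i = ell, ..., 1
        weight = 2 ** (m * (i - 1) + m * (ell - 1))
        counts[i - 1], q_hat = q_hat // weight, q_hat % weight
    return counts

# Example: m = 4, ell = 2, q_1 = 3, q_2 = 2 gives q = 3*16 + 2*256 = 560,
# and extract_counts(560, 2, 4) returns [3, 2].
```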
Thus, the correctness of the composition follows from Claims 8.3 and 8.7.
As already noted in the previous section, \(\#\textsc{Min}\ (s,t)\)-Cut (and, hence, also any parameterized version of it) is well-behaved. So, from Proposition 7.3, Theorem 3 and Lemma 8.2, we directly conclude the following theorem.
**Theorem 4**.: \(\#w\textsc{-Min}\ (s,t)\textsc{-Cut}\) _does not admit a polynomial compression, unless #P \(\subseteq\) "NP/poly" (which implies that coNP \(\subseteq\) NP/poly)._
|
2306.02424 | Sanity Checks for Saliency Methods Explaining Object Detectors | Saliency methods are frequently used to explain Deep Neural Network-based
models. Adebayo et al.'s work on evaluating saliency methods for classification
models illustrate certain explanation methods fail the model and data
randomization tests. However, on extending the tests for various state of the
art object detectors we illustrate that the ability to explain a model is more
dependent on the model itself than the explanation method. We perform sanity
checks for object detection and define new qualitative criteria to evaluate the
saliency explanations, both for object classification and bounding box
decisions, using Guided Backpropagation, Integrated Gradients, and their
Smoothgrad versions, together with Faster R-CNN, SSD, and EfficientDet-D0,
trained on COCO. In addition, the sensitivity of the explanation method to
model parameters and data labels varies class-wise motivating to perform the
sanity checks for each class. We find that EfficientDet-D0 is the most
interpretable method independent of the saliency method, which passes the
sanity checks with little problems. | Deepan Chakravarthi Padmanabhan, Paul G. Plöger, Octavio Arriaga, Matias Valdenegro-Toro | 2023-06-04T17:57:51Z | http://arxiv.org/abs/2306.02424v1 | # Sanity Checks for Saliency Methods Explaining Object Detectors
###### Abstract
Saliency methods are frequently used to explain Deep Neural Network-based models. Adebayo _et al._'s work on evaluating saliency methods for classification models illustrate certain explanation methods fail the model and data randomization tests. However, on extending the tests for various state of the art object detectors we illustrate that the ability to explain a model is more dependent on the model itself than the explanation method. We perform sanity checks for object detection and define new qualitative criteria to evaluate the saliency explanations, both for object classification and bounding box decisions, using Guided Backpropagation, Integrated Gradients, and their Smoothgrad versions, together with Faster R-CNN, SSD, and EfficientDet-D0, trained on COCO. In addition, the sensitivity of the explanation method to model parameters and data labels varies class-wise motivating to perform the sanity checks for each class. We find that EfficientDet-D0 is the most interpretable method independent of the saliency method, which passes the sanity checks with little problems.
Keywords:Object detectors Saliency methods Sanity checks.
## 1 Introduction
Localizing and categorizing different object instances is pivotal in various real-world applications such as autonomous driving [7], healthcare [4], and text detection [9]. Recent advances with Deep Neural Network-based (DNN) object detectors demonstrate remarkable performance both in terms of robustness and generalization across practical use cases [3]. Even though detectors are widely needed in safety-critical applications, heavily parameterized DNN-based detectors make it difficult to understand the rationale behind their detections. In addition, object detectors are prone to non-local effects, as a slight change in the object position can affect the detector prediction [20]. Therefore, explaining detector decisions is imperative to earn user trust and to understand, to a certain extent, the reason behind predictions in safety-critical situations, overall improving system safety.
Explaining a DNN decision-making process has been addressed prominently [23][34][24][6]. Explanations are useful for debugging a model, reveal the spurious effects and biases learned by it, and underpin regulatory requirements (like GDPR). Furthermore, such explanations boost transparency and contribute towards the safety of the associated DNN-based systems [11][29]. Among the methods explaining DNNs, saliency methods are popular [21][16]; they provide an input feature attribution that highlights the most relevant pixels responsible for the model prediction. Despite extensive study of saliency methods for classification tasks, only a handful of works explain detector decisions [18][30][8]. Moreover, the evaluation metrics used to quantitatively assess detector explanations fail certain sanity checks themselves and prove to be statistically unreliable [28].
Sanity checks are basic procedures to test the ability of an explanation method to correctly explain a model decision [2], or to test the ability of an evaluation metric to correctly assess an explanation method that generates a saliency map [28]. In this paper, we are concerned with the former: we check the ability of an explanation method to generate relevant saliency-map-based explanations for detections made by an object detector. However, there is limited work studying object detector explainability, and in particular, to the best of our knowledge, basic sanity checks have not been performed. Therefore, conducting simple sanity checks to determine the quality of an explanation method is extremely important.
In this paper we conduct simple sanity checks for certain explanation methods explaining three object detector predictions. We extend the sanity checks in [2] to object detectors. The sanity checks test explanation method sensitivity towards the detectors parameters (model randomization test) and data generation method (data randomization test).
The contributions of our paper are:
* We evaluate sanity checks for saliency explanations of object detectors, both on classification and bounding box decision explanations.
* We define clear qualitative evaluation criteria for sanity checks in saliency explanations for object detectors.
* We find that modern object detectors like EfficientDet-D0 [27] seem to be more interpretable and pass more sanity checks than older detectors like SSD [15] and Faster R-CNN [19].
We expect that our work helps advance our understanding of object detector explainability and increases the use of explanations in computer vision.
## 2 Related Work
Adebayo _et al._[2] were the first to propose sanity checks for explanation methods based on randomization tests. The authors identify that various widely used explanation methods provide saliency map explanations that are independent of the model parameters and of the data used to develop the model. Widely used gradient-based explanation methods such as guided backpropagation [24] and
Guided GradCAM [22] fail both the model and data randomization sanity checks. In that work, the sanity checks are performed on classifier models such as Inception v3, a CNN, and an MLP trained on the ImageNet, Fashion MNIST, and MNIST datasets, respectively. However, Yona _et al._[33] posit that the randomization tests are distribution-dependent and modify the sanity checks proposed in [2] with a causal perspective. The model sensitivity test is performed by combining the original images with multiple or partial objects to generate saliency maps for the random and trained models. This reformulation is an attempt to spatially control the relevant features for a particular class and extract visually distinct saliency maps. Methods failing the sanity checks in [2], such as vanilla and guided backpropagation, pass this reformulated version. Kindermans _et al._[13] propose the input invariance property as a sanity check for saliency methods: the saliency method output should not be affected by transformations applied to the input, mirroring the model's sensitivity to the specific transformation. Experiments on MNIST illustrate the possibility of forcefully manipulating the explanations. The interpretability literature covers certain axioms, such as completeness [6], implementation invariance, and sensitivity [26], which are considered indicators of reliability for saliency methods. Kim _et al._[12] develop a synthetic benchmark and enable a ground-truth-based evaluation procedure. Various evaluation metrics to assess explanation methods with regard to factors such as faithfulness, robustness, and fairness are provided by [10]. Tomsett _et al._[28] conclude that the evaluation metrics assessing the faithfulness of explanations are unreliable by conducting certain sanity checks on the metrics. In this paper, we
Figure 1: Sample detection explanations using EfficientDet-D0 and SGBP, considering one saliency explanation for classification and bounding box regression decisions. We find that EfficientDet-D0 provides high quality explanations that pass sanity checks. For all figures in this paper, the saliency maps are overlaid on the corresponding original image after min-max normalization with the minimum and maximum value indicated in the corresponding heatmap.
extend the sanity checks performed by [2] based on randomization to detectors and report our findings.
## 3 Sanity Checks for Object Detection Saliency Explanations
We use two kinds of sanity checks as defined by Adebayo _et al._[2]. The model parameter and data randomization tests have been proposed to evaluate the explanation methods for classification tasks.
**Model Randomization**. The model parameter randomization test analyzes the saliency method output for a trained classifier model against the saliency method output for a model whose parameters are initialized with random values [2]. The saliency maps help to understand the explanation method's sensitivity to the model parameters and to model properties in general. A similar saliency map signifies that the saliency method will not be helpful to debug a model, as the saliency method is invariant to the model parameters.
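A minimal sketch of this cascading parameter randomization follows (ours, not from [2]); it assumes the detector is available as a PyTorch module whose parameters are registered roughly from input to output, and it re-initializes the chosen tensors with small Gaussian noise rather than each layer's original initialization scheme.

```python
import torch

def randomize_from_top(model, fraction):
    """Randomize the last `fraction` of weight tensors, starting at the head."""
    params = list(model.parameters())          # ordered roughly input -> output
    n = int(round(fraction * len(params)))     # how many tensors to destroy
    with torch.no_grad():
        for p in params[::-1][:n]:             # walk backwards from the output
            p.normal_(mean=0.0, std=0.01)      # overwrite the learned weights
    return model
```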
**Data Randomization**. In the data randomization test, the saliency maps for a model trained on a correctly labeled dataset and a model trained using randomly permuted labels are compared [2].
A similar saliency map between the two outputs illustrates the insensitivity of the explanation to the relationship between labels and input images. The saliency maps will not reflect the reason behind the label and input image relationship captured by the data generation process. If the explanations are indifferent to a random label assigned to a mammogram image, for instance, the saliency map fails to explain the real reason for a diagnosis output.
The tests serve as sanity checks to assess the scope of a particular explanation method for explaining models performing certain tasks. These are very basic assumptions made on saliency explanations and many methods fail these basic tests in classification tasks.
In this paper, we use the two randomization tests on pre-trained object detectors, for a certain set of saliency explanation methods, and we test if those detectors and explanation methods pass the basic sanity checks.
### 3.1 Quantitative Evaluation Criteria
For quantitative evaluation, in order to assess the change in saliency maps when randomizing the model parameters, the similarity between the classification decision saliency maps generated from each randomized model instance and the true model is computed using Structural Similarity (SSIM). This allows for visual changes to the saliency map to be compared and tracked.
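A small sketch of this comparison (ours; it assumes both saliency maps are 2-D arrays already min-max normalized to \([0,1]\)):

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def saliency_ssim(true_map: np.ndarray, randomized_map: np.ndarray) -> float:
    """SSIM between the trained-model map and a randomized-model map."""
    return float(ssim(true_map, randomized_map, data_range=1.0))
```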
### 3.2 Qualitative Evaluation Criteria
This section reports on the subjective analysis carried out to understand the differences in sensitivity of explanation methods across various detectors. Table 1
illustrates clearly that the ability to explain is more model-dependent than the ability of the explanation method to interpret a particular model.
A comparison is developed by visually inspecting certain aspects of the saliency map obtained using a completely randomized model and by comparing it with the saliency map generated using the trained model. The various aspects considered to indicate the magnitude of sensitivity are provided below together with a scoring guide. A visual illustration of these aspects is shown in Table 2. In the negative scenarios, the method is awarded \(-1\times\) the score listed below. A score of 1 is added to the total score if the method scores 1 for at least one aspect; this indicates that the method passes the sanity test.
Now we define criteria to evaluate a saliency map made by explaining an object detector output.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline
**OD** & **IM** & & & & & & & \\ \hline \multirow{4}{*}{ED0} & GBP & ✗ & ✗ & ✗ & ✓ & ✗ & ✓ & 7 \\ & SGBP & ✗ & ✗ & ✗ & ✓ & ✗ & ✓ & 7 \\ & IG & ✗ & ✗ & ✗ & ✓ & ✗ & ✓ & 7 \\ & SIG & ✗ & ✗ & ✗ & ✓ & ✗ & ✓ & 7 \\ \hline \multirow{4}{*}{SSD} & GBP & ✓ & ✗ & ✗ & ✓ & ✗ & ✓ & 5 \\ & SGBP & ✓ & ✗ & ✗ & ✓ & ✗ & ✓ & 5 \\ & IG & ✓ & ✗ & ✗ & ✓ & ✗ & ✓ & 5 \\ & SIG & ✓ & ✗ & ✗ & ✓ & ✗ & ✓ & 5 \\ \hline \multirow{4}{*}{FRN} & GBP & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ & 1 \\ & SGBP & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ & 1 \\ & IG & ✗ & ✗ & ✓ & ✓ & ✓ & ✓ & 5 \\ \cline{1-1} & SIG & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & 5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the subjective analysis for the model randomization test. The score is computed as explained in Sec. 3.2. The higher the score, the more sensitive the method is to the detector model parameters. Each column indicates an aspect considered to evaluate the change in the saliency map produced for the randomized model. The table is generated by scoring the majority characteristic illustrated by each detector and explanation method combination over 15 randomly sampled detections from the COCO test 2017 split.
\begin{table}
\begin{tabular}{l l l} Edge Detector & Highlight object of & Focus certain \\ (SSD\_SGBP) & interest (SSD\_GBP) & objects (FRN\_IG) \\ \end{tabular}
\end{table}
Table 2: Visual illustrations of saliency map sanity check properties. This table compares explanation patterns made by different detectors and saliency explanation methods against a randomly trained model. These results complement the qualitative evaluation we perform in this paper.
1. **Edge detector**. Saliency methods sometimes act as an edge detector which does not depend on the input image, which is undesirable [2]. A method acting as an edge detector is scored -1 because the explanations should be meaningful rather than simply behaving like an edge detector.
2. **Highlight only interest object**. Saliency explanations should be focused on the interest object inside the bounding box, assuming that the model performs adequately and is not fooled by context or background [20]. A model with randomized parameters should not exhibit this behavior, as the learned information was destroyed, and the saliency map should reflect this. When the saliency map generated using the randomized model only highlights the interest object explained, the method is awarded a -1 score.
3. **Focus more than one object**. In contrast to the previous criterion, a randomized model should focus on more than one object, as there is no object-specific information left in the model. A score of 1 is awarded to the method producing a saliency map that highlights more than a single object in the image.
4. **Texture change**. The texture of a saliency map denotes the spatial arrangement of intensity in a pattern over an image region. If the texture of the saliency map obtained using the randomized model varies from that of the saliency map of the true model, the method is awarded a score 1. For instance, the randomized model map can be a smoothened version without sharp features or completely hazy.
5. **Illustrate artifacts**. Artifacts in saliency maps are also undesirable as they show bias in the model structure and/or equations which affect the quality of a saliency map. If the saliency map from the randomized model displays certain image artifacts such as checkerboard artifacts and sharp parallel lines, the method is awarded -1.
6. **Intensity range change**. The range of pixel values in a saliency map should change as the model is randomized, reflecting the destruction of information when weights are randomized. A score of 1 is awarded if the intensity range of the saliency map (before normalization to the range 0 to 1) changes between the randomized and the true model.
## 4 Experimental Setup
**Object Detectors**. In this study we evaluate three pre-trained object detectors: Faster R-CNN (FRN) [19], SSD512 (SSD) [15], and EfficientDet-D0 (ED0) [27], all trained on the COCO dataset [14]. Details are provided in Table 3.
**Explanation Methods**. We evaluate several gradient-based saliency methods, namely Guided Backpropagation (GBP) and Integrated Gradients (IG), as well as their variations using SmoothGrad (SGBP and SIG). Mathematical details for these methods are provided in the appendix.
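As a rough illustration of what these attribution methods compute (ours, written for PyTorch; `score_fn` is an assumed wrapper that maps an input image to the scalar decision being explained, i.e., the class logit or one box coordinate of the interest detection, and a black baseline is assumed), Integrated Gradients can be sketched as follows. The SmoothGrad variants average such maps over several noisy copies of the input.

```python
import torch

def integrated_gradients(score_fn, image, steps=32):
    """Approximate IG for a scalar detector decision with a zero baseline."""
    baseline = torch.zeros_like(image)
    total = torch.zeros_like(image)
    for alpha in torch.linspace(0.0, 1.0, steps):
        x = (baseline + alpha * (image - baseline)).requires_grad_(True)
        grad, = torch.autograd.grad(score_fn(x), x)   # d(score)/d(input)
        total += grad
    return (image - baseline) * total / steps         # attribution map
```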
**Datasets**. The detectors trained on common objects are used to perform the model randomization test. The detector details are available in Table 3. Therefore, the model randomization test is carried out for all the 12 combinations of detectors
and explanation methods. The dataset used for the model randomization study is the COCO test 2017 split [14]. 15 randomly sampled images from the COCO test 2017 split are analyzed for the model randomization test. The test split is chosen because the train and validation splits are used in training the detectors.
In order to perform the data randomization test, the Marine Debris dataset [31][32] is used. This study uses two versions of SSD with a VGG16 backbone trained on the Marine Debris dataset: a true model trained with the correct labels and a random model trained with randomized labels. Details and performance of the detectors trained on the Marine Debris dataset are shown in Table 4. The random detector is trained using random class labels and by adding random noise to the ground-truth box coordinates, and training continues until the mAP@[IoU=0.5] on the train set reaches 80%. The explanations are generated for the test set images. The Marine Debris dataset is used for this experiment to avoid the time required to train a detector on the more complex COCO dataset. In addition, the Marine Debris dataset aids in studying the applicability of explaining detectors in a real-world application.
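A sketch of how such random training annotations can be produced (ours; the noise scale is an arbitrary choice for illustration):

```python
import numpy as np

def randomize_annotations(boxes, labels, num_classes, noise_scale=0.05, seed=0):
    """Random class labels plus Gaussian noise on the box coordinates."""
    rng = np.random.default_rng(seed)
    random_labels = rng.integers(0, num_classes, size=len(labels))
    noisy_boxes = boxes + rng.normal(scale=noise_scale, size=boxes.shape)
    return noisy_boxes, random_labels
```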
## 5 Results and Discussion
**Model randomization test:** The saliency maps are investigated for both the bounding box and classification decisions corresponding to a detection. The model
\begin{table}
\begin{tabular}{l l l} \hline \hline
**SSD Backbones** & **mAP (\%)** & **Input Image Size** \\ \hline VGG16 & 91.69 & 300 x 300 \\ ResNet20 & 89.85 & 96 x 96 \\ MobileNet & 70.30 & 96 x 96 \\ DenseNet121 & 73.80 & 96 x 96 \\ SqueezeNet & 68.37 & 96 x 96 \\ MiniXception & 71.62 & 96 x 96 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Details about the marine debris objector used in this work. The mAP reported is at 0.5 IoU threshold.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline & \multicolumn{4}{c}{**COCO split**} & \\
**Detector** & **Stage** & **Train set** & **Test set** & **mAP (\%)** & **Weights Code** \\ \hline Faster R-CNN & Two & train+val35k 2014 & minival2014 & 54.4 & [1] & [1] \\ SSD512 & Single & train+val35k 2014 & test-dev 2015 & 46.5 & [15] & [5] \\ EfficientDet-D0 & Single & train 2017 & test-dev 2017 & 53.0 & [27] & [5] \\ \hline \hline \end{tabular}
\end{table}
Table 3: Summary of the object detector implementations used in this work. The detectors are trained to detect common objects using the COCO dataset. The mAP reported is at 0.5 IoU threshold. val35k represents 35k COCO validation split images. minival is the remaining images in the validation set after sampling val35k.
parameter randomization randomizes the weight variables starting from the last layers. The left-most column after the interest detection, marked 0%, represents the saliency map generated using the trained model with none of the weight variables randomized. The 100% in the last column is the saliency map generated using a randomly initialized model with all weight variables completely randomized. Figure 2 illustrates the sensitivity of classification explanations to the model parameters. In the case of the EfficientDet-D0 classification explanation with SGBP, the saliency map is completely noisy without highlighting any specific feature. SSD with SGBP acts like an edge detector by sharply highlighting certain features as the number of randomized weight variables changes. However, the saliency map highlights features other than the person object. Figure 3 illustrates the sensitivity of the box coordinate \(x_{min}\) explanations using SGBP to the model characteristics. The saliency maps highlight regions of the person at a certain randomization level for SIG, as shown in Figure 4.
The magnitude of change between the saliency maps of the true and randomized models differs for each model as the weight variables are randomized. This clearly illustrates that model randomization tests should be performed for each model and method combination, as stated in the related work. Section 3.2 subjectively discusses the magnitude of the change in sensitivity across detector and explanation method combinations. Therefore, the ability to explain a model depends more on the model than on the ability of the explanation method to interpret it.
Figure 2: Model randomization test for classification explanations (red-colored box) across different models using SGBP. The first column is the detection of interest that is explained in the consecutive columns. The second column is the saliency map generated using the trained model without randomizing any parameters, which highlights the important parts such as hands, eyes, and face. The last column is the saliency map generated using a model with all parameters randomized. Note how FRN fails the randomization test.
The explanation using GBP for EfficientDet-D0 is noisy because, for this detector, GBP behaves like the Gradients method. Gradients estimates the gradient of the output target neuron with respect to the input. Since EfficientDet-D0 has no ReLU activations, the negative contributions are not suppressed and GBP effectively reduces to the Gradients method.
The SSIM in Figure 5 is the average SSIM across different percentages of randomized weight variables for a set of 15 images randomly sampled from the COCO test set. Since the explanations have changed in terms of the important pixels highlighted, the saliency map texture, and the SSIM metric with regard to the explanations using the true model, all the explanation methods pass the model randomization test for detectors.
The gradient attribution maps for the two-stage detector, Faster R-CNN, illustrate a checkerboard artifact upon randomizing weights, as shown in Figure 2. There are various reasons for such gradient artifacts, as discussed in [17][25].
In the case of using GBP and SGBP with Faster R-CNN, the higher SSIM between the classification decision saliency maps of the completely randomized model and the true model is because of the checkerboard artifact shown in Figure 2. Even though the center of mass of the grid pattern shifts over the image, the SSIM provides a higher score due to similarity in the pattern. This observation is in agreement with the subjective analysis in Section 3.2, with low sensitivity scores
Figure 3: Model randomization test for \(x_{min}\) explanations (red-colored box) across different models using SGBP. The first column is the detection of interest that is explained in the consecutive columns. The second column is the saliency map generated using the trained model without randomizing any parameters. The second column highlights the important parts such as hands, eyes, and face. The last column is the saliency map generated using a model with all parameters randomized. Note how FRN fails the randomization test.
for Faster R-CNN - GBP as well as Faster R-CNN - SGBP compared to other detector and explanation method combinations.
**Data randomization test:** Figure 6 illustrates the differences in the saliency maps explaining the classification decisions of SSD-VGG16. The attribution map intensity levels are largely different. The texture of the explanations from the
Figure 4: Comparison of explanations using different explanation methods for the classification decision corresponding to EfficientDet-D0 detections. The first column is the detection (red-colored box) explained by the methods. The second column is the saliency map generated using the trained model without randomizing any parameters. The last column is the saliency map generated using the model with all parameters randomized. SIG after randomizing 75 percentage of the weight variables visually highlight certain regions of the person detection. However, the magnitude is relatively very less and texture of the map is considerably different to the true model explanation.
Figure 5: A quantitative assessment using SSIM of the change in classification saliency map features during model randomization test across explanation methods and detectors is provided. SSIM is the average SSIM computed across a subset of test images sampled from the COCO test 2017.
random model looks smoothed, whereas the explanations generated using the true model show sharp features. There are substantial differences in the saliency maps generated using SIG for the chain detection in Figure 6. In addition, the drink-carton classification explanations for the random model show patches, whereas the drink-carton is relatively sharper in the explanation from the true model. However, the difference for the other detections is only at the level of attribution intensity and texture. This opens up the possibility of performing sanity checks at the class level: the methods should remain sensitive for each class predicted by the model. The findings are consistent in Figure 7 for explanations generated using GBP. In addition, all the explanation methods provide different saliency maps for both classification and bounding box explanations in terms of the features highlighted, the saliency map texture, and the attribution intensity. Therefore, none of the selected explanation methods fail the data randomization test for the SSD-VGG16 detector.
## 6 Conclusions and Future Work
In this work we have evaluated standard sanity checks for saliency explanations in object detectors, considering both object classification and bounding box regression explanations, through data and weight randomization. We defined new qualitative criteria to systematically evaluate saliency maps visually, and we find that overall, more modern object detectors like EfficientDet-D0 pass more sanity checks and provide higher quality saliency explanations than older detectors like SSD and Faster R-CNN.
Our conclusions hold under multiple gradient-based saliency methods: we tested Guided Backpropagation and Integrated Gradients, as well as their SmoothGrad combinations.
When Faster R-CNN fails to be explained using gradient-based saliency maps, there are large checkerboard artifacts in the explanation, which remain even as weights are randomized. SSD does not produce checkerboard patterns, but the explanation is insensitive to the weights being randomized. Only EfficientDet-D0 produces explanations that pass both data and weight randomization checks.
We expect that our work can increase interest in object detector explanations, and provide additional ways to empirically validate these explanations. We believe that our work provides additional insights not covered by [2], especially by using multiple and more complex models like object detectors.
**Limitations**. On a broader note, future work could use a larger evaluation set with better-defined evaluation metrics to assess the saliency maps. The evaluation set here is limited due to the high computation time needed to generate saliency maps for each detection in an image, for all box coordinates, the category decision, and all randomization levels.
In addition, we deem certain models not explainable based solely on the fact that a few explanation methods fail to effectively explain certain detector decisions. To make informed decisions, more explanation methods should be evaluated together with sanity checks. Our work only provides a limited view
on this problem, but we do show that explainability depends both on the model and on the saliency explanation method.
Figure 6: Data randomization test using SSD-VGG16 and SIG. The saliency maps explains the classification decision. The first column depicts the detections, the detection of interest is highlighted in white. The true and random model classification explanations differ in terms of the features highlighted, attribution intensity, and the explanation texture.
Figure 7: Data randomization test using SSD-VGG16 and GBP. The saliency maps explains the classification decision. The first column depicts the detections, the detection of interest is highlighted in white. The true and random model classification explanations differ in terms of the features highlighted, attribution intensity, and the explanation texture. |
2308.14932 | Generalized Loewy length of Cohen-Macaulay local and graded rings | We generalize a theorem of Ding relating the generalized Loewy length
$\text{g}\ell\ell(R)$ and index of a one-dimensional Cohen-Macaulay local ring
$(R,\mathfrak{m},k)$. Ding proved that if $R$ is Gorenstein, the associated
graded ring is Cohen-Macaulay, and $k$ is infinite, then the generalized Loewy
length and index of $R$ are equal. However, if $k$ is finite, equality may not
hold. We prove that if the index of a one-dimensional Cohen-Macaulay local ring
is finite and the associated graded ring has a homogeneous nonzerodivisor of
degree $t$, then $\text{g}\ell\ell(R) \leq \text{index}(R)+t-1$. Next we prove
that if $R$ is a one-dimensional hypersurface ring with a witness to the
generalized Loewy length that induces a regular initial form on the associated
graded ring, then the generalized Loewy length achieves this upper bound. We
then compute the generalized Loewy lengths of several families of examples of
one-dimensional hypersurface rings over finite fields. Finally, we study a
graded version of the generalized Loewy length and determine its value for
numerical semigroup rings. | Richard Bartels | 2023-08-28T23:08:54Z | http://arxiv.org/abs/2308.14932v3 | # Generalized Loewy length of Cohen-Macaulay local and graded rings
###### Abstract.
We generalize a theorem of Ding relating the generalized Loewy length \(\mathrm{g}\ell\ell(R)\) and index of a one-dimensional Cohen-Macaulay local ring \((R,\mathfrak{m},k)\). Ding proved that if \(R\) is Gorenstein, the associated graded ring is Cohen-Macaulay, and \(k\) is infinite, then the generalized Loewy length and index of \(R\) are equal. However, if \(k\) is finite, equality may not hold. We prove that if the index of a one-dimensional Cohen-Macaulay local ring is finite and the associated graded ring has a homogeneous nonzerodivisor of degree \(t\), then \(\mathrm{g}\ell\ell(R)\leq\mathrm{index}(R)+t-1\). Next we prove that if \(R\) is a one-dimensional hypersurface ring with a witness to the generalized Loewy length that induces a regular initial form on the associated graded ring, then the generalized Loewy length achieves this upper bound. We then compute the generalized Loewy lengths of several families of examples of one-dimensional hypersurface rings over finite fields. Finally, we study a graded version of the generalized Loewy length and determine its value for numerical semigroup rings.
Key words and phrases:Auslander's delta invariant, index, generalized Loewy length, generalized graded length, Ding's conjecture, hypersurface ring, numerical semigroup ring, Cohen-Macaulay, Gorenstein.
at one for different classes of Cohen-Macaulay rings. The smallest positive integer \(n\) for which \(\delta_{R}(R/\mathfrak{m}^{n})=1\) is the following numerical invariant defined by Auslander.
\[\operatorname{index}(R):=\inf\{n\geq 1\,|\,\delta_{R}(R/\mathfrak{m}^{n})=1\}\]
Suppose \(R\) is a Cohen-Macaulay local ring with canonical module \(\omega\). The _trace_ of \(\omega\) in \(R\), denoted \(\tau_{\omega}(R)\), is the ideal of \(R\) generated by all \(R\)-homomorphic images of \(\omega\) in \(R\). Ding proved that if \(R\) is a Cohen-Macaulay local ring with canonical module such that \(\mathfrak{m}\subseteq\tau_{\omega}(R)\), then \(\operatorname{index}(R)\) is finite and bounded above by the _generalized Loewy length_ of \(R\)[6, Proposition 2.4]. This invariant, denoted \(\operatorname{g\ell\ell}(R)\), is the smallest positive integer \(n\) for which \(\mathfrak{m}^{n}\) is contained in the ideal generated by a system of parameters of \(R\). In particular, \(\operatorname{index}(R)\leq\operatorname{g\ell\ell}(R)\) if \(R\) is Gorenstein. If, in addition to being Gorenstein, \(R\) has infinite residue field and Cohen-Macaulay associated graded ring \(\operatorname{gr}_{\mathfrak{m}}(R)\), then \(\operatorname{index}(R)=\operatorname{g\ell\ell}(R)\)[7, Theorem 2.1].
In general, if \(R\) is a Cohen-Macaulay local ring that satisfies the above equality, we say that \(R\) satisfies Ding's conjecture. In this paper, we study how the finiteness of the residue field can cause Ding's conjecture to fail. In particular, we prove that there are infinitely-many hypersurfaces with Cohen-Macaulay associated graded ring and finite residue field that do not satisfy Ding's conjecture. Each of our families of hypersurfaces generalizes an example of Hashimoto and Shida [8, Example 3.2], who showed for \(R=\mathbb{F}_{2}\llbracket x,y\rrbracket/(xy(x+y))\) that \(\operatorname{index}(R)=3\) and \(\operatorname{g\ell\ell}(R)=4\).
When \(k\) is finite, the assumption that \(\operatorname{gr}_{\mathfrak{m}}(R)\) is Cohen-Macaulay does not guarantee the existence of a homogeneous system of parameters of degree one \(x_{1}^{*},...,x_{d}^{*}\) in \((\operatorname{gr}_{\mathfrak{m}}(R))_{1}\). If a homogeneous system of parameters in \(\operatorname{gr}_{\mathfrak{m}}(R)\) does not consist of linear elements, it cannot be used in Ding's argument to prove that \(\operatorname{index}(R)=\operatorname{g\ell\ell}(R)\).
However, if \(R\) is a one-dimensional Cohen-Macaulay local ring with finite index and \(\operatorname{gr}_{\mathfrak{m}}(R)\) is Cohen-Macaulay, then we can use a homogeneous \(\operatorname{gr}_{\mathfrak{m}}(R)\)-regular element of minimal degree to obtain an upper bound for \(\operatorname{g\ell\ell}(R)\) in terms of \(\operatorname{index}(R)\). In Theorem 2.3, we prove that if \(R\) is one-dimensional Cohen-Macaulay and \(\operatorname{gr}_{\mathfrak{m}}(R)\) has a homogeneous nonzerodivisor \(z^{*}\), where \(z\in\mathfrak{m}^{t}\setminus\mathfrak{m}^{t+1}\), then
\[\operatorname{g\ell\ell}(R)\leq\operatorname{index}(R)+t-1\,.\]
If \(R\) is Gorenstein, then
\[\operatorname{index}(R)\leq\operatorname{g\ell\ell}(R)\leq\operatorname{ index}(R)+t-1\,.\]
When \(R\) is a hypersurface ring, we have \(\operatorname{index}(R)=e(R)\), where \(e(R)\) denotes the Hilbert-Samuel multiplicity of \(R\)[5, Theorem 3.3]. Therefore, the index of hypersurface rings is easy to compute: if \(R=k\llbracket x_{1},...,x_{n}\rrbracket/(f)\), \(\mathfrak{m}=(x_{1},...,x_{n})R\), and \(f\in\mathfrak{m}^{r}\setminus\mathfrak{m}^{r+1}\), then \(\operatorname{index}(R)=e(R)=r\).
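For instance, for the ring \(R=\mathbb{F}_{2}\llbracket x,y\rrbracket/(xy(x+y))\) of Hashimoto and Shida recalled above, the defining equation \(xy(x+y)\) has order \(3\), so \(\operatorname{index}(R)=e(R)=3\), while \(\operatorname{g\ell\ell}(R)=4\); this gap of one between the two invariants is exactly what the families of examples in section 3 exhibit.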
In section 3, we prove that if \(R\) is a one-dimensional hypersurface with a witness \(z\) to its generalized Loewy length that induces a regular initial form on \(\operatorname{gr}_{\mathfrak{m}}(R)\), then
\[\mathrm{g}\ell\ell(R)=\mathrm{ord}_{R}(z)+e(R)-1.\]
We then compute the generalized Loewy lengths of families of examples of one-dimensional hypersurface rings with finite residue field and Cohen-Macaulay associated graded ring. These examples illustrate differences between hypersurface rings \(R\) with finite residue field and Cohen-Macaulay associated graded ring for which \(\mathrm{g}\ell\ell(R)=\mathrm{index}(R)\) and \(\mathrm{g}\ell\ell(R)=\mathrm{index}(R)+1\). In [3], De Stefani gave examples of one-dimensional Gorenstein local rings with infinite residue field for which \(\mathrm{g}\ell\ell(R)=\mathrm{index}(R)+1\).
In section 4, we let \(R\) be a positively-graded Noetherian \(k\)-algebra, where \(k\) is an arbitrary field. We show that several families of one-dimensional graded hypersurfaces attain the graded version of the upper bound for the generalized Loewy length from Theorem 2.3. We then study a graded version of the generalized Loewy length: the _generalized graded length_ of \(R\), denoted \(\mathbf{ggl}(R)\). After determining bounds for \(\mathbf{ggl}(R)\) in terms of \(\mathrm{g}\ell\ell(R)\) and the minimum and maximum degrees of generators of \(R\), we compute the generalized graded length of numerical semigroup rings. For \(R=k[t^{a},t^{b}]\), where \(a<b\), we prove that \(\mathbf{ggl}(R)=ba-b+1\) and \(t^{a}\) is the unique witness to \(\mathbf{ggl}(R)\).
## 2. Estimating the Generalized Loewy Length of One-Dimensional Cohen-Macaulay Rings
Throughout this section, \((R,\mathfrak{m},k)\) is a local ring. We assume that \(R\) has a nonzerodivisor \(x\) of order \(t\) such that multiplication by \(x\) is injective on graded components of the associated graded ring in degrees less than \(\mathrm{index}(R)\). Generalizing [7, Lemma 2.3] to this context, we prove that if \(R\) is a one-dimensional Cohen-Macaulay local ring with finite index, then \(\mathrm{g}\ell\ell(R)\leq\mathrm{index}(R)+t-1\).
**Lemma 2.1**.: _Let \(s\) and \(t\) be positive integers and \(x\in\mathfrak{m}^{t}\setminus\mathfrak{m}^{t+1}\) an \(R\)-regular element. Suppose the induced map \(\overline{x}:\mathfrak{m}^{i-1}/\mathfrak{m}^{i}\longrightarrow\mathfrak{m}^{ i+t-1}/\mathfrak{m}^{i+t}\) is injective for \(1\leq i\leq s\). Then_
\[(\mathfrak{m}^{s+t-1},x)/x\mathfrak{m}^{s}\cong R/\mathfrak{m}^{s}\oplus( \mathfrak{m}^{s+t-1},x)/xR.\]
**Proof.** Let \(I=xR\cap\mathfrak{m}^{s+t-1}\) and \(W=(I+\mathfrak{m}^{s+t})/\mathfrak{m}^{s+t}\). Since \(W\) is a \(k\)-subspace of \(\mathfrak{m}^{s+t-1}/\mathfrak{m}^{s+t}\), there is a direct sum decomposition
\[\mathfrak{m}^{s+t-1}/\mathfrak{m}^{s+t}=W\oplus V\]
for some subspace \(V\subseteq\mathfrak{m}^{s+t-1}/\mathfrak{m}^{s+t}\). Let \(e_{1},...,e_{n}\) be a \(k\)-basis for \(V\). For each \(i\), let \(e_{i}=\overline{y_{i}}\), where \(y_{i}\in\mathfrak{m}^{s+t-1}\). Let \(B\) denote the \(R\)-submodule of \((\mathfrak{m}^{s+t-1},x)/x\mathfrak{m}^{s}\) generated by \([y_{1}],...,[y_{n}]\in(\mathfrak{m}^{s+t-1},x)/x\mathfrak{m}^{s}\). We will prove that \((\mathfrak{m}^{s+t-1},x)/x\mathfrak{m}^{s}=A\oplus B\), where \(A=xR/x\mathfrak{m}^{s}\). First we show
that
\[A+B=(\mathfrak{m}^{s+t-1},x)/x\mathfrak{m}^{s}.\]
Choose \(r_{1},...,r_{\alpha}\in R\) such that \(I=(r_{1}x,...,r_{\alpha}x)\). Then \(\mathfrak{m}^{s+t-1}/\mathfrak{m}^{s+t}\) is generated as a vector space by \(\{\overline{r_{i}x}\}_{i=1}^{\alpha}\cup\{\overline{y}_{j}\}_{j=1}^{n}\), and by Nakayama's lemma, \(\mathfrak{m}^{s+t-1}\) is generated as an \(R\)-module by \(\{r_{i}x\}_{i=1}^{\alpha}\cup\{y_{j}\}_{j=1}^{n}\). Let \([z]\in(\mathfrak{m}^{s+t-1},x)/x\mathfrak{m}^{s}\). Then \([z]=r[x]+r^{\prime}[v]\), where \(r,r^{\prime}\in R\), \(v\in\mathfrak{m}^{s+t-1}\), and \(v=r^{\prime\prime}x+\sum_{i=1}^{n}\rho_{i}y_{i}\), where \(r^{\prime\prime},\rho_{i}\in R\). So
\[[z]=(r+r^{\prime}r^{\prime\prime})[x]+\sum_{i=1}^{n}r^{\prime}\rho_{i}[y_{i} ]\in A+B.\]
Now we show that \(A\cap B=0\). Let \([z]\in A\cap B\). Then \([z]=a[x]=\sum_{i=1}^{n}a_{i}[y_{i}]\), where \(a,a_{i}\in R\), and \(ax-\sum_{i=1}^{n}a_{i}y_{i}\in x\mathfrak{m}^{s}\). Let \(ax-\sum_{i=1}^{n}a_{i}y_{i}=xy\), where \(y\in\mathfrak{m}^{s}\). Then \(\sum_{i=1}^{n}a_{i}y_{i}=(a-y)x\in I\), so
\[\overline{(a-y)x}=\overline{0}\in\mathfrak{m}^{s+t-1}/\mathfrak{m}^{s+t}.\]
If \(a=y\) we are done, so assume \(a-y\neq 0\). Then there is a nonnegative integer \(l\) such that \(a-y\in\mathfrak{m}^{l}\setminus\mathfrak{m}^{l+1}\). Suppose \(0\leq l<s\). Since \(\overline{(a-y)x}=\overline{0}\) in \(\mathfrak{m}^{l+t}/\mathfrak{m}^{l+t+1}\), it follows from the injectivity of the induced map \(\overline{x}\) that \(a-y\in\mathfrak{m}^{l+1}\), a contradiction. Therefore, \(a-y\in\mathfrak{m}^{s}\), and \(ax-xy\in x\mathfrak{m}^{s}\). Since \(xy\in x\mathfrak{m}^{s}\), \(ax\in x\mathfrak{m}^{s}\), and \([z]=a[x]=[0]\).
It follows that \((\mathfrak{m}^{s+t-1},x)/x\mathfrak{m}^{s}=xR/x\mathfrak{m}^{s}\oplus B\) and \(B\cong(\mathfrak{m}^{s+t-1},x)/xR\). Since \(x\) is \(R\)-regular, it follows that \((\mathfrak{m}^{s+t-1},x)/x\mathfrak{m}^{s}\cong R/\mathfrak{m}^{s}\oplus( \mathfrak{m}^{s+t-1},x)/xR\).
**Lemma 2.2**.: _Let \((R,\mathfrak{m})\) be a local ring, \(I\subseteq R\) an ideal, and \(x,y\in\mathfrak{m}\) such that \((x,I)=(y)\). If \(I\) is not a principal ideal, then \((x)=(y)\)._
Proof.: Let \(a,b\in R\) and \(z\in I\) such that \(y=ax+bz\). Let \(c\in R\) such that \(x=cy\). Then \(y=acy+bz\) and \((1-ac)y=bz\). Suppose \(c\in\mathfrak{m}\). Then \(1-ac\) is invertible and \(y=(1-ac)^{-1}bz\in I\), so \((y)=I\), which is false. Therefore \(c\) is invertible and \((x)=(y)\).
**Theorem 2.3**.: _Let \((R,\mathfrak{m})\) be a one-dimensional Cohen-Macaulay local ring for which index\((R)\) is finite. Let \(s=\text{index}(R)\) and \(x\in\mathfrak{m}^{t}\setminus\mathfrak{m}^{t+1}\) a nonzerodivisor, where \(t\geq 1\). If the induced map_
\[\overline{x}:\mathfrak{m}^{i-1}/\mathfrak{m}^{i}\longrightarrow\mathfrak{m}^{ i+t-1}/\mathfrak{m}^{i+t}\]
_is injective for \(1\leq i\leq s\), then_
\[\text{g}\ell\ell(R)\leq\text{index}(R)+t-1.\]
_If \(\mathfrak{m}^{s+t-1}\) is not a principal ideal, then \(\mathfrak{m}^{s+t-1}\subseteq(x)\)._
**Proof.** By Lemma 2.1, \((\mathfrak{m}^{s+t-1},x)/x\mathfrak{m}^{s}\cong R/\mathfrak{m}^{s}\oplus( \mathfrak{m}^{s+t-1},x)/xR\), so there is a surjection
\[(\mathfrak{m}^{s+t-1},x)\longrightarrow R/\mathfrak{m}^{s}.\]
Therefore, \(\delta_{R}((\mathfrak{m}^{s+t-1},x))>0\). By [10, Lemma 2.5], \((\mathfrak{m}^{s+t-1},x)\) is a parameter ideal of \(R\). Let \((\mathfrak{m}^{s+t-1},x)=(y)\), where \(y\in\mathfrak{m}\) is a regular element. Since \(\mathfrak{m}^{s+t-1}\subseteq(y)\), we have \(\mathrm{g}\ell\ell(R)\leq s+t-1\). If \(\mathfrak{m}^{s+t-1}\) is not a principal ideal, then by Lemma 2.2 we have \(\mathfrak{m}^{s+t-1}\subseteq(x)\). \(\square\)
**Definition 2.4**.: Let \(R\) be a Cohen-Macaulay local ring with canonical module \(\omega\). The _trace_ of \(\omega\) in \(R\), denoted \(\tau_{\omega}(R)\), is the ideal of \(R\) generated by all \(R\)-homomorphic images of \(\omega\) in \(R\).
**Corollary 2.5**.: _Let \((R,\mathfrak{m})\) be a one-dimensional Cohen-Macaulay local ring with canonical module \(\omega\) such that \(\mathfrak{m}\subseteq\tau_{\omega}(R)\). Let \(x\in\mathfrak{m}^{t}\setminus\mathfrak{m}^{t+1}\) such that \(x^{*}\in\text{gr}_{\mathfrak{m}}(R)\) is a regular element. Then_
\[\text{index}(R)\leq\text{g}\ell\ell(R)\leq\text{index}(R)+t-1.\]
**Proof.** This follows from [6, Proposition 2.4] and Theorem 2.3. \(\square\)
## 3. Examples
In this section we derive a formula for the generalized Loewy length of one-dimensional hypersurface rings and compute the generalized Loewy lengths of several families of examples of one-dimensional hypersurfaces. The associated graded ring of each of these hypersurface rings has a homogeneous nonzerodivisor of degree one or two, so the index and generalized Loewy length differ by at most one.
Using techniques from the proof of [8, Example 3.2], we prove that for several families of hypersurfaces \(\{R_{n}\}_{n=1}^{\infty}\),
\[\mathrm{g}\ell\ell(R_{n})-\text{index}(R_{n})=1\]
for \(n\geq 1\). This difference is positive for each \(n\) because of the absence of a regular linear form in certain one-dimensional hypersurface rings over finite fields.
Throughout this section, \(S=k\llbracket x,y\rrbracket\), where \(k\) is a field and \(\mathfrak{n}=(x,y)S\). We say that the _order_ of an element \(f\in S\) is \(r\) if \(f\in\mathfrak{n}^{r}\setminus\mathfrak{n}^{r+1}\), and write \(\text{ord}_{S}(f)=r\). Let \(R=S/(f)\), where \(f\in\mathfrak{n}\). If \(z=\overline{h}\in R\) for some \(h\in S\), then \(\text{ord}_{R}(z):=\text{ord}_{S}(h)\) if \(h\not\in(f)\), and \(\text{ord}_{R}(z)=0\) otherwise. Let \(\mathfrak{m}=(x,y)R\). Recall that \(\text{index}(R)=e(R)\). Finally, if \((R,\mathfrak{m})\) is any local ring of embedding dimension \(n\), then \(\mu_{R}(\mathfrak{m}^{r})\leq{n+r-1\choose r}\).
**Lemma 3.1**.: _Let \(R=k[\![x,y]\!]/(f)\), where \(\text{ord}_{S}(f)=e\) and \(g=g\ell\ell(R)\). Let \(z\in\mathfrak{m}\) such that \(\mathfrak{m}^{g}\subseteq(z)\) and \(i\geq 0\). If \(g\ell\ell(R)\leq e+i\), then \(\text{ord}_{R}(z)\leq i+1\)._
**Proof.** Let \(\text{ord}_{R}(z)=r\) and \(\zeta\in\mathfrak{n}^{r}\setminus\mathfrak{n}^{r+1}\) such that \(\overline{\zeta}=z\). Then \(\mathfrak{n}^{g}\subseteq(f,\zeta)\). Let \(M\) be the \(k\)-vector space of leading forms of degree \(g\) of elements of \((f,\zeta)\). Since \(\text{ord}_{S}(\zeta)=r\), we obtain leading forms of degree \(g\) from this element by multiplying \(\zeta\) by generators of \(\mathfrak{n}^{g-r}\). Therefore,
\[\dim_{k}M\leq\binom{2+(g-e)-1}{g-e}+\binom{2+(g-r)-1}{g-r}=2g-(e+r)+2.\]
On the other hand, the vector space of forms of degree \(g\) in \(\mathfrak{n}^{g}\) has dimension \(g+1\). Therefore, \(g+1\leq 2g-(e+r)+2\) and \(e+r\leq g+1\). The result follows from this inequality.
If \(R\) is a one-dimensional hypersurface with a witness \(z\) to \(\text{g}\ell\ell(R)\) that induces a regular initial form on \(\text{gr}_{\mathfrak{m}}(R)\), then we can compute \(\text{g}\ell\ell(R)\) using the following formula. We see that the order of \(z\) is uniquely determined by \(\text{g}\ell\ell(R)\) and \(e(R)\).
**Proposition 3.2**.: _Let \(R=k[\![x,y]\!]/(f)\), where \(\text{ord}_{S}(f)=e\) and \(z\in\mathfrak{m}\) such that \(z^{*}\) is \(\text{gr}_{\mathfrak{m}}(R)\)-regular. If \(z\) is a witness to \(g\ell\ell(R)\), then_
\[\text{g}\ell\ell(R)=\text{ord}_{R}(z)+e-1.\]
**Proof.** Let \(g=\text{g}\ell\ell(R)\) and \(n=g-e\). Then \(g=e+n\) and by Lemma 3.1, \(\text{ord}_{R}(z)\leq n+1\). By Theorem 2.3, \(g\leq e+\text{ord}_{R}(z)-1\leq e+n=g\).
If we cannot find an element of a one-dimensional hypersurface that is a witness to \(\text{g}\ell\ell(R)\) and induces a regular form on \(\text{gr}_{\mathfrak{m}}(R)\), then we can use the following lemma to estimate the generalized Loewy length.
**Lemma 3.3**.: _Let \(R=k[\![x,y]\!]/(f)\), where \(\text{ord}_{S}(f)=e\geq 2\). If \(R\) has no nonzerodivisors of the form \(\alpha x+\beta y\), where \(\alpha,\beta\in k\), then \(\text{g}\ell\ell(R)>e\)._
**Proof.** Since \(x\) is a zerodivisor on \(R\), there is an element \(g\in\mathfrak{n}^{e-1}\) such that \(f=xg\). Since \(\text{index}(R)=e\), we have \(e\leq\text{g}\ell\ell(R)\) [6, Proposition 2.4]. Suppose \(\text{g}\ell\ell(R)=e\). Let \(z\in\mathfrak{m}\) such that \(\mathfrak{m}^{e}\subset(z)\). By Proposition 3.2, we have \(\text{ord}_{R}(z)=1\). Let \(\zeta\in\mathfrak{n}\setminus\mathfrak{n}^{2}\) be a preimage of \(z\). Letting an appropriate element of \(\text{GL}_{2}(k)\) act on \(S\), we may assume that \(\zeta=x-h(x,y)\) for some nonzero element \(h\in(x,y)^{2}S\).
Let \(R^{\prime}=S/(\zeta)\). Since \(S\) is a regular local ring and \(\text{ord}_{S}(\zeta)=1\), it follows that \(R^{\prime}\) is a one-dimensional regular local ring, and thus a discrete valuation ring. Let \(\overline{f}\) denote the image of \(f\) in \(R^{\prime}\). Then
\[R/(z)\cong R^{\prime}/(\overline{f}).\]
Since \(\overline{g}\in(x,y)^{e-1}R^{\prime}\) and \(\overline{x}=\overline{h}\in(x,y)^{2}R^{\prime}\), it follows that \(\overline{f}\in(x,y)^{e+1}R^{\prime}\), so \(l_{R^{\prime}}(R^{\prime}/(\overline{f}))=\text{ord}_{R^{\prime}}(\overline{f })\geq e+1\) and \(l_{R}(R/(z))\geq e+1\). Now let \(R_{1}:=R/(z)\) and \(\mathfrak{m}_{1}:=\mathfrak{m}/(z)\). Then
\[0=\mathfrak{m}_{1}^{e}\subseteq\mathfrak{m}_{1}^{e-1}\subseteq\cdot\cdot \cdot\subseteq\mathfrak{m}_{1}\subseteq R_{1}\]
is a composition series for \(R_{1}\), so \(\,l_{R}(R/(z))=e\). This is a contradiction.
If \((R,\mathfrak{m},k)\) is a one-dimensional local ring with Cohen-Macaulay associated graded ring and infinite residue field, then \(\text{gr}_{\mathfrak{m}}(R)\) has a homogeneous linear nonzerodivisor [15, p.465]. We now consider one-dimensional hypersurface rings with finite residue field such that the associated graded ring is Cohen-Macaulay, but does not have a homogeneous linear nonzerodivisor. If the associated graded ring has a homogeneous quadratic nonzerodivisor, then it follows from Theorem 2.3 and Lemma 3.3 that the difference between the generalized Loewy length and index is one.
**Proposition 3.4**.: _Let \(k\) be a finite field and \(R=k[\![x,y]\!]/y(\prod\limits_{\alpha\in k}(x+\alpha y))\). Then_
\[g\ell\ell(R)=\text{index}(R)+1=|k|+2.\]
Proof.: We construct a homogeneous nonzerodivisor of degree \(2\) in \(\text{gr}_{\mathfrak{m}}(R)\). Let \(f\in k[x]\) be a degree \(2\) irreducible polynomial. Define
\[g(x,y):=y^{2}f\!\left(\frac{x}{y}\right)\in k[x,y].\]
We claim that the element \(\overline{g}=g(\overline{x},\overline{y})\in\text{gr}_{\mathfrak{m}}(R)=k[x,y]/y(\prod\limits_{\alpha\in k}(x+\alpha y))\) is \(\text{gr}_{\mathfrak{m}}(R)\)-regular.
Let \(h\in k[x,y]\) such that \(\overline{gh}=\overline{0}\). Then there exists a polynomial \(p(x,y)\in k[x,y]\) such that
\[gh=py(\prod\limits_{\alpha\in k}(x+\alpha y)).\]
Let \(\alpha\in k\). Suppose \((x+\alpha y)\mid g\) and \(q(x,y)\in k[x,y]\) such that \((x+\alpha y)q(x,y)=g(x,y)\). Then \((x+\alpha)q(x,1)=g(x,1)=f(x)\). This contradicts the irreducibility of \(f\).
It follows that \((x+\alpha y)\mid h\). Clearly \(y\nmid g\), so \(y\mid h\) as well, and \(y(\prod\limits_{\alpha\in k}(x+\alpha y))\mid h\). Therefore we have \(\overline{h}=\overline{0}\), and \(\overline{g}\) is \(\text{gr}_{\mathfrak{m}}(R)\)-regular. By Theorem 2.3 and Lemma 3.3, \(\text{g}\ell\ell(R)=\text{index}(R)+1\)
**Remark 3.5**.: When \(k=\mathbb{F}_{2}\), Proposition 3.4 is Hashimoto and Shida's counterexample to Ding's conjecture: \(\mathbb{F}_{2}\llbracket x,y\rrbracket/(xy(x+y))\). In the following propositions, we compute the generalized Loewy lengths of families of one-dimensional hypersurface rings of the form \(k\llbracket x,y\rrbracket/(xy(x^{n}+y^{n}))\), where \(k\) is a finite field and \(n\) is a positive integer.
**Proposition 3.6**.: _Let \(n\geq 1\) and \(k\) a field such that \(\text{char}\,k\neq 2\) and \(\text{char}\,k\nmid 1+(-2)^{n}\). Let \(R=k\llbracket x,y\rrbracket/(xy(x^{n}+y^{n}))\). Then \(\mathfrak{m}^{n+2}=(x+2y)\mathfrak{m}^{n+1}\) and_
\[g\ell\ell(R)=\text{index}(R)=n+2.\]
Proof.: Since \(\mathfrak{m}^{n+1}\) is generated by \(\{x^{n+1-i}y^{i}\}_{i=0}^{n+1}\), it follows that \((x+2y)\mathfrak{m}^{n+1}\) is generated by \(\{x^{n+2-i}y^{i}+2x^{n+1-i}y^{i+1}\}_{i=0}^{n+1}\). Let
\[z_{i}=x^{n+2-i}y^{i}+2x^{n+1-i}y^{i+1}\]
for \(0\leq i\leq n+1\). Since \(xy^{n+1}=-x^{n+1}y\),
\[\begin{aligned}\sum_{i=1}^{n}(-2)^{i-1}z_{i}&=x^{n+1}y+2(-2)^{n-1}xy^{n+1}\\ &=x^{n+1}y-2(-2)^{n-1}x^{n+1}y\\ &=(1+(-2)^{n})x^{n+1}y.\end{aligned}\]
Since \(\text{char}\,k\nmid 1+(-2)^{n}\), this gives \(x^{n+1}y\in(x+2y)\mathfrak{m}^{n+1}\), and it follows that \(\mathfrak{m}^{n+2}\subseteq(x+2y)\mathfrak{m}^{n+1}\). In particular \(\mathfrak{m}^{n+2}\subseteq(x+2y)\), so \(\mathrm{g}\ell\ell(R)\leq n+2=\text{index}(R)\leq\mathrm{g}\ell\ell(R)\), and the two invariants are equal.
**Corollary 3.7**.: _Let \(k\) be a field of characteristic \(p>2\) and \(R=k\llbracket x,y\rrbracket/(xy(x^{p^{n}}+y^{p^{n}}))\), where \(n\geq 0\). Then \(\mathfrak{m}^{p^{n}+2}=(x+2y)\mathfrak{m}^{p^{n}+1}\), and_
\[g\ell\ell(R)=\text{index}(R)=p^{n}+2.\]
Proof.: First assume \(n>0\). Suppose \(p\mid 1+(-2)^{p^{n}}\). Since \(1+(-2)^{p^{n}}=1-2^{p^{n}}\), we have \(2^{p^{n}}=1\bmod p\). Since \(2^{p^{n}}=2\bmod p\), it follows that \(2=1\bmod p\), which is false. Therefore, \(p\nmid 1+(-2)^{p^{n}}\). When \(n=0\), it is easy to check that \(\mathfrak{m}^{3}\subseteq(x+2y)\mathfrak{m}^{2}\).
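As a quick check of the divisibility condition in the smallest case, take \(p=3\) and \(n=1\): then \(1+(-2)^{p^{n}}=1+(-2)^{3}=-7\), which is not divisible by \(3\); Corollary 3.7 then gives \(\mathrm{g}\ell\ell(R)=\text{index}(R)=3+2=5\) for \(R=\mathbb{F}_{3}\llbracket x,y\rrbracket/(xy(x^{3}+y^{3}))\).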
If we let \(p=2\) in Corollary 3.7, then the generalized Loewy length and index of \(R\) differ by one. This is a special case of Proposition 3.11. To prove Proposition 3.11, we require the following results about the reducibility of cyclotomic polynomials modulo prime integers and primitive roots of powers of prime integers.
**Lemma 3.8** (12, Theorem 2.47).: _Let \(K=\mathbb{F}_{q}\), where \(q\) is prime and \(q\nmid n\). Let \(\varphi\) denote Euler's totient function and \(d\) the least positive integer such that \(q^{d}=1\) mod \(n\). Then the \(n^{\text{th}}\) cyclotomic polynomial \(\,\Phi_{n}\) factors into \(\varphi(n)/d\) distinct monic irreducible polynomials in \(K[x]\) of degree \(d\)._
**Lemma 3.9** (12, Example 2.46).: _Let \(p\) be prime and \(m\in\mathbb{N}\). Then the \(p^{m}\)th cyclotomic polynomial \(\,\Phi_{p^{m}}\) equals_
\[1+x^{p^{m-1}}+x^{2p^{m-1}}+\,\cdots\,+x^{(p-1)p^{m-1}}.\]
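For example, with \(p=3\) and \(m=2\) this formula gives \(\Phi_{9}(x)=1+x^{3}+x^{6}\); since \(2\) has order \(6=\varphi(9)\) modulo \(9\), Lemma 3.8 shows that \(\Phi_{9}\) remains irreducible over \(\mathbb{F}_{2}\).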
**Lemma 3.10** (2, Proposition 3.4.1).: _Let \(p\) be a prime and \(g\) a positive integer. Then the following three assertions are equivalent:_
1. \(g\) _is a primitive root modulo_ \(p\) _and_ \(g^{p-1}\neq 1\) _mod_ \(p^{2}\)_;_
2. \(g\) _is a primitive root modulo_ \(p^{2}\)_;_
3. _For every_ \(i\geq 2\)_,_ \(g\) _is a primitive root modulo_ \(p^{i}\)_._
**Proposition 3.11**.: _Let \(R=\mathbb{F}_{2}\llbracket x,y\rrbracket/(xy(x^{2^{n}p^{m}}+y^{2^{n}p^{m}}))\), where \(m,n\geq 0\) and \(p>3\) is a prime such that \(2\) is a primitive root modulo \(p^{2}\). Then_
\[\text{g}\ell\ell(R)=\text{index}(R)+1=2^{n}p^{m}+3\]
_and_
\[\mathfrak{m}^{2^{n}p^{m}+3}\subseteq(x^{2}+xy+y^{2}).\]
_If \(m=1\), then we need only assume that \(2\) is a primitive root modulo \(p\)._
**Proof.** First assume that \(m>0\). We show that \(x^{2}+xy+y^{2}\) is \(\text{gr}_{\mathfrak{m}}(R)\)-regular. Let \(S=\mathbb{F}_{2}\llbracket x,y\rrbracket\) and suppose \(f,g\in S\) such that
\[(x^{2}+xy+y^{2})f=g(xy(x^{2^{n}p^{m}}+y^{2^{n}p^{m}}))=g(xy(x^{p^{m}}+y^{p^{m} })^{2^{n}}). \tag{3.1}\]
By Lemmas 3.8 through 3.10, \(\Phi_{p^{i}}(x)\) is an irreducible polynomial over \(\mathbb{F}_{2}\) of degree \(p^{i}-p^{i-1}\) for \(1\leq i\leq m\). We obtain the following factorization of \(x^{p^{m}}+1\) into irreducible polynomials over \(\mathbb{F}_{2}\).
\[x^{p^{m}}+1=(x+1)\prod_{i=1}^{m}\Phi_{p^{i}}(x).\]
Let \(h_{i}(x,y):=y^{p^{i}-p^{i-1}}\Phi_{p^{i}}(x/y)\) for \(i=1,...,m\). Then \(h_{i}(x,y)\) is a homogeneous polynomial of degree \(p^{i}-p^{i-1}\), and
\[x^{p^{m}}+y^{p^{m}}=(x+y)\prod_{i=1}^{m}h_{i}(x,y). \tag{3.2}\]
We claim that each \(h_{i}(x,y)\) is irreducible over \(\mathbb{F}_{2}\). Suppose \(p,q\in\mathbb{F}_{2}[x,y]\) such that
\[h_{i}(x,y)=p(x,y)q(x,y).\]
Since \(h_{i}\) is homogeneous, \(p\) and \(q\) are homogeneous. Let \(y=1\) in the above equation. Then
\[\Phi_{p^{i}}(x)=h_{i}(x,1)=p(x,1)q(x,1).\]
Since \(\Phi_{p^{i}}\) is irreducible over \(\mathbb{F}_{2}\), \(p(x,1)=\Phi_{p^{i}}(x)\) or \(q(x,1)=\Phi_{p^{i}}(x)\). Assume \(p(x,1)=\Phi_{p^{i}}(x)\). Then \(p(x,y)=h_{i}(x,y)\), so \(h_{i}(x,y)\) is irreducible. By equations (3.1) and (3.2), \(h_{i}\mid(x^{2}+xy+y^{2})\) or \(h_{i}\mid f\). Since the degree of \(h_{i}\) is
\[p^{i}-p^{i-1}=p^{i-1}(p-1)\geq 3p^{i-1},\]
it follows that \(h_{i}\mid f\). It is clear that \(x\), \(y\), and \(x+y\) divide \(f\) as well, so \(f\in(xy(x^{p^{m}}+y^{p^{m}})^{2^{n}})\), and \(x^{2}+xy+y^{2}\) is a nonzerodivisor on \(\operatorname{gr}_{\mathfrak{m}}(R)\). By Theorem 2.3 and Lemma 3.3, \(\operatorname{g\ell\ell}(R)=\operatorname{index}(R)+1\) and \(\mathfrak{m}^{2^{n}p^{m}+3}\subseteq(x^{2}+xy+y^{2})\). If \(m=0\), (3.1) becomes
\[(x^{2}+xy+y^{2})f=g(xy(x^{2^{n}}+y^{2^{n}}))=g(xy(x+y)^{2^{n}}).\]
It follows that \(f\in xy(x^{2^{n}}+y^{2^{n}})\), so \(x^{2}+xy+y^{2}\) is a nonzerodivisor on \(\operatorname{gr}_{\mathfrak{m}}(R)\). Therefore, \(\mathfrak{m}^{2^{n}+3}\subseteq(x^{2}+xy+y^{2})\) and \(\operatorname{g\ell\ell}(R)=\operatorname{index}(R)+1\).
**Remark 3.12**.: Whether there are infinitely many primes \(p\) such that \(2\) is a primitive root modulo \(p\) is an open question. This is a special case of Artin's conjecture on primitive roots [2, p.66]. A list of the first primes \(p\) for which \(2\) is a primitive root modulo \(p\) is sequence A001122 in the OEIS.
## 4. Generalized Loewy Length of Graded Algebras
We now consider positively-graded Noetherian \(k\)-algebras and a graded analogue of the generalized Loewy length of a local ring. Throughout this section, \(k\) is an arbitrary field.
**Definition 4.1**.: Let \(R=\bigoplus\limits_{i\geq 0}R_{i}\) be a positively-graded Noetherian \(k\)-algebra, where \(R_{0}=k\) and \(\mathfrak{m}=\bigoplus\limits_{i\geq 1}R_{i}\) is the irrelevant ideal. For \(n\geq 0\), let \(\mathfrak{m}_{n}:=\bigoplus\limits_{i\geq n}R_{i}\). The _generalized graded length_ of \(R\), denoted \(\operatorname{\mathsf{ggl}}(R)\), is the smallest positive integer \(n\) for which \(\mathfrak{m}_{n}\) is contained in the ideal generated by a homogeneous system of parameters.
In this context, the generalized Loewy length, \(\operatorname{g\ell\ell}(R)\), is the smallest positive integer \(n\) for which \(\mathfrak{m}^{n}\) is contained in the ideal generated by a homogeneous system of parameters.
Following Herzog, we note that all of the above definitions can be transferred accordingly to homogeneous Gorenstein \(k\)-algebras [9, p.98]. Using the graded versions of Theorem 2.3, Lemma 3.1,
and [4, Proposition 1.25], we determine that the generalized Loewy length of \(k[x,y]/(f)\) is one less than the sum of \(\deg(f)\) and the minimum degree of its homogeneous nonzerodivisors.
**Proposition 4.2**.: _Let \(R=k[x,y]/(f)\), where \(f\in k[x,y]\) is a form of degree \(e\). Suppose \(R\) has a homogeneous nonzerodivisor \(z\) and no homogeneous nonzerodivisor of degree less than \(\deg_{R}(z)\). If \(z\) is a witness to \(g\ell\ell(R)\), then \(g\ell\ell(R)=\deg_{R}(z)+e-1\)._
It is clear that for each \(n\geq 1\), we have \(\mathfrak{m}^{n}\subseteq\mathfrak{m}_{n}\), so \(g\ell\ell(R)\leq\mathbf{ggl}(R)\). We now determine upper and lower bounds for \(\mathbf{ggl}(R)\) in terms of \(g\ell\ell(R)\) and the minimum and maximum degrees of generators of \(R\).
**Proposition 4.3**.: _Let \((R,\mathfrak{m})\) be a positively-graded Noetherian \(k\)-algebra, where \(R_{0}=k\) and \(\mathfrak{m}\) is the irrelevant ideal. Suppose \(x_{1},...,x_{n}\in\mathfrak{m}\) are homogeneous elements such that \(R=k[x_{1},...,x_{n}]\). Let_
\[\text{min}\{\deg(x_{i})\}_{i=1}^{n}=a\leq b=\text{max}\{\deg(x_{i})\}_{i=1}^{n}.\]
_Then_
\[a(g\ell\ell(R))-(a-1)^{2}\leq\mathbf{ggl}(R)\leq b(g\ell\ell(R))-b+1.\]
_If \(a=b=1\), then \(\mathbf{ggl}(R)=g\ell\ell(R)\)._
**Proof.** We claim that for \(n\geq 0\), \(\mathfrak{m}_{nb+1}\subseteq\mathfrak{m}^{n+1}\). This is trivial when \(n=0\). Suppose the inclusion holds for some \(n\geq 0\). Let \(x\in\mathfrak{m}_{(n+1)b+1}\) be homogeneous, and suppose \(x=\sum\limits_{i=1}^{n}s_{i}x_{i}\), where each \(s_{i}\in R\) is homogeneous. Then \(\deg(s_{i})\geq(n+1)b+1-\deg(x_{i})\geq nb+1.\) Therefore, \(s_{i}\in\mathfrak{m}_{nb+1}\subseteq\mathfrak{m}^{n+1}\), and \(x\in\mathfrak{m}^{n+2}\). This proves the claim. Let \(n=\mathrm{g}\ell\ell(R)-1\). Then by the above inclusion, \(\mathbf{ggl}(R)\leq b(\mathrm{g}\ell\ell(R)-1)+1\).
Let \(m=\mathbf{ggl}(R)\). There exists an integer \(c\geq 0\) and an integer \(0\leq l<a\) such that \(m=ac+l\). It is clear that \(\mathfrak{m}^{i}\subseteq\mathfrak{m}_{ia}\) for \(i\geq 0\). We claim that \(\mathfrak{m}^{i+j}\subseteq\mathfrak{m}_{ia+j}\) for \(i,j\geq 0\). Fix \(i\). If the inclusion holds for some \(j\geq 0\), then
\[\mathfrak{m}^{i+j+1}=\mathfrak{m}\cdot\mathfrak{m}^{i+j}\subseteq\mathfrak{m} \cdot\mathfrak{m}_{ia+j}\subseteq\mathfrak{m}_{ia+j+1}.\]
It follows that \(\mathfrak{m}^{c+l}\subseteq\mathfrak{m}_{m}\), so \(c+l\geq\mathrm{g}\ell\ell(R)\). Since \(ac+al\geq a\,\mathrm{g}\ell\ell(R)\), we have
\[\mathbf{ggl}(R)\geq a(\mathrm{g}\ell\ell(R))-(a-1)l\]
and
\[\mathbf{ggl}(R)\geq a(\mathrm{g}\ell\ell(R))-(a-1)^{2}.\]
Let \(H=\langle a_{1},...,a_{n}\rangle\) be the numerical semigroup with unique minimal generating set \(0<a_{1}<a_{2}<\cdots<a_{n}\). Let \(C\) denote the _conductor_ of \(H\), that is, the smallest integer \(C\in H\) for which every integer larger than \(C\) is also in \(H\). Define \(k[H]:=k[t^{a_{1}},...,t^{a_{n}}]\subseteq k[t]\).
**Proposition 4.4**.: _Let \(R=k[H]\), where \(H=\langle a_{1},...,a_{n}\rangle\). Then \(\,\mathbf{ggl}(R)=C+a_{1}\)._
**Proof.** Let \(\mathfrak{m}=(t^{a_{1}},...,t^{a_{n}})\). It is clear that \(\mathfrak{m}_{C+a_{1}}\subseteq(t^{a_{1}})\), so \(\,\mathbf{ggl}(R)\leq C+a_{1}\). Let \(n,d\geq 0\) and suppose \(\mathfrak{m}_{C+n}\subseteq(t^{d})\). This inclusion holds if and only if \(t^{C+n+i}\in(t^{d})\) for all \(i\geq 0\), which is true if and only if \(\,C+n+i-d\in H\) for all \(i\geq 0\). This is equivalent to the inequality \(C+n-d\geq C\), or \(n\geq d\). Therefore, \(\mathfrak{m}_{C+a_{1}-1}\not\subseteq(t^{d})\) for all \(d\in H\setminus\{0\}\). It follows that \(\mathbf{ggl}(R)=C+a_{1}\). \(\square\)
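For example, for \(H=\langle 3,4,5\rangle\) the conductor is \(C=3\), so \(\mathbf{ggl}(k[t^{3},t^{4},t^{5}])=3+3=6\): indeed \(\mathfrak{m}_{6}\subseteq(t^{3})\) because \(6-3,7-3,8-3,\ldots\) all lie in \(H\), while \(\mathfrak{m}_{5}\not\subseteq(t^{3})\) since \(5-3=2\notin H\).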
**Corollary 4.5**.: _Let \(R=k[t^{a},t^{b}]\), where \(a<b\). Then \(\,\mathbf{ggl}(R)=ba-b+1\)._
**Proof.** The conductor of \(\,\langle a,b\rangle\,\) is \(\,ba-a-b+1\,\)[14, p.201]. \(\square\)
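For instance, \(\mathbf{ggl}(k[t^{2},t^{3}])=2\cdot 3-3+1=4\): the conductor of \(\langle 2,3\rangle\) is \(2\), so \(\mathfrak{m}_{4}\subseteq(t^{2})\), whereas \(\mathfrak{m}_{3}\not\subseteq(t^{2})\) because \(3-2=1\notin\langle 2,3\rangle\).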
Veliche notes that for \(R=k[\![t^{a},t^{b}]\!]\), where \(a<b\) and \(k\) is infinite, we have \(\mathrm{g}\ell\ell(R)=\mathrm{index}(R)=a\)[16, p.3]. She then determines formulas for the generalized Loewy lengths of Gorenstein local numerical semigroup rings of embedding dimension at least three over infinite fields. [16, Corollary 2.4, Corollary 3.3, Proposition 3.9]. If we know the conductor of the semigroup that determines one of these rings, then the generalized graded length of the corresponding graded ring is easier to compute than the generalized Loewy length of this local ring.
**Proposition 4.6**.: _Let \(R=k[H]\), where \(H=\langle a,b\rangle\) and \(a<b\). If \(z\) is a witness to \(\mathbf{ggl}(R)\), then \((z)=(t^{a})\)._
**Proof.** We have \(R\cong k[x,y]/(x^{b}-y^{a})=k[\overline{x},\overline{y}]\), where \(\deg(\overline{x})=a\), \(\deg(\overline{y})=b\), and \(\mathfrak{m}=(\overline{x},\overline{y})R\). Write \(z=\overline{x}^{i}\overline{y}^{j}\), where \(i,j\geq 0\). For each \(n\geq 1\), we have \(\mathfrak{m}^{n}\subseteq\mathfrak{m}_{na}\). Therefore,
\[\mathfrak{m}^{b}\subseteq\mathfrak{m}_{ab}\subseteq\mathfrak{m}_{ab-(b-1)}.\]
It follows that \(\mathfrak{n}^{b}\subseteq(\zeta,x^{b}-y^{a})\subseteq S\), where \(\mathfrak{n}=(x,y)S\), \(\zeta=x^{i}y^{j}\), and \(S\) denotes \(k[x,y]\) with the standard grading. Let \(M\) be the \(k\)-vector space of leading forms of degree \(b\) of elements in \((\zeta,x^{b}-y^{a})\). Let \(\mu=i+j\). Then \(\deg(\zeta)=\mu\) and \(\deg(x^{b}-y^{a})=b\). Therefore,
\[\dim_{k}M\leq\binom{2+(b-\mu)-1}{b-\mu}+1=b-\mu+2.\]
Since the vector space of forms of degree \(b\) in \(\mathfrak{n}^{b}\) has dimension \(b+1\), it follows that \(b+1\leq b-\mu+2\) and \(\mu\leq 1\). Therefore \(0<i+j\leq 1\), and it follows that \(z=\alpha\overline{x}\) or \(\ z=\alpha\overline{y}\) for some nonzero \(\alpha\in k\). By the above proposition, \(\mathfrak{m}_{ab-(b-1)}\subseteq(\overline{x})\). Suppose that also \(\mathfrak{m}_{ab-(b-1)}\subseteq(\overline{y})\). Since \(a<b\) are
coprime, we have \(b=as+r\) for some \(s>0\) and \(0<r<b\), so
\[ab-(b-1)=a(as+r)-(as+r-1)=a((a-1)s+r)-(r-1).\]
It follows that \(\overline{x}^{(a-1)s+r}\in(\overline{x})\)\(\cap\)\((\overline{y})=(\overline{xy},\overline{x}^{b})\). Since \((a-1)s+r<b\), we have \(\overline{x}^{(a-1)s+r}=\overline{fxy}\) for some \(f\in k[x,y]\) and \(x^{(a-1)s+r}-fxy=hx^{b}-hy^{a}\) for some \(h\in k[x,y]\). It follows that \(fxy-hy^{a}=x^{(a-1)s+r}-hx^{b}\). Therefore, \(y\,|\,(x^{(a-1)s+r}-hx^{b})\). Since \((a-1)s+r<b\), this is false. Therefore, \((z)=(\overline{x})\).
**Definition 4.7**.: Let \((R,\mathfrak{m})\) be a positively-graded Noetherian \(k\)-algebra, where \(R_{0}=k\) and \(\mathfrak{m}\) is the irrelevant ideal. Let \(I\subseteq R\) be a graded ideal. We say that \(I\) is a _graded reduction_ of \(\mathfrak{m}\) of degree \(d\) if there is a positive integer \(i\) such that \(I\mathfrak{m}_{i}=\mathfrak{m}_{i+d}\).
It is clear that for a numerical semigroup ring \(k[t^{a_{1}},...,t^{a_{n}}]\), the ideal \((t^{a_{1}})\) is a graded reduction of \(\mathfrak{m}\) of degree \(a_{1}\). We therefore ask the following questions, which parallel a question asked by De Stefani [3, Questions 4.5 (ii)].
**Questions 4.8**.: Suppose \(R\) is a positively-graded Noetherian \(k\)-algebra. Is there a witness to \(\mathbf{ggl}(R)\) that generates a graded reduction of \(\mathfrak{m}\)? What can be said about the degree of such a graded reduction?
## Acknowledgements
I would like to thank my thesis advisor Graham Leuschke for his support and insight into this topic, and for many helpful conversations. I would also like to thank Eloisa Grifo for her helpful suggestions for the abstract of this paper.
|
2307.15271 | Anatomy-Aware Lymph Node Detection in Chest CT using Implicit Station
Stratification | Finding abnormal lymph nodes in radiological images is highly important for
various medical tasks such as cancer metastasis staging and radiotherapy
planning. Lymph nodes (LNs) are small glands scattered throughout the body.
They are grouped or defined to various LN stations according to their
anatomical locations. The CT imaging appearance and context of LNs in different
stations vary significantly, posing challenges for automated detection,
especially for pathological LNs. Motivated by this observation, we propose a
novel end-to-end framework to improve LN detection performance by leveraging
their station information. We design a multi-head detector and make each head
focus on differentiating the LN and non-LN structures of certain stations.
Pseudo station labels are generated by an LN station classifier as a form of
multi-task learning during training, so we do not need another explicit LN
station prediction model during inference. Our algorithm is evaluated on 82
patients with lung cancer and 91 patients with esophageal cancer. The proposed
implicit station stratification method improves the detection sensitivity of
thoracic lymph nodes from 65.1% to 71.4% and from 80.3% to 85.5% at 2 false
positives per patient on the two datasets, respectively, which significantly
outperforms various existing state-of-the-art baseline techniques such as
nnUNet, nnDetection and LENS. | Ke Yan, Dakai Jin, Dazhou Guo, Minfeng Xu, Na Shen, Xian-Sheng Hua, Xianghua Ye, Le Lu | 2023-07-28T02:41:41Z | http://arxiv.org/abs/2307.15271v1 | # Anatomy-Aware Lymph Node Detection in Chest CT using Implicit Station Stratification
###### Abstract
Finding abnormal lymph nodes in radiological images is highly important for various medical tasks such as cancer metastasis staging and radiotherapy planning. Lymph nodes (LNs) are small glands scattered throughout the body. They are grouped or defined to various LN stations according to their anatomical locations. The CT imaging appearance and context of LNs in different stations vary significantly, posing challenges for automated detection, especially for pathological LNs. Motivated by this observation, we propose a novel end-to-end framework to improve LN detection performance by leveraging their station information. We design a multi-head detector and make each head focus on differentiating the LN and non-LN structures of certain stations. Pseudo station labels are generated by an LN station classifier as a form of multi-task learning during training, so we do not need another explicit LN station prediction model during inference. Our algorithm is evaluated on 82 patients with lung cancer and 91 patients with esophageal cancer. The proposed implicit station stratification method improves the detection sensitivity of thoracic lymph nodes from 65.1% to 71.4% and from 80.3% to 85.5% at 2 false positives per patient on the two datasets, respectively, which significantly outperforms various existing state-of-the-art baseline techniques such as nnUNet, nnDetection and LENS.
Keywords:Lymph node detection Lymph node station CT.
## 1 Introduction
Lymph nodes play essential roles in the staging and treatment planning of general cancer patients [4, 13]. As cancer evolves, tumor cells can spread to lymph nodes and cause them to metastasize and possibly enlarge. Finding all of the abnormal (metastatic) lymph nodes is a crucial task for radiologists and oncologists. Computed tomography (CT) is the primary modality for tumor imaging in the chest [18]. In CT, most lymph nodes can be identified as small, oval-shaped structures with soft-tissue intensity, which are challenging to be differentiated from surrounding soft tissues such as vessels, esophagus, and muscles. Due to its
importance and difficulty, automatic lymph node (LN) detection and segmentation has been attracting increasing attentions [6, 17, 14, 2, 23, 10]. Convolutional neural network (CNN) is becoming the mainstream method in recent years. Oda et al. [14] trained a 3D U-Net using not only LN annotations but also neighboring organs to reduce oversegmentation of LNs. Bouget et al. [2] combined the outputs of 2D U-Net and Mask R-CNN to predict both LNs and neighboring organs. Yan et al. [21] showed that jointly learning multiple datasets improved LN detection accuracy. Zhu et al. [23] divided LNs into two subclasses of tumor-proximal and tumor-distal ones and used a U-Net with two decoder branches to learn the two groups separately. Iuga et al. [10] designed a neural network with multi-scale inputs to fuse information from multiple spatial resolutions.
Different from other types of lesions (e.g., lung nodules) that typically locate in one organ, LNs scatter throughout the body. The anatomical location of a metastatic lymph node is an important indicator to determine the stage of the cancer and even the subsequent treatment recommendations. Taking lung cancer as an example, the International Association for the Study of Lung Cancer (IASLC) defined 14 lymph node stations in the chest based on their relative position with adjacent organs [5], as shown in Fig. 1. We can observe that LNs in different stations are surrounded by varying organs, thus show very diverse contextual layouts. To detect an LN is essentially to distinguish it from surrounding confounding organs and structures, therefore, detecting LNs in different stations may actually be considered as different tasks. Most existing works treat LNs in all stations as one positive class and define other organs as one negative class. We would argue that this representation is suboptimal because the inter-class difference between LNs and non-LNs is sometimes very subtle (e.g., Fig. 1 2R and 8). If we mix the samples from all stations, the model may struggle to learn the coherent imaging feature of LNs and be distracted by their contextual appearance. In this work, we propose to first stratify LNs and non-LNs based on
Figure 1: Lymph node (LN) stations defined for lung cancer staging [5]. The anatomical map on the left is reproduced from [19]. LN examples in some stations are shown on the right in green boxes, either in contrast-enhanced (1st row) or non-contrast (2nd row) images. Note the significant diversity of appearance across stations.
their stations, and then train an LN vs. non-LN classifier for each station group. Fig. 3 illustrates our intuition. In addition, the distributions of shape and size of LNs vary in different stations [13, 18]. Our stratification strategy could also handle this variation better by separately modeling each station.
In this paper, we instantiate this strategy and propose a station-stratified LN detector. It is based on the widely-used two-stage CNN detection architecture [16, 21] with a novel detection branch and a station branch simultaneously. The detection branch contains multiple output heads, each focusing on classifying LN/non-LN in one station group. The station branch predicts a probability vector for each proposal indicating its station group, which in turn is used by the detection branch to compute a weighted loss in training and a final LN likelihood in inference. The group can either be stations or super-stations (by grouping similar stations). A related but different method is [23]. They proposed a segmentation method that groups LNs according to their distance with the tumor, thus the location of tumor needs to be known in prior. We stratify LNs according to the anatomy-related stations and no tumor location is needed. Our method is more widely applicable even for non-cancer patients as a form of screening abnormal LNs by stations implicitly. No extra cost on LN station segmentation is needed in inference. While LN groups in [23] are manually computed and the distance threshold needs to be tuned, ours are predicted by a station branch automatically. Our algorithm employs a 2.5D backbone for better efficiency and accuracy. To convert the predicted 2D boxes to 3D ones, we further design a novel lesion-centric box stacking and merging algorithm.
The proposed framework is extensively evaluated on two datasets of 82 patients with lung cancer and 91 patients with esophageal cancer. A total of 1,380 lymph nodes were annotated in the 14 IASLC stations. By employing the proposed station stratification strategy alone, our LN detection sensitivity is improved from 65.1% to 71.4% and from 80.3% to 85.5% at 2 false positives (FPs) per patient in the two datasets, respectively, outperforming various strong mainstream methods such as nnUNet [9], nnDetection [1], and LENS [21]. To the best of our knowledge, we are the first to demonstrate that the station information can be used to improve LN detection effectively (from recent literature reported). While most prior studies used contrast-enhanced (CE) CTs, we also run our method on 85 more challenging non-contrast (NC) CT scans. Joint learning of CE and NC CT imaging modalities achieves a sensitivity of 83.8% at 2 FPs per patient of NC CT scan, which is an encouraging result for scenarios such as lung nodule screening and radiotherapy planning [23].
## 2 Method
The framework of our proposed method is illustrated in Fig. 2. It is based on the widely-used two-stage detection framework Faster R-CNN [16]. The input of the network is multiple consecutive axial CT slices. We adopt the 2.5D design in MULAN [22] as backbone. It extracts 2D features for the CT slices and aggregates them to fuse 3D context information, which is important for distinguishing LNs from other tube-shaped organs such as vessels and esophagus. We empirically find the 2.5D network outperforms pure 3D ones in both convergence speed and accuracy. The fused feature map is fed to a proposal network such as the Fully Convolutional One-Stage detector (FCOS) [20]. It learns to generate 2D LN proposals in all LN stations. We observed that the vast majority of proposals concentrate in LN station areas, but some of them are confounding false positives (FPs) such as vessels, connective tissue, and esophageal tumor inside the station, see examples in the non-LN columns of Fig. 3. This indicates that the proposal network has successfully learned the image context of the LNs, but struggles to differentiate some subtle structures inside the stations.
To solve this problem, we force the network to disambiguate true positives (TPs) and FPs inside the same station to learn more discriminative features. Specifically, we design a novel multi-head detection branch and a station branch as the second stage of the detector. Suppose that there are \(c\) stations.
Figure 3: (a) Existing LN detection algorithms mix samples in different stations and train one classifier. (b) Our method stratifies samples based on stations and learns station-specific classifiers. Samples in each group share similar contextual appearance, so the model can focus on mining the subtle discriminative features to separate LNs/non-LNs.
Figure 2: Framework of our proposed station-stratified LN detector. The blocks in green and orange are our key technical novelties. Red cross mark means gradient stopping.
The **station branch** predicts a probability vector \(\mathbf{t}_{i}\in\mathbb{R}^{c}\) to classify the station of each proposal \(i\). It is trained on manually annotated station labels using the cross-entropy loss. The loss is only computed on TP proposals since only TPs have station labels, but the branch can predict station probabilities for both TPs and FPs. Similar to Faster R-CNN [16], the **detection branch** contains a classification layer and a bounding-box regression layer. In our algorithm, we aim to train \(c\) LN vs. non-LN classifiers, each corresponding to one station (or station group). Therefore, the classification layer will output a station-specified score vector \(\mathbf{s}_{i}\in\mathbb{R}^{c}\) for each proposal \(i\). These scores have a common LN/non-LN label \(y_{i}\), so we can compute a binary cross-entropy loss for each score, forming a station-specified loss vector \(\mathbf{L}_{i}\in\mathbb{R}^{c}\). Finally, we use the station probabilities \(\mathbf{t}_{i}\) to compute a weighted sum of them:
\[L_{\text{cls}}=-\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{c}t_{ij}\left(y_{i}\log\sigma(s_{ij})+(1-y_{i})\log(1-\sigma(s_{ij}))\right), \tag{1}\]
where \(n\) is the number of proposals in a mini-batch, \(\sigma\) is the sigmoid function. During inference, station label is no longer needed because the station branch has learned to predict it. We use the predicted \(\mathbf{t}\) to compute a weighted score \(s_{i,\text{final}}=\sum_{j=1}^{c}t_{ij}s_{ij}\) for each proposal \(i\).
Ideally, \(\mathbf{t}_{ij}\) should be 1 if proposal \(i\) belongs to station \(j\) and 0 otherwise, so only \(L_{ij}\) will be counted in \(L_{\text{cls}}\), making classifier \(j\) receive positive and negative samples in the station \(j\) alone. However, some proposals may lie in the intersection area of multiple stations. The predicted station probabilities are also not ideal. Thus, we use a soft-gated loss \(L_{\text{cls}}\) weighted by \(\mathbf{t}\), which is more robust than hard-gating each proposal to only one classifier. The \(c\) classifiers are all built upon the feature vector of the final fully-connected (FC) layer in the detection branch, which can be viewed as finding an optimal subspace for each station in the feature space. As shown in Fig. 2, the station branch does not back-propagate gradients to the backbone. We find it yield better detection performance, possibly because it can make the backbone focus on learning features for LN vs. non-LN. In this study, our goal is not improving the station classification accuracy. The mean area-under-ROC-curve (AUC) for station classification is 93.5% in this setting, showing that station classification is a relatively simple task.
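To make the soft gating concrete, here is a minimal PyTorch-style sketch of the station-weighted loss in Eq. (1) and of the weighted inference score. The tensor names and shapes are illustrative assumptions rather than the authors' implementation, and weighting the per-head sigmoid probabilities at inference is one natural reading of \(s_{i,\text{final}}\).

```python
import torch
import torch.nn.functional as F

def station_gated_loss(station_logits, station_probs, labels):
    """Soft-gated LN vs. non-LN loss over c station-specific heads (Eq. 1).

    station_logits: (n, c) scores s_ij from the detection branch.
    station_probs:  (n, c) station probabilities t_ij from the station branch.
    labels:         (n,) binary LN/non-LN labels y_i shared by all heads.
    """
    n, c = station_logits.shape
    targets = labels.float().unsqueeze(1).expand(n, c)          # (n, c)
    per_head_bce = F.binary_cross_entropy_with_logits(
        station_logits, targets, reduction="none")              # (n, c)
    # Weight each head's loss by the proposal's station probability;
    # detaching t is an illustrative way to keep Eq. (1) from training
    # the station branch through this term.
    return (station_probs.detach() * per_head_bce).sum(dim=1).mean()

def station_gated_score(station_logits, station_probs):
    """Inference-time LN score: station-weighted sum of per-head probabilities."""
    return (station_probs * torch.sigmoid(station_logits)).sum(dim=1)
```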
The proposed method predicts 2D boxes for each CT slice. It is necessary to merge 2D boxes to 3D ones to describe 3D lesions. LENS [21] proposed a merging algorithm. It starts from the boxes in the first slice, and then merges boxes in the second slice that overlaps with those in the first slice in the axial plane, and repeats until the last slice. This algorithm has a drawback: it starts to generate each 3D box from its first 2D box, which corresponds to the top edge of a lesion that may be inaccurate in detection with a low confidence score. Inspired by the non-maximum suppression (NMS) algorithm, we propose to start generating each 3D box from its 2D box with the highest confidence score, as detailed in Algorithm 1 and Fig. 2. Our experiments show that this lesion-centric merging strategy outperforms the slice-wise scheme in [21].
```
Input: A list of predicted 2D boxes \(B_{2}\); Intersection-over-union (IoU) threshold \(\theta\).
Output: A list of merged 3D boxes \(B_{3}\).
1: while \(B_{2}\) is not empty do
2: Take \(b\in B_{2}\) with the highest confidence score. Suppose \(b\) is on slice \(i\).
3: Create a new 2D box list \(T=\{b\}\)
4: for slices \(i+1,i+2,\cdots\) do
5: if \(\exists\,\tilde{b}\in B_{2},\mathrm{IoU}(b,\tilde{b})>\theta\) then \(T=T\cup\{\tilde{b}\},B_{2}=B_{2}-\tilde{b}\) else stop iteration.
6: end if
7: end for
8: Repeat steps 4-7 for slices \(i-1,i-2,\cdots\)
9: Compute a 3D box \(\hat{b}\) from \(T\), whose \(x,y,z\) ranges and confidence score is the maximum of the 2D boxes in \(T\). \(B_{3}=B_{3}\cup\{\hat{b}\}\)
10: end while
```
**Algorithm 1** 3D box generation by lesion-centric 2D box merging
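A rough Python sketch of Algorithm 1 is given below. The 2D box representation (slice index, corner coordinates, confidence score) and the IoU helper are assumptions made for illustration; they are not taken from the authors' code.

```python
def iou_2d(a, b):
    """Axial-plane IoU of two 2D boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def merge_boxes_lesion_centric(boxes2d, iou_thr=0.7):
    """Greedily merge per-slice 2D boxes into 3D boxes, starting each lesion
    from its highest-confidence 2D box, as in Algorithm 1.

    boxes2d: list of dicts {'slice': int, 'box': (x1, y1, x2, y2), 'score': float}.
    Returns a list of 3D boxes (x1, y1, z1, x2, y2, z2, score).
    """
    remaining = sorted(boxes2d, key=lambda d: d["score"], reverse=True)
    merged3d = []
    while remaining:
        seed = remaining.pop(0)              # step 2: highest-scoring 2D box
        stack = [seed]
        for step in (+1, -1):                # steps 4-8: grow below, then above
            z = seed["slice"] + step
            while True:
                hit = next((d for d in remaining if d["slice"] == z
                            and iou_2d(seed["box"], d["box"]) > iou_thr), None)
                if hit is None:
                    break
                remaining.remove(hit)
                stack.append(hit)
                z += step
        # Step 9: the 3D extent spans the stacked 2D boxes; score is the maximum.
        xs1, ys1, xs2, ys2 = zip(*(d["box"] for d in stack))
        zs = [d["slice"] for d in stack]
        merged3d.append((min(xs1), min(ys1), min(zs),
                         max(xs2), max(ys2), max(zs),
                         max(d["score"] for d in stack)))
    return merged3d
```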
Figure 4: (a) Station distribution of the lung cancer dataset. (b) Station distribution of the esophageal cancer dataset. (c) Size distribution (in mm) of the lung cancer dataset. (d) Size distribution (in mm) of the esophageal cancer dataset.
## 3 Experiment
**Datasets.** Thoracic LNs can be affected by multiple cancer types [18]. In this work, we collected two datasets of different cancer origins. The **lung cancer** dataset includes contrast-enhanced (CE) CTs of 82 patients. 668 LNs were annotated by three board-certified radiation oncologist with more than 10 years of experience. All visible LNs were comprehensively annotated, whose average long and short diameters [4] are \(12.3\times 7.4\)mm (min. 1.5mm, max. 60.6mm). The **esophageal cancer** dataset contains both CE and non-contrast (NC) CTs of 91 patients. 712 LNs in stations 1-9 with average diameters of \(11.0\times 6.5\)mm (min. 2.1mm, max. 27.0mm) were annotated by the same group of oncologists. The LNs were annotated on CE CTs in which they were more distinguishable. Then, we registered NC CTs to CE ones for each patient using DEEDS [8], followed by manual verification of the registration quality. In this way, we can train and evaluate our LN detector on NC CTs as well. The masks of LN stations 1-9 were also annotated in this dataset, from which we can infer the station label of each LN. We also trained an LN station segmentation algorithm [7] using these annotations and applied it to the lung cancer dataset to infer their LN stations. Note that LNs in stations 10-14 (pulmonary nodes [5]) exist in the lung cancer dataset but not in the esophageal cancer one. The station segmentation algorithm cannot predict stations 10-14. Hence, when applying it on the lung cancer dataset, we regarded all LNs outside its predicted masks as belonging to stations 10-14. See Fig. 4 for details about distribution of LN stations and sizes in the datasets.
**Implementation details.** We implemented our algorithm using PyTorch 1.10 and mmDetection 2.18 [3]. CT images were normalized using a spacing of \(0.8\times 0.8\times 2\)mm and an intensity window of \([-200,300]\) Hounsfield unit. Data augmentation included random scaling (0.7-1.4), cropping, rotation (\(\pm 15^{\circ}\)), intensity scaling (0.7-1.3), and gamma augmentation (0.7-1.5) [15]. In training, each mini-batch consisted of 4 samples, where each sample included 9 CT slices for 3D feature fusion [22]. The station branch had two 512D FC layers, whereas the detection branch had two 2048D FC layers. We used RAdam [12] to train for 10 epochs and set the base learning rate to 0.0001, and then reduced it by a factor of 10 after the 7th epoch. In each epoch, we used all positive slices (with LN annotations) and randomly sampled 2 times of negative slices (without annotations) [21]. The entire training process took 1.5h for the esophageal cancer dataset on a Tesla V100 GPU. In the 2D box merging algorithm, we set the IoU threshold \(\theta\) to 0.7.
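For concreteness, a minimal sketch of the intensity preprocessing and augmentation described above is shown below; the function names and the exact order of operations are illustrative assumptions, and resampling to the stated voxel spacing is assumed to happen upstream.

```python
import numpy as np

HU_MIN, HU_MAX = -200.0, 300.0      # intensity window stated above
TARGET_SPACING = (0.8, 0.8, 2.0)    # target voxel spacing in mm (resampling not shown)

def window_and_normalize(volume_hu):
    """Clip a CT volume (in Hounsfield units) to [-200, 300] and rescale to [0, 1]."""
    clipped = np.clip(volume_hu.astype(np.float32), HU_MIN, HU_MAX)
    return (clipped - HU_MIN) / (HU_MAX - HU_MIN)

def random_intensity_augment(volume01, rng=np.random):
    """Random intensity scaling (0.7-1.3) followed by gamma augmentation (0.7-1.5)."""
    scaled = np.clip(volume01 * rng.uniform(0.7, 1.3), 0.0, 1.0)
    return scaled ** rng.uniform(0.7, 1.5)
```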
**Evaluation metrics.** For both datasets, we randomly split the data into 60% training, 15% validation, and 25% testing in the patient level. For the esophageal cancer dataset, we trained a joint model for CE and NC images and show their performance in the test set separately. Following previous lesion detection works [21, 2, 11, 22], we use the free-response receiver operating characteristic (FROC) curve as the evaluation metric and report the sensitivity at different FP levels. When comparing each detected 3D box with the ground-truth 3D boxes, if the 3D intersection over detected bounding-box ratio (IoBB) is larger than 0.3,
the detected box is counted as hit [21]. According to the RECIST guideline [4], LNs with short axis less than 10mm are considered normal. However, some studies [18] show that metastatic LNs can be smaller than 10mm. Therefore, we set a smaller size threshold and aim to detect LNs larger than 7mm during inference. If a ground-truth LN smaller than 7mm is detected, it is neither counted as a TP nor an FP. In training, we still use all LN annotations.
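The IoBB-based hit criterion can be written compactly as follows; the corner-coordinate box format is an assumption for illustration.

```python
def iobb_3d(det, gt):
    """Intersection volume of det and gt divided by the volume of det.

    Both boxes are given as (x1, y1, z1, x2, y2, z2) corner coordinates.
    """
    inter = 1.0
    for d in range(3):
        lo = max(det[d], gt[d])
        hi = min(det[d + 3], gt[d + 3])
        inter *= max(0.0, hi - lo)
    det_vol = (det[3] - det[0]) * (det[4] - det[1]) * (det[5] - det[2])
    return inter / det_vol if det_vol > 0 else 0.0

def is_hit(det, gt_boxes, thr=0.3):
    """A detected 3D box counts as a hit if IoBB with any ground-truth box exceeds thr."""
    return any(iobb_3d(det, gt) > thr for gt in gt_boxes)
```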
**Quantitative results.** First, we validate our key assumption: stratification of samples based on LN stations improves detection accuracy. Results on the two datasets are displayed in Table 1. \(c=1\) means no stratification; \(c=14\) means the most fine-grained stratification. We also tried to group some stations to super-stations according to radiological literature [5], resulting in \(c=6\) or \(8\). Note that the lung dataset has one more station label (pulmonary nodes) than the esophageal dataset, so the actual \(c\) used for the latter dataset is 1, 5, 7, and 13. In Table 1, station stratification consistently improves accuracy. Increasing \(c\) enhances the purity of each group but also reduces the number of samples in each classifier, which is the possible reason why \(c=6\) achieves the most significant improvement in the lung dataset. In the following experiments, we will use \(c=6\). The 6 groups are [5]: supraclavicular (stations 1L, 1R), superior mediastinal (2L, 2R, 3A, 3P, 4L, 4R), aortopulmonary (5, 6), subcarinal (7), inferior mediastinal (8, 9), and pulmonary (10-14) nodes. Detection performance with different size thresholds is shown in Table 2.
\begin{table}
\begin{tabular}{l|l l l l l|l l l l l l|l l l l l} \hline & \multicolumn{8}{c|}{_Lung_} & \multicolumn{8}{c|}{_Esophageal CE_} & \multicolumn{8}{c}{_Esophageal NC_} \\ \cline{2-13} Size & 0.5 & 1 & 2 & 4 & Avg. & 0.5 & 1 & 2 & 4 & Avg. & 0.5 & 1 & 2 & 4 & Avg. \\ \hline All & 34 & 41 & 51 & 59 & 47.1 & 38 & 52 & 59 & 64 & 53.4 & 39 & 45 & 52 & 63 & 49.9 \\ \(>\) 5mm & 52 & 60 & 63 & 70 & 61.1 & 50 & 66 & 73 & 79 & 67.1 & 50 & 59 & 67 & 79 & 63.6 \\ \(>\) 7mm & 60 & 68 & 71 & 76 & 69.0 & 60 & 76 & 86 & 89 & 77.8 & 63 & 71 & 79 & 91 & 76.1 \\ \(>\) 10mm & 74 & 76 & 79 & 85 & 78.7 & 44 & 78 & 83 & 89 & 73.6 & 64 & 79 & 93 & 100 & 83.9 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of sensitivity (%) at different FPs per patient on each dataset. The performance for lymph nodes of different sizes is reported based on their short axis diameters.
\begin{table}
\begin{tabular}{l|l l l l l|l l l l l l l l l l l l} \hline & \multicolumn{8}{c|}{_Lung_} & \multicolumn{8}{c|}{_Esophageal CE_} & \multicolumn{8}{c}{_Esophageal NC_} \\ \cline{2-13} \(c\) & 0.5 & 1 & 2 & 4 & Avg. & 0.5 & 1 & 2 & 4 & Avg. & 0.5 & 1 & 2 & 4 & Avg. \\ \hline
1 & 52 & 60 & 65 & 70 & 61.9 & **66** & 70 & 80 & 87 & 75.7 & 59 & 68 & 81 & 88 & 73.9 \\
6 & **60** & **68** & **71** & **76** & **69.0**\({}_{7.1\uparrow}\) & 60 & 76 & **86** & 90 & **77.8**\({}_{2.1\uparrow}\) & **63** & 71 & 79 & **91** & 76.1 \({}_{2.2\uparrow}\) \\
8 & 56 & 64 & **71** & 75 & 66.3 \({}_{4.4\uparrow}\) & 58 & 72 & 84 & **92** & 76.6 \({}_{0.9\uparrow}\) & 57 & **75** & 82 & 88 & 75.7 \({}_{1.8\uparrow}\) \\
14 & 49 & 62 & 68 & **76** & 63.9 \({}_{2.0\uparrow}\) & 63 & **78** & 83 & 86 & 77.3 \({}_{1.6\uparrow}\) & **63** & 72 & **84** & 88 & **76.8**\({}_{2.9\uparrow}\) \\ \hline \end{tabular}
\end{table}
Table 1: Sensitivity (%) at 0.5, 1, 2, and 4 FPs per patient on the lung and esophageal cancer datasets. The number of heads \(c\) is varied, where \(c=1\) is the baseline.
Next, we evaluate alternative strategies of our algorithm, see Table 3. We trained \(c\) station-stratified classifiers. Another possibility is not to stratify samples using stations, but to use all samples to train each classifier and average their prediction during inference. This strategy did not bring improvement in Table 3 row (b), showing the station information is useful and our performance gain is not simply due to increase of parameters and ensemble of predictions. One way to utilize station information is to train a \(c\)-way multi-class classifier, instead of the \(c\) binary classifiers in our algorithm. In row (c), multi-class classification did not help. It asks the model to distinguish LNs of different stations, but LN detection actually requires the model to distinguish LNs and non-LNs in each station, which is effectively achieved by our strategy. In our algorithm, we use a soft gating strategy to combine classifiers in training and inference by weighted sum. It is better than the hard gating strategy [23] in row (d) which only considers the classifier with the highest station score. In row (e), we show that our lesion-centric 2D box merging outperforms the slice-wise method in [21].
Finally, we compare our algorithm with prior works. nnDetection [1] is a self-configuring 3D detection framework utilizing test time augmentation and model ensemble. MULAN [22] is a 2.5D detection framework which learns lesion detection, classification, and segmentation in a multi-task fashion. LENS [21] is the state of the art for 3D universal lesion detection. It improves lesion detection by
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c c c c|c} \hline \hline & \multicolumn{4}{c|}{\(Lung\)} & \multicolumn{4}{c|}{\(Esophageal\)} & \multicolumn{4}{c|}{\(CE\)} & \multicolumn{4}{c|}{\(Esophageal\)} & \multicolumn{4}{c|}{\(NC\)} & Time \\ \cline{2-13} Method & 0.5 & 1 & 2 & 4 & Avg. & 0.5 & 1 & 2 & 4 & Avg. & 0.5 & 1 & 2 & 4 & Avg. & (s) \\ \hline nnDetection [1] & 47 & 57 & 62 & 70 & 58.9 & 43 & 64 & 72 & 75 & 63.8 & 59 & 63 & 69 & 72 & 65.8 & 86 \\ MULAN [22] & 43 & 57 & 60 & 73 & 58.3 & 51 & 63 & 75 & 79 & 67.1 & 55 & 68 & 72 & 81 & 68.9 & **1.5** \\ LENS [21] & 58 & 60 & 67 & 71 & 64.1 & **64** & **79** & 82 & 84 & 77.3 & 60 & **74** & **79** & 81 & 73.5 & 2.1 \\ Proposed & **60** & **68** & **71** & **76** & **69.0** & 60 & 76 & **86** & **89** & **77.8** & **63** & 71 & **79** & **91** & **76.1** & **1.5** \\ \hline nnUNet [9] & [email protected] & FPs (vs. & **73.0**) & [email protected] & FPs (vs. & **89.5**) & [email protected] & FPs (vs. & **89.7**) & 53 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of sensitivity (%) at different FPs per patient on each dataset. nnUNet is a segmentation model and thus has only one FP operating point. The number in parentheses in the last row represents the sensitivity of the proposed method at the FP point of nnUNet.
\begin{table}
\begin{tabular}{l|c c|c c c c} \hline \hline Method & Sta. & LM & Lung & Eso. CE & Eso. NC & Average \\ \hline (a) No stratification & & ✓ & 61.9 & 75.7 & 73.9 & 70.5 \\ (b) Uniform stratification & & ✓ & 62.3 & 74.0 & 69.3 & 68.5 \\ (c) Multi-class & ✓ & ✓ & 58.1 & 74.7 & 72.6 & 68.5 \\ (d) Hard gating & ✓ & ✓ & 64.7 & **79.9** & 73.5 & 72.7 \\ (e) Slice-wise 2D box merging [21] & ✓ & & 65.5 & 75.0 & 74.6 & 71.7 \\ (f) Proposed & ✓ & ✓ & **69.0** & 77.8 & **76.1** & **74.3** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Sensitivity (%) averaged at 0.5, 1, 2, and 4 FPs per patient using different strategies. Sta.: LN station information. LM: Lesion-centric 2D box merging.
jointly learning from multiple datasets with a shared backbone and multiple proposal networks and detection heads. We trained it using both the lung and esophageal datasets. nnUNet [9] is a strong self-adapting framework that has been widely used for medical image segmentation. Our proposed method achieves the best accuracy on all datasets by clear margins, while taking only 1.5 s to infer a CT volume. It outperforms LENS even without multi-dataset joint training. More qualitative results are shown in Fig. 5.
## 4 Conclusion
Lymph nodes (LNs) in different LN stations vary significantly in their contextual appearance. Inspired by this, we propose a lymph node detection algorithm that employs a station branch and a multi-head detection branch to train station-specialized classifiers. Our method is effective and efficient, and it significantly outperforms several leading lesion detection and segmentation methods [1, 9, 22, 21] on two cohorts of patients with lung and esophageal cancers, respectively. Our next step is to extend it to LNs in other body parts beyond thoracic CT scans.
Figure 5: Exemplar detection results of our algorithm. LNs in different stations and image modalities (CE, NC) are shown. Green, yellow, and red boxes indicate TPs, FNs, and FPs, respectively, with the confidence score displayed above each box. In (a) and (b), our algorithm differentiates between LNs and the adjacent vessels and esophagus. (e) and (f) are failure cases. In (e), an LN in station 7 has intensity indistinguishable from the surrounding tissue in an NC image and was therefore missed by our algorithm. In (f), the esophagus was mistaken for an LN due to similar intensity and shape. |
2302.08398 | KuberneTSN: a Deterministic Overlay Network for Time-Sensitive
Containerized Environments | The emerging paradigm of resource disaggregation enables the deployment of
cloud-like services across a pool of physical and virtualized resources,
interconnected using a network fabric. This design embodies several benefits in
terms of resource efficiency and cost-effectiveness, service elasticity and
adaptability, etc. Application domains benefiting from such a trend include
cyber-physical systems (CPS), tactile internet, 5G networks and beyond, or
mixed reality applications, all generally embodying heterogeneous Quality of
Service (QoS) requirements. In this context, a key enabling factor to fully
support those mixed-criticality scenarios will be the network and the
system-level support for time-sensitive communication. Although a lot of work
has been conducted on devising efficient orchestration and CPU scheduling
strategies, the networking aspects of performance-critical components remain
largely unstudied. Bridging this gap, we propose KuberneTSN, an original
solution built on the Kubernetes platform, providing support for time-sensitive
traffic to unmodified application binaries. We define an architecture for an
accelerated and deterministic overlay network, which includes kernel-bypassing
networking features as well as a novel userspace packet scheduler compliant
with the Time-Sensitive Networking (TSN) standard. The solution is implemented
as tsn-cni, a Kubernetes network plugin that can coexist alongside popular
alternatives. To assess the validity of the approach, we conduct an
experimental analysis on a real distributed testbed, demonstrating that
KuberneTSN enables applications to easily meet deterministic deadlines,
provides the same guarantees of bare-metal deployments, and outperforms overlay
networks built using the Flannel plugin. | Andrea Garbugli, Lorenzo Rosa, Armir Bujari, Luca Foschini | 2023-02-16T16:16:28Z | http://arxiv.org/abs/2302.08398v1 | # KuberneTSN: a Deterministic Overlay Network for Time-Sensitive Containerized Environments
###### Abstract
The emerging paradigm of resource disaggregation enables the deployment of cloud-like services across a pool of physical and virtualized resources, interconnected using a network fabric. This design embodies several benefits in terms of resource efficiency and cost-effectiveness, service elasticity and adaptability, etc. Application domains benefiting from such a trend include cyber-physical systems (CPS), tactile internet, 5G networks and beyond, or mixed reality applications, all generally embodying heterogeneous Quality of Service (QoS) requirements. In this context, a key enabling factor to fully support those mixed-criticality scenarios will be the network and the system-level support for time-sensitive communication. Although a lot of work has been conducted on devising efficient orchestration and CPU scheduling strategies, the networking aspects of performance-critical components remain largely unstudied. Bridging this gap, we propose KuberneTSN, an original solution built on the Kubernetes platform, providing support for time-sensitive traffic to unmodified application binaries. We define an architecture for an accelerated and deterministic overlay network, which includes kernel-bypassing networking features as well as a novel userspace packet scheduler compliant with the Time-Sensitive Networking (TSN) standard. The solution is implemented as _tsn-cni_, a Kubernetes network plugin that can coexist alongside popular alternatives. To assess the validity of the approach, we conduct an experimental analysis on a real distributed testbed, demonstrating that KuberneTSN enables applications to easily meet deterministic deadlines, provides the same guarantees as bare-metal deployments, and outperforms overlay networks built using the _Flannel_ plugin.
time-sensitive networking, container, Kubernetes, cloud continuum, network virtualization, bounded latency
## I Introduction
The promise of edge computing is that of increasingly low latency, high bandwidth communication, and improved data security and privacy. Therefore, a stronger push for edge applications and service deployment is to be expected [1]. However, in contrast to traditional cloud deployment environments, the edge has limited resources and may not be able to satisfy the overlapping and heterogeneous resource demands of all such applications. This fact has motivated researchers to extend the well-established cloud computing paradigm into the idea of _edge-cloud computing_ where an increasingly rich and heterogeneous set of resources between datacenters and the network edge, often called _cloud continuum_, can be virtualized to host cloud-like services [2]. The power of this paradigm relies on the combination of the well-known advantages of the cloud model, in particular flexibility, cost-effectiveness, and reconfigurability, with the performance advantage of running services as close to their final user as possible.
The success of this model is clear from its rapid and wide adoption in several heterogeneous domains, including application domains that embody time-sensitive requirements. As an example, the reference architecture of 5G and beyond standards relies on virtualized applications deployed in edge datacenters, or even co-located with the widely distributed base stations [3]. Control applications in the domains of Cyber-Physical Systems (CPS), Industrial Internet of Things, Tactile Internet, and in many other fields are increasingly pursuing the disaggregation trend, with virtualized application components deployed across the whole continuum of available resources, embodying heterogeneous Quality of Service (QoS) requirements, even among their internal components [4, 5]. Although many of those requirements can be easily met just by placing services physically closer to their final users, reducing key metrics such as latency or response time, core parts of these systems still struggle to balance strict performance demand with the overhead introduced by virtualization.
To mitigate this overhead, lightweight virtualization techniques like containerization have become the standard technology for platform-independent prototyping, development, and deployment of edge components. Compared to hypervisor-based virtual machines, containers are generally characterized by reduced overhead and higher scalability, representing a potential for innovation in service patterns, in virtue of setting up a unified service provisioning platform capable of adhering to applications' QoS specifications [6].
Furthermore, containers are seamlessly integrated into resource management and orchestration platforms, with Kubernetes in its full or reduced versions (e.g., k3s) as the _de-facto_ standard technology [7]. Resource management and orchestration are paramount in the edge cloud, as it automatically deploys, monitors, and migrates containerized application components across the shared infrastructure, enforcing applications' QoS specifications.
However, containerization alone is not a panacea. Given the highly distributed nature of edge cloud applications, specific attention to network and system-level aspects is paramount to effectively support the most performance-demanding components. Yet, previous work mostly focused on efficient orchestration and CPU scheduling of containers [8, 9, 10], leaving
those aspects largely unstudied.
In this paper, we design a cost-efficient solution to enable _accelerated and deterministic communication_ among containerized applications. To this end, we define a novel architecture for a container overlay network that combines two techniques for high-performance communication. First, we adopt a form of kernel-bypassing networking to remove the overhead of the kernel networking stack [11]. Second, we propose a novel userspace packet scheduler, compliant with the Time-Sensitive Networking (TSN) standard, to allow the time-bounded data distribution and communication among networked components [12]. We implement our proposal as _tsn-cni_, a novel Kubernetes network plugin that can be seamlessly integrated alongside existing options (e.g., Flannel, Calico). This way, application designers are free to choose the most appropriate support for traffic flows with different degrees of criticality. Finally, we evaluate _tsn-cni_ on a real testbed, showing that containerized TSN applications can achieve determinism and performance comparable to bare metal applications, and better than using the network fabric set up by the popular _Flannel_ plugin.
## II Background
This section provides a brief introduction to container overlay networks, their rationale, and support in the Kubernetes platform. Next, we provide a concise background on the TSN standard and kernel-bypassing techniques.
### _Container Overlay Networks_
Containers generally have four networking modes available: bridge, host, macvlan, and overlay. The overlay mode is the most popular, especially in combination with Kubernetes, as it provides better isolation, ease of use, and security; hence we limit our description to this scheme. In this mode, as depicted in Fig. 1, containers are connected on an overlay network, potentially spanning multiple physical nodes even across different networks. On each container, a virtual network interface is created, to which applications can assign an arbitrary IP address. This interface is connected to the outside through a virtual switch, located in the host operating system kernel, which has two main roles: it works as a network bridge to allow communication among co-located containers, and it tunnels network traffic toward the remote container(s) across the physical network. This way, containers on the same overlay network have an isolated address namespace and configuration settings, disjoint from the host network or from other overlays.
When using Kubernetes, by default each container has a single network interface for all the network traffic, including management and control plane interactions (e.g., with the Kubernetes master). To distinguish among different traffic classes, the Multus plugin [13] allows attaching additional interfaces to containers. Multus is a meta-plugin, as it defines a _container network interface_ (CNI) that other plugins can implement to configure a Layer 3 network fabric and optionally provide additional advanced features. Several such plugins are available, such as Flannel, Calico, or Weave. Unfortunately, none of those supports the definition of an accelerated and deterministic communication channel among containers. Compared to these alternatives, in this work we design a novel plugin architecture to offer such guarantees. We still rely on a virtual switch, but we move the sender-side datapath to userspace and provide a novel packet scheduler compliant with the TSN standard. This choice allows users to obtain enhanced network performance with no modifications to application binaries and atop off-the-shelf hardware and operating systems, without requiring any patches or specific configurations from the final user.
### _Time-Sensitive Networking_
Designed to support soft real-time industrial traffic, the set of standards grouped under the name of Time-Sensitive Networking aims to introduce determinism to IEEE 802.1 networks via a set of features, including but not limited to time synchronization, programmability, etc. [12]. First, TSN requires that all the communication participants share a unique time reference, and the IEEE 802.1AS standalone protocol provides an adequate mechanism to ensure this synchronization [14]. A second key concept in TSN is packet scheduling. The IEEE 802.1Qbv standard defines a traffic shaper, called Time-Aware Shaper (TAS), that can prioritize the frames belonging to classes of traffic with different time criticality. This prioritization is based on time-aware communication windows, called _time-aware traffic windows_, that repeat cyclically. Each window is divided into _time slots_ that can be associated to different traffic classes: frames belonging to the same class are buffered until the next opening of the associated time slot. This way, TSN guarantees bounded latency and jitter to time-critical traffic, as well as no interference from best-effort traffic.
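To make the windowing mechanism concrete, the sketch below models a single 802.1Qbv-style gate schedule and computes when a frame of a given traffic class may next be transmitted. The 1 ms cycle, the slot layout, and the class-to-slot assignment are illustrative assumptions, not values prescribed by the standard or used by any particular implementation.

```python
from dataclasses import dataclass

@dataclass
class Slot:
    offset_ns: int       # slot start, relative to the beginning of the cycle
    length_ns: int       # slot duration
    classes: frozenset   # traffic classes whose gate is open in this slot

# Illustrative 1 ms cycle: 200 us reserved for time-critical class 7,
# the remaining 800 us shared by best-effort classes 0-6.
CYCLE_NS = 1_000_000
SCHEDULE = (Slot(0, 200_000, frozenset({7})),
            Slot(200_000, 800_000, frozenset(range(7))))

def next_transmit_time(now_ns: int, traffic_class: int) -> int:
    """Earliest time >= now_ns at which the given class's gate is open."""
    cycle_start = now_ns - now_ns % CYCLE_NS
    for base in (cycle_start, cycle_start + CYCLE_NS):  # current and next cycle
        for slot in SCHEDULE:
            start = base + slot.offset_ns
            end = start + slot.length_ns
            if traffic_class in slot.classes and end > now_ns:
                return max(now_ns, start)
    raise ValueError("traffic class not present in the schedule")
```

Frames of a class are buffered until `next_transmit_time` is reached, which is what isolates time-critical traffic from best-effort interference.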
From a practical viewpoint, to enable this kind of communication, developers must configure the kernel-based Traffic Control (TC) subsystem, which implements a TAS shaper, to set up the desired number of traffic classes, their priorities, and the duration of the time slots. Then, applications open a datagram socket with the SO_TXTIME option, so that they can associate a desired transmission time with each outgoing message; a minimal sketch of such a send path is given after Fig. 1. Unfortunately, there are two obstacles to the adoption of this standard in containerized environments. First, we noted that some OS images do not support SO_TXTIME. Second, the transmission time is never forwarded outside the container network namespace to the virtual switch. To overcome these limitations, KuberneTSN intercepts the container TSN traffic and forwards it to a novel userspace scheduler, responsible for enforcing the TAS shaping. This component, which replaces the
Linux-specific kernel-based scheduler, is the key architectural element that we leverage to provide time-sensitive networking features to containerized applications, and it is fully integrated into the _tsn-cni_ Kubernetes plugin.
Fig. 1: Container networking in overlay mode.
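The snippet below sketches what an SO_TXTIME-based send path looks like from an application's point of view on a Linux host. The numeric fallbacks (SO_TXTIME/SCM_TXTIME = 61, CLOCK_TAI = 11) and the `sock_txtime` layout are the usual Linux definitions and should be treated as assumptions, since not every Python build or kernel configuration exposes them; a qdisc (or, in our case, the KTSNd scheduler) must sit underneath for the transmission time to be honoured.

```python
import socket
import struct
import time

# Linux constants; fall back to the usual values if this Python build lacks them.
SO_TXTIME = getattr(socket, "SO_TXTIME", 61)
SCM_TXTIME = getattr(socket, "SCM_TXTIME", 61)
CLOCK_TAI = getattr(time, "CLOCK_TAI", 11)

def send_at(payload: bytes, dest: tuple, txtime_ns: int) -> None:
    """Send one UDP datagram carrying an explicit, per-packet transmission time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # struct sock_txtime { clockid_t clockid; __u32 flags; }: use the TAI clock, no extra flags.
    sock.setsockopt(socket.SOL_SOCKET, SO_TXTIME, struct.pack("ii", CLOCK_TAI, 0))
    # The desired launch time travels as ancillary data (SCM_TXTIME, 64-bit nanoseconds).
    ancdata = [(socket.SOL_SOCKET, SCM_TXTIME, struct.pack("q", txtime_ns))]
    sock.sendmsg([payload], ancdata, 0, dest)
    sock.close()

if __name__ == "__main__":
    now_ns = int(time.clock_gettime(CLOCK_TAI) * 1e9)
    send_at(b"cyclic sample", ("192.0.2.10", 5000), now_ns + 1_000_000)  # 1 ms from now
```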
### _Kernel-bypassing Networking_
In a container overlay network, each outgoing packet must cross the networking stack twice, one in its isolated network namespace and one in the host namespace, and must also cross through a virtual switch (Fig. 1). The combination of all these steps adds significant per-packet communication overhead [15], unacceptable for time-sensitive edge applications.
In recent years, several _kernel-bypassing_ networking approaches, also known as network acceleration techniques, have emerged to support performance-critical applications. Among them, the Data Plane Development Kit (DPDK) [16] is an increasingly popular library that adopts this approach without requiring special hardware or OS support. DPDK lets applications access a userspace version of the network device drivers (_Poll Mode Drivers_) to directly send or receive Ethernet packets on the network. Applications and drivers exchange data through a shared memory area registered with the network card for Direct Memory Access (DMA), making communication _zero-copy_ and avoiding kernel/user context switches. This way, communication is much more efficient, and, in principle, applications in the edge cloud would immensely benefit from the related performance improvements. However, DPDK exposes a low-level C interface that is difficult to use and scarcely integrated with virtualization engines [17].
In KuberneTSN, we accelerate the outgoing container data path using DPDK transparently to user applications. Specifically, we design KuberneTSN to bypass the kernel networking stack in the container namespace, sending data directly to a userspace virtual switch. Then, we adopt a userspace version of a widely used and open-source virtual switch, Open vSwitch (OVS) [18], which in turn uses DPDK to bypass the kernel networking stack in the host namespace.
Overall, KuberneTSN combines three well-known networking approaches, namely overlay networks, TSN scheduling, and kernel-bypassing networking, and leverages them to offer the option of deterministic and accelerated inter-container communication, well integrated into the state-of-the-art Kubernetes orchestrator and complementary to existing networking approaches for best-effort traffic.
## III Related Work
Previous research on the containerization of critical application components mainly focused on orchestration strategies and CPU scheduling [8, 10]. These works investigate the best strategies to place components on suitable resources and ensure that those resources can schedule the execution of containerized applications according to their requirements. Yet, they never take network and system-related aspects into account. We consider these works complementary to our proposal, as we envision that network and computing resources for edge applications should be orchestrated together.
Despite the importance of networking for edge applications, researchers paid less attention to the networking requirements of critical applications. Abeni et al. [19] evaluate different kernel-bypass approaches for inter-container communications, outlining the great potential of DPDK as network accelerator compared to the kernel-based approach. However, their contribution is limited to a framework for performance evaluation.
Slim [15] proposes a solution to reduce the processing overhead on container overlay networks. At its crux, the proposal avoids processing packets multiple times on the same host (see Sec. II); instead, it defines a component that intercepts calls to the socket API and directly translates network addresses from the overlay into the host namespace (and vice versa). This way, packets traverse the kernel networking stack only once. SocksDirect [20] uses the same interception technique to re-route packets on an accelerated kernel-bypassing datapath, but this is possible only with the _host_ container networking mode. Both these works introduce the idea of accelerating container inter-networking, and both show significant performance advantages for a wide range of applications built on top of them. However, these solutions are not integrated with standard production-ready technologies such as Kubernetes. Furthermore, as they target datacenter environments, their focus is on accelerated support for reliable connection-oriented transport protocols (TCP), and they do not provide any support for time-sensitive applications such as TSN, a key requirement for edge applications. In this work, we adopt similar techniques (socket interception, kernel-bypassing) to accelerate network operations, but we also provide guarantees on connection determinism (through TSN) and implement our solution as a plugin for highly standard development and deployment technologies.
Finally, the use of TSN in virtual environments is a relatively new trend, as the standard was originally intended for bare-metal industrial applications. Leonardi et al. [21] first hypothesized this possibility, identifying three distinct architectural approaches to enhance hypervisor-based virtualization with time-triggered communication. In a previous work [22], we showed for the first time on a real testbed that TSN applications can execute in remote virtual machines, embodying even better performance than bare-metal thanks to the adoption of kernel-bypassing techniques. In this paper, we target containerized applications and take a step further by implementing our solution as a Kubernetes network plugin, thus allowing an application to select the most appropriate overlay network meeting their requirements.
## IV KubernesTSN: an Accelerated and Deterministic Overlay Network
KuberneTSN defines the architecture for a novel _accelerated_ and _deterministic_ container overlay network, addressing the time-sensitive requirements of containerized business or control logic. To achieve this goal, we modify the packet processing pipeline for the _outgoing_ container traffic through two novel architectural components: a user library named _LibKTSN_ and a daemon named _KTSNd_.
Fig. 2 shows those components and the role they play in the definition of a new data path for time-sensitive traffic.
_LibKTSN_ exposes the standard POSIX socket interface to the application binaries. This way, any time the application issues a send operation on a datagram socket, the library intercepts it and forwards the packets to a memory area shared with the KTSNd daemon. We are interested in servicing time-sensitive traffic, so we only capture outgoing transmissions that have an explicit transmission time, i.e., TSN traffic, with the SO_TXTIME socket option. Otherwise, packets are forwarded onto the regular data path. This approach enables TSN networking regardless of the container images, unlike the currently available alternatives (see Sec. II). LibKTSN is the only component of our solution that should be present in the application container. We provide it as a shared library and use the flag LD_PRELOAD to transparently intercept traffic: hence, no changes are required to the application code.
The _KTSNd_ daemon represents the key component of our proposal, as it works both as a packet scheduler and a network accelerator. Once it detects a new packet from an application, KTSNd schedules its actual transmission based on the application-provided transmission time. Although we design the daemon to be agnostic to the specific scheduling strategy, by default it works as a Time-Aware Shaper (TAS) compliant with the IEEE 802.1Qbv standard (see Sec. II). Currently, this packet scheduling option is not available for containerized applications, as popular virtual switches (e.g., Linux bridge, Open vSwitch, etc.) do not support it. Therefore, our solution is the first to provide deterministic packet scheduling for unmodified application binaries running in containers.
When the time comes to transmit a scheduled packet, the scheduler must send it on the network on behalf of the original application, preserving the source MAC address, IP address, and UDP port, and minimizing the packet processing delays to meet the user-required transmission time as precisely as possible. To satisfy these requirements, we adopt a kernel-bypassing approach and move the entire transmission pipeline into userspace. This way, we avoid the expensive double-crossing of the kernel networking stack and the unnecessary user/kernel thread context switches (see Sec. II) and instead provide our own simple and efficient implementation of the UDP/IP stack directly within KTSNd, using the DPDK library to forward packets on the virtual L2 link. This choice allows us to preserve the original packet metadata, as we can manipulate protocol headers directly, and significantly reduce the processing overhead. As shown in Fig. 2, packets are then handled by a userspace virtual switch that, in turn, should provide its own UDP/IP userspace stack to forward them on the physical network. In our implementation, we adopt a widely used, state-of-the-art userspace virtual switch, Open vSwitch [18], which also uses DPDK for kernel-bypassing.
The simple yet powerful design makes KuberneTSN easy to integrate into standard platforms such as the Kubernetes orchestrator in its various distributions, making it ready to use for critical networked applications embodying stringent requirements. To this aim, we build a Kubernetes network plugin, _tsn-cni_, that implements our architecture. Specifically, _tsn-cni_ implements the Multus CNI interface [13] and thus a Layer 3 network fabric that includes our accelerated and deterministic data path. The plugin requires applications to include LibKTSN in their execution environment, and it encapsulates the KTSNd daemon in a separate container. This approach is strategic to support time-sensitive edge applications: because multiple network plugins can be used at the same time, developers can choose standard ones (e.g., Flannel, Calico) for best-effort traffic, and _tsn-cni_ for time-sensitive networking, as represented in Fig. 2. Therefore, KuberneTSN and its _tsn-cni_ implementation enhance the capabilities of the edge-cloud not only by supporting deterministic networking but also by integrating this option in a familiar ecosystem for application designers. By tagging application components as time-sensitive, they can instruct Kubernetes to automatically deploy KTSNd alongside the application containers, thus transparently obtaining support for performance-sensitive workloads.
## V Experimental evaluation
In this section, we evaluate the performance of the _tsn-cni_ plugin, which implements the KuberneTSN architecture. The purpose of the experimental assessment is twofold: on the one hand, we want to show that the _accelerated_ datapath we propose is indeed faster than the current state-of-the-art networking options; on the other hand, we demonstrate that our solution can in fact provide _deterministic_ guarantees. In particular, we compare _tsn-cni_ against two alternatives. The first is a bare metal setting that reproduces the way typical TSN applications are deployed, in order to assess the overhead introduced by the virtualization layer. The second is _Flannel_, a popular CNI plugin for Kubernetes. In its recommended configuration, Flannel uses a Linux bridge in combination with VXLAN encapsulation to implement the virtual switch, thus building an overlay network that corresponds to the _regular datapath_ of Fig. 2. By comparing _tsn-cni_ and Flannel, we
assess whether KuberneTSN meets its design goal of providing additional performance benefits and deterministic properties to inter-container networking.
Fig. 2: The architecture for an accelerated and deterministic overlay network, implemented as a Kubernetes CNI plugin.
For the purpose of this evaluation, we build a simple TSN application consisting of two processes, a talker and a listener, each running inside a container on two remote hosts. We then set up a latency test in which the talker sends UDP packets with a cycle of \(1\,\mathrm{ms}\). The test measures two representative indicators of time-sensitive communications: end-to-end latency and jitter. The end-to-end latency of a message is defined as the time interval between the time of transmission predicted by the talker, sometimes also called transmission time, and the time of actual reception by the listener. The jitter measures how much the actual arrival time of each message differs from the expected arrival time: more precisely, if \(t_{i}\) is the arrival time of the \(i\)-th message, its jitter is defined as \(Jitter(i)=t_{i}-(t_{i-1}+T)\), where \(T\) is the transmission period (in this work, \(T=1\,\mathrm{ms}\)). It is worth pointing out that the bare-metal and the _tsn-cni_ test suites are implemented as actual TSN applications, which associate a desired transmission time with each packet. However, for the test using Flannel, this option is not available, as the TSN scheduling would not be enforced (see Sec. II). Instead, the only alternative is to send one message and then sleep, repeating this behavior every \(T\).
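Computed from the recorded timestamps, the two metrics reduce to a few lines. The sketch below assumes the talker's predicted transmission times and the listener's reception times have already been paired per message; it mirrors the definitions above rather than the actual test-suite code.

```python
import numpy as np

def latency_and_jitter(tx_times_s, rx_times_s, period_s=1e-3):
    """End-to-end latency and jitter as defined above.

    tx_times_s: predicted transmission time of each message (seconds)
    rx_times_s: actual reception time t_i of each message (seconds)
    period_s:   transmission period T (here 1 ms)
    """
    tx = np.asarray(tx_times_s, dtype=float)
    rx = np.asarray(rx_times_s, dtype=float)
    latency = rx - tx                        # latency_i = t_i - (predicted tx time)_i
    jitter = rx[1:] - (rx[:-1] + period_s)   # Jitter(i) = t_i - (t_{i-1} + T)
    return latency, jitter
```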
### _Experimental Settings_
The evaluation is conducted on a real testbed that reproduces an edge deployment scenario. The testbed comprises two Dell workstations, each equipped with an Intel I225 NIC, an Intel i9-10980XE 18/36 CPU, and \(64\,\mathrm{GB}\) of RAM. The two hosts are interconnected through a physical TSN-compliant switch. Each host runs Ubuntu 22.04 with Linux kernel 5.16. When using Open vSwitch [18], we adopt both of its variants: the kernel-bypassing one on the sender side and the kernel-based one on the receiver side. As required by TSN, the clocks of the two hosts are synchronized using two PTP daemons. Finally, we pin the processes to dedicated cores so as to avoid any bias in the measurements induced by the CPU scheduling policy.
### _End-to-end Latency_
Figure 3(a) reports the end-to-end latency and jitter measured for three typical data sizes (\(64\,\mathrm{B}\), \(256\,\mathrm{B}\), \(1024\,\mathrm{B}\)) for each of the considered deployment scenarios: bare metal and containerized applications with _tsn-cni_ or Flannel as the network plugin. A first observation is that the performance of _tsn-cni_ is consistently good, with median latency values ranging from 21.5 µs for small packets (\(64\,\mathrm{B}\)) to 41.7 µs for \(1024\,\mathrm{B}\). These values are almost identical to those registered for the bare-metal deployment, with a small variation on the ns scale starting to appear for the 1 KB packet size. Latency variability is negligible in both cases. If we consider Flannel, we note a slight but evident latency increase (12% on average). This is the result of the expensive in-kernel packet processing, which we avoid thanks to the kernel-bypassing technique embodied in our solution. The same trend observed for latency is confirmed by the analysis of the jitter metric reported in Fig. 3(b): the median value is zero in almost all cases and the variability is negligible. Therefore, we can conclude that KuberneTSN and its _tsn-cni_ implementation succeed in minimizing the packet processing overhead for containerized applications, achieving the goal of an _accelerated_ data path.
Overall, our experiments show that both _tsn-cni_ and Flannel achieve good latency numbers, although our kernel-bypassing solution shows lower median values. In principle, one could expect even better performance from _tsn-cni_, as raw DPDK is particularly fast [19]. However, we noted that the OVS-DPDK implementation introduces a non-negligible overhead on our userspace datapath, accounting for at least 23% of the total reported latency. Nevertheless, we decided to keep it in our system as it is a widely used tool supported by an active community. Even more importantly, while still delivering better performance, it supports a rich set of additional features for virtual networking, e.g., OpenFlow programmability, compared to the basic Linux bridges used by Flannel.
### _Determinism_
To assess whether KuberneTSN can effectively provide deterministic guarantees to time-sensitive flows, we consider again the latency test results discussed before, but in Fig. 4 we
plot the respective Cumulative Distribution Function (CDF). Ideally, the curve should be as vertical as possible, implying a highly predictable packet reception time. In this context, the bare-metal application and the containerized application using _tsn-cni_ show overlapping performance, very close to the ideal behavior. In particular, for _tsn-cni_ the 90% and 99% probabilities correspond to 26.4 µs and 28.1 µs, respectively. Instead, for Flannel these thresholds are 29.6 µs and 30.7 µs, respectively, implying a less precise arrival time interval.
Fig. 3: Performance comparison among three deployment options for the latency test application: bare metal, containerized with _tsn-cni_, containerized with _Flannel_. The experiment is repeated for increasing payload sizes: \(64\,\mathrm{B}\), \(256\,\mathrm{B}\), \(1024\,\mathrm{B}\).
This difference demonstrates the advantage of using KuberneTSN for time-sensitive traffic. The main reason for this behavior is the way the test application sends messages: when using Flannel, we cannot explicitly set a transmission time, as this feature is not supported in current containerized environments. Hence, we are constrained to fall back to a classic send-and-sleep loop, mimicking a periodic send operation. The effect of this difference is minimal in our experiment, as we do not have other competing flows; however, previous work [23] demonstrates that time-sensitive flows require dedicated support. _tsn-cni_ serves this purpose by providing essential support to containerized applications so as to meet heterogeneous flow requirements in mixed-criticality scenarios.
## VI Conclusion and future work
We presented KuberneTSN, an architecture for an accelerated and deterministic container overlay network. KuberneTSN defines a novel userspace TSN packet scheduler and adopts a kernel-bypassing approach to minimize packet processing delays. We implemented KuberneTSN as a network plugin for the Kubernetes orchestrator, called _tsn-cni_, so that it can be used alongside existing network fabrics to better support time-sensitive edge applications. The solution was evaluated on a real testbed, showing that containerized applications using _tsn-cni_ have the same level of performance and determinism as bare metal applications, outperforming the widely used Flannel network plugin.
Future work will include a detailed performance characterization of KuberneTSN under different traffic conditions, and a demonstration of the use of _tsn-cni_ in combination with other network plugins. In the longer term, as performance-demanding AI/ML components are increasingly moved to the network edge, we are interested in a systematic performance study of the inter-container datapath to highlight further optimization opportunities.
## Acknowledgements
This work was partially supported by the H2020 TERMINET project (Grant agreement #: 957406).
|
2305.02174 | Validation of 4D Monte Carlo dose calculations using a programmable
deformable lung phantom | Purpose: To validate the accuracy of 4D Monte Carlo (4DMC) simulations to
calculate dose deliveries to a deforming anatomy in the presence of realistic
respiratory motion traces. A previously developed deformable lung phantom
comprising an elastic tumor was modified to enable programming of arbitrary
motion profiles. 4D simulations of the dose delivered to the phantom were
compared with the measurements. Methods: The deformable lung phantom moving
with irregular breathing patterns was irradiated using static and VMAT beam
deliveries. Using the RADPOS 4D dosimetry system, point doses were measured
inside and outside the tumor. Dose profiles were acquired using films along the
motion path of the tumor (S-I). In addition to dose measurements, RADPOS was
used to record the motion of the tumor during dose deliveries. Dose
measurements were then compared against 4DMC simulations with
EGSnrc/4DdefDOSXYZnrc using the recorded tumor motion. Results: The agreements
between dose profiles from measurements and simulations were determined to be
within 2%/2 mm. Point dose agreements were within 2{\sigma} of experimental
and/or positional/dose reading uncertainties. 4DMC simulations were shown to
accurately predict the sensitivity of delivered dose to the starting phase of
breathing motions. We have demonstrated that our 4DMC method, combined with
RADPOS, can accurately simulate realistic dose deliveries to a deforming
anatomy moving with realistic breathing traces. This 4DMC tool has the
potential to be used as a quality assurance tool to verify treatments involving
respiratory motion. Adaptive treatment delivery is another area that may
benefit from the potential of this 4DMC tool. | Sara Gholampourkashi, Joanna E. Cygler, Bernie Lavigne, Emily Heath | 2023-05-03T15:12:57Z | http://arxiv.org/abs/2305.02174v1 | #### Validation of 4D Monte Carlo dose calculations using a programmable deformable lung phantom
###### Abstract
We present the validation of 4D Monte Carlo (4DMC) simulations to calculate dose deliveries to a deforming anatomy in the presence of realistic respiratory motion traces. A previously developed deformable lung phantom comprising an elastic tumor was modified to enable programming of arbitrary motion profiles. The phantom moving with irregular breathing patterns was irradiated using static and VMAT beam deliveries. Using the RADPOS 4D dosimetry system, point doses were measured inside and outside the tumor. Film was used to acquire dose profiles along the motion path of the tumor (S-I). In addition to dose measurements, RADPOS was used to record the motion of the tumor during dose deliveries. Dose measurements were then compared against 4DMC simulations with EGSnrc/4DdefDOSXYZnrc using the recorded tumor motion. The agreements between dose profiles from measurements and simulations were determined to be within 2%/2 mm. Point dose agreements were within 2\(\sigma\) of experimental and/or positional/dose reading uncertainties. 4DMC simulations were shown to accurately predict the sensitivity of delivered dose to the starting phase of breathing motions. We have demonstrated that our 4DMC method, in combination with RADPOS, can accurately simulate realistic dose deliveries to a deforming anatomy moving with realistic breathing traces. This 4DMC tool has the potential to be used as a quality assurance tool to verify treatments involving respiratory motion. Adaptive treatment delivery is another area that may benefit from the potential of this 4DMC tool.
## 1 Introduction
4D dose calculation methods account for all three effects of respiratory motion on the delivered dose, namely dose blurring, dose deformations and interplay effects (Brock _et al_2003, Keall _et al_2004). In a 4D dose calculation algorithm, dose is calculated on multiple respiratory states of the anatomy and mapped to a reference anatomy by use of deformation vector fields (DVFs) to yield cumulative dose distributions. Different dose mapping algorithms such as dose interpolation mapping
(DIM) (Rosu _et al_2005), energy mass congruent mapping (EMCM) (Zhong and Siebers 2009, Siebers and Zhong 2008) and the voxel warping method (VWM) (Heath and Seuntjens 2006) have been developed.
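As a simple reference point for the mapping step these algorithms have in common, the sketch below implements a basic dose interpolation mapping on a regular grid. It assumes each phase dose is the dose delivered while the anatomy is in that phase and that the DVFs give, in voxel units, where each reference voxel centre sits in the corresponding phase; it is an illustration only, not the implementation of any of the cited methods.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def dim_accumulate(phase_doses, dvfs_vox):
    """Dose interpolation mapping: pull each phase dose back onto the reference grid.

    phase_doses: list of 3D dose arrays, one per respiratory phase (dose delivered in that phase)
    dvfs_vox:    list of (3, nx, ny, nz) displacement fields, in voxel units, mapping
                 reference voxel centres into the corresponding phase geometry
    """
    ref_shape = phase_doses[0].shape
    grid = np.indices(ref_shape).astype(float)                 # reference voxel centres
    cumulative = np.zeros(ref_shape)
    for dose, dvf in zip(phase_doses, dvfs_vox):
        coords = grid + dvf                                    # centre locations in this phase
        cumulative += map_coordinates(dose, coords, order=1)   # trilinear interpolation
    return cumulative
```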
Prior to clinical implementation, experimental verification of 4D dose calculation algorithms with phantoms that simulate both motion and deformation of different organs (e.g. lung, liver, etc.) as well as the tumor is required. An essential design consideration for such phantoms is their capability to accommodate different dosimeters (e.g. film, ion chamber, MOSFET) to measure the dose delivered during treatment. Reproducibility of the phantom motion and geometry from one setup to the next is another important requirement. Several studies using phantoms to verify 4D dose calculation algorithms have been published (Vinogradskiy _et al_2009a, 2009b, Niu _et al_2012, Belec and Clark 2013, Ravkilde _et al_2014, Zhong _et al_2016). However, they were limited in the sense that they ignored anatomy deformations or the interplay effect as well as the impact of irregular respiratory motion patterns on the delivered dose. Furthermore, in most of the studies only point dose measurements were possible.
In a previous study (Gholampourkashi _et al_2018b) we presented experimental evaluation of a 4D Monte Carlo (MC) tool (4DdefDOSXYZnrc) using a novel deformable lung phantom. Our 4DMC tool is capable of simulating continuous motion and deformation of the anatomy using the voxel warping method to accumulate dose deposited in different anatomical states. The method has been described in a previous publication (Gholampourkashi _et al_2017). Currently, the anatomical motion is modeled using one set of deformation vectors (from end-of-exhale to end-of-inhale) and a measured motion trace is used to scale the deformation vectors to reproduce the realized anatomical states. Previous validation work compared measured and simulated doses during static and VMAT treatment deliveries on an Elekta Infinity linac in the presence of sinusoidal motion. It was found that 4DMC dose calculations agreed within 5%, or better, with the measurements. The present study expands on our previous work to study the impact of irregular respiratory motion patterns on the dose delivered to the phantom. The phantom was further modified to be capable of moving with irregular and patient-derived respiratory profiles. We characterize the phantom motion for different motion patterns and evaluate the accuracy of the programmed motions. Measurements and simulations are compared to assess the accuracy of 4DMC tool to simulate these realistic motion profiles.
## 2 Materials and methods
### Deformable programmable lung phantom
#### Phantom design
Design details of the lung phantom used in this study have been previously published (Gholampourkashi _et al_2018b). The phantom is made of tissue-equivalent foam and holds a non-rigid tumor inside a cylindrical plug. 32 Lucite beads were injected throughout the phantom to help with image registration as well as quantifying the target registration errors. The previous version of the phantom utilized a DC motor to produce a single
amplitude/multi frequency sinusoidal motion using a piston attached to the motor. In this work, the DC motor was replaced with a programmable servo motor to enable simulation of realistic respiratory motion profiles. A picture of the phantom with its components are shown in Figure 1(a). The rotational motion of the motor was converted to linear motion of the piston through a Scotch Yoke mechanism as shown in Figure 1(b).
A cylindrical disk with radius r is attached to the motor disk. Rotation of the motor causes the pin at the edge of the disk to slide in the vertical direction (upward or downward) inside the sliding yoke. This vertical motion results in the horizontal motion of the connecting rod and as a result piston. The magnitude of this linear motion is related to r and \(\theta\), which are radius and angle of rotation, respectively and can be calculated by equation 1:
\[X=r\;\;(1-\cos\theta) \tag{1}\]
where \(r=2.5\) cm and \(0^{\circ}\leq\theta\leq 180^{\circ}\) for the phantom used in this study. The maximum achievable peak-to-peak (P-P) amplitude at the piston is 5 cm.
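Equation 1 can be checked directly; the short sketch below evaluates the piston displacement and confirms that a half rotation (\(\theta=180^{\circ}\)) yields the quoted 5 cm maximum.

```python
import numpy as np

def piston_displacement_cm(theta_deg, r_cm=2.5):
    """Linear piston displacement X = r(1 - cos(theta)) of the Scotch Yoke (equation 1)."""
    return r_cm * (1.0 - np.cos(np.radians(theta_deg)))

assert abs(piston_displacement_cm(180.0) - 5.0) < 1e-9   # maximum P-P amplitude of 5 cm
```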
The motor can be programmed with a wide range of motion profiles, from sinusoidal (varying amplitudes and frequencies) to highly irregular patient respiratory traces. The maximum breathing frequency of the phantom is 4 Hz, which corresponds to 4 breaths per second or a breathing period of 0.25 s.
#### 2.1.2 Motion assessment and reproducibility
In order to validate the motion of the phantom, RADPOS detectors were placed at the tumor center as well as on the top and bottom surfaces of the plug. The detector on the bottom surface of the plug was aligned with the one inside the tumor (i.e. approximately 9 cm from the piston) while the top surface RADPOS was mounted with an offset of 1 cm, towards the inferior side of the phantom, from the other two. This setup is shown in Figure 2 for all three detectors.
Figure 1: (a) Deformable lung phantom with programmable servo motor to enable simulation of realistic respiratory motion profiles. A cylindrical plug holds the silicon rubber tumor that moves inside the phantom. Details about other components of the phantom are presented elsewhere (Gholampourkashi _et al_2018**b),** (b) Diagram of the Scotch Yoke mechanism to convert the rotational motion of the motor to linear motion at the piston. Rotation of the motor disk moves the pin inside the sliding yoke in the vertical (Y) directions and as a result the connecting rod moves in the horizontal (X) direction. The radius of the disk and angle of rotation (i.e. r and \(\theta\)) determine the amplitude of the linear motion in the X direction.
The phantom diaphragm was driven with sinusoidal and irregular motion profiles and the realized motion was recorded in 3D (S-I, A-P and L-R) with a temporal resolution of 100 ms by all three RADPOS detectors. During the measurements, the motion of the diaphragm was recorded by the motion controller for comparison against the motion trace recorded by RADPOS.
Initially, sinusoidal motion profiles with P-P diaphragm amplitudes of 1, 1.5, 2, 2.5 and 3 cm and periods of 2–7 s in steps of 1 s were tested. Measurements were repeated 6 times over 3 days (i.e. 2 datasets per day) to evaluate both inter-day and intra-day reproducibility and variations of phantom motion. The order in which motion profiles were tested was chosen randomly.
In addition to the sinusoidal motion profiles, 16 different irregular motion traces with P-P diaphragm amplitudes of 1, 1.5, 2, 2.5 and 3 cm were tested to fully evaluate the performance of the phantom. Measurements were repeated 5 times for each motion profile over a 5-day period (i.e. 1 dataset per day). A Python code, which detected the peaks and valleys of the recorded motion traces, was used to compute the average and standard deviation of the P-P motion amplitude for irregular motion patterns.
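The in-house script is not reproduced here; a minimal equivalent based on scipy.signal.find_peaks might look as follows, where the 0.5 s minimum peak separation and the simple peak/valley pairing are our illustrative assumptions rather than the actual analysis parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def peak_to_peak_stats(trace_mm, fs_hz=10.0, min_breath_s=0.5):
    """Mean and standard deviation of the peak-to-peak amplitude of a motion trace.

    trace_mm:     1D displacement samples (e.g. RADPOS S-I readings at 100 ms resolution)
    fs_hz:        sampling rate, used to enforce a minimum peak separation
    min_breath_s: assumed minimum time between successive peaks
    """
    trace = np.asarray(trace_mm, dtype=float)
    sep = max(1, int(min_breath_s * fs_hz))
    peaks, _ = find_peaks(trace, distance=sep)
    valleys, _ = find_peaks(-trace, distance=sep)
    n = min(len(peaks), len(valleys))
    p2p = np.abs(trace[peaks[:n]] - trace[valleys[:n]])
    return float(p2p.mean()), float(p2p.std())
```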
Figure 2: RADPOS detectors placed on the top surface (top), inside (middle) and bottom surface (bottom) of the plug to assess the motion of the phantom.
### 3DCT acquisition and image registration
Two sets of 3DCT images of the phantom, in uncompressed and compressed states, were acquired using a helical CT scanner (Brilliance CT Big Bore, Philips, Amsterdam, the Netherlands). The diaphragm displacement for the compressed state was 3 cm, which corresponds to the maximum P-P amplitude in the tested motion profiles. These states correspond to end-of-inhale (EOI) and end-of-exhale (EOE) states of the phantom, respectively. The resulting resolution and matrix size of the 3DCT images were 0.05\(\times\)0.05\(\times\)0.2 cm\({}^{3}\) and 512\(\times\)512\(\times\)184. Pitch values and the gantry rotation times for the scans were 0.938 and 0.75 s, respectively.
A deformation vector field (DVF) describing the phantom motion from EOE to EOI breathing phase was generated by registering the respective CT scans in Velocity AI 3.2.0. The structure guided deformable registration feature was used where the position of the Lucite beads in the phantom were used to guide the registration.
### Treatment plans
Static and VMAT treatment plans for 6 MV photon beams from an Elekta Infinity linac (Elekta AB., Stockholm, Sweden) were created on the EOI scans of the phantom in Monaco V.5.11.01 (Elekta AB., Stockholm, Sweden). Both plans were aimed to deliver 100 cGy to the center of the tumor which was contoured as the gross tumour volume (GTV) with no margins added to compensate for motion.
The static treatment plan consisted of a single 3\(\times\)3 cm\({}^{2}\) field. The VMAT plan consisted of 64 control points that delivered a full arc starting and ending at 180deg with average angular spacing equal to 5.6deg. The corresponding Monaco dose distributions are shown in Figure 3 separately for each treatment plan. The GTV was covered by the 80% and 90% isodose lines on the static and VMAT plans, respectively. The XVMC (X-ray Voxel Monte Carlo) (Fippel 1999) dose calculation algorithm in Monaco was used for dose calculations. Also, in order to be consistent with measurements (film and RADPOS calibrated in Solid Water), dose to water (D\({}_{\rm w}\)) was calculated in Monaco using a 2 mm dose calculation grid to achieve a statistical uncertainty of 1.0%.
### Irradiations of the deformable programmable phantom
After exporting treatment plans into the Elekta MOSAIQ RadOnc system, deliveries to the phantom were performed on an Elekta Infinity linac with Agility MLC. The static 3\(\times\)3 cm\({}^{2}\) field plan delivered 110.4 MU at a nominal dose rate of 600 MU/min. The VMAT plan delivered 115.9 MU with a varying dose rate. All machine delivery information such as the position of the MLC, jaws, gantry as well as cumulative MU were recorded through delivery log files (IAN V.2; Elekta AB., Stockholm, Sweden) at a temporal resolution of 40 ms.
Figure 3: Static 3\(\times\)3 cm\({}^{2}\) square field (top) and VMAT (bottom) plans: dose distribution from Monaco on (a) coronal, (b) sagittal and (c) axial planes.
Three different respiratory motion profiles (Figure 4) were simulated for a P-P diaphragm amplitude of 3 cm and deliveries were repeated three times for each treatment plan. For visibility purposes, only 80 s of the traces are shown here. The respiratory motion profiles shown in Figure 4 included one typical trace (Figure 4(a)), one with large motion variations (Figure 4(b)) and one which resulted in large hysteresis between the motion of the tumor and diaphragm (Figure 4(c)). To study the sensitivity of the delivered dose to the starting phase of the motion, the three deliveries for the first (Figure 4(a)) and second (Figure 4(b)) respiratory profiles were repeated with the beam turned on at approximately 0, 40 and 90 s after starting the phantom motion. The motion traces were recorded at a temporal resolution of 100 ms during deliveries using the RADPOS system. The motion recorded by RADPOS in the S-I direction at the tumor center was compared against the diaphragm motion in Figure 4. The time synchronization between the beam-on time and the phantom motion was accomplished by synchronizing the clocks of the RADPOS and linac computers with the network time protocol (NTP).
For all irradiations the phantom was placed on the couch so that the diaphragm was on the superior side (Figure 5(a)) and the center of the tumor (i.e. plan isocenter) was aligned with the beam isocenter. Point doses inside and outside the plug were measured by calibrated RADPOS detectors that were fixed into special grooves. These grooves were engraved during the molding process of the plug. Calibrated Gafchromic film strips (EBT3, Ashland, Wayne, NJ, USA) were taped on top of the RADPOS probe inside the plug to measure the dose profile in the S-I direction. Film and RADPOS inside the plug are shown in Figure 5(b).
The total dosimetric uncertainties of the film and RADPOS measurements were determined to be 2.3% (film), 2.2% (top RADPOS) and 2.4% (center and bottom RADPOS), respectively (Gholampourkashi _et al_2017, Cherpak _et al_2009). The main contributors to these uncertainties were the uncertainties in beam delivery (e.g. depth and field size settings), beam dosimetry calibration (e.g. \(\rm{N_{D,w}}\), \(\rm{K_{Q}}\)) and reproducibility of the ion chamber and RADPOS readings.
Figure 4: Comparison of the programmed (brown dashed-dotted line), diaphragm (green line) and tumor (orange dots) motion traces for respiratory motion profiles that are (a) typical, (b) very irregular and (c) show large hysteresis between the tumor and diaphragm motion. The P-P diaphragm amplitude was programmed to be 3 cm.
Figure 5: (a) Setup for phantom irradiations such that the piston was in the superior side of the couch and center of the tumor inside the phantom was aligned with the beam isocenter, (b) Film
and RADPOS inside the plug to measure point dose (tumor center) and dose profile along the SI direction. RADPOS is fixed inside an embedded groove and film is taped on top of RADPOS.
### Monte Carlo simulations
#### 2.5.1 User codes and simulation parameters
EGSnrc (Kawrakow 2013) (V4-2.4.0, National Research Council of Canada, Ottawa, ON, Canada) was used for all simulations in this study. A model of our Elekta Infinity linac with Agility MLC was built using the BEAMnrc (Rogers _et al_1995) user code and the incident electron beam parameters were tuned according to dose profiles in water (Gholampourkashi _et al_2018a). The DOSXYZnrc (Walters _et al._, 2016) and 4DdefDOSXYZnrc (Gholampourkashi _et al_2017) user codes were used to calculate the resultant dose from MC simulations of the stationary and breathing states of the phantom, respectively. Calculated dose was then converted to absolute dose using the formalism presented in equation 2:
\[D(\mathrm{cGy})=\frac{\left(D/\#\ \text{of incident particles}\right)_{\text{MC, individual simulation}}}{\left(D/\#\ \text{of incident particles}\right)_{\text{MC, calibration simulation}}}\times\frac{1\ \mathrm{cGy}}{\mathrm{MU}}\times\mathrm{MU}_{del} \tag{2}\]
where MU\({}_{del}\) is the monitor units (MU) delivered by a linear accelerator. In this formula \(\frac{D}{\#\ of\ incident\ particles}\) represents the dose scored per number of incident particles in a Monte Carlo simulation. The calibration simulation was performed in water for a square field of \(10\times 10\) cm\({}^{2}\) and SSD of 100 cm and dose was scored at a depth of 10 cm.
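Equation 2 amounts to a single rescaling of the MC score; a minimal helper, with argument names of our choosing, is shown below.

```python
def absolute_dose_cGy(d_per_particle_sim, d_per_particle_cal, mu_delivered, cal_cGy_per_MU=1.0):
    """Convert an MC dose-per-incident-particle score to absolute dose (equation 2).

    d_per_particle_sim: dose per incident particle from the individual simulation
    d_per_particle_cal: dose per incident particle from the 10x10 cm^2 calibration simulation
    mu_delivered:       monitor units (MU) delivered by the linac
    cal_cGy_per_MU:     linac calibration at the reference conditions (1 cGy/MU)
    """
    return (d_per_particle_sim / d_per_particle_cal) * cal_cGy_per_MU * mu_delivered
```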
Delivery log files were converted into input files for MC simulations using an in-house Python script. The photon and electron cut-off energies (PCUT and ECUT) were set to 0.01 and 0.7 MeV and the electron range rejection was set to 2 MeV for all simulations. The target mean relative statistical uncertainty (Chetty _et al_2006) for these simulations was 0.4%, calculated over all voxels with doses greater than 50% of the maximum dose. Achieving this statistical uncertainty required simulating 1.5\(\times 10^{8}\) and 3.0\(\times 10^{8}\) histories on the stationary and deforming geometries, respectively, corresponding to approximately 12-15 and 31-35 CPU core hours per calculation. All simulations were performed on the Carleton University Physics Research Compute Cluster, which consists of 644 processing cores (Intel Xeon CPUs at 2.50-3.00 GHz).
#### 2.5.2 Dose calculations: Stationary and deforming phantom
The 3DCT scans of the EOI phantom state were resampled to a resolution of 0.05\(\times\)0.05\(\times\)0.2 cm\({}^{3}\)to generate the dose calculation geometry. Assignment of voxel densities followed the approach introduced by Seco (Seco and
Evans 2006) that enables direct calculation of D\({}_{\rm w}\) in MC simulations to be compared against film and RADPOS measurements.
Source 21 (Lobo and Popescu 2010) of DOSXYZnrc was used to simulate the BEAMnrc linac model for the simulations performed in this work. 4DdefDOSXYZnrc simulations utilized the DVF exported from Velocity along with the respiratory motion trace recorded with RADPOS (tumour center) during irradiations to model the phantom motion and deformation. For each particle incident on the phantom, the magnitude and direction of deformations of the reference geometry are determined by scaling the DVF by the magnitude of the respiratory motion at the appropriate time point (respiratory phase). In order to conserve mass between the reference and deformed geometries, the densities of the deformed voxels are recalculated. The incident particle is transported through the deformed geometry and its energy deposition is scored. No mapping of dose between geometries is required since the same dose calculation grid is retained between the reference and deformed geometries. The cumulative dose in each voxel is calculated by dividing the total energy deposition by the mass of the reference voxel (Gholampourkashi _et al_2017, 2018b).
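Schematically, the geometry update applied for each incident particle can be pictured as below: the EOE-to-EOI DVF is scaled by the instantaneous respiratory amplitude and voxel densities are rescaled so that mass is conserved. The NumPy sketch approximates the deformed voxel volume through the Jacobian determinant of the warp and is purely illustrative; 4DdefDOSXYZnrc itself is EGSnrc code that transports particles through the actual deformed (non-cuboid) voxels.

```python
import numpy as np

def deformed_state(dvf_eoe_to_eoi, amplitude, amp_max, ref_density, voxel_size_cm):
    """Scale the EOE->EOI DVF by the current amplitude and rescale densities to conserve mass.

    dvf_eoe_to_eoi: (3, nx, ny, nz) displacement field (cm) from EOE to EOI
    amplitude:      respiratory amplitude at the current time sample (e.g. RADPOS tumour S-I)
    amp_max:        amplitude corresponding to the full EOE->EOI deformation
    ref_density:    (nx, ny, nz) densities of the reference geometry
    voxel_size_cm:  (dx, dy, dz) voxel dimensions
    """
    u = np.clip(amplitude / amp_max, 0.0, 1.0) * dvf_eoe_to_eoi
    # Local volume change approximated by det(I + du/dx) of the map x -> x + u(x).
    grads = np.stack([np.stack(np.gradient(u[i], *voxel_size_cm)) for i in range(3)])
    jac = np.eye(3)[:, :, None, None, None] + grads          # (3, 3, nx, ny, nz)
    det = np.linalg.det(np.moveaxis(jac, (0, 1), (-2, -1)))  # (nx, ny, nz)
    return u, ref_density / det   # mass conservation: rho_def * V_def = rho_ref * V_ref
```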
### Comparison metrics
Dose profiles along the motion path of the phantom (S-I direction) from MC simulations were compared against film measurements. Measurement of the profiles was limited by the deformations in the superior side of the phantom where the piston is placed. Deformations of the phantom introduced a constraint on using wider film pieces to acquire 2D dose distributions. To evaluate dose profiles from simulations, a 1D gamma analysis (Low _et al_1998) with a 2% dose-difference and 2 mm distance-to-agreement criterion with film as the reference was utilized. The dose threshold used for the gamma analysis was set to 5% of the evaluated maximum dose. To match the dose grid resolution of film with the one from MC simulations (2 mm), a moving average filter was applied to the film readings. Also, point dose comparisons between MC simulations and measurements at the center of the tumor (film and RADPOS) as well as the top and bottom surfaces of the plug (RADPOS) are quoted as a percentage of the measured dose value at the point of measurement.
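For reference, a bare-bones 1D gamma evaluation along the S-I profile, following the Low et al formulation, is sketched below with the film profile as the reference; interpolation refinements and the exact normalisation conventions of the analysis actually used are omitted.

```python
import numpy as np

def gamma_1d(ref_pos_cm, ref_dose, eval_pos_cm, eval_dose, dd=0.02, dta_cm=0.2, thresh=0.05):
    """1D gamma index with a global dose-difference / distance-to-agreement criterion.

    ref_*:  film profile (reference); eval_*: MC profile (evaluated)
    dd:     dose-difference criterion as a fraction of the evaluated maximum dose (2%)
    dta_cm: distance-to-agreement criterion (0.2 cm = 2 mm)
    thresh: reference points below this fraction of the maximum dose are ignored (5%)
    """
    ref_pos, ref_dose = np.asarray(ref_pos_cm, float), np.asarray(ref_dose, float)
    eval_pos, eval_dose = np.asarray(eval_pos_cm, float), np.asarray(eval_dose, float)
    d_norm = dd * eval_dose.max()
    gamma = np.full(ref_dose.shape, np.nan)
    for i, (x, d) in enumerate(zip(ref_pos, ref_dose)):
        if d < thresh * eval_dose.max():
            continue
        capital_gamma = np.sqrt(((eval_pos - x) / dta_cm) ** 2 + ((eval_dose - d) / d_norm) ** 2)
        gamma[i] = capital_gamma.min()
    return gamma   # pass rate: np.mean(gamma[np.isfinite(gamma)] <= 1)
```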
## 3 Results
### Assessment of motion reproducibility
Measured motion magnitudes and their reproducibility in S-I, A-P and L-R directions for various diaphragm amplitudes for sinusoidal motion are shown in Figure 6.
Figure 6: Average range of motion in (a) S-I, (b) A-P and (c) L-R directions measured by RADPOS detectors at the tumor, top and bottom plug surfaces for sinusoidal motion with various diaphragm amplitudes. Error bars show the reproducibility of the motion (the combined standard deviation for one amplitude is the square root of the sum of the squares of the individual standard deviations).
The motion amplitude did not vary significantly with changes in the period of the sinusoidal motion; the variations were observed to be less than 0.1 mm in the S-I and 0.02 mm in the A-P and L-R directions. The overall reproducibility of the P-P amplitudes was measured to be within 0.2, 0.1 and 0.1 mm for all motions in the S-I, A-P and L-R directions, respectively. The S-I motion measured by all three RADPOS detectors was very similar (within 0.2 mm or 4%), while differences of almost 0.04 mm (\(\sim\) 20%) and 0.05 mm (\(\sim\) 30%) were observed in the motion measured in the A-P and L-R directions, respectively.
Maximum intra- and inter-day variations for S-I, A-P and L-R motion amplitude for all three RADPOS detectors are shown in Table 1. For the inter-day measurements, the RADPOS probes were not removed between consecutive measurements.
For the irregular respiratory motion profiles the motion amplitude varies within the motion profile. For the motion profiles shown in Figure 6, these amplitudes were calculated to be \(9.78\pm 1.19\), \(6.56\pm 1.90\) and \(13.18\pm 0.18\) mm, respectively. Reproducibility of the average P-P amplitudes as well as the amplitude variations in 3D from all simulated profiles are shown in Table 2 for the three RADPOS detectors.
\begin{table}
\begin{tabular}{c l c c c} \hline \hline & Measurement point & A-P (cm) & L-R (cm) & S-I (cm) \\ \hline & Tumor Center & 0.02 & 0.02 & 0.09 \\ Inter-day & Top surface & 0.03 & 0.02 & 0.08 \\ & Bottom surface & 0.02 & 0.02 & 0.07 \\ \hline & Tumor Center & 0.01 & 0.01 & 0.02 \\ Intra-day & Top surface & 0.02 & 0.01 & 0.03 \\ & Bottom surface & 0.02 & 0.02 & 0.02 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Inter- and intra-day amplitude variations in S-I, A-P and L-R directions for tumor, top and bottom RADPOS detectors for sinusoidal motion profiles.
Maximum inter-day variations of the amplitude are shown in Table 3 for the S-I, A-P and L-R directions for all three RADPOS detectors.
One of the applications of RADPOS as a real-time motion detector is to enable examination of the correlation between the motion of the diaphragm and the motion of any point of interest (e.g. tumor) inside the phantom.
\begin{table}
\begin{tabular}{c l c c c} \hline \hline & Measurement point & A-P (cm) & L-R (cm) & S-I (cm) \\ \hline Average P-P & Tumor Center & 0.025 & 0.012 & 0.038 \\ amplitude & Top surface & 0.023 & 0.012 & 0.039 \\ & Bottom surface & 0.022 & 0.010 & 0.037 \\ \hline P-P amplitude & Tumor Center & 0.004 & 0.004 & 0.015 \\ variations & Top surface & 0.005 & 0.004 & 0.012 \\ & Bottom surface & 0.003 & 0.006 & 0.014 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Reproducibility of the average P-P amplitude and amplitude variations in S-I, A-P and L-R directions for tumor, top and bottom RADPOS detectors for irregular motion profiles. Values represent the standard deviation of the average P-P amplitude and average value of amplitude variations across individual motion traces, respectively.
\begin{table}
\begin{tabular}{c l c c c} \hline \hline & Measurement point & A-P (cm) & L-R (cm) & S-I (cm) \\ \hline Average P-P & Tumor Center & 0.027 & 0.028 & 0.087 \\ amplitude & Top surface & 0.031 & 0.028 & 0.094 \\ & Bottom surface & 0.024 & 0.025 & 0.093 \\ \hline P-P amplitude & Tumor Center & 0.010 & 0.013 & 0.030 \\ variations & Top surface & 0.013 & 0.014 & 0.030 \\ & Bottom surface & 0.008 & 0.014 & 0.036 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Inter-day amplitude variations in S-I, A-P and L-R directions for tumor, top and bottom RADPOS detectors for irregular motion profiles.
Figure 7 shows such correlations for sinusoidal (periods of 2 and 7 s) and irregular motion profiles shown in Figure 4(a and c) in the form of hysteresis plots for the S-I direction.
### Validation of image registration
Performance of the deformable image registration algorithm was evaluated both visually (Figure 8) and quantitatively (Table 4). Mean values of registration errors with their standard deviations are shown in Table 4 for A-P, L-R and S-I direction. The overall 3D registration error is presented in the Table as well. The 3D registration error for each landmark and the three RADPOS detectors is calculated by adding errors in all three directions in quadrature.
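As described above, the per-landmark 3D error is simply the quadrature sum of the directional errors; a short illustration (with made-up numbers) is:

```python
import numpy as np

# Per-landmark registration errors in A-P, L-R and S-I (cm); values are illustrative only.
errors = np.array([[0.05, 0.04, 0.06],
                   [0.03, 0.02, 0.10]])
error_3d = np.sqrt((errors ** 2).sum(axis=1))   # quadrature sum per landmark
print(error_3d)                                 # 3D error for each landmark
print(error_3d.mean(), error_3d.std())          # mean and standard deviation
```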
\begin{table}
\begin{tabular}{c c} \hline \hline Direction of motion & Mean registration error (cm) \\ \hline A-P & \(0.05\pm 0.03\) \\ L-R & \(0.04\pm 0.03\) \\ \hline \end{tabular}
\end{table}
Table 4: Mean registration errors with their standard deviations in A-P, L-R, S-I and 3D.
Figure 7: Correlation curves between motion of the tumor and diaphragm for P-P diaphragm amplitude of 3 cm. Correlations are shown in the form of hysteresis plots for sinusoidal motion profiles with periods of (a) 4 s, (b) 7 s as well as (c) typical (Figure 4(a)) and (d) large hysteresis (Figure 4(c)) respiratory motion profiles.
The maximum and minimum 3D error values were measured to be 0.18 and 0.04 cm, respectively. The limiting factor in achieving a better registration accuracy was the resolution of the CT image slices which was 0.2 cm.
Figures 8(a) and 8(b) show the overlaid EOI and EOE images before and after registration. The difference map shown in Figure 8(c) presents the differences between the deformed EOE image and the EOI image. Regions in light gray represent the lowest differences between the deformed EOE and EOI images. In Figure 8(d) the Jacobian map of the registration is representative of how voxels change in size once deformations are applied. In this colormap, green regions represent no volume change between the deformed EOE and EOI images. Blue and red regions represent shrinkage (reduced voxel size) and growth (enlarged voxel size), respectively.
### Dosimetric comparisons
Figure 9 shows sample dose profiles from MC simulations and film measurements for the respiratory traces previously shown in Figure 4. Corresponding 1D gamma passing rates (2%/2 mm) are shown in Table 5. In addition, average values of gamma passing rates for all sets of irradiations were calculated and are shown in this table.
Figure 8: Visual evaluation of the deformable image registration on coronal view: (a) non-deformed EOE (gray) overlaid on EOI (pink), (b) deformed EOE overlaid on EOI, (c) deformed EOE subtracted from EOI and (d) Jacobian of the registration.
Also, dose values measured and simulated at the center of tumor are shown in Tables 6 with their corresponding statistical and experimental uncertainties.
\begin{table}
\begin{tabular}{l l c c} \hline \hline & & \multicolumn{2}{c}{2\%/2 mm 1D gamma passing rate (\%)} \\ \cline{3-4} Plan & Motion & Single profile & Average of 3 \\ & & & measurements \\ \hline & Typical & & \\ & (Figure 9(top-left)) & 100.0 & \(97.7\pm 2.9\) \\ \cline{2-4} Static 3\(\times\)3 cm\({}^{2}\) & Highly irregular (Figure 9(middle-left)) & 98.9 & \(98.0\pm 2.6\) \\ \cline{2-4} & Large hysteresis & & \\ & (Figure 9(bottom-left)) & 100.0 & \(99.3\pm 1.2\) \\ \hline & Typical & & \\ & (Figure 9(top-right)) & 99.7 & \(97.8\pm 1.9\) \\ \cline{2-4} VMAT & Highly irregular (Figure 9(middle-right)) & 98.9 & \(98.1\pm 1.3\) \\ \cline{2-4} & Large hysteresis & & \\ & (Figure 9 (bottom-right)) & 100.0 & \(97.9\pm 1.8\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Passing rates of 1D gamma comparisons of 2%/2 mm criteria for MC simulations against film measurements on the breathing deformable phantom for typical, highly irregular and large hysteresis respiratory motion profiles. First column of gamma passing rates corresponds to the dose profiles shown in Figure 9. The average values from all irradiation sets are shown in the second column.
Figure 9: Comparison of dose profiles for 3\(\times\)3 cm\({}^{2}\) (left) and VMAT (right) beam deliveries on the breathing deformable phantom along the S-I direction, for the typical (Figure 4(a)) (top row), highly irregular (Figure 4(b)) (middle row) breathing motions as well the breathing motion with large hysteresis (Figure 4(c)) (bottom row).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & & & & \multicolumn{2}{c}{Dose (cGy)} \\ \cline{5-6} & & & & \multicolumn{2}{c}{Measurements} \\ \cline{5-6} Plan & Motion & Irradiation & MC & Film & RADPOS \\ & & \# & & & \\ \hline \multirow{6}{*}{Static 3\(\times\)3 cm\({}^{2}\)} & Large & - & \(72.1\pm 0.4\)\% & \(71.8\pm 2.3\)\% & \(73.5\pm 2.4\)\% \\ & hysteresis & & & & \\ \cline{2-6} & & 1 & \(70.9\pm 0.4\)\% & \(70.9\pm 2.3\)\% & \(70.9\pm 2.4\)\% \\ & Typical & 2 & \(77.3\pm 0.4\)\% & \(78.7\pm 2.3\)\% & \(77.8\pm 2.4\)\% \\ & & 3 & \(79.8\pm 0.4\)\% & \(79.5\pm 2.3\)\% & \(79.4\pm 2.4\)\% \\ \cline{2-6} & Highly & 1 & \(97.4\pm 0.4\)\% & \(98.6\pm 2.3\)\% & \(97.2\pm 2.4\)\% \\ & irregular & 2 & \(93.2\pm 0.4\)\% & \(95.0\pm 2.3\)\% & \(95.1\pm 2.4\)\% \\ & & 3 & \(88.6\pm 0.4\)\% & \(90.4\pm 2.3\)\% & \(91.1\pm 2.4\)\% \\ \hline \multirow{6}{*}{VMAT} & Large & - & \(98.8\pm 0.4\)\% & \(98.9\pm 2.3\)\% & \(98.0\pm 2.4\)\% \\ & hysteresis & & & & \\ \cline{2-6} & & 1 & \(98.8\pm 0.4\)\% & \(99.3\pm 2.3\)\% & \(100.0\pm 2.4\)\% \\ \cline{2-6} & Typical & 2 & \(99.4\pm 0.4\)\% & \(99.6\pm 2.3\)\% & \(100.5\pm 2.4\)\% \\ & & 3 & \(97.8\pm 0.4\)\% & \(97.3\pm 2.3\)\% & \(100.3\pm 2.4\)\% \\ \cline{2-6} & & 1 & \(104.3\pm 0.4\)\% & \(104.5\pm 2.3\)\% & \(104.3\pm 2.4\)\% \\ \cline{2-6} & Highly & & & & \\ \cline{2-6} & irregular & 2 & \(102.0\pm 0.4\)\% & \(102.6\pm 2.3\)\% & \(101.2\pm 2.4\)\% \\ \cline{2-6} & & 3 & \(101.0\pm 0.4\)\% & \(103.0\pm 2.3\)\% & \(100.0\pm 2.4\)\% \\ \hline \hline \end{tabular}
\end{table}
Table 6: Dose values at the center of tumor from measurements with film and RADPOS as well as MC simulations on the breathing deformable phantom during typical, highly irregular and large hysteresis respiratory motion profiles.
From the values in Table 6 we can see that for the majority of the irradiations, the measured and simulated dose values at the center of the tumor agree within 2% of each other. Exceptions are the third beam deliveries of the static 3\(\times\)3 cm\({}^{2}\) plan for the highly irregular motion and the VMAT plan for the typical respiratory motion, which show agreement within 3% between MC simulations and RADPOS measurements; this still lies within 2\(\sigma\) of the experimental uncertainties (4.8%). Overall, for the irradiations performed with the large hysteresis respiratory motion trace, the same level of agreement (i.e. better than 3%) was found. In addition, from the dose profiles shown in Figure 9, we can see that sharp dose gradients exist in the delivered dose due to respiratory motion. This is especially prominent for the 3\(\times\)3 cm\({}^{2}\) plan, which adds an intrinsic dose gradient compared to the VMAT plan. For some motion traces (typical and large hysteresis), the total positional/reading uncertainties of dose values can be as high as 8% for the 3\(\times\)3 cm\({}^{2}\) plan, which is observable from the level of dose gradient in the profiles in Figures 9(top-left) and 9(bottom-left). These uncertainties are calculated by incorporating the percentage point dose differences within a 2 mm voxel, including positional uncertainties in 3D (Left-Right, Sup-Inf and Ant-Post). Considering that the largest component of the motion happens in the S-I direction, such uncertainties could be twice as large as the overall values if only the S-I direction is taken into consideration. For the highly irregular motion trace, on the other hand, these uncertainties did not exceed 3-4%. Table 7 presents dose values measured and simulated at the bottom surface of the plug with their corresponding experimental and statistical uncertainties.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & & & \multicolumn{3}{c}{Dose (cGy)} \\ \cline{4-5} & & & \multicolumn{3}{c}{Bottom surface} \\ \cline{4-5} Plan & Motion & Irradiation & MC & RADPOS \\ & & \# & & \\ \hline \multirow{6}{*}{Static 3\(\times\)3 cm\({}^{2}\)} & Large & - & \(55.1\pm 0.4\)\% & \(53.5\pm 2.4\)\% \\ & hysteresis & & & \\ \cline{1-1} \cline{2-5} & & 1 & \(51.9\pm 0.4\)\% & \(51.6\pm 2.4\)\% \\ \cline{1-1} \cline{2-5} & Typical & 2 & \(56.6\pm 0.4\)\% & \(55.0\pm 2.4\)\% \\ \cline{1-1} & & 3 & \(58.2\pm 0.4\)\% & \(58.3\pm 2.4\)\% \\ \cline{1-1} \cline{2-5} & Highly & 1 & \(77.2\pm 0.4\)\% & \(77.9\pm 2.4\)\% \\ \cline{1-1} & irregular & 2 & \(75.3\pm 0.4\)\% & \(77.3\pm 2.4\)\% \\ \cline{1-1} & & 3 & \(71.3\pm 0.4\)\% & \(73.6\pm 2.4\)\% \\ \hline \multirow{2}{*}{VMAT} & Large & - & \(65.7\pm 0.4\)\% & \(63.5\pm 2.4\)\% \\ \cline{1-1} & hysteresis & & & \\ \hline \hline \end{tabular}
\end{table}
Table 7: Dose values at the bottom surface of the plug from MC simulations and RADPOS measurements on the breathing deformable phantom during typical, highly irregular and large hysteresis respiratory motion profiles.
Measured and simulated dose values as well as their experimental and statistical uncertainties at the top surface of the plug are shown in Table 8. An agreement of better than 5% is observed for the values shown in this table. This level of agreement is consistent with the fact that this dose point is initially positioned in a high dose gradient region. It should be noted that since this point dose is placed close to the edge of the beam, the motion results in a larger drop in the dose value compared to the dose values at the tumor center and the bottom surface. The overall dose reading/positional uncertainties are approximately 12% and 10% for the 3\(\times\)3 cm\({}^{2}\) and VMAT plan deliveries, while considering only the S-I direction these values can be as high as 20%. On average, the same level of dose difference was observed for all irradiations with the large hysteresis respiratory motion profile, with some dose differences slightly larger than 5%.
In order to explain the dose differences observed between the three irradiations (beam-on at 0, 40 and 90 s, respectively, after the motion starts) for the typical and highly irregular motion profiles, respiratory traces recorded with RADPOS during beam deliveries were extracted. The fraction of time that was spent in each respiratory phase is shown in Figures 10 and 11 for these two traces, respectively, for both plan deliveries.
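The phase-occupancy fractions plotted in Figures 10 and 11 amount to a histogram of the normalised RADPOS displacement recorded during beam-on; a hedged sketch of that computation is given below, where the bin edges, the 10-phase convention and the normalisation direction are our own assumptions for illustration.

```python
import numpy as np

def phase_fractions(radpos_trace, n_phases=10):
    """Fraction of beam-on time spent in each breathing phase.
    radpos_trace : displacement samples recorded by RADPOS during delivery.
    The trace is normalised to [0, 1]; the mapping of 0/1 to EOI/EOE is an assumed convention.
    """
    trace = np.asarray(radpos_trace, dtype=float)
    norm = (trace - trace.min()) / (trace.max() - trace.min())
    counts, _ = np.histogram(norm, bins=n_phases, range=(0.0, 1.0))
    return counts / counts.sum()

# Example with a synthetic sinusoidal trace sampled every 100 ms for 15 s (static delivery)
t = np.arange(0, 15, 0.1)
trace = 0.5 * (1 - np.cos(2 * np.pi * t / 4.0))
print(phase_fractions(trace))
```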
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & & & \multicolumn{2}{c}{Dose (cGy)} \\ \cline{3-5} & & & \multicolumn{2}{c}{Top surface} \\ \cline{3-5} Plan & Motion & Irradiation & MC & RADPOS \\ \cline{3-5} & & \# & & \\ \hline Large & - & \(18.0\pm 0.4\)\% & \(18.5\pm 2.2\)\% \\ hysteresis & & & & \\ \hline Typical & 1 & \(13.8\pm 0.4\)\% & \(13.2\pm 2.2\)\% \\ Typical & 2 & \(15.4\pm 0.4\)\% & \(15.0\pm 2.2\)\% \\ & 3 & \(16.4\pm 0.4\)\% & \(15.9\pm 2.2\)\% \\ \hline Highly & 1 & \(17.1\pm 0.4\)\% & \(17.9\pm 2.2\)\% \\ irregular & 2 & \(17.4\pm 0.4\)\% & \(18.2\pm 2.2\)\% \\ & 3 & \(14.8\pm 0.4\)\% & \(15.1\pm 2.2\)\% \\ \hline Large & - & \(22.9\pm 0.4\)\% & \(23.6\pm 2.2\)\% \\ hysteresis & & & \\ \hline Typical & 1 & \(21.8\pm 0.4\)\% & \(22.6\pm 2.2\)\% \\ Typical & 2 & \(23.8\pm 0.4\)\% & \(24.0\pm 2.2\)\% \\ & 3 & \(22.3\pm 0.4\)\% & \(23.4\pm 2.2\)\% \\ \hline Highly & 1 & \(33.2\pm 0.4\)\% & \(34.1\pm 2.2\)\% \\ irregular & 2 & \(31.8\pm 0.4\)\% & \(32.5\pm 2.2\)\% \\ & 3 & \(32.9\pm 0.4\)\% & \(33.8\pm 2.2\)\% \\ \hline \hline \end{tabular}
\end{table}
Table 8: Dose values at the top surface of the plug from MC simulations and RADPOS measurements on the breathing deformable phantom during typical, highly irregular and large hysteresis respiratory motion profiles. This RADPOS was placed at an offset position of 1 cm from the tumor center.
Figure 10: Fraction of the time spent in each breathing phase during 3\(\times\)3 cm\({}^{2}\) (top) and VMAT (bottom) beam deliveries on the deforming phantom for three irradiations with the typical respiratory motion profile
shown in Figure 4(a). Breathing phase is the RADPOS normalized displacement that was recorded during beam deliveries.
From the values shown in Figure 10 we can see that 60% of the first static 3\(\times\)3 cm\({}^{2}\) beam delivery happened while the phantom was in the EOE (fully compressed, or breathing phase 1) state, whereas for the other two beam deliveries this value reduces to approximately 50%. When studied in more detail, it was observed that for the first irradiation 45% of the delivery time was spent in the EOE state without any transitions to EOI. This explains the lower dose values in the first irradiation compared to the two other irradiations. As for the VMAT deliveries, it can be seen that almost equal times are spent in each phase for the three irradiations, since VMAT plans take longer to deliver than static plans (\(\sim\)67 s vs \(\sim\)15 s).
For the static 3\(\times\)3 cm\({}^{2}\) beam delivery of the highly irregular motion profile, it can be seen in the top plot of Figure 11 that over 70% of the beam delivery occurred while the phantom was in the early- and mid-inhale states. However, for the last irradiation, only 40% of the beam was delivered in these states and the remaining 60% was delivered while the phantom was in the mid-exhale and EOE states. As a result, the dose delivered during this beam delivery was lower compared to the first two deliveries. In the bottom plot, related to the VMAT delivery, it can be seen that approximately 75% of the first irradiation happened while the phantom was in the early- and mid-inhale states, which is almost 10-15% higher than the corresponding values for the last two irradiations. As a result, some dose differences between the deliveries are seen in this case.
## 4 Discussion
In this study, 4D Monte Carlo simulations using the 4DdefDOSXYZnrc user code were validated for calculating the dose delivered to a breathing deformable lung phantom during static and VMAT beam deliveries while the phantom moved with irregular breathing motions. Film was placed inside the lung inserts of the phantom to measure the dose distribution along the motion path (S-I). Point dose measurements with RADPOS were performed inside the GTV (i.e. tumor). Point doses were also measured outside the tumor (still inside the lung). Displacements recorded with RADPOS inside the GTV as well as DVFs generated by DIR were used as input to 4DMC simulations to model the motions/deformations of the phantom.
Our results (Figure 9, Tables 5-8) showed that point dose values at the center of the tumor from MC simulations and film measurements agreed within 2% of each other. Dose differences between RADPOS measurements and MC simulations did not exceed 3%, which is not larger than 2\(\sigma\) of the experimental uncertainties of the RADPOS measurements. As for the dose points outside the tumor (i.e. top and bottom surfaces of the plug), simulations and measurements were found to have an average agreement of 6% or better. These agreements were found to be within the calculated positional uncertainties of these dose points. The agreements between the simulated and measured dose profiles (along the S-I direction) were good as well. Gamma comparisons with 2%/2 mm criteria showed an overall passing rate of almost 94% or better.
In order to investigate the impact of starting phase of the respiratory cycle, treatments for deliveries of both plans were started 0, 40 and 90 s after the deformable phantom started to move with typical and highly irregular respiratory profiles. Different doses can be measured depending on the amount of time a target volume spends in EOI, EOE or transitions between these two states. In this work it was observed that for static plan deliveries (Figures 10&11 (top)), the dose delivered to the tumor as well as the bottom surface of the plug could change by 10-12% once they spent almost 45% of the delivery time in the EOE without any transitions to EOI. Similar fractions of time spent in EOE resulted in a dose change of 15-20% at the dose point on the top surface of the tumor due to the
Figure 11: Fraction of the time spent in each breathing phase during 3\(\times\)3 cm\({}^{2}\) (top) and VMAT (bottom) beam deliveries on the deforming phantom for three irradiations with the highly irregular respiratory motion profile shown in Figure 4(b). Breathing phase is the RADPOS normalized displacement that was recorded during beam deliveries.
fact that it was in the penumbra region of the beam. For VMAT deliveries (Figures 10&11 (bottom)), the longer treatment times compared to static deliveries help reduce the impact of differences in the respiratory cycle during several deliveries of the same treatment plan. As a result, dose differences between deliveries may not be as large as those seen for static treatment plans. In the case of VMAT deliveries for the typical breathing trace, where the target volume spends almost equal amounts of time in EOE and EOI during different deliveries, dose differences of less than 1% were observed. On the other hand, with highly irregular respiratory traces the case could be different. Differences of almost 3-4% were observed for this respiratory profile between the VMAT beam delivery that spent 10-15% more time in the early- and mid-inhale states and the other two deliveries. These results highlight the importance of accurate detection of the starting phase of the breathing cycle and how it may impact 4D dose calculations. In this work, uncertainty in the PC clock synchronization between RADPOS and the linac as well as system delays caused by the temporal resolution of RADPOS (100 ms) are the two main sources that can affect proper detection of the start of the breathing trace.
Comparison of dose values from the static beam deliveries in our previous study (Gholampourkashi _et al_2018b), with sinusoidal respiratory motion, and the current study, with the large hysteresis respiratory motion trace, revealed an approximate decrease of 15% in the dose delivered to the GTV (i.e. center of tumor). This result was expected considering the time spent in each breathing phase during beam deliveries for each of these motion profiles. While moving with the sinusoidal motion, this dose point spends almost equal amounts of time transitioning from EOI to EOE and from EOE to EOI. However, these fractions change to 20% and 80% when the phantom moves with the large hysteresis respiratory motion trace.
A current limitation of our 4DMC tool is that it relies on deformable registration between only the two extreme respiratory phases (i.e. EOE and EOI) to model the anatomical motion. Although this was found to be adequate for modeling the deformations of our phantom, it will not be able to model the hysteresis (Suh _et al_2008) that can occur in patient respiratory motion. We are working to extend the motion model to include deformation vectors determined from registrations between multiple respiratory phases of a 4DCT dataset, with the aim of applying it to patient 4D dose reconstruction.
## 5 Conclusions
We investigated and established the accuracy of 4D Monte Carlo simulations, using the EGSnrc/4DdefDOSXYZnrc user code, of the dose delivered to a programmable deforming phantom in the presence of realistic breathing motion. Measurements were performed on an Elekta Infinity linac equipped with an Agility MLC during static square and VMAT plan deliveries. Delivery log files were used to reproduce the measurements during these deliveries. Our findings demonstrate that combining the motion recorded by RADPOS with DVFs generated by a reliable DIR algorithm in our 4DMC simulations leads to accurate calculation of the cumulative dose delivered to a deforming anatomy undergoing irregular breathing motion.
2310.09297 | A Framework for Inference Inspired by Human Memory Mechanisms | How humans and machines make sense of current inputs for relation reasoning
and question-answering while putting the perceived information into context of
our past memories, has been a challenging conundrum in cognitive science and
artificial intelligence. Inspired by human brain's memory system and cognitive
architectures, we propose a PMI framework that consists of perception, memory
and inference components. Notably, the memory module comprises working and
long-term memory, with the latter endowed with a higher-order structure to
retain extensive and complex relational knowledge and experience. Through a
differentiable competitive write access, current perceptions update working
memory, which is later merged with long-term memory via outer product
associations, reducing information conflicts and averting memory overflow. In
the inference module, relevant information is retrieved from two separate
memory origins and associatively integrated to attain a more comprehensive and
precise interpretation of current perceptions. We exploratively apply our PMI
to improve prevailing Transformers and CNN models on question-answering tasks
like bAbI-20k and Sort-of-CLEVR datasets, as well as detecting equilateral
triangles, language modeling and image classification tasks, and in each case,
our PMI enhancements consistently outshine their original counterparts
significantly. Visualization analyses reveal that relational memory
consolidation, along with the interaction and integration of information from
diverse memory sources, substantially contributes to the model effectiveness on
inference tasks. | Xiangyu Zeng, Jie Lin, Piao Hu, Ruizheng Huang, Zhicheng Zhang | 2023-10-01T08:12:55Z | http://arxiv.org/abs/2310.09297v2 | # Understanding AI Cognition: A Neural Module for Inference Inspired by Human Memory Mechanisms
###### Abstract
How humans and machines make sense of current inputs for relation reasoning and question-answering while putting the perceived information into the context of our past memories has been a challenging conundrum in cognitive science and artificial intelligence. Inspired by the human brain's memory system and cognitive architectures, we propose a PMI framework that consists of perception, memory and inference components. Notably, the memory module comprises working and long-term memory, with the latter endowed with a higher-order structure to retain more accumulated knowledge and experiences. Through a differentiable competitive write access, current perceptions update working memory, which is later merged with long-term memory via outer product associations, averting memory overflow and minimizing information conflicts. In the inference module, relevant information is retrieved from two separate memory origins and associatively integrated to attain a more comprehensive and precise interpretation of current perceptions. We exploratively apply our PMI to improve prevailing Transformers and CNN models on question-answering tasks like the bAbI-20k and Sort-of-CLEVR datasets, as well as relation calculation and image classification tasks, and in each case, our PMI enhancements consistently outshine their original counterparts significantly. Visualization analyses reveal that memory consolidation, along with the interaction and integration of information from diverse memory sources, substantially contributes to the model effectiveness on inference tasks.
## 1 Introduction
Cognitive science, neuroscience and AI (artificial intelligence) collectively advance our grasp of intelligence, defined as the general mental abilities of perception, memory and reasoning, each with a unique role in human cognition. To construct more human-like intelligent systems, often referred to as the standard model of the mind (Laird et al., 2017), it is imperative to delve into the interactions among perception, memory, and reasoning in a unified system. Recently, scholars have uncovered a significant flaw in previous deep learning architectures: the absence of dedicated memory module that is critical for long-term information retention and relational reasoning. This drawback becomes evident when considering the constraints of many intelligent systems, which either exclusively concentrate on perception and reasoning or intricately interweave computation with implicit memory. Therefore, many memory-based studies have emerged, mainly focusing on designing item-based memory models with recurrent neural networks (RNNs) (Hopfield, 1982; Hochreiter & Schmidhuber, 1997; Dai et al., 2019; Ramsauer et al., 2020; Schlag et al., 2021) and memory-augmented neural networks (MANNs) (Graves et al., 2014, 2016; Le et al., 2018; Liang et al., 2023).
Nonetheless, existing approaches expose four limitations: (_i_) Implicit memory (hidden state) may gradually lose previous information as the model constantly updates its weights to accommodate new inputs, which prevents reusing the precomputed relations in sequential tasks (Vaswani et al., 2017; Santoro et al., 2017; Devlin et al., 2018). (_ii_) The memory system is configured in one of two
forms: either as a singular memory unit without hierarchical construction or as multiple separate memory components with identical data structures, both of which struggle to align with human memory traits and achieve robust generalization (Goyal et al., 2022; Dai et al., 2019; Jaegle et al., 2021; Wu et al., 2022; Kang et al., 2023; Liang et al., 2023). (_iii_) The memory-memory relation is either crude, expressed as weighted summation via neural networks or dot product attention, or it undergoes intricate memory transformation algorithms. (Vaswani et al., 2017; Santoro et al., 2018). (_iv_) Memory exploitation is confined to rudimentary retrieval, whether it's content-based addressing (Wu et al., 2020; Goyal et al., 2022; Kang et al., 2023) or explicit address (Graves et al., 2016; Liang et al., 2023). Arguably, modern MANNs have yet to develop general architectural frameworks for learning both diverse memory components and how they should interact internally and externally.
Multiple Memory Systems Theory (MMS) asserts that working memory (WM) and long-term memory (LTM) stand as pivotal subassemblies of human cognitive processes (Atkinson and Shiffrin, 1968; Baddeley and Hitch, 1974; Eichenbaum and Cohen, 2004), where the former serves to temporarily buffer and process data for current tasks, while the latter is responsible for the retention of enduring knowledge and experiences. Additionally, the Global Workspace Theory (GWT) (Baars, 1993; Dehaene et al., 2021) suggests a communication and coordination scheme, in which disparate cognitive units write information into a shared workspace that is broadcast to all modules, along with the notion that write access is restricted.
Inspired by the MMS, GWT and cognitive theories, we assume that optimizing the structure of the memory module and its internal and external correspondence mechanisms holds great promise in surmounting the extant restrictions. Accordingly, we propose a cognitive framework called PMI that consists of perception, memory and reasoning modules, wherein memory is posited as a dual-layer memory block featuring distinct communication principles for its interior and exterior. More concretely, in terms of its structure, WM exists separately from LTM, with the latter possessing a higher-order structure to preserve intricate patterns and relations. When it comes to interactions, there are two exterior procedures: perception-based competitive writing and inference-oriented information service, alongside one inner channel designed to establish heterogeneous associations between the two memory units to facilitate efficient information filtering, storage, and knowledge consolidation. We apply modern neural network architectures, such as attention-based Transformers (Vaswani et al., 2017; Brown et al., 2020) and convolutional networks (He et al., 2016), all equipped with our dual-memory module, to multifarious tasks that may require both WM and LTM: text and visual question-answering, visual relations calculation and image classification. Across all these tasks, models integrated with our memory block consistently outperform their original counterparts.
## 2 Method
### Overview
An overview of our PMI framework is illustrated in Fig. 1(a), which contains three pivotal components: perception, memory and inference (each potentially learned). Given an input \(X\) (e.g., text, an image, or an audio signal), it is processed through a series of computational stages indexed by \(t\) to derive the cognitive understanding \(U\) of the current perception, as outlined below:
1. \(P\) component: (Perception) -- Convert the incoming input \(X\) to an internal feature representation \(H=\mathcal{P}(X)\).
2. \(M\) component: (Memory) -- Update old memories given the input representation \(H^{t-1}\): \(M^{t}=\mathcal{M}(H^{t-1},M^{t-1})\).
3. \(I\) component: (Inference) -- Reason (interpret) the current content given the updated memory: \(U=\mathcal{I}(H^{t-1},M^{t})\).
In this framework, trainable parameters are learned through backpropagation, while memory blocks are updated solely through the forward process, which constitute the process of memory precipitation through multiple iterations. A more elaborate description of our method is presented as follows.
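A high-level sketch of one such computation step is given below. The component callables, tensor shapes and the toy usage at the end are all placeholders of our own rather than the authors' code; the only point illustrated is that the memory tensors are updated inside the forward pass while gradients flow through the learned perception, memory-update and inference parameters.

```python
import torch
import torch.nn as nn

class PMIStep(nn.Module):
    """Schematic perception-memory-inference step (names and shapes are illustrative)."""
    def __init__(self, perceive, mem_update, infer):
        super().__init__()
        self.perceive = perceive      # P: raw input -> representation H
        self.mem_update = mem_update  # M: (H, M_w, M_l) -> (M_w', M_l'), forward-only write
        self.infer = infer            # I: (H, M_w', M_l') -> understanding U

    def forward(self, x, m_w, m_l):
        h = self.perceive(x)
        m_w, m_l = self.mem_update(h, m_w, m_l)
        u = self.infer(h, m_w, m_l)
        return u, m_w, m_l

# Toy usage with placeholder callables standing in for the real components.
step = PMIStep(perceive=lambda x: x,
               mem_update=lambda h, mw, ml: (mw + h.mean(0), ml),
               infer=lambda h, mw, ml: h + mw.mean(0))
u, m_w, m_l = step(torch.randn(4, 16), torch.zeros(3, 16), torch.zeros(2, 3, 16))
print(u.shape)   # torch.Size([4, 16])
```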
### perception
The perceptual operation maps the original input data to internal entity representations. Focusing on the prevailing models and taking Transformers as an example, text inputs undergo embedding and positional encoding to yield the initial feature representation \(h^{0}\in\mathbb{R}^{T\times D}\), where \(T\) is the sequence length and \(D\) the dimension. In the ViT model, a \(2D\) image \(x\in\mathbb{R}^{H\times W\times C}\) is split into \(N\) patches \(x_{p}\in\mathbb{R}^{N\times(P^{2}\cdot C)}\), each of which is linearly embedded. Then positional embeddings are added to obtain the final embedding vector \(h_{0}=[\,x_{class};\,x_{p}^{1}E;\,x_{p}^{2}E;\,\cdots;\,x_{p}^{N}E\,]+E_{pos}\), \(E\in\mathbb{R}^{(P^{2}\cdot C)\times D}\), \(E_{pos}\in\mathbb{R}^{(N+1)\times D}\), which contains fundamental feature information of the image, such as color and texture, where \((H,W)\) is the resolution of the original image, \(C\) is the number of channels and \((P,P)\) is the resolution of each image patch. This process resembles the human perceptual system that receives external information and converts it into understandable internal representations, laying a foundation for subsequent memory and reasoning.
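For the visual case, this perception step is essentially the standard ViT-style patch embedding; a compact sketch, with illustrative hyper-parameters, is:

```python
import torch
import torch.nn as nn

class PatchPerception(nn.Module):
    """Split an image into P x P patches, linearly embed them, prepend a class token
    and add learned positional embeddings (standard ViT-style perception)."""
    def __init__(self, img_size=32, patch=4, in_ch=3, dim=128):
        super().__init__()
        n = (img_size // patch) ** 2
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)  # patch embedding E
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))                     # x_class
        self.pos = nn.Parameter(torch.zeros(1, n + 1, dim))                 # E_pos

    def forward(self, x):                                     # x: (B, C, H, W)
        p = self.proj(x).flatten(2).transpose(1, 2)           # (B, N, D)
        cls = self.cls.expand(x.size(0), -1, -1)
        return torch.cat([cls, p], dim=1) + self.pos          # h_0 = [x_class; x_p E] + E_pos

h0 = PatchPerception()(torch.randn(2, 3, 32, 32))
print(h0.shape)   # torch.Size([2, 65, 128])
```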
### Global Shared Dual-Level Memory
This section provides a detailed exposition of the proposed dual-layer memory module and the internal and external communication mechanisms utilized for its update, as illustrated in Fig. 1(b). We posit that the memory module should be globally shared, including the working memory \(M_{w}\), which temporarily stores and processes information required for the current task, and the long-term memory \(M_{l}\), which persistently stores knowledge and experiences. We opt for \(M_{w}\in\mathbb{R}^{N\times D_{m}}\) as the form of WM, which is indexed by slots \(m_{i}\). Here, \(N\) is the number of slots, each with a dimension of \(D_{m}\). The LTM, in contrast, is represented as a \(3D\) structure \(M_{l}\in\mathbb{R}^{C\times N\times D_{m}}\), with \(C\) denoting the number of memory fragments, which is biologically plausible (Marr and Thach, 1991).
#### 2.3.1 External channel: \(\mathcal{M}_{w}\)-Write
External communication serves to update the contents of WM via two pivotal steps: competitive writing and forgetting, which are informed by a fundamental aspect of the human memory system -- our inclination not to retain a permanent record of every perception but rather to discerningly preserve essential elements within memory. Collectively, these processes guarantee the storage of the most critical information pertinent to the ongoing task, an indispensable facet in tasks involving reasoning.
_Write with competition_ This process aims to selectively inscribe perceived inputs into \(M_{w}\), which has finite capacity, also inspired by Miller's Law, which states that the number of information units that human WM can handle simultaneously is limited, often around \(7\pm 2\). We use a multi-headed sparse cross-attention mechanism (MHSC) for this execution, as expressed in Eq. 1, 2. MHSC is cognate to the MH attention mechanism used in Transformers, but exhibits two distinctive aspects: (_i_) it necessitates separate origins for Q and K and (_ii_) it introduces a sparsity-inducing operation on the attention weight matrix. Specifically, the result of the \(t-1\) step \(h^{t-1}\in\mathbb{R}^{T\times D}\) is projected into keys and
Figure 1: Model overview and the process of grasping the current input at calculation step \(t\). (a) The memory module consists of WM \(M_{w}\) and LTM \(M_{l}\), each characterized by distinct data structures. (b) WM is updated by the current perception via a differentiable and constrained write access, and is then integrated into LTM through outer product association. (c) The inference component retrieves pertinent data from both WM and LTM using content-based addressing (MHC and MHSC). Subsequently, through integration steps, it consolidates information from these sources to generate fresh insights into the input, which are used for the next rounds of inference or to directly support the decision-making process.
values, along with \(M_{w}^{t-1}\in\mathbb{R}^{N\times D_{m}}\) that is projected into queries. The current inputs compete to write through our MHSC, in conjunction with some other operations to yield the intermediate state \(\widetilde{M}_{w}^{t}\). The whole formulas are as follows:
\[s_{k}=softmax\left(\frac{M_{w}^{t-1}W^{Q}(h^{t-1}W^{K})^{T}}{\sqrt{d^{K}}}\right) \tag{1}\]
\[\widetilde{M}_{w}^{t}=s_{k}^{*}h^{t-1}W^{V} \tag{2}\]
\[\widetilde{M}_{w}^{t}=LN_{1}(\widetilde{M}_{w}^{t}+M_{w}^{t-1}) \tag{3}\]
\[\widetilde{M}_{w,i}^{t}=ReLU(MLP_{i}(\widetilde{M}_{w,i-1}^{t})),\quad i\in\{1,\dots,k\} \tag{4}\]
\[\widetilde{M}_{w}^{t}=LN_{2}(M_{w}^{t-1}+\widetilde{M}_{w,k}^{t}) \tag{5}\]
It's noteworthy that the input needs to be linearly projected to the same dimension \(D_{m}\) as \(M_{w}^{t-1}\) (following the traditional practice of \(D=D_{m}\)). \(W^{Q}\), \(W^{K}\) and \(W^{V}\) are weight matrices. \(s_{k}\in\mathbb{R}^{N\times(T+N)}\) is the attention weight scores of \(M_{w}^{t-1}\) and \(h^{t-1}\). Unlike the standard soft competition, we use a top-k softmax (Ke et al., 2018) to select a fixed number of entities for updating the \(M_{w}\). \(s_{k}^{*}\) denotes the pre-softmax value, please consult Algorithm 1 for details. \(LN_{1}\) and \(LN_{2}\) signify different LayerNorms, employed to uphold memory stability over prolonged time steps. \(ReLU\) is the \(ReLU\) function, \(MLP_{i}\) is the \(i^{th}\) multilayer perceptron and \(\widetilde{M}_{w,k}^{t}\) is the intermediate output through \(k\) multilayer perceptrons.
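A single-head version of the competitive write (Eq. 1-5) can be sketched as below. The exact top-k masking (Algorithm 1) and the MLP stack are simplified, so this should be read as one plausible interpretation rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompetitiveWrite(nn.Module):
    """Single-head sparse cross-attention write into working memory (illustrative)."""
    def __init__(self, dim, k=5):
        super().__init__()
        self.q, self.kk, self.v = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.k = k

    def forward(self, h, m_w):                       # h: (T, D) inputs, m_w: (N, D) WM slots
        scores = self.q(m_w) @ self.kk(h).T / h.size(-1) ** 0.5   # (N, T) slot-vs-input scores
        topk = scores.topk(self.k, dim=-1)
        mask = torch.full_like(scores, float('-inf')).scatter(-1, topk.indices, topk.values)
        attn = F.softmax(mask, dim=-1)               # only the top-k inputs compete for each slot
        m_tilde = self.ln1(attn @ self.v(h) + m_w)   # Eq. 2-3
        m_tilde = self.ln2(m_w + self.mlp(m_tilde))  # Eq. 4-5 (single MLP here)
        return m_tilde

m = CompetitiveWrite(64)(torch.randn(10, 64), torch.randn(8, 64))
print(m.shape)   # torch.Size([8, 64])
```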
_Forgetting_ Memory forgetting entails the elimination or reduction of previously stored data to make space for new info, optimizing memory performance. It is reasonable to adopt the gating mechanism since it emulates the biological memory process and effectively alleviates information conflicts. This is implemented in Eq. 6, where \(I_{t}\) and \(F_{t}\) indicate the input and forget gates respectively, as proposed in RMC (Santoro et al., 2018). Further details can be found in Appendix C.1.
\[M_{w}^{t}=F_{t}(M_{w}^{t-1},h^{t-1})\odot M_{w}^{t-1}+I_{t}(M_{w}^{t-1},h^{t-1 })\odot\widetilde{M}_{w}^{t} \tag{6}\]
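The forgetting step (Eq. 6) then blends the candidate state with the previous memory through input and forget gates. A minimal sketch follows, in which the gates are conditioned on the previous memory and a pooled summary of the input; the actual gate parameterisation follows RMC and is given in the paper's Appendix C.1, so this particular design is an assumption.

```python
import torch
import torch.nn as nn

class GatedForget(nn.Module):
    """Input/forget gating of working memory (Eq. 6); the gate design here is an assumption."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 2 * dim)

    def forward(self, m_prev, m_tilde, h):                        # (N, D), (N, D), (T, D)
        ctx = h.mean(dim=0, keepdim=True).expand_as(m_prev)       # pooled summary of the input
        i_gate, f_gate = self.gate(torch.cat([m_prev, ctx], -1)).chunk(2, dim=-1)
        i_gate, f_gate = torch.sigmoid(i_gate), torch.sigmoid(f_gate)
        return f_gate * m_prev + i_gate * m_tilde                 # Eq. 6

m_w = GatedForget(64)(torch.randn(8, 64), torch.randn(8, 64), torch.randn(10, 64))
print(m_w.shape)   # torch.Size([8, 64])
```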
#### 2.3.2 Internal channel: \(\mathcal{M}_{l}\)-Write
The internal channel is utilized to update LTM, which boasts a larger capacity to accommodate more information. As illustrated in Eq. 7, we conduct an outer product calculation between the updated WM \(M_{w}^{t}\) and the previous-step LTM \(M_{l}^{t-1}\in\mathbb{R}^{C\times N\times D_{m}}\) to merge novel vital information into the current LTM \(M_{l}^{t}\). In contrast to a scalar product computation that only yields a numerical value, the outer product operation is used to capture relations and interactions between vectors, which not only enhances higher-order representational capacity but also contributes to information precipitation and memory reinforcement.
\[M_{l}^{t}=LN_{3}\left((M_{w}^{t}\otimes M_{l}^{t-1})+M_{l}^{t-1}\right) \tag{7}\]
Here, \(LN_{3}\) denotes LayerNorm, and \(\otimes\) signifies the outer product operation.
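Eq. 7 leaves the exact tensor contraction of the outer product implicit. One shape-consistent reading, in which a per-fragment relevance vector is combined with the updated WM via an outer product and added to the previous LTM, is sketched below purely as an illustration of how a \(C\times N\times D_{m}\) update can be produced; the relevance weighting is our own assumption, not the authors' definition.

```python
import torch
import torch.nn as nn

def ltm_update(m_w, m_l, ln):
    """One possible realisation of Eq. 7 (the contraction pattern is an assumption).
    m_w: (N, D) updated working memory, m_l: (C, N, D) long-term memory."""
    # Relevance of the new WM content to each of the C memory fragments.
    rel = torch.softmax(torch.einsum('cnd,nd->c', m_l, m_w), dim=0)   # (C,)
    # Outer product of the fragment-relevance vector with the WM matrix -> (C, N, D).
    assoc = torch.einsum('c,nd->cnd', rel, m_w)
    return ln(assoc + m_l)

ln = nn.LayerNorm(64)
m_l = ltm_update(torch.randn(8, 64), torch.randn(5, 8, 64), ln)
print(m_l.shape)   # torch.Size([5, 8, 64])
```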
### Inference
The inference component, guided by the updated memories, provides insights into current perceptions. Our interpretation of the inference is that it stems from an assumption on the form of the joint distribution between perceptual inputs and current memory. To mimic the human-like ability to focus on crucial details of the ongoing task while leveraging extensive knowledge and experience to navigate complex situations, we use content-based addressing, via MHC (equivalent to MHSC without sparsity) and MHSC respectively, to retrieve relevant memories from \(M_{w}^{t}\) and \(M_{l}^{t}\) based on the current input \(h^{t-1}\), obtaining \(U_{w}^{t}\) and \(U_{l}^{t}\) as shown in Eq. 8-10.
\[U_{w}^{t}=\textit{MHC}\left(h^{t-1}\widetilde{W}^{Q},M_{w}^{t}\widetilde{W}^{K},M_{w}^{t}\widetilde{W}^{V}\right) \tag{8}\]
\[\widehat{M}_{l}^{t}=\frac{1}{C}\sum_{i=1}^{C}M_{l}^{t}[i,:,:]\quad where\quad\widehat{M}_{l}^{t}\in\mathbb{R}^{N\times D_{m}} \tag{9}\]
\[U_{l}^{t}=\textit{MHSC}\left(h^{t-1}\widehat{W}^{Q},\widehat{M}_{l}^{t}\widetilde{W}^{K},\widehat{M}_{l}^{t}\widetilde{W}^{V}\right) \tag{10}\]
Subsequently, the understanding \(U_{l}^{t}\) from LTM serves to further revise and supplement the understanding \(U_{w}^{t}\) from WM via the MHC mechanism, where \(U_{l}^{t}\) creates queries that are matched with keys and values from \(U_{w}^{t}\) to generate a richer representation \(U_{wl}^{t}\). Then a linear combination of \(U_{w}^{t}\) and \(U_{wl}^{t}\) is conducted with a hyper-parameter \(\alpha\) to yield the final cognition \(U^{t}\), as shown in Eq. 12. This process of repeated correlation and fusion of various information sources helps extract richer and more valuable insights that support higher-level decision-making and reasoning.
\[U_{wl}^{t} =\textit{MHC}\left(U_{l}^{t}\bar{W}^{Q},U_{w}^{t}\bar{W}^{K},U_{ w}^{t}\bar{W}^{V}\right) \tag{11}\] \[U^{t} =\alpha U_{w}^{t}+(1-\alpha)U_{wl}^{t} \tag{12}\]
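A single-head sketch of the inference reads and their fusion (Eq. 8-12) follows. The attention heads, sparsity and projection details are simplified, the weight dictionary is a placeholder of our own, and \(\alpha\) is left as a plain hyper-parameter.

```python
import torch
import torch.nn.functional as F

def read(q, mem, wq, wk, wv):
    """Content-based read: current tokens query a memory matrix (single head)."""
    attn = F.softmax((q @ wq) @ (mem @ wk).T / q.size(-1) ** 0.5, dim=-1)
    return attn @ (mem @ wv)

def inference(h, m_w, m_l, weights, alpha=0.5):
    """Eq. 8-12: read from WM and (fragment-averaged) LTM, let the LTM view revise
    the WM view via another attention step, then mix the two with alpha."""
    u_w = read(h, m_w, *weights['wm'])                       # Eq. 8
    m_l_avg = m_l.mean(dim=0)                                # Eq. 9: average over C fragments
    u_l = read(h, m_l_avg, *weights['ltm'])                  # Eq. 10
    u_wl = read(u_l, u_w, *weights['fuse'])                  # Eq. 11
    return alpha * u_w + (1 - alpha) * u_wl                  # Eq. 12

D = 64
weights = {k: tuple(torch.randn(D, D) for _ in range(3)) for k in ('wm', 'ltm', 'fuse')}
u = inference(torch.randn(10, D), torch.randn(8, D), torch.randn(5, 8, D), weights)
print(u.shape)   # torch.Size([10, 64])
```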
## 3 Related Work
**Cognitive Science** In cognitive neuroscience, memory studies endeavor to unravel the intricacies of information storage, organization and retrieval in brains, and their profound impact on thinking, cognition and behavior, building on the pioneering work of Ebbinghaus (1885) and Bartlett & Bartlett (1932). Afterwards, Atkinson & Shiffrin (1968) proposed a multi-store model including sensory, short-term and LTM, which contributes to our insights of different memory types and stages. The successor Baddeley & Hitch (1974) further refined and delineated this model by substituting short-term memory with WM--a transient storage that can interact with LTM. Sigma (Rosenloom et al., 2016) and Soar (Laird, 2019) are canonical cognitive frameworks of recent advancements, both of which employ a similar memory system comprising WM and LTM that play crucial roles in complex reasoning and problem-solving tasks. Moreover, the Global Workspace Theory (Baars, 1993) put forward a coordination and collaboration mechanism with restricted write access, which sheds light on the interaction of diverse cognitive components.
**Memory networks** Semi-parametric MANNs, as a form of using implicit knowledge to perform complex reasoning tasks, are a persistent theme in neural network research. Today MANNs typically rely on explicit memory and attention mechanisms, with pioneering models like Memory Networks (Weston et al., 2014) and Neural Turing Machines (NTMs) (Graves et al., 2014), both of which are equipped with a storage for vector representations accessible via attention. Memory Networks use addressable memory to execute tasks through a series of read operations. In contrast, NTMs also utilize addressable content storage, but unlike Memory Networks, which pre-load memories using all the inputs, NTMs write and read the memory one input at a time. Following this are Differentiable Neural Computers (DNC) (Graves et al., 2016a) and Sparse DNC (Rae et al., 2016), which are realized as recurrent neural networks (RNNs) capable of read and write operations on memory over time and are trained via BPTT (Werbos, 1990). A parallel research path involves enhancing RNNs like LSTM by incorporating data structures such as lists, stacks or queues (Joulin & Mikolov, 2015; Grefenstette et al., 2015).
**Transformers with memory extensions** Memory is a topic of active exploration in diverse Transformer studies. Transformer-XL (Dai et al., 2019) and its successors, Compressive Transformer (Rae et al., 2019), RMT (Bulatov et al., 2022) and Scaling Transformer (Bulatov et al., 2023), re-introduce the notion of memory and recurrence by caching self-attention hidden states from each layer into a fixed-size queue and reusing them in subsequent attention computations, with the difference that Compressive Transformer utilizes a compression network to further compress its memories into fewer vectors. In addition, various forms of global representations are introduced as a model memory that learns to gather information from input sequence tokens. Notable examples of these approaches include Set Transformers (Lee et al., 2019), ETC (Ainslie et al., 2020), Longformer (Beltagy et al., 2020) and TR+HSW (Goyal et al., 2022), all of which redesign the self-attention mechanism to reduce computational complexity. Memory modules, with their read-write global memory operations, have recently attracted attention for their potential to remember prior information, driving a movement towards more structured models. For instance, Memformer (Wu et al., 2020) proposes a dedicated external dynamic memory module after the primitive self-attention layer and interacts with it through memory reader and writer components to store previous hidden states in concise representations for efficient sequence modeling. More recently, DT-Mem (Kang et al., 2023) introduces a WM that contains N memory slots between the Transformer module and the MLP to store and retrieve information through an attention-based approach, where the Transformer module is similar to the GPT-2 (Radford et al., 2019) module without the feedforward layer. Most pertinent to our work, Goyal et al. (2022), taking cues from the GWT theory, replace Transformers' pairwise interactions with a shared workspace featuring constrained write access--a concept equivalent to our WM that can read and write. While these endeavors are closely related to explicit memories, their memory structures are monolithic, which limits their ability to represent certain higher-order information or relations. Hence, one takeaway from our work is that it may be promising to revisit previous memory enhancement methods in light of insights from cognitive science into memory structures.
## 4 Experiments
To assess the efficacy of the PMI module in discovering and learning inference entities and their relations, we conduct a preliminary exploration by incorporating it as a replacement for the pairwise self-attention layers in Transformers and ViT (Dosovitskiy et al., 2020), where memory components are shared globally. This modified architecture, called MITR, is then applied to a diverse range of tasks, including visual QA, text-based QA, image classification and visual relations calculation. Readers can refer to Appendices D and E for full details on each task and on the hyperparameter settings for the model.
### Relational Reasoning : Sort-of-CLEVR
Sort-of-CLEVR (Santoro et al., 2017) is a dataset similar to CLEVR, designed specifically for research on relational reasoning. Each 2D image in Sort-of-CLEVR is of size 75 \(\times\) 75 and comes with 6 randomly placed geometric shapes of 6 possible colors and 2 possible shapes. There are 10 non-relational and 20 relational questions that are equally divided into binary and ternary types per image, along with corresponding answers (details in Appendix E.2). Given the bounded answer space, this task is treated as a classification task. Each image is partitioned into a sequence of uniform patches and then encoded as in ViT. Subsequently, we concatenate the image embedding with its corresponding question embedding as input into our MITR, in line with Goyal et al. (2022).
For this task we evaluated our MITR against the following baselines: Transformers [TR] (Vaswani et al., 2017); Set Transformer [ISAB], i.e. Transformers where self-attention is replaced by the ISAB module (Lee et al., 2019); Transformers with a Shared Workspace with top-k competition [TR+HSW] (Goyal et al., 2022); and High Capacity Transformers [TR+HC], same as TR but with different parameters across layers.
The test accuracy curves over 200 training epochs are illustrated in Fig. 2. We observe that transformers equipped with our global shared memory module converge faster
Figure 2: Test accuracy vs training iterations for the Sort-of-CLEVR task.
than the baselines, demonstrating superior performance on both relational and non-relational tasks. In contrast, the TR+HSW model excels in addressing non-relational questions but struggles with relational problems. We conjecture this might be because non-relational problems frequently require the model to handle only small amounts of information about individual objects. Interestingly, the single-level memory slots in the global workspace (similar to our WM) possess the capability to store and process such scant present information, allowing them to handle these questions with ease. However, relational questions regularly necessitate multi-step reasoning to obtain an answer, such as extracting object attributes followed by relational analysis. The introduction of LTM enables the model to deposit crucial information during the learning process. Consequently, it can retrieve pertinent knowledge from this memory module in upcoming reasoning steps, going beyond its reliance solely on the current input. This contributes to a more comprehensive understanding and handling of relational questions.
### Text-based QA : bAbI
bAbI is a pure text-based QA dataset (Weston et al., 2015) that is widely used to assess the ability of MANNs, attention mechanisms and other types of models to remember and reason over textual information. This dataset contains 20 challenging tasks, each corresponding to a particular type of reasoning, such as logical deduction, counting, pathfinding and induction, which possibly require both WM and LTM. Each question is associated with a set of supporting facts. For example, the facts "John journeyed to the office" and "John left the milk" support the question "Where is the milk?" (answer: "office") (more in Appendix E.1). Following Le et al. (2020), each story is preprocessed into a sentence-level sequence, which is fed into our MITR model as the input sequence.
A model succeeds on a task if its performance surpasses 95%. We compare our model with recent memory networks and report the results in Table 1 (more in Appendix F.2).
### Detecting Equilateral Triangles
In this binary classification task, our goal is to determine whether a 64 x 64 sized image contains an equilateral triangle composed of three randomly placed point clusters (Ahmad & Omohundro, 2009). For equilateral triangles, the midpoints of these clusters are equidistant from each other. To feed an image into our MITR, we adopt the same methodology as employed in ViT (Dosovitskiy et al., 2020). Specifically, each image is divided into equally sized 4 x 4 patches, which are then utilized as distinct input positions for the MITR. In order to make precise judgments, this task requires the model to adeptly comprehend and memorize the spatial relations between disparate point clusters, embodying the relative positions and distances among them. By incorporating our PMI module, with shared WM and LTM across all layers, the model can preserve decisive info concerning each point cluster for subsequent inference procedures. Moreover, the constrained capacity of WM compels the model to selectively inscribe crucial information into the memory module, which coincides favorably with the inherent sparsity intrinsic to the task.
\begin{table}
\begin{tabular}{c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Error} \\ \cline{2-3} & Mean & Best \\ \hline LSTM (Hochreiter \& Schmidhuber, 1997) & 27.3\(\pm\)0.8 & 25.2 \\ TR\({}^{\dagger}\)(Vaswani et al., 2017) & 22.1 & N/A \\ DNC (Graves et al., 2016) & 12.8\(\pm\)4.7 & 3.8 \\ H-Mem (Limbacher \& Legenstein, 2020) & 10.8 & N/A \\ NUTM (Le et al., 2020) & 5.6\(\pm\)1.9 & 3.3 \\ MemNet (Dou \& Principe, 2023) & 5.6 & N/A \\ \hline \hline TR+HSW (Goyal et al., 2022) & 3.6\(\pm\)0.46 & 3.25 \\
**MITR (ours)** & **2.55\(\pm\)0.11** & **2.32** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test error rates: mean \(\pm\) std. (in %) on the 20 bAbI tasks for models trained using 10k examples and best error over 10 runs. \({}^{\dagger}\) is reported from Dehghani et al. (2018)
Figure 3: Detecting Equilateral Triangles. This figure compares the performance of Transformers with our PMI [MITR] against other Transformer baselines.
The results in Fig. 3 reveal that MITR outperforms the standard TR model in terms of both convergence speed and accuracy, converging faster (curves available in Appendix F.1) and achieving an impressive accuracy of 97.9%, an 8.1% improvement. Additionally, our approach surpasses other baselines, further confirming the efficacy of the PMI module. Here, STR denotes Transformers with sparse factorizations of the attention matrix (Child et al., 2019), [MITR+S] is a variant of the MITR without top-k sparsity, and other baselines are detailed in experiment 4.1.
### Image Classification : Cifar-10
CIFAR-10 is a benchmark image dataset commonly used in the field of computer vision, which consists of 50k training and 10k test images of resolution \(32\times 32\) spread over 10 classes. For this task, in addition to the MITR mentioned above, we also incorporate PMI into a convolutional model. Explicitly, the original images, after four convolutional layers, serve as perceptions into our PMI module to obtain understandings, which then undergo linear and softmax transformations to yield the final classification results. The performance of the best models on the test set is reported in Table 2, where CNN_MI w/o refers to CNN_MI without guidance from LTM. It is evident that our MITR and CNN_MI models both exhibit superior performance, achieving accuracies of 79.12% and 78.69% on CIFAR-10, respectively, with improvements of 2.94% (compared to TR) and 0.08% (compared to CNN_MLP). These results further underscore the universality of our PMI module.
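A schematic of the CNN_MI pipeline described above could look like the sketch below. Only the overall wiring (four convolutional layers producing the perception, the PMI module refining it, a linear head classifying) follows the description; the channel counts, strides, pooling and the `PMI` stand-in block are our own placeholders.

```python
import torch
import torch.nn as nn

class CNN_MI(nn.Module):
    """Four conv layers produce the perception, a PMI block refines it, a linear head classifies.
    The pmi_block argument is a stand-in for the memory module; wiring only is illustrated."""
    def __init__(self, pmi_block, dim=128, n_classes=10):
        super().__init__()
        chans = [3, 32, 64, 96, dim]
        self.conv = nn.Sequential(*[
            nn.Sequential(nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1), nn.ReLU())
            for i in range(4)
        ])
        self.pmi = pmi_block                      # maps (B, T, D) perception tokens to (B, T, D)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                         # x: (B, 3, 32, 32)
        feat = self.conv(x)                       # (B, D, 2, 2)
        tokens = feat.flatten(2).transpose(1, 2)  # (B, 4, D) perception tokens
        u = self.pmi(tokens)                      # understanding from the memory/inference module
        return self.head(u.mean(dim=1))           # logits after pooling

model = CNN_MI(pmi_block=nn.Identity())
print(model(torch.randn(2, 3, 32, 32)).shape)     # torch.Size([2, 10])
```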
### More Explorations of Memory Module
#### 4.5.1 Memory Attributes and Communication Modes
In this section, we conduct a more qualitative exploration of memory properties and the modes of internal and external communication on the bAbI and Sort-of-CLEVR tasks. The study on memory properties aims to address three questions: _(i)_ How does the capacity of WM and LTM impact model performance? _(ii)_ How does the globally shared memory (persistence of memory) affect model performance? _(iii)_ Does knowledge consolidation occur in the shared LTM? The experiments on communication modes aim to determine whether competitive writing is effective, and whether correcting and supplementing the information retrieved from WM with LTM-derived data is beneficial. To tackle these questions, we set up three models of distinct sizes, \(MITR_{s}(l=4,h=4)\), \(MITR_{m}(l=8,h=8)\) and \(MITR_{l}(l=12,h=16)\), and run them on various combinations of \(N\), \(M\) and \(k\), where \(l\) and \(h\) are the number of layers and heads in MITR, respectively, considering their critical roles in model performance.
The results are reported in Table 3, where \(MITR_{m}w/o_{1}\) denotes MITR without memory sharing among its layers, while \(MITR_{m}w/o_{2}\) indicates that info retrieved from LTM is directly aggregated with data from WM via \(\alpha\) without the correction step. _soft_ is a standard soft competition mode, not a top-k strategy. We can derive the following key findings. For memory properties, firstly, greater memory capacity does not necessarily equate to better performance. The optimal results are achieved at \(N=8\) and \(M=5\), aligning with discoveries in cognitive neuroscience. Secondly, memory persistence markedly improves the performance and speed of convergence in relational inference tasks, improving binary and ternary relations by 7.32% and 6.88%, respectively, over the non-globally shared cases (more in Appendix B.1). Notably, independent memory modules result in an eightfold increase in trainable parameters. Regarding the communication mode, constrained writing exhibits heightened sensitivity in binary and ternary inference tasks, albeit with contrasting effects. We speculate that this divergence may be attributed to the larger volume of information storage required for ternary problems, thus necessitating a slightly larger \(k\) value. Moreover, without the guidance of LTM, which acts as an erudite scholar, there is a minor uptick in error rates for the bAbI task under the same setup, and the three types of Sort-of-CLEVR task exhibit respective decreases of 0.31% (unary), 7.34% (binary)
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{3}{c}{Trans.} & \multicolumn{3}{c}{Conv.} \\ \cline{2-7} & Vit & ISAB & TR+ISW & MITR (ours) & CNN_MLP & CNN_MI (ours) & CNN_MI w/o (ours) \\ \hline \hline Acc (\%) & 76.18 & 76.39 & 76.28 & **79.12** & 78.61 & **78.69** & 78.63 \\ \hline Params (M) & 0.75 & 2.21 & 2.01 & 2.0 & 0.11 & 1.75 & 1.68 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of different models on CIFAR-10
and 4.54% (ternary) in accuracy, underscoring the constructive influence of previously accumulated knowledge on relational reasoning.
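As a rough illustration of the top-\(k\) constrained (competitive) writing varied in this ablation, the sketch below updates only the \(k\) most relevant memory slots for each batch of new items, while the _soft_ variant simply omits the top-\(k\) mask; this is a conceptual sketch of the mechanism, not the exact update rule used in MITR.

```python
import torch

def competitive_write(memory, inputs, k):
    """memory: (M, d) slots; inputs: (n, d) new items; only top-k slots per item are updated."""
    scores = inputs @ memory.t()                       # (n, M) relevance of each slot
    weights = scores.softmax(dim=-1)
    if k is not None:                                  # constrained (top-k) writing
        _, topi = weights.topk(k, dim=-1)
        mask = torch.zeros_like(weights).scatter_(-1, topi, 1.0)
        weights = weights * mask
        weights = weights / weights.sum(dim=-1, keepdim=True)
    update = weights.t() @ inputs                      # (M, d) weighted contribution per slot
    gate = weights.sum(dim=0, keepdim=True).t().clamp(max=1.0)  # (M, 1) per-slot write strength
    return (1 - gate) * memory + gate * update

mem = torch.randn(5, 16)                               # M = 5 memory slots
new = torch.randn(3, 16)
mem = competitive_write(mem, new, k=2)                 # hard, top-2 competition
mem = competitive_write(mem, new, k=None)              # "soft" competition mode
```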
#### 4.5.2 Visualizations of Attention Patterns
To explore whether knowledge accumulates in LTM, we visualize the attention patterns between current perceptions and LTM on the bAbI task, shown in Fig. 4. Here, the current inputs act as queries, and the LTM matrix serves as keys and values for the cross-attention computation. As depth increases, a clear trend emerges in the heatmaps: more colored regions appear, and they gradually stabilize and come to resemble one another, implying a growing correlation between inputs and LTM that gradually converges (more explanations in Appendix B.2). This may indicate that richer knowledge is accumulated in LTM, leading to a more consistent grasp of the different elements within the input data across these layers.
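The heatmaps of Fig. 4 correspond to a plain scaled dot-product cross-attention between a layer's perceptions (queries) and the LTM matrix (keys); a minimal sketch with illustrative dimensions is given below.

```python
import torch

def ltm_attention_pattern(perceptions, ltm):
    """perceptions: (n, d) current inputs as queries; ltm: (M, d) long-term memory slots."""
    d = perceptions.size(-1)
    scores = perceptions @ ltm.t() / d ** 0.5   # (n, M) similarity logits
    return scores.softmax(dim=-1)               # each row is one row of the heatmap

pattern = ltm_attention_pattern(torch.randn(10, 32), torch.randn(5, 32))
print(pattern.shape)   # torch.Size([10, 5]); visualise e.g. with matplotlib's imshow
```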
## 5 Conclusion
Inspired by multiple memory systems and global workspace theory in cognitive neuroscience, we propose the PMI module containing perception, dual-layer memory and inference components, to generate a more comprehensive and accurate understanding of current inputs while depositing vital contents into memory to cope with more complex situations. We have explored PMI's dual utility:
Figure 4: Attention patterns between perceptions and memories across different layers of the MITR.
\begin{table}
\begin{tabular}{c c c c c c|c c c c} \hline \hline \multirow{3}{*}{Model} & \multirow{3}{*}{N} & \multirow{3}{*}{M} & \multirow{3}{*}{Top-\(k\)} & \multicolumn{3}{c}{bAbI} & \multicolumn{3}{c}{Sort-of-CLEVR} \\ \cline{5-10} & & & & Params & Err\% & Params & Unary\% & Binary\% & Ternary\% \\ \hline \multirow{4}{*}{\(MITR_{s}\)} & 6 & 3 & 5 & 2.00M & 2.81 & 2.03M & 99.14 & 77.12 & 61.29 \\ & 8 & 5 & 5 & 2.27M & 2.72 & 2.29M & 99.45 & 86.06 & 62.85 \\ & 8 & 5 & 7 & 2.27M & 2.73 & 2.29M & **99.50** & 82.84 & 64.35 \\ & 10 & 7 & 9 & 2.53M & 2.78 & 2.55M & 99.19 & 80.24 & 59.48 \\ \hline \multirow{4}{*}{\(MITR_{m}\)} & 6 & 3 & 5 & 2.07M & 2.61 & 2.09M & 99.40 & 80.13 & 65.93 \\ & 8 & 5 & 5 & 2.33M & **2.55** & 2.36M & 99.34 & **87.61** & 62.45 \\ & 8 & 5 & 7 & 2.27M & 2.57 & 2.36M & 99.19 & 81.93 & 60.89 \\ & 10 & 7 & 9 & 2.59M & 2.62 & 2.62M & 99.40 & 80.18 & 65.83 \\ \hline \multirow{4}{*}{\(MITR_{l}\)} & 6 & 3 & 5 & 2.20M & 2.73 & 2.22M & 99.40 & 81.92 & 64.21 \\ & 8 & 5 & 5 & 2.46M & 2.58 & 2.49M & 99.14 & 84.73 & 65.52 \\ & 8 & 5 & 7 & 2.46M & 2.59 & 2.49M & 99.29 & 80.68 & **66.94** \\ & 10 & 7 & 9 & 2.73M & 2.71 & 2.75M & 99.09 & 81.47 & 65.01 \\ \hline \hline \(MITR_{m}\)_w/\(o_{1}\)_ & 8 & 5 & 5 & 16.48M & 2.84 & 16.5M & 99.14 & 80.29 & 60.06 \\ \hline \(MITR_{m}\)_w/\(o_{2}\)_ & 8 & 5 & 5 & 2.33M & 2.91 & 2.36M & 99.19 & 79.96 & 62.40 \\ \hline \(MITR_{m}\)_ & 8 & 5 & \(soft\) & 2.33M & 2.75 & 2.36M & 99.15 & 79.64 & 61.87 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of ablation studies on memory properties and communication modes.
as an alternative to self-attention layers in Transformers and as a complement to convolutional networks. Extensive experiments on images and texts provide compelling evidence for the effectiveness and adaptability of PMI, meaning it could be a core component in diverse architectures. We look forward to a broader application of our method, including its integration into various frameworks and its extension to a wide range of tasks across varying modalities.
## Ethics Statement
The authors do not foresee any negative social impacts of this work, but of course the accumulation of improvements in ML could be misused as it may give more power to nefarious agents.
|
2306.13529 | Full Transparency in DBI frameworks | Following the increasing trends of malicious applications or cyber threats in
general, program analysis has become a ubiquitous technique in extracting
relevant features. The current state-of-the-art solutions seem to fall behind
new techniques. For instance, dynamic binary instrumentation (DBI) provides
some promising results, but falls short when it comes to ease of use and
overcoming analysis evasion. In this regard, we propose a two-fold
contribution. First, we introduce COBAI (Complex Orchestrator for Binary
Analysis and Instrumentation), a DBI framework designed for malware analysis,
prioritizing ease-of-use and analysis transparency, without imposing a
significant overhead. Second, we introduce an aggregated test suite intended to
stand as a benchmark in determining the quality of an analysis solution
regarding the protection against evasion mechanisms. The efficiency of our
solution is validated by a careful evaluation taking into consideration other
DBI frameworks, analysis environments, and the proposed benchmark. | Vlad Crăciun, Andrei Mogage, Dorel Lucanu | 2023-06-23T14:50:08Z | http://arxiv.org/abs/2306.13529v1 | # Full Transparency in DBI frameworks
###### Abstract
Following the increasing trends of malicious applications or cyber threats in general, program analysis has become a ubiquitous technique in extracting relevant features. The current state-of-the-art solutions seem to fall behind new techniques. For instance, dynamic binary instrumentation (DBI) provides some promising results, but falls short when it comes to ease of use and overcoming analysis evasion. In this regard, we propose a two-fold contribution. First, we introduce COBAI (Complex Orchestrator for Binary Analysis and Instrumentation), a DBI framework designed for malware analysis, prioritizing ease of use and analysis transparency, without imposing a significant overhead. Second, we introduce an aggregated test suite intended to stand as a benchmark in determining the quality of an analysis solution regarding the protection against evasion mechanisms. The efficiency of our solution is validated by a careful evaluation taking into consideration other DBI frameworks, analysis environments, and the proposed benchmark.
## I Introduction
The current research proposes a framework for binary analysis focused on evasive malicious applications, aiming to overcome their capability to escape the analysis environment or to refuse execution under such scenarios.
Binary analysis is a tedious domain, which demands a continuous process [56, 52] of designing and developing new protocols, mechanisms and techniques, in order to survive and overcome industry changes. Apart from that, malicious applications raise the bar even further, implementing various techniques to slow or deter the analysis [57, 32].
The main challenge raised by the analysis of evasive malware is that the analysis environment must be indistinguishable from an ordinary one, e.g. one belonging to a regular user and lacking any analysis tools. Therefore, the analysis solution should be able to provide a high level of _transparency_. By analysis environment we refer to the entire scope where the application is analyzed, comprising the operating system, virtualization / emulation technologies, debuggers, additional static or dynamic analysis tools and so on. However, in order to be able to analyze evasive malware, one may need to adjust the environment so that any analysis artifacts are concealed. The flow of installation, preparation and configuration leads to a secondary problem: ease of use.
The increasing number of techniques and strategies for analysis evasion [27] has been the main motivation for the development of a new framework for binary analysis instead of relying on already existing ones. As the evaluation in Section III reflects, the state-of-the-art candidates are not entirely suitable, without a significant effort, for a proper analysis of highly evasive malware.
### _Dealing with DBI Transparency_
A dynamic binary instrumentation framework is a popular solution for binary analysis in which the code can be arbitrarily executed. As a result, a high level of control may be obtained, which allows better manipulation of the application according to the purpose of the analysis. Unfortunately, as a consequence, attackers started developing various strategies to detect and counter DBI solutions, and there is a general lack of countermeasures against such transparency detections and attacks [35]. Moreover, it is of utmost importance to be able to verify the transparency properties of an analysis solution or to compare multiple tools in this regard; an aggregated set of transparency tests is therefore an essential ingredient.
A generic DBI framework, for instance, is usually correct while instrumenting benign code, but it does not fully prevent the detection of the analysis process while instrumenting malicious code. It does not deploy mechanisms to trick the application into believing that no analysis is taking place, since this is not a direct feature of instrumentation. We consider that the transparency aspect also relates to the ease of use discussed earlier, since the instrumentation is correct, yet the desired results differ. This leads to either integrating additional projects purposely created to aid in this type of task, or having to develop new ones.
For instance, users whose purpose is to analyse an evasive malware with an existing DBI solution might be forced to resort to at least one of the following two steps: integrating an existing tool / plugin whose purpose is to make the engine transparent, or developing one themselves. Both options are fragile for engines built without special care for transparency, as they might prove difficult to integrate, break backwards compatibility, or even generate side effects, including crashes. Alternatively, users may increase the DBI's transparency through specific instrumentation hooks. These hooks allow slight shifts of the application's behavior, such that some of the DBI-exposed resources become invisible. This type of mitigation includes instruction-level or API-level access to incompletely virtualized DBI resources. However, there are cases where only the DBI developers can assist with such functionalities, as they either were not present in the first place, or were not sufficiently tested.
When the application being instrumented implements analysis-evasion techniques, it either targets resources that are not correctly virtualized, or it pushes the limits of available resources and performance counters, forcing the instrumented execution either to crash (as a side effect of exceeding some physical boundaries) or to exhibit visibly increased overheads. The subject of DBI virtualization has not been debated enough along the road. A question to answer is whether full virtualization should be handled by the DBI engine, or whether users should be responsible for it. To our surprise, given the large number of DBI tools designed to increase DBI transparency, we concluded that DBI developers indirectly made users responsible for the missing virtualization features. Our statement is confirmed by a considerable community effort, where projects like SoK [29, 15], PinVMShield [13], BluePill [4], and JuanLesPIN [22, 46] all attempt to patch the virtualization gap in the PIN [45] DBI framework. This lack of virtualization did not raise any concerns when the DBI frameworks were first developed about two decades ago, mainly because the benign applications used to test the DBIs actually required less resource virtualization than a DBI was capable of providing. In the meantime, attackers made their malicious applications far more environment-aware, implementing countless analysis-evasion scenarios meant to disrupt any attempt to reverse-engineer the binaries. Their efforts contributed interesting testing scenarios that question the virtualization responsibilities and also raised architectural issues that the developers behind the DBI frameworks may have difficulties handling correctly. We believe that making users responsible for the virtualization gap is just an unconventional way to provide some architectural fixes without forcing developers to deal with the whole process of integrating those fixes at the engine level. Malicious applications usually exploit either the boundaries of the virtualization coverage (known uncovered virtualization aspects, or virtualization issues unknown to the public) or the increased resource overhead (time, CPU, memory) added by the presence of a DBI.
An analyst may use sandbox technologies specially designed for malware analysis, such as Cuckoo Sandbox [5], Any Run [2], or Hybrid Analysis [9]. Unfortunately, attackers adapted in this area as well: refusing to execute under a VM or a sandbox, trying to escape the debugger, attempting to kill the processes of analysis tools, etc. This challenge is of the utmost importance, as a detected analysis environment will not be able to reveal the necessary level of detail. For instance, regarding the countermeasures for OS environment transparency that Cuckoo Sandbox should handle, [47] (pg. 113) states: _"It is difficult to make all the sensors get false values, but at least we can try to reduce the detection. It is said that we can reduce about 90 percent of it."_. The author's statement refers to the API-hooking analysis technology present in Cuckoo Sandbox's monitoring module. Projects like BluePill [4], PinVMShield [13] and JuanLesPIN [46, 22] attempt to extend the DBI virtualization functionalities beyond the DBI itself, including OS resources. If the previously discussed monitoring module relies only on API hooking to reduce about 90% of a sandbox's fingerprints, a DBI is more suitable for this task, as its features also extend to the instruction level. To see how ready DBIs are to improve the transparency of analysis environments, we provide some results in Section III-B.
Last, but not least, there is the issue of checking whether a system withstands these transparency attacks and to what extent. While this is a hard problem to formalize, a standard set of tests, constantly updated to the latest tactics, should at least aid developers and users in correctly comparing analysis systems from this point of view.
### _Problem Description_
The full range of DBI transparency problems can be summarized as follows: P1 - missing virtualization features; P2 - architectural gaps; and P3 - runtime overhead. Here is a brief description of them.
1. The lack of DBI virtualization for specific resources at the application or OS environment levels. This is due to the following shortcomings:
  (a) the usage of DETOUR API hooks for main OS event handling, which adds a significant overhead during the execution;
  (b) the inability to deny access to the DBI address space (where the DBI cache and DBI engine reside) through standard OS APIs or system calls (a toy model of a check exploiting such fingerprints is sketched after this list);
  (c) incomplete virtualization of the various execution contexts; examples of such contexts include the CPU FPU context and TIB (Thread Information Block) entries such as TLS (Thread Local Storage) or reserved fields used to back up context-transition variables between the DBI engine and the DBI cache, execution stacks, handles, etc.;
  (d) missing environment virtualization features, which reveal some of the DBI resources (e.g. OS module enumeration APIs, OS handles, etc.) or allow fingerprinting a sandbox (CPU vendor, BIOS version, specific services, etc.).
2. Architectural gaps, which refer to:
  (a) missing behaviors, e.g. missing support for some assembly instructions, execution of 64-bit code in 32-bit processes, incorrectly handled exceptions, etc.;
  (b) the inability of the DBI to stack multiple DBI tools in order to help users fill the gap between the lack of DBI virtualization and their particular analysis (this issue prevents users from merging the existing PIN projects to increase the DBI virtualization features).
3. Bad overhead management, which includes:
  (a) increasing the overall allocated memory, by: increasing the DBI cache size, exploiting the fact that a DBI also instruments OS library code (the larger the number of unique API calls, the larger the cache); increasing the number of contexts required for threads and various instrumentation scenarios; or spotting the increased size of an instrumented process compared to the native execution;
  (b) increasing the time taken by various instrumentation behaviors (library load, branch translation, thread creation, exception handling, large loops, etc.).
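For illustration, the kind of check behind P1 (b) (e.g. the code-cache fingerprints or the excessive number of full-access memory pages in Table I) can be reduced to flagging executable, writable regions that are not backed by any loaded module. The toy model below operates on a pre-collected memory map; it is our own illustrative sketch, not code from the test suite discussed later.

```python
# Toy model: given a snapshot of a process memory map, flag regions that look like
# a DBI code cache: executable + writable, and not backed by any loaded module.
memory_map = [
    {"base": 0x00400000, "size": 0x1000,  "prot": "r-x", "module": "app.exe"},
    {"base": 0x77000000, "size": 0x2000,  "prot": "r-x", "module": "ntdll.dll"},
    {"base": 0x10000000, "size": 0x80000, "prot": "rwx", "module": None},  # suspicious
]

def dbi_cache_fingerprints(regions):
    return [r for r in regions
            if "w" in r["prot"] and "x" in r["prot"] and r["module"] is None]

suspicious = dbi_cache_fingerprints(memory_map)
print("analysis environment suspected" if suspicious else "looks native")
```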
### _Contribution_
Our contribution is twofold. First, we provide a test suite comprising an extended set of state-of-the-art transparency tests, as described in Section III. This suite covers all of the above-mentioned issues and represents an important step towards creating a benchmark which may be publicly used for checking the transparency features of any analysis solution based on DBI frameworks. Analysis of the failed tests led us to the conclusion that the main causes are given by the presence of at least one of the P1-P3 problems. The test suite was designed to extend as much as possible the taxonomy presented in [35] (pg. 11); the relation between the problems covered by the taxonomy and the problems we have surfaced above is given in Table I.
Secondly, we address the above problems within COBAI (Complex Orchestrator for Binary Analysis and Instrumentation), a DBI engine written from scratch and specifically designed for malware analysis. A main feature of COBAI targets the DBI and OS environment transparency issues, in an attempt to successfully instrument evasive malicious binaries. As revealed by Section III, COBAI passes roughly 95% of the test scenarios. The novelty of COBAI consists in a flexible and efficient combination of anti-anti-analysis techniques that yields an easily extendable DBI framework whose main focus is providing transparency against malicious applications. While new attacks or anti-evasion techniques might surface in the future, COBAI provides an easy environment in which to integrate protection against them. This is sustained by the technical implementation and architecture of the core advances, which allow multiple plugins to "collaborate" for better results and continuous fixes through new plugins. Even if the contribution is of an incremental nature, it lays the foundations for a new approach in the design of DBIs.
_Paper organisation:_ The paper continues as follows: Section II provides the design and implementation details for our solution, Section III highlights the results of various evaluations also involving other state-of-the-art solutions, Section IV presents some related work, both academically and industry oriented, and Section VI concludes our paper and offers a preview of our future work.
## II COBAI: Design and Implementation
In this section we present an overview of the internal COBAI design, with the focus on the components assuring transparency, scalability, and analysis proliferation. The high-level design of COBAI is described in Section II-A, while the main plugins in charge with the binary analysis are introduced in Section II-C, along with their purpose and the communication process.
### _Architecture_
During our research and evaluations of other state of the art solutions, we have noticed two important and common drawbacks (validated by the evaluation, as described in Section III). First, the overall architecture of a DBI is not specifically designed to provide full virtualization and the components cannot be easily replaced. DBIs, such as Pin, usually have a closed architecture to which additional plugins may be connected. Second, critical components, such as the disassembler, API hooking mechanisms or even the DBI engine, are highly coupled. The two issues mentioned earlier are, from our point of view, deeply related, in the sense that the components that might interfere with the DBI's ability to provide transparency cannot be easily replaced without affecting the entire framework.
Therefore, developing a plugin whose sole purpose is to conceal the presence of the DBI or other system artifacts might prove useful in some cases, but will fail at tasks strictly dependent on other components and where the transparency plugin cannot interfere. The issue is, therefore, also linked to a lack of control over what the application may do, even under scenarios where tricking it is a trivial task (see Section III).
In this sense, COBAI's architecture is a modular one, where each individual component has a clear purpose and may easily be replaced or adjusted. Fig. 1a describes COBAI's architecture on a high level. The main engine is the DBI component and it has two main tasks: instrumenting the analysis and handling the plugins. The DBI loads all available plugins and, through an initialization protocol (see Figure 2), registers all APIs exposed by the plugins, which then are proxied such that any plugin may benefit from functionalities of the others, thus extending their capabilities.
A launcher is used to start both the analysis instance and a centralized analysis server by reading the analysis parameters
\begin{table}
\begin{tabular}{|c|c|} \hline Taxonomy leaf & Problem \\ \hline Unsupported Assembly Instructions & P2.a \\ Unsupported Behaviors & P2.a \\ Stalling Code & P3.b \\ Memory exhaustion & P3.a \\ Code Cache Fingerprints & P1.b \\ Instruction Pointer in Unexpected Memory Regions & P1.c \\ Incorrect Handling of Self-Modifying Code & P2.a \\ Unexpected Context & P1.c \\ Memory Region Permission Mismatches & P1.b \\ Process Hierarchy & P1.d \\ Xmode Code & P2.a \\ Incorrect Emulation of Supported Assembly Instructions & P2.a \\ Command-Line Arguments & P1.d \\ Process Handles & P1.d \\ File Handles & P1.d \\ Event Handles & P1.d \\ Shared Section Handles & P1.d \\ Signal Masks & P2.a \\ Fingerprints of DBI-related Binary Programs & P1 \\ Thread Local Storage Presence & P1.c \\ Environment Variables & P1.d \\ System Library Hooks & P1.a \\ Excessive Number of Full Access Memory Pages & P1.b \\ Common API Calls & P1.a \\ Peak Memory Usage & P3.a \\ Performance Degradation & P3.b \\ \hline \end{tabular}
\end{table} TABLE I: Taxonomy-Problem relation
from _config.json_. A process is created with the target application mentioned in the configuration file, and a payload is injected into the freshly created process, which further loads the COBAI controller module. The latter is responsible for loading and initializing all other dependencies (plugins). Each analysis instance registers itself through the server-client interface. Should the analyzed application try to create a new local thread, remote thread or process (be it local or remote), the controller will announce it to the main launcher. This way, the analysis context is extended, providing control over all available instances.
Depending on the analysis purpose and the traits of interest, the user may configure COBAI by specifying a configuration file as input, using the JSON format. Fig. 1b highlights the three main sections of this configuration file. It includes information related to the analysed application (path, new name), parameters for individual plugins, logging flags and so on. Owing to the development under the low-coupling, high-cohesion principles, the user may easily enable or disable any component, changing the behavior and the expected results.
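For illustration, a configuration in the spirit of Fig. 1b could be generated as follows; the keys and values shown here are placeholders chosen for readability and do not reproduce COBAI's actual schema.

```python
import json

# Hypothetical configuration: target application, per-plugin parameters, logging flags.
config = {
    "target": {"path": "C:\\samples\\app.exe", "rename_to": "sample.exe"},
    "plugins": {
        "Shield":      {"enabled": True,  "groups": ["vm", "registry", "timing"]},
        "APIControl":  {"enabled": True,  "log_parameters": True},
        "InstrControl": {"enabled": False},
    },
    "logging": {"trace_instructions": False, "trace_api_calls": True},
}

with open("config.json", "w") as fh:
    json.dump(config, fh, indent=2)
```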
One of the main components, the "Transparency shield", has been specifically designed to ensure transparency not only for the framework itself, but also to conceal the true nature of the environment. This is achieved by intercepting certain CPU instructions or system calls and providing "forged" results that would normally be returned by a "regular" system. More details about this process are discussed in Section II-C.
The entire framework is almost self-contained, the only external dependencies being BeaEngine [34], which the disassembler plugin is based on, the Lohmann JSON library [44], used by the launcher, and Mongoose, a networking library [11]. All the libraries are statically linked, therefore the user should not worry about them.
The architectural shift that makes the solutions mentioned above possible was achieved by following the low-coupling, high-cohesion principles, by designing a flexible set of problem/issue-specific plugins, and by developing an enclave-like memory management (see Section II). The current development stage of COBAI is the result of constantly improving previous attempts and design choices. In this paper, we mainly focus on how COBAI targets the DBI and OS environment transparency issues, in an attempt to successfully instrument evasive malicious binaries. In summary, COBAI offers:
* _availability_: while it was designed as a framework, it can be used as an out-of-the-box solution, even on highly evasive malware (see Section II-A);
* _ease of use_: no programming skills are required to produce and configure execution traces, by only adjusting a configuration file (see Section II-A);
* _development scalability_: every component is independent, having a clear and separate purpose, which leads to a dynamic architecture, such that developers may replace core functionalities with their own (including the translator or disassembler) (see Section II-C).
### _How COBAI addresses P1-P3_
* **P1:** COBAI implements a _Shield_ plugin designed to extend resource virtualization beyond the DBI, covering a significant number of OS environment resources:
  (a) COBAI does not hook OS APIs in order to capture OS events; it only captures the event-registration APIs, reserving DBI-cache address space at registration time. The basic API hooking needed to capture the registration of the events is achieved through hooks at the instrumentation level.
  (b) COBAI implements specific instrumentation hooks in order to slightly change the behavior of all memory-access APIs. The application under analysis is therefore unable to iterate memory ranges belonging to the DBI, including DBI libraries, heap ranges, thread contexts or the DBI cache (a simplified model of this filtering is sketched after this list).
  (c) COBAI provides virtualization for all possible execution contexts.
  (d) COBAI provides virtualization features for a comprehensive list of OS environment resources.
* **P2:** COBAI is prepared to address missing behaviors by using a higher granularity for the DBI-engine sub-modules, so that missing virtualization features may be added and plugins updated or replaced:
  (a) While COBAI still has some missing functionalities (execution of 64-bit code inside 32-bit processes), its modular approach allows easy localization of incomplete or wrong behaviors in order to implement the missing functionality. For instance, any missing ASM instruction should be solved by either replacing the entire disassembler module or updating it.
  (b) COBAI is designed to layer the execution of different DBI tools at the same time. In this regard, it is able to execute the _Shield_ plugin and the instruction- and API-tracing plugins all at once.
* **P3:** COBAI implements a minimum of overhead management: while it is not able to make the overhead disappear, the measurements are adjusted so that the overhead appears lower than it actually is:
  (a) While still under development, COBAI may face this challenge by moving large DBI-cache portions outside the process address space if this limit is pushed beyond its boundaries.
  (b) COBAI is able to hide some of the time overheads through a careful synchronization between CPU counters and execution-context counters or time-specific APIs. While some applications indeed take longer to instrument, the application is unable to determine that the time overhead is greater than that of a native execution.

Fig. 1: (a) COBAI Architecture, using an enclave-like memory design. (b) Example of the config file.

Fig. 2: DBI-Plugin Initialization protocol
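The behavior shift described under P1 (b) can be pictured as a thin filter placed, at instrumentation time, between the application and a memory-query API: regions owned by the DBI are reported as free, so a walk of the address space never reveals them. The sketch below is a simplified, self-contained model of that idea and not COBAI's implementation.

```python
# Simplified model of shielding a memory walk: regions owned by the DBI are
# reported as free space, so iterating the address space never reveals them.
DBI_OWNED = [(0x10000000, 0x80000), (0x7F000000, 0x20000)]   # (base, size) ranges

def in_dbi_range(base):
    return any(lo <= base < lo + size for lo, size in DBI_OWNED)

def shielded_query(real_query, address):
    region = real_query(address)
    if in_dbi_range(region["base"]):
        # Forge the answer: the region appears unallocated to the application.
        return {**region, "state": "MEM_FREE", "prot": "---", "module": None}
    return region

# real_query stands in for the instrumented VirtualQuery-like call.
real_query = lambda addr: {"base": addr & ~0xFFF, "size": 0x1000,
                           "state": "MEM_COMMIT", "prot": "rwx", "module": None}
print(shielded_query(real_query, 0x10004000))   # reported as free, the DBI stays hidden
```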
### _Plugins_
By design, each feature or group of features is handled by a specific plugin or a small set of dependent plugins, in order to be easily updated or even replaced. The core set of plugins is presented in Fig. 3, where _COBAI Controller_ itself is a plugin.
While the modularity provides decoupling, the _Controller_, _Translator_ and _DISASM_ plugins are mandatory for a proper binary translation; a basic analysis, however, may be performed without the other plugins. The main idea is that the user may customise the usage of COBAI according to countless analysis scenarios, thus increasing its efficiency: track only the executed instructions or API calls, enable or disable the handling of polymorphic applications, handle exceptions, provide transparency and so on. Another important remark is that, while plugins are natively independent, some of their features rely on the presence of others. For instance, the plugin providing transparency requires the existence of the plugins handling APIs or CPU instructions. The dependency relations are included in the plugins' descriptions.
In the following, we briefly describe the plugins currently implemented in COBAI and the scenarios where they are needed:
_COBAI Controller:_: This plugin, along with the _Translator_ and _DISASM_, are the minimum requirements for COBAI to operate. This plugin itself provides support for: the management of other plugins; API interface between plugins; instantiation of a new analysis (be it a different process, a different thread inside or outside the process under analysis, or a callback or exception handler).
_Translator:_: The flow of an actual instrumentation of the target code is found in _Translator_ module. This plugin is responsible with code translation, cache management, and thread state management. Compared to other tools and frameworks where the translator is part of the DBI, in COBAI it is possible to totally replace the translation mechanics and all its underlying aspects, with a totally different approach, either for supporting additional architectures, for performance or for academic reasons.
_DISASM:_: This plugin handles the disassembly of instructions and it is currently an interface for the BeaEngine disassembler. It is mainly used by the _Translator_ plugin and also provides the string version of instructions, in the NASM Intel syntax. Replacing the disassembler also requires a specific interface to bind to _Translator_ module.
_ExceptionHandler:_: This plugin provides support for handling exceptions and also aids in translating exception filters and handlers. It supports the following exceptions: Win32 Structured Exception Handling (SEH); Vectored Exception Handling (VEH); Unhandled Exceptions; C++ exceptions.
_APIControl:_: This plugin assists in:
* API detection, along with providing details regarding the API name, parameters, and library;
* registration of user-callbacks to be called before and/or after a specific API, along with the option to skip the actual execution of the function or simulate its behaviour, correctly adjusting the stack and other necessary parameters.
Instead of relying on endless lists of symbols (such as PDB files) provided by vendors (e.g. Microsoft), the plugin makes use of a lightweight binary structure providing various details, such as API names, parameters, their types and so on. The binary structure is inspired by the _apis_def_ component presented in [20], which is a plugin for x64dbg [19].
_InstrControl:_: This plugin assists, similarly to the APIControl, in the registration of user-callbacks which are to be called before and/or after an instruction or group of instructions. The group refers to the fact that a plugin may register a callback not only for a specific instruction (referenced by opcode, for instance), but also by a regex, instruction type (control, transfer, branching, etc) or any combination of the above. The operations are handled on a binary level, such that the overhead is minimal. Also, any registered callback has the possibility of manipulating the instruction or specifying that it should not be executed.
_Logger:_ The main role of this plugin is to facilitate logging capabilities for other plugins. It allows the following: registration of callbacks when needed; creation of new trace logs; multithreaded support; logging any type of output (this is also helpful for plugins needing to dump memory regions); cache support, increasing the performance of the overall analysis.
_Shield:_ This plugin is responsible for the transparency policies both of the DBI engine and of the OS environment. Transparency is mainly achieved through various routines registered as callbacks, using the _APIControl_ and _InstrControl_ plugins. The plugin thus ensures that forged results are passed to instructions or API (system) calls for events that would reveal details about the environment. For instance, fake results are provided to instructions or APIs querying the presence of a VM context (VM features being present, files or registry keys corresponding to several VM hypervisors or emulators, time differences between instructions and so on).
These routines are also grouped, such that a user may enable or disable sets of routines, according to the scenario (file manipulation, system queries, registry access, network connections, etc). Additionally, this plugin partially enhances the DBI address space, such that it behaves similarly to a trusted execution environment (TEE) / enclave. This ensures that the analysed application has no access to the DBI memory. Furthermore, any instrumented child/remote process will be forbidden to access the instrumented parent DBI address space.
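The before/after callback pattern offered by _APIControl_ and exploited by _Shield_ can be summarized with the toy registry below; the function and hook names are invented for the example and do not correspond to COBAI's exported interface.

```python
# Toy model of the before/after hook pattern: a Shield-style callback can skip the
# real call and return a forged value, so environment queries look "clean".
hooks = {}

def register_api_hook(api_name, before=None, after=None):
    hooks[api_name] = (before, after)

def call_api(api_name, real_impl, *args):
    before, after = hooks.get(api_name, (None, None))
    if before:
        skip, forged = before(args)
        if skip:
            return forged                      # the real API is never executed
    result = real_impl(*args)
    return after(args, result) if after else result

# A Shield-like callback forging the answer of an "are we being watched?" query.
register_api_hook("IsAnalysisPresent", before=lambda args: (True, False))
print(call_api("IsAnalysisPresent", lambda: True))   # prints False: forged result
```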
## III Evaluation
This section evaluates the virtualization features of COBAI along with four other DBI frameworks (PIN, DynamoRIO, FLOW, QBDI), considering two points of view: the virtualization rate of the DBI framework itself facing specific DBI-transparency attacks in Section III-A, and the virtualization of the analysis environment resources (e.g.: Operating System, Sandbox - where the DBI executes) facing specific Sandbox fingerprints in Section III-B.
To evaluate the transparency of the DBI, the following ingredients are required:
1. an analysis environment as the running context for the DBI;
2. a DBI-engine (back-end of the analysis, mandatory);
3. a DBI-tool running on top of the DBI-engine, capable to fill the virtualization gap;
4. a target application executing certain transparency attacks (either for the DBI-engine or for the Sandbox environment).
These ingredients are configured differently across Sections III-A and III-B, to capture a series of problems we have identified during our experiments.
### _DBI Transparency Evaluation_
This experiment highlights the transparency features of COBAI compared with several other DBI frameworks. In some of our experiments we have used specific, publicly available DBI-tools (to assist in mitigating the transparency issues), while in others we only used the DBI-engine itself performing the basic binary instrumentation. The motivation for the chosen configuration of DBI-tools is presented in more detail in Section III-A2, while Sections III-A1 and III-A3 describe the test-suite we have used to perform the transparency attacks and the results we obtained, respectively.
#### III-A1 Test suite
The test suite consists of 57 applications. Some of them were collected from other research projects, while the rest were either inspired by informal descriptions found in the literature or derived from our experience, thereby expanding the initial set of tests. Our test suite has therefore been designed and developed based on four groups of publicly available proofs of concept, while the last two groups extend some uncovered scenarios:
1. Publicly available tests developed for Windows OS: 12 tests created by AVAST in 2014 for [38]; 1 test for a timing attack described in [49]; 4 tests described in [29].
2. 12 publicly available applications described in [59], developed for other OSs (e.g. Linux) but adapted by us to execute on Windows; the remaining test (the 13th) is Linux-specific and could not be ported to Windows because of the differences in the OS kernels.
3. 14 tests based on informal descriptions presented in [35] and partially on source-code fragments described in [43, 42, 36, 39].
4. 14 tests designed by us, implementing transparency attacks as an extension of the previous group of tests.
_A brief description of the tests:_ The names of the tests can be found in the second column of Table II. The tests are split into six groups based on their source. The first four groups of tests were publicly available [38, 49, 59, 29]. The fifth set of tests was developed based on informal descriptions and partial source code found in the references of [35]. The sixth group has been fully developed and implemented through our research, with attack scenarios created from scratch or by extending/deriving from others.
The first [38] and fourth [29, 15] sets of tests include experiments in which memory access (code, heap, stack, execution contexts) is manipulated, triggering various exceptions in order to expose the lack of DBI resource virtualization, or
Fig. 3: Plugin Infrastructure
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline Ref & Test Name & **COBAI** & **AR** & **VMS** & **SoK** & **PIN** & **BP10** & **BP7** & **JLP** & **DR** & **QBDI** & **FLOW** \\ \hline & ExecuteData1 & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ & ✗ \\ \hline & ExecuteData2 & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ \\ & ExecuteUnmap1 & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ \\ & ExecuteUnmap2 & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & ExecuteUnmap3 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ \\ & ExecuteUnmap4 & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ \\ & ExecuteUnmap5 & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ \\ & FpuContext1 & ✓ & ✗ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ \\ & FpuContext2 & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & ServiceException1 & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & TransientException1 & ✓ & ✗ & ✗ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & TransientException2 & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline
[49] & Detector & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ \\ \hline & enter & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ & envvar/strings & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ \\ & find-constant & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ & ✗ \\ & jit-branch & ✓ & ✗ & ✗ & ✗ & ✓ & ✓ & ✓ & ✗ & ✗� & ✓ \\ & jit-lib & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & ✓ \\ & mapname & ✓ & ✗ & ✗ & ✗ & ✓ & ✗ & ✓ & ✗� & ✗ \\ & NX & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✗� \\ & PagePerm & ✓ & ✗� & ✗ & ✗� & ✗� & ✗� & ✗� & ✗� & ✗� \\ & FXSAVE & ✓ & ✗� & ✗� & ✗� & ✗� & ✗ & ✗� & ✗� & ✗� \\ & IPSIGINFO & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & SMC & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ \\ & VMLeave & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline & PageGuard Single & ✓ & ✗ & ✗ & ✓ & ✗ & ✗ & ✓ & ✗ & ✗ \\ & PageGuard Multi & ✓ & ✗ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & read CC & ✓ & ✓ & ✗ & ✓ & ✗ & ✗ & ✓ & ✓ & ✓ \\ & FPU & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✗ \\ \hline & ProcessHierarchy & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ \\ &cmdangs & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ \\ & ProcessHandles & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ & FileHandles & ✓ & ✗ & ✗ & ✗ & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ \\ & SectionHandles & ✓ & ✓ & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ \\ & LibraryHooks & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ \\
[35] & HeapStack & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ & MemExhaus & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & Xmode & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & TLSPPresence & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & APIMonitor & ✓ & ✗ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & StallingCode & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & escape\_dbi & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & peak\_mem\_usage & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & unhandled\_instrs & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ \hline & exception\_mix & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✗ & ✗ \\ & stacktrace & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ & additional\_threads & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ III-A1 & additional\_stacks & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & free\_unknown & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & API\_unbook & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & abuse\_ exceptions & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & Transit\_x54 & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ \\ & ChildFeedback & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ \\ & ThreadExhaustion & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ \\ & ThreadContext & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ & thread-increase\_DBI-cache & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✓ & ✗ \\ & API-calls-increase-DBII-cache & ✓ & ✗ & ✗ & ✓ & ✓ & ✗ & ✗ & ✗ & ✓ & ✓ & � & ✗ \\ \hline \multicolumn{10}{c}{ ✓ - test based} & ✗ & - test failed & ✗ & - instrumentation crashed & & & & & & & & & \\ \hline \end{tabular}
\end{table} TABLE II: Extended evaluation for DBI transparency issues
**AR**: Arancino (x86 on Win10 x64) for PIN 2.14; **VMS**: PinVMShield (x86 on Win10 x64) for PIN 3.20; **SoK**: SoK (x86 on Win10 x64) mitigations for PIN 3.20; **PIN**: stand-alone PIN (x86 on Win10 x64) 3.20; **BP10**: BluePill (x86 on Win10 x64) for PIN 3.20; **BP7**: BluePill (x86 on Win7 x64) for PIN 3.16; **JLP**: JuanLesPIN for PIN 3.20; **DR**: stand-alone DynamoRIO (x86 on Win10 x64); **QBDI** (x86 on Win10 x64) 0.8.0; **FLOW** (x64 on Win10 x64)
to lead to unexpected behavior. These tests expose various facets of problem P1. The second group, consisting of a single test [49], describes an application performing a timing attack; the difference is an overhead visible during the analysis (this test is related to problem P3). The third [59], fifth [35] and last sets of tests (designed by us) include experiments exposing a wide range of facets of problems P1, P2, P3. Overall, our test-suite covers the DBI transparency problems as follows:
* architectural gaps: six tests (_enter, abuse_exceptions, exception_mix, Transit_x64, unhandled_instrs, Xmode_);
* increased runtime overhead: ten tests (jit_branch, API_Calls_Increase_DBI_cache, peak_mem_usage, StallingCode, thread_increase_DBI_cache, MemExhaus, abuse_exceptions, jit_lib, Detector, ThreadExhaustion);
* missing virtualization features: 41 tests (all the others).
The 41 tests triggering the P1 problem expose various common lacks of virtualization. While they could in principle be passed by any other DBI framework, it seems that none of the DBI frameworks / DBI tools was able to fully pass them. COBAI is capable of stacking various plugins as DBI tools and executing them at once, addressing in this way the P2.b problem. Below we highlight how some interesting tests are capable of triggering the P2.a and P3 problems:
* _P2.a - Transit_x64 / Xmode_: the _Transit_x64_ test switches the execution of a 32-bit process (through the gateway also known as Heaven's Gate [8]) to a 64-bit piece of code and then back to 32 bit using the 32-bit code segment. The _Xmode_ test extends this approach and leverages the WOW gateway to go even further and make calls to legitimate 64-bit APIs in a 32-bit process.
* _P2.a - abuse_exceptions_: All the DBIs implement some minimum support for exception handling; however, we have noticed that abusing the exception mechanisms leads to crashes and sometimes to executions that refuse to finish.
* _P3 - Detector_: A test causing significant performance issues, creating a behavioral difference between the CPU cache and the DBI cache when dealing with thread synchronization.
* _P3 - StallingCode_: A test leading to a significant performance degradation compared to native execution (the difference exceeds 1000%); a minimal timing sketch in this spirit follows this list.
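The principle behind the timing-oriented tests above reduces to comparing the measured cost of a hot loop against a generous native-time budget; the minimal, platform-independent sketch below illustrates the idea only (the loop, the budget and the threshold are arbitrary and far looser than what the real tests use).

```python
import time

def looks_instrumented(iterations=2_000_000, budget_seconds=0.5):
    """Flag the environment if a trivial hot loop grossly exceeds a native-time budget."""
    start = time.perf_counter()
    acc = 0
    for i in range(iterations):
        acc ^= i
    elapsed = time.perf_counter() - start
    return elapsed > budget_seconds, elapsed

suspicious, elapsed = looks_instrumented()
print(f"elapsed={elapsed:.3f}s -> {'instrumented?' if suspicious else 'native-like'}")
```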
#### III-A2 Evaluation Process
We executed all the test applications in a Sandbox-like environment running on top of VMWare 16 with hardware virtualization enabled. The operating system inside the virtual machine was Windows 10 x64 for most of the tests (or Windows 7 x64 for PIN 3.16, as an exception), with an i7-10885H CPU running at 2.4GHz. The virtual machine had 2 cores (4 threads) and a total of 4GB of memory.
We used four other DBIs for comparison with COBAI: _Intel PIN_ [45, 10] versions 3.20, 3.16 and 2.14, DynamoRIO [25, 6] version 9.0.18983, QBDI [37] version 0.8.0, and FLOW [7]. Because COBAI is, at the moment, built for the x86 architecture, three of the DBIs (PIN, DynamoRIO, QBDI) were also used in a similar context. FLOW, on the other hand, was configured for Windows x64, as it only supports this architecture. Given the differences between the DBIs, we built the tests both for Windows x86 and Windows x64, handling the low-level differences of the attacks so that they behave the same. Besides the two versions of the test source code (Windows x86, Windows x64), we also had to build a third version of the tests to execute on QBDI, as the instrumentation in this case is performed on a function and not on a standalone application.
We found DBI-tools only for PIN, which is the reason we also considered PIN with no tool at all running on top. Using the standalone version of PIN is motivated by our intention to compare the number of transparency issues fixed by the tools with the results of not using a tool at all. For COBAI we used the Shield plugin presented in Section II, while for DynamoRIO, FLOW and QBDI we only used the DBI-engine itself. As PIN has many versions, some of them quite different, we configured the PIN-tools and PIN versions as follows:
* for PIN 2.14 we used Arancino [50, 3];
* for PIN 3.16 we used BluePill [30, 4];
* for PIN 3.20 we used PinVMShield [51, 13], SoK [29, 15], the PIN DBI-engine itself without a specific tool, and BluePill;
The set of tests was executed using the required architecture for each DBI (x64 for FLOW and x86 for all the others). The following labels were used to tag the possible results in Fig. 4:
* the DBI was not detected (TEST PASSED);
* the DBI was detected (TEST FAILED);
* the execution generated an application crash as a result of the evasion mechanism being instrumented (EXEC ERROR).
None of the tests generates any crash by itself (i.e. without being instrumented), ensuring that any crash generated during the instrumentation was caused by the DBI-engine or by the logic inside the DBI-tool. For BluePill we used both PIN 3.16 (as suggested by the developers, where Windows 7 was also a requirement) and PIN 3.20, with additional changes required to make the build process possible.
#### III-A3 Results
The results may be interpreted from two points of view. The first one is a statistical overview of the results in Fig. 4, while the second one is a discussion on side-effects of implementing or lacking features to assist problems P1,P2,P3.
At the highest level, Fig. 4 may be interpreted as follows:
* Overall, COBAI passed \(\approx\) 95% of all tests and missed three tests that every other DBI also failed;
* PIN passed \(\approx\) 34% of the tests on average and DynamoRIO \(\approx\) 38%; DynamoRIO thus passes about 4% more tests than PIN and fails correspondingly fewer, indicating that DynamoRIO performs better than PIN;
* QBDI and FLOW both failed 61%-66% of the tests, with the same number of ERROR results; their pass rates are also close (\(\approx\)37% for QBDI and \(\approx\)33% for FLOW).
Below is a short discussion of the side-effects generated by the presence or the lack of virtualization features leading to problems P1-P3:
_P1 - Full virtualization coverage for own DBI resources:_ Our set of tests highlights the fact that this problem is a general one. None of the DBIs approaches this subject seriously, leaving DBI-tools to fix the lack of virtualization for the moment (and possibly forever). This virtualization gap exists because the developers of these analysis frameworks and their users do not agree on how far the virtualization of the DBI resources should go. In this regard, most of the DBI engines only cover some basic traits, such as context switching or strict behavioral changes for event handling. From our experiments, none of the existing DBIs attempts to fully isolate itself, and even the border-line isolation is not bullet-proof. The tests **APIMonitor, VMLeave, IPSIGINFO** described in Section III-A1 confirm our statement. From our point of view, increasing the security of the DBI at this level would require fully isolating the DBI resources. COBAI creates a border between the DBI resources and the instrumented application, and isolates itself by slightly shifting the application behavior on queries targeting any of the DBI resources.
_P2 - Architectural gaps:_ While this is the most challenging problem to solve, COBAI already incorporates some features ready to assist with upcoming issues in this area. The DBI engine uses modularity in such a way that core sub-modules can be individually replaced or updated. COBAI is able to pass 4 out of the 6 tests revealing this problem, and the other 2 are work in progress, whereas all the other DBIs may take longer to add the missing functionalities. A side benefit of COBAI's architecture, and possibly one of the reasons we were able to obtain the highest score, is the possibility of executing multiple DBI tools at once. To support this approach, a DBI must have its architecture designed such that multiple DBI-tools hooking the same instructions do not cause any conflict. The other DBIs would be required to merge different DBI-transparency mitigation rules, or even merge custom analysis logic with fixes for some virtualization gaps at the same time. This feature is not directly visible in our evaluation, but has an indirect impact: during the development of COBAI we have used both the Shield plugin and a tracing tool, allowing us to understand and solve most of the issues we have faced in a timely fashion.
_P3 - Runtime overhead:_ This problem affects the ability of the DBI to provide the desired results through significant performance degradation. It is an indirect approach to revealing the presence of the DBI, as it does not rely on DBI artifacts or fingerprints, but on critical differences in execution time or resource usage. **StallingCode** and **Detector**, described in Section III-A1, provide concrete examples of such a scenario. To perform well under such circumstances, a DBI should mix runtime optimizations with behavior shifts in time measurement; a caricature of such a time shift is sketched below.
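One common way to realise such a behavior shift is to report virtual time: the engine subtracts the overhead it knows it has introduced before a time value reaches the application. The snippet below is a self-contained caricature of that idea, not COBAI's synchronization logic.

```python
import time

class VirtualClock:
    """Report time with the instrumentation overhead subtracted (clamped to stay monotonic)."""
    def __init__(self):
        self.hidden = 0.0          # seconds of overhead accumulated by the engine
        self.last = 0.0

    def charge_overhead(self, seconds):
        self.hidden += seconds     # called by the engine around expensive translation work

    def now(self):
        t = time.perf_counter() - self.hidden
        self.last = max(self.last, t)   # never run backwards
        return self.last

clock = VirtualClock()
t0 = clock.now()
clock.charge_overhead(0.25)        # e.g. 250 ms spent translating a new code block
print(clock.now() - t0)            # the application sees (almost) no extra elapsed time
```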
### _Sandbox/Environment Transparency Evaluation_
The second experiment highlights the capacity of a DBI framework to conceal the true nature of the environment, making it look like the system of a regular user. In other words, the analysed application might look for certain artifacts, such as files, processes, registry keys, resource usage and so on, causing the environment to be detected as a virtual machine, sandbox, analysis environment, etc. This draws attention to the importance of concealing any environment artifacts that reveal its true purpose. This experiment is related to the previous one, but with additional tweaking of the virtualization support behind the OS, and we also reduced the number of tests to those implementing explicit sandbox-evasion techniques.
#### Iv-B1 Test suite
For this evaluation we have found two publicly available projects (pafish [12] and al-khaser [1]) that together implement more than 300 sandbox fingerprinting tests. The tests are classified based on a common purpose (i.e.: detection
Fig. 4: DBI Transparency across a total of 57 applications and a range of 5 (COBAI, PIN, DynamoRIO, QBDI, FLOW) different DBIs, where PIN had configured 5 different PIN tools (Arancino, VMShield, SoK, BluePill, JuanLesPin) as well as the standalone version (PIN).
of virtual machines like VMWare [17] or VirtualBox [16], detection of debuggers, detection of injected modules, timing attacks, etc.). Other public projects (_unprotect_ [21] and _evasions checkpoint_ [41]) with the same target (exposing specific sandbox and analysis environment fingerprints) share a lot of code with the two selected applications.
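To give a flavour of what such fingerprinting tests look for, the fragment below performs two of the simplest environment checks (hypervisor-related registry keys and tell-tale MAC address prefixes) in the spirit of pafish and al-khaser; it is our own illustrative re-implementation, not code taken from those projects.

```python
import uuid

VM_MAC_PREFIXES = {"00:05:69", "00:0c:29", "00:1c:14", "00:50:56",   # VMware
                   "08:00:27"}                                        # VirtualBox
VM_REGISTRY_KEYS = [r"SOFTWARE\VMware, Inc.\VMware Tools",
                    r"SOFTWARE\Oracle\VirtualBox Guest Additions"]

def mac_looks_virtual():
    mac = "{:012x}".format(uuid.getnode())
    pretty = ":".join(mac[i:i + 2] for i in range(0, 12, 2))
    return any(pretty.startswith(p) for p in VM_MAC_PREFIXES)

def registry_looks_virtual():
    try:
        import winreg                      # Windows only; absent elsewhere
    except ImportError:
        return False
    for key in VM_REGISTRY_KEYS:
        try:
            winreg.CloseKey(winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key))
            return True                    # key exists: guest-additions artifact found
        except OSError:
            continue
    return False

print("sandbox suspected" if (mac_looks_virtual() or registry_looks_virtual())
      else "no obvious artifacts")
```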
#### III-B2 Evaluation Process
This evaluation is based on executing the two applications (_pafish 6.2_ and _al-khaser 0.80_) on four different virtual machines with support for hardware virtualization (Intel VMX in our case): VMWare Workstation 16.1.1, VirtualBox 5.2.24, Hyper-V and KVM using QEMU 2.5.0. The configuration of the Sandbox was identical to that used for the previous evaluation, in Section III-A2. Table III highlights our results: we use a red cross where the execution crashed; otherwise we report the total number of triggered sandbox artifacts. The lower the number, the better.
#### III-B3 Results
The obtained results are included in Table III. Each line in the table contains the total number of triggered detection tests for the specific DBI, out of a maximum of 283 for _al-khaser_ and 55 for _pafish_. This experiment highlights the fact that, while COBAI was able to pass all the test scenarios, the other DBIs either kept crashing (Arancino, VMShield, BluePill) or triggered a significant number of detections. Some of the DBIs also triggered an increased number of detections compared to a native execution of the test (no instrumentation). This is because the DBI itself introduced aspects that triggered additional detections, even though these do not directly check for the presence of a DBI framework, but rather for an analysis solution in general. For instance, FLOW triggered an increased number of detections for VirtualBox (by 6%) and KVM (by 30%), while DynamoRIO has a significantly increased number of total detections for Hyper-V (by 13%). The only detection triggered by COBAI may be considered a false positive, since the test involves detecting how the binary code is aligned, which is meaningless as long as the application is compiled with size optimizations. We had also expected BluePill to provide better results for this experiment, since it is at a more advanced stage towards transparency compared to the other PIN DBI tools (Arancino, SoK, PinVMShield). However, we found that it constantly triggered application crashes for _pafish_, due to Windows 10 WMI (Windows Management Instrumentation) usage, and for _al-khaser_, due to incorrect exception handling. Testing the instrumentation of _al-khaser_ on all versions of PIN up to 3.22, we found that most probably there is an architectural issue, as PIN kept crashing even without using a specific tool.
### _Overall discussion_
From our evaluation, we conclude that existing DBIs are not yet ready to virtualize a wide range of OS resources. This lack of virtualization makes them unsuitable for automating the analysis of malicious binaries. While other analysis environments may also share some of these transparency issues and lack of resource virtualization, we believe that DBIs may be the first analysis environments to provide good results in this regard. COBAI provides reliable solutions to problems P1,P2,P3, with some caveats related to P2 and P3. We draw attention again to the architecture, as these issues are difficult to solve for other DBI frameworks, since they might require a complete redesign. Moreover, the sandbox related issues only seem to add more challenges to the already existing ones, related to the DBI itself.
## IV Related Work
Generally, DBIs may assist the analysis of applications with consistent data collected at runtime, which may further be used for code profiling, statistics or behavior extraction. Our use cases target behavior extraction from malicious binaries performing analysis-evasion scenarios. For our evaluation we have used DBI frameworks like PIN [45], DynamoRIO [25], QBDI [37] and FLOW [7]. In the last decade, other analysis tools have been developed on top of these frameworks, some of them targeting benign binaries, while others target malicious applications. Since transparency is of little concern when dealing with benign binaries, we focus on the projects and research works that also deal with this attack surface. TRITON [18] is a framework based on PIN, capable of performing concolic execution. While it does not specifically target malicious binaries, existing PIN-tools attempting to fix PIN transparency issues may be combined with it, for instance to perform concolic execution on APTs (Advanced Persistent Threats). Other DBI-based analysis tools like BAP [26] and ANGR [53] are able to analyze malicious binaries or payloads up to a certain level, but they do not attempt to overcome analysis evasion to the extent we would expect.
The benefits of using DBI frameworks for malware analysis were explored by many research projects, and, as a consequence, the general interest and demand have risen. For instance, tools and frameworks like Pindemonium [28], RePEconstruct [40],
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline
**VM** & **Sample** & **COBAI** & **AR** & **VMS** & **SoK** & **PIN** & **BP** & **DR** & **QBDI** & **FLOW** & **Native** \\ \hline
\multirow{2}{*}{VMWare} & pafish & 0 & 10 & X & 10 & 10 & X & 11 & 10 & 11 & 11 \\ \cline{2-12}
 & al-khaser & 1 & X & X & X & X & X & 45 & X & 49 & 44 \\ \hline
\multirow{2}{*}{VirtualBox} & pafish & 0 & X & 20 & 21 & X & 21 & 21 & X & 22 \\ \cline{2-12}
 & al-khaser & 1 & X & X & X & X & 79 & X & 83 & 78 \\ \hline
\multirow{2}{*}{Hyper-V} & pafish & 0 & X & X & 5 & 5 & X & 5 & 6 & X & 5 \\ \cline{2-12}
 & al-khaser & 1 & X & X & X & X & 33 & X & X & 29 \\ \hline
\multirow{2}{*}{QEMU-KVM} & pafish & 0 & X & X & 5 & 5 & X & 3 & 6 & X & 5 \\ \cline{2-12}
 & al-khaser & 1 & X & X & X & X & X & X & X & 39 & 30 \\ \hline
\end{tabular}
* Total number of available tests for al-khaser: 283; for pafish: 55
\end{table} TABLE III: Environment transparency for virtual machines; the lower the values, the more transparent the environment becomes to the instrumented application. X represents an application crash.
Arancino [50], PinVMShield [13], BluePill [30] and PEMU [58] provided good results at certain specific tasks, but the lack of a higher purpose turned them into proofs of concept for other possible projects. Other types of issues have been uncovered targeting DBI frameworks. For example, in Section E of a related work [42], the authors identify four distinct DBI evasion mechanisms found in commercial protectors/packers which allow applications to escape the analysis environment. Considering all these premises, DBIs seem to have reached a dead-end, which might explain the reluctance of security products or analysis environments to incorporate such techniques. Our purpose was to overcome these disadvantages and change the current mindset, which is why we developed COBAI.
As an alternative to binary instrumentation, researchers have been replacing it with emulation introspection, as in QEMU [24], hypervisor introspection, as found in [55, 54, 33], or API hooking, as found in tools like FRIDA [48] and the monitor component of Cuckoo Sandbox [5]. During our research, we came across various research projects (be they theoretical or practical) describing specific DBI or sandbox transparency attacks, or providing transparency fixes for specific DBI frameworks:
* SafeMachine [38] is a set of 12 demos proposed by AVAST to attack the transparency of DBIs through exceptions; most of the demos use exceptions to leak DBI-specific content (e.g.: DBI stack, DBI cache, DBI memory management);
* SoK [29] is a research proposing four transparency attack scenarios as well as their countermeasures for PIN;
* PvIN [59] is a research work discussing the transparency issues of Intel PIN on Linux; the authors implement a set of about 13 demos to make their point and include the most varied set of attacks, including some that are able to detect the DBI execution cache and even escape the analysis;
* Detector [49] is a research work proposed by the same author as the FLOW DBI; the demo exploits a low-level CPU-cache behavior taking place in a thread-sync mechanism;
* Some classifications for the DBI transparency issues in [35] and [31]; we took the informal descriptions and references found in these papers to implement 13 demos for our evaluation;
* Code examples for concrete DBI transparency issues in [42, 39, 43, 36, 23];
* A series of DBI tools like Arancino [50], BluePill [30], PinVMShield [51], ProcTracer [14], RePEConstruct [40];
* Sandbox transparency attacks in al-khaser [1], pafish [12] and some public available sources [41, 21] citing most of the examples from the two projects.
All these works draw special attention to how DBIs are able to virtualize specific execution-context resources like certain CPU instructions, API functions, memory access, callbacks and exceptions, execution time, etc. The aspects handled by the authors cover leaking DBI internal state, transparency, performance, escaping instrumentation, etc. We considered all the DBI-transparency issues presented in these research papers as a whole, and extended them with new scenarios in an attempt to create a reference benchmark on which, for the moment, COBAI outperforms all the DBIs we have tested.
## V Limitations and Future Work
COBAI has been designed with a flexible architecture that facilitates its extension with new features and functionalities. Because COBAI is still under development, it is currently limited to Windows 32-bit applications. Other limitations are the inability to instrument x64 code in x86 processes and the lack of some runtime optimizations for overhead management. However, overcoming these limits is merely a matter of time. While COBAI already successfully overcomes multiple other analysis challenges, we have developed a plan for future improvements, separated into categories based on purpose and type:
_Improving the DBI Transparency Benchmark::_ Starting from the current state of our work, we plan to extend the set of tests both for DBIs and for the environment, and to make it publicly available. The benchmark we proposed will be updated so that it becomes a measuring instrument for DBIs promising transparency; we also hope to formalize the definitions of transparency attacks and to automate the testing of a given DBI or sandbox environment against a specific transparency attack.
_Performance and soundness of instrumentation::_ COBAI will be extended to support a wider range of transparency features at both the DBI level and the environment level, while at the same time improving its performance and robustness. While at the moment we use a large set of regression tests, we believe that we can design a formal approach to prove the soundness of an analysis, or at least a way to profile the DBI itself.
_Concolic Execution and Taint Analysis::_ At the moment we are adding support for taint analysis and concolic execution on top of COBAI. We believe that we can manipulate an application whose execution flow relies on certain command-line arguments or includes evasive behaviors. In this way, we may force a desired behavior starting from an initial application execution path (e.g. activate the malicious behavior in sandbox environments, generate a key based on the desired seed, etc.). This is different from and more generic than current transparency-specific mitigations, and it may assist behavior extraction from highly evasive malicious binaries, as well as from APTs (Advanced Persistent Threats) known to require specific conditions to trigger the execution of malicious payloads.
_Automated Tasks::_ In order to build a malware removal tool (for persistent, hard to remove malicious applications) or ransomware decryption tool (to decrypt and recover files encrypted by Ransomware), one must analyze malicious samples and then combine the results with the whole process of software development. We plan to reduce and automate the redundant work by interpreting the results of COBAI's analysis and use them to fill templates for such tools. Therefore, an analyst could use COBAI to obtain a malicious trace and also be able to generate a removal tool based on this trace, with minimal effort.
## VI Conclusion
In this paper, we proposed COBAI as a DBI framework aimed at mitigating transparency attacks from malicious binaries, while also successfully analyzing their behaviour. Unlike common DBI engines, which force developers to combine the functionalities of different tools into a single one or to rely on the core functionalities of the DBI engine, COBAI has a dynamic architecture supporting multiple tools at once (some of them as plugins), making possible the replacement of core modules and, at the same time, the usage of standalone DBI tools within the same analysis. The dynamic architecture also allows the usage of the DBI as a highly configurable stand-alone tool, able to produce out-of-the-box execution traces requiring no programming skills. This is different from most existing DBIs like PIN and DynamoRIO, which force users either to use existing open-source DBI tools to add analysis functionalities to the DBI engine, or to develop custom ones.
The development of COBAI started in late 2018 as a proof-of-concept DBI, and since mid 2019 its development and architecture have been directed towards the improvement of transparency issues, starting with the DBI itself and extending these features up to the OS environment level in an attempt to automate malicious binary analysis. In our experiments so far, it has been successfully used to reduce ransomware analysis from days to minutes. Before COBAI was developed, we tested PIN and DynamoRIO, and the negative experience of developing and maintaining such projects was used to optimize COBAI's architecture and usage from the beginning.
Our test-suite aggregates all of the publicly available tests (which implement DBI-transparency attack scenarios) and also considers new scenarios. While all the transparency issues are covered by problems **P1, P2, P3**, we consider that there is still room for unexplored attack scenarios. Uncovering these possibilities in the near future will assist the improvement of automated binary analysis for malicious applications.
Our evaluation of DBI-transparency and environment-transparency issues shows that COBAI is capable of mitigating DBI-transparency issues for \(\approx\) 96% of the tests, and all of the environment-transparency issues described in _pafish_ and _al-khaser_. Based on our results, while still aiming for further optimizations and improvements, COBAI might be one of the few fine-grained binary analysis engines capable of reducing the time involved in reverse-engineering malicious binaries. The goal is for COBAI to achieve the capability of analyzing complex malicious binaries in any sandbox environment, and new features and performance optimizations may make it more likely to replace conventional analysis tools like debuggers. We believe that the technologies developed in COBAI will extend the context of using DBIs for benign and malign binary analysis. Here are several possible scenarios that support our claims:
* The dynamic architecture and highly configurable usage might open the door to: using DBIs in academic research (involving students in extending and optimizing specific core DBI modules); easier application profiling; system administrators gathering additional logs from applications; and easily added extensions to support a plethora of architectures.
* The transparency module can assist in providing consistent traces for malicious binaries, impossible to achieve with existing technologies.
* The server-client interface can contribute to: network-level debuggers, debugging countless processes and payloads across the network at the same time; and the groundwork for DBI services that either offload part of the DBI analysis to more performant systems or provide access to real-time execution logs and analysis results.
|
2306.04055 | QPOML: A Machine Learning Approach to Detect and Characterize
Quasi-Periodic Oscillations in X-ray Binaries | Astronomy is presently experiencing profound growth in the deployment of
machine learning to explore large datasets. However, transient quasi-periodic
oscillations (QPOs) which appear in power density spectra of many X-ray binary
system observations are an intriguing phenomena heretofore not explored with
machine learning. In light of this, we propose and experiment with novel
methodologies for predicting the presence and properties of QPOs to make the
first ever detections and characterizations of QPOs with machine learning
models. We base our findings on raw energy spectra and processed features
derived from energy spectra using an abundance of data from the NICER and RXTE
space telescope archives for two black hole low mass X-ray binary sources, GRS
1915+105 and MAXI J1535-571. We advance these non-traditional methods as a
foundation for using machine learning to discover global inter-object
generalizations between - and provide unique insights about - energy and timing
phenomena to assist with the ongoing challenge of unambiguously understanding
the nature and origin of QPOs. Additionally, we have developed a publicly
available Python machine learning library, QPOML, to enable further Machine
Learning aided investigations into QPOs. | Thaddaeus J. Kiker, James F. Steiner, Cecilia Garraffo, Mariano Mendez, Liang Zhang | 2023-06-06T23:06:07Z | http://arxiv.org/abs/2306.04055v1 | QPOML: A Machine Learning Approach to Detect and Characterize Quasi-Periodic Oscillations in X-ray Binaries
###### Abstract
Astronomy is presently experiencing profound growth in the deployment of machine learning to explore large datasets. However, transient quasi-periodic oscillations (QPOs) which appear in power density spectra of many X-ray binary system observations are intriguing phenomena heretofore not explored with machine learning. In light of this, we propose and experiment with novel methodologies for predicting the presence and properties of QPOs to make the first ever detections and characterizations of QPOs with machine learning models. We base our findings on raw energy spectra and processed features derived from energy spectra using an abundance of data from the _NICER_ and _Rossi X-ray Timing Explorer_ space telescope archives for two black hole low mass X-ray binary sources, GRS 1915+105 and MAXI J1535-571. We advance these non-traditional methods as a foundation for using machine learning to discover global inter-object generalizations between--and provide unique insights about--energy and timing phenomena to assist with the ongoing challenge of unambiguously understanding the nature and origin of QPOs. Additionally, we have developed a publicly available Python machine learning library, QPOML, to enable further Machine Learning aided investigations into QPOs.
keywords: accretion, accretion disks -- black hole physics -- stars: individual (GRS 1915+105, MAXI J1535+571) -- X-rays: binaries
## 1 Introduction
At the ends of their lives, massive stars "do not go gentle into that good night" (Thomas, 1952). Instead, if their initial mass exceeds \(\sim 8\) M\({}_{\odot}\), core collapse leads to spectacular Type II supernovae (Schlegel, 1995). If the compact remnant remains bound or becomes bound to a non-degenerate companion star, the result can be a neutron star (NS) or black hole (BH) remnant (Gilmore, 2004). In special cases, this object maintains a non-degenerate partner, and together these may form an X-ray binary (XRB) system, in which the non-degenerate star engages in mass-exchange with its compact partner (Tauris and van den Heuvel, 2006). Such systems are characterized by accretion from the donor star, through accretion disks (Shakura and Sunyaev, 1973) and are the sources for jets (Gallo et al., 2005; van den Eijnden et al., 2018) and winds (Neilsen, 2013; Castro Segura et al., 2022). Additional exotic phenomena like thermonuclear surface burning (Bildsten, 1998) have also been observed in neutron star binaries. BH and NS systems are both observed to emit thermal X-ray radiation with temperatures \(\sim 1\) keV that is understood to arise from the conversion of gravitational potential to radiative energy. Neutron stars can produce thermal emission at their surfaces, and the optically thick, geometrically thin accretion disks around both NSs and BHs can produce strong thermal X-ray emission (Shakura and Sunyaev, 1973). Furthermore, BH and NS XRBs also show hard X-ray flux coming from Compton up-scattering of thermal disk emission by a cloud of hot electrons around the compact source known as the corona (Galeev et al., 1979; White and Holt, 1982). Comptonized emission is commonly modeled by a power law relationship \(N(E)\propto E^{-\Gamma}\), where \(\Gamma\) is the photon index (McClintock and Remillard, 2006). Strongly-Comptonized spectra commonly exhibit reflection features like a fluorescent, relativistically broadened 6.4 keV Fe K\(\alpha\) line (Fabian et al., 1989) and a \(\sim 30\) keV Compton hump (Ross and Fabian, 2005). These systems can be transient in activity and undergo evolution in spectral states (Gardenier and Uttley, 2018), ranging from hard, to intermediate, and to soft (McClintock and Remillard, 2006), which are coupled with mass-accretion rate (Done and Gierlinski, 2004), spectral hardness or thermal dominance, and thereby position on a hardness-intensity or color-color diagram track (Ingram and Motta, 2019), and the presence/absence of quasi-periodic oscillations (QPO) of the observed X-ray radiation (McClintock and Remillard, 2006). These QPOs are detected as narrow peaks in power-density spectra (Homan and Belloni, 2005). In the past thirty years, numerous theories, including but not
limited to relativistic precession (Stella and Vietri, 1998), precessing inner flow (Ingram et al., 2009), corrugation modes (Kato and Fukue, 1980), accretion ejection instability (Tagger and Pellat, 1999), and propagating oscillatory shock (Molteni et al., 1996) have been advanced to explain the occurrence of QPOs in black hole, as well as neutron star, XRB systems. Yet, there is no consensus as to which model is most plausible. In black-hole systems, most of the observed QPOs have been at low frequencies (LF) \(\leq\) 30 Hz (Belloni et al., 2020). Only a small subset of BHXRBs has exhibited high-frequency QPOs (HFQPOs). LF QPOs are further subdivided into three classes (Casella et al., 2005): Type-A QPOs are the rarest, sometimes appearing in the intermediate or soft state as broad, low amplitude features centered between 6-9 Hz and usually lacking harmonic companions (Motta et al., 2011). Type-B QPOs are more common, and can be seen during the short soft intermediate state and have shown some connection with jet behavior (Gao et al., 2017; Garcia et al., 2021). Finally, type C QPOs are the most common, and can be detected as narrow features in the low-hard and hard-intermediate states with harmonic companions (Fragile et al., 2016). Their fundamental frequencies range from \(\sim\) 0.1-30 Hz depending on state, and almost always correlate strongly with spectral features like \(\Gamma\) and luminosity (Motta et al., 2015). As for HFQPOs, we refer readers to Motta et al. (2011), Mendez et al. (2013), and Stella and Vietri (1999). QPOs are also observed in neutron star systems (Belloni et al., 2002; Wang, 2016). We focus on LFQPOs from BHXRBs in this paper and recommend van der Klis (2006) and Wang (2016) for reviews of neutron star specific QPOs, and Ingram and Motta (2019), Jonker et al. (1999), Kato (2005), Revnivtsev et al. (2001), and Mendez and Belloni (2021) for reviews of QPOs in XRBs in general. All in all, hundreds of XRBs have been observed since the discovery of Sco X-1 (Giacconi et al., 1962; Liu et al., 2007; Corral-Santana, J. M. et al., 2016) and a large fraction show some type of QPO.
Machine learning is a revolutionary subfield of artificial intelligence in which models teach themselves patterns in data rather than operating by externally supplied hard-coded rules (Goodfellow et al., 2016). With data available to astronomers approaching the petabyte domain (Ivezic et al., 2014), this aspect of machine learning has helped it supplement traditional methods in addressing the ever growing volume and increasing complexity of astronomical data, while also providing new perspectives on old phenomena (Kremer et al., 2017; Rodriguez et al., 2022). Consequently, machine learning has been used prolifically to classify variable stars (Richards et al., 2011), search for exoplanets (Pearson et al., 2018), detect pulsars (Zhu et al., 2014), predict solar flares (Li et al., 2020), and classify and even discover galaxies (Dieleman et al., 2015; Kojima et al., 2020). However, although machine learning techniques have been applied to a number of problems related to XRBs as well, e.g., to classify and identify X-ray binaries (Huppenkothen et al., 2017; Arnason et al., 2020; Sreehari and Nandi, 2021; de Beurs et al., 2022; Orvat-Kapola et al., 2022; Yang et al., 2022), predict compact object identity (Patnaniak et al., 2021), and study gravitational waves (Schmidt et al., 2021), this subfield contains tens of thousands of observations that have never been explored with machine learning to detect QPOs themselves. For the first time, in this work we seek to develop a methodology for using machine learning to detect QPOs, because we believe that our theoretical understanding of QPOs and their exotic progenitor systems would benefit from insights this approach could provide (Fudenberg and Liang, 2020). Our approach is unique, because although the externally determined presence of QPOs has been used as a binary input parameter in accretion state classifiers such as those in Sreehari and Nandi (2021), QPOs have never before been the output of machine learning prediction themselves. The rest of this paper is structured as follows: in Section 2 we describe the observations upon which we base our work. Following this, in Section 3 we describe the energy and spectral fitting procedures we employ to produce input/output data from these observations for the machine learning models and methods which we detail in Section 4. We present our results in Section 5, and we discuss these results contextually in Section 6. Finally, we conclude in Section 7. Additional work demonstrating QPOML and comparing models is presented in the appendices that follow.
## 2 Observations
### GRS 1915+105
GRS 1915+105 is a well-studied galactic low mass XRB system composed of a \(12.4^{+2.0}_{-1.8}\) M\({}_{\odot}\) primary and a \(1.2\) M\({}_{\odot}\) K III secondary (Greiner et al., 2001; Greiner, 2003) on a 34 d period located at a distance of \(8.6^{+2.0}_{-1.6}\) kpc from the Earth (Reid et al., 2014). The secondary star in this system overflows its Roche lobe. GRS 1915+105 was one of the first microquasar jet systems, with (apparent) superluminal motion detected from a ballistic jet launched with an inclination \(70\pm 2\) deg (Mirabel and Rodriguez, 1994). Since its discovery in 1992 (Castro-Tirado et al., 1992), this somewhat peculiar source has displayed unique timing and spectral patterns which have been organized into 14 separate variability classifications depending on its variability state (Belloni et al., 2000; Hannikainen et al., 2005). Given the 16-year archive of observations of this source, we considered all data from the Rossi X-ray Timing Explorer (RXTE) Proportional Counter Array (PCA; \(2-60\) keV) that are also included in Zhang et al. (2020), Mendez et al. (2022), and Garcia et al. (2022). These include a great number of detections of type C QPOs between 1996 and 2012. Energy and power-density spectra (PDS) have been derived from binned, event, and GoodXenon data as described in Zhang et al. (2020). Briefly, PDS have been constructed by averaging 128 s long intervals at 1/128 s time resolution, normalized according to Leahy et al. (1983), and Poisson noise subtracted (Zhang et al., 1995). Of the 625 timing observations in Zhang et al. (2020), we have 554 matching energy spectra.
### MAXI J1535-571
MAXI J1535-571 was discovered by the MAXI/GSC nova alert system as a hard X-ray transient system undergoing outburst in 2017 by Negoro et al. (2017), and it was first suggested to be a black hole system by Negoro et al. (2017). Since its discovery, it has been suggested to be a \(\sim 10.39M_{\odot}\) BH, \(\sim 5\) kpc distant (Sridhar et al., 2019). MAXI J1535-571 has displayed state transitions (Nakahira et al., 2018), reflaring events (Cuneo et al., 2020), and hysteresis during its main outburst (Parikh et al., 2019). Furthermore, it has been determined to possess a near-maximal dimensionless spin parameter of \(a=\frac{cJ}{GM^{2}}>0.99\) (Miller et al., 2018; Liu et al., 2022). To study this source we use data from the International Space Station-mounted, soft X-ray (0.5-12 keV) observatory Neutron star Interior Composition ExploreR (_NICER_) (Gendreau et al., 2012), which has unequaled spectral-timing capabilities in soft X-rays.
We have filtered our _NICER_ data following standard practices, excluding South Atlantic Anomaly passages in order to identify continuous good time intervals (GTIs) which are extracted and analyzed individually. Data from detectors 14, 34, and 54 have been excised owing to a propensity for elevated noise or spurious events in those detectors. Additionally, for each GTI, the average event rates of overshoot, undershoot, and X-ray events are compared amongst
the detector ensemble, and any detector which has a median absolute deviation (MAD) \(>15\) is also excised for that GTI\({}^{1}\). All spectra have been corrected for deadtime (generally \(<1\%\)). _NICER_ backgrounds have been computed using the 3C50 background model (Remillard et al., 2022), as well as using a proprietary and similar background model which replaces the 3C50's "hrej" and "big" indexing with cutoff-rigidity "COR_Sax" and overshoot-rate indexing. We have removed any data with a background count rate \(\geq 5\) counts/s, exclude observations for which the source-to-background count ratio is \(<10\), and reject observations with exposure times \(t\lesssim 60\) s. Additionally, we require at least 5000 net source counts to ensure reliable energy and power-density spectral results, and we consider the remaining data sufficiently bright and insensitive to the selection between these similar background models. Energy spectra have been rebinned from the 10 eV PI channels by a factor ranging from 2-6 in order to oversample _NICER_'s energy resolution by a factor \(\geq 2\), while also requiring a minimum of 5 counts per bin. From \(1-4096\) Hz, PDS are computed using events in the energy range from \(0.2-12\) keV, for a light-curve sampling at \(2^{-13}\) s (\(\approx 122\mu\)s). PDS are computed individually and averaged together using 4 s segments for \(t<160\) s and 16 s segments for \(t\geq 160\) s. Below 1 Hz, PDS are computed by averaging together results for 128 s segments for \(t>128\) s, 64 s segments for \(64\leq t<128\) s, and 4 s segments for \(t<64\) s. The resulting PDS is then logarithmically rebinned in \(\sim 3\%\) frequency intervals, the Poisson noise subtracted, and the rms\({}^{2}\) Hz\({}^{-1}\) normalization adopted.
Footnote 1: The MAD is a robust statistic which is insensitive to outliers. 15 MAD corresponds to approximately \(10\sigma\) for a Gaussian-distribution.
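As an aside, the per-GTI detector screening can be expressed as a simple robust cut; the sketch below is illustrative only, with the array of per-detector rates and the helper name being placeholders rather than part of the actual _NICER_ reduction pipeline.

```python
import numpy as np

def screen_detectors(rates, threshold=15.0):
    """Flag detectors whose event rate deviates from the ensemble median
    by more than `threshold` times the median absolute deviation (MAD)."""
    rates = np.asarray(rates, dtype=float)
    median = np.median(rates)
    mad = np.median(np.abs(rates - median))
    if mad == 0:
        # All detectors agree exactly; nothing to excise
        return np.zeros(rates.shape, dtype=bool)
    deviation = np.abs(rates - median) / mad
    return deviation > threshold  # True = excise this detector for the GTI

# Example: the fourth detector is anomalously noisy and would be excised
example_rates = [1.1, 1.0, 1.2, 9.5, 1.1, 0.9]
print(screen_detectors(example_rates))
```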
Although we have fewer MAXI J1535-571 observations with QPOs for analysis (in large part due to the source's transient nature), one benefit of using _NICER_ rather than _RXTE_ data for this source (had _RXTE_ data been available) is that _NICER_ spectral channels do not suffer from gain drift over epochs like the _RXTE_ PCA (which affected energy-channel conversions), and thus we can use the _NICER_ energy spectra as raw inputs to our regression and classifier models, in addition to the engineered features discussed in Section 3 and Section 4.2.
Overall, we selected these two sources for this initial evaluation of our methodology because they represent two very different types of LMXRBs. On one hand, GRS 1915+105 has long been known as a markedly unusual source in terms of its outburst behaviors and states (e.g. its very abnormal, three-decade long transient outburst, regular/irregular bursts, dips, etc., behaviors influenced by GRS 1915+105's orbital period and accretion disk size, the longest and largest respectively known among LMXRBs), whereas on the other hand, MAXI J1535-571 is, in comparison to GRS 1915+105, a far more typical source in terms of outburst states, QPO-spectral parameter associations, and tracks through the hardness-intensity diagram (Taam et al., 1996; Truss and Done, 2006; Nakahira et al., 2018; Bhargava et al., 2019; Coneo et al., 2020; Koljonen and Hovatta, 2021; Garcia et al., 2022). Hence, between these two sources we aim to evaluate our methods across a spectrum of typical to challenging spectral-timing relationships. Furthermore, in choosing objects observed with different instruments, we aim to take advantage of the different strengths of each instrument, such as the plethora of _RXTE_ observations and the high spectral resolution of _NICER_ (Gendreau et al., 2012).
## 3 Data Analysis
### Energy Spectra
As previously mentioned and discussed in more detail in Section 4.2, we base our detection of QPOs on energy spectra and processed features from the energy spectra. Thus, to generate the processed spectral features we fit the energy spectra for both sources with XSPEC version 12.12.0 (Arnaud et al., 1999) using the three component model tbabs*(diskbb+nthcomp), which represents a Tuebingen-Boulder absorbed multi-temperature blackbody and thermally Comptonized continuum (Mitsuda et al., 1984; Zdziarski et al., 1996; Kubota et al., 1998; Zycki et al., 1999). We fixed the equivalent hydrogen column densities to canonical values of \(6\times 10^{22}\) atoms cm\({}^{-2}\) for GRS 1915+105 and \(3.2\times 10^{22}\) atoms cm\({}^{-2}\) for MAXI J1535-571 based on Sreehari et al. (2020) and Cuneo et al. (2020), respectively, with solar abundances in accordance with Wilms et al. (2000) and Verner et al. (1996) cross-sections. We tied the nthcomp seed photon temperature to T\({}_{\rm in}\) of diskbb for both sources, and let the high energy rollover (electron temperature) freely vary between \(4-40\) keV for GRS 1915+105 and \(4-250\) keV during fitting for MAXI J1535-571, basing these ranges on Zhang et al. (2022) and Dong et al. (2022), respectively. For GRS 1915+105, we ignore channels \(<2.5\) keV or \(>25\) keV during fitting, calculate net count rate from the resulting range, and compute hardness from the background-subtracted channel net count rates for the ranges in Zhang et al. (2022), except as a proportion rather than a ratio, i.e. the fraction of the total net count rate contributed by the \(13-60\) keV band. Regarding MAXI J1535-571, we note the presence of instrumental residuals in the \(1.7-2.3\) keV _NICER_ range, likely related to _NICER_'s A mirror coating and residual in the Si K \(\alpha\) fluorescence peak, and following Miller et al. (2018), we address these by excluding the \(1.7-2.3\) keV energy band from the spectral fitting process, and otherwise fit the range \(0.5-10.0\) keV. We compute net count rate normalized to the number of _NICER_ detectors, and hardness ratios for MAXI J1535-571 observations as the proportion of the total net count rate contributed by the \(3.0-10.0\) keV range, i.e. \([3.0-10.0]/([0.5-1.7]+[2.3-3.0]+[3.0-10.0])\) keV. Altogether, for both sources we use the net count rate, hardness ratio, asymptotic power-law photon index, nthcomp normalization, inner disk temperature, and diskbb normalization for input parameters, which we discuss in more detail in Section 4.2.
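For illustration, a continuum fit of this form can be scripted with PyXspec; the sketch below is a simplified, hypothetical example for a single MAXI J1535-571 spectrum, where the file name, starting values, and ignore string are assumptions rather than the exact settings of our pipeline.

```python
from xspec import Spectrum, Model, Fit, AllData, Xset

Xset.abund = "wilm"   # Wilms et al. (2000) abundances
Xset.xsect = "vern"   # Verner et al. (1996) cross-sections

# Hypothetical file name for one NICER spectrum of MAXI J1535-571
s = Spectrum("maxij1535_obs.pha")
AllData.ignore("**-0.5 1.7-2.3 10.0-**")   # fit 0.5-10 keV, excluding the 1.7-2.3 keV band

m = Model("tbabs*(diskbb+nthcomp)")
m.TBabs.nH.values = 3.2                    # 10^22 atoms cm^-2, held fixed
m.TBabs.nH.frozen = True
m.nthComp.kT_e.values = [20.0, 0.1, 4.0, 4.0, 250.0, 250.0]   # electron temperature free within 4-250 keV
m.nthComp.kT_bb.link = str(m.diskbb.Tin.index)                 # tie seed photons to the inner-disk temperature

Fit.perform()
print(m.diskbb.Tin.values[0], m.nthComp.Gamma.values[0], Fit.statistic)
```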
### Power Density Spectra
Throughout this work, all QPOs for both sources are parameterized as Lorentzian distributions given by Equation 1,
\[A(f)=\frac{K(\frac{\sigma}{2\pi})}{(f-f_{0})^{2}+(\frac{\sigma}{2})^{2}} \tag{1}\]
where \(f\) is frequency in Hertz, \(f_{0}\) is the centroid frequency, \(\sigma\) is the full width at half maximum (FWHM), and \(K\) is the normalization, as per Arnaud et al. (1999). In the case of GRS 1915+105, QPO properties are obtained by fits to PDS following Zhang et al. (2020). A QPO is considered significant when the QPO power integral divided by its \(1\sigma\) error is \(>3\) or its quality factor \(Q=\frac{\nu_{0}}{\sigma}>2\) (Nowak et al., 1999), provided its frequency does not change significantly in an observation. Our primary use for this GRS 1915+105 data is to train machine learning regression models to predict the properties of the fundamental QPO feature, since _only_ data with matching QPO detections are used in our GRS 1915+105 machine-learning analysis. In all, this corresponds to 554 QPOs. In contrast to this approach of fitting individual
QPOs solely for regression, we use the energy and timing data from MAXI J1535-571 to explore both classification of observations into binary states of QPO presence/absence as well as multiclass QPO cardinality states\({}^{2}\) based on binned raw energy spectra and processed features. Additionally, for MAXI J1535-571 we predict the properties of both the fundamental and the frequently appearing harmonic in the PDS based on binned energy spectra and spectral parameterizations derived from energy spectra. Our QPO detection method for MAXI J1535-571 is slightly different from that for GRS 1915+105. Specifically, we determine the presence and properties of QPOs in PDS from MAXI J1535-571 by first fitting two zero-centered Lorentzian functions to PDS and then iteratively fitting a third Lorentzian over a logarithmically sampled set of 268 frequencies \(f\) between 1 and 20 Hz, where the width is kept \(\sigma<\frac{f}{10}\) for an initial fit, and then freed for a subsequent refined fitting step. A peak of qualifying distance (\(\Delta\chi^{2}\) distance to neighboring samples) and threshold (horizontal distance between samples) is identified with the scipy function find_peaks (Pedregosa et al., 2011) in the resulting distribution of \(-1\cdot\chi^{2}\) fit-statistic with peak height greater than the \(\Delta 10\) Akaike Information Criterion (Akaike, 1998). Finally, a visual inspection is required to accept a QPO candidate detection (to avoid potential spurious detections, e.g., at the frequency boundary). In 68 observations the fundamental is accompanied by the second harmonic (the fundamental itself is called the first harmonic), in 14 observations it is alone, and in 188 observations no QPO is detected.
Footnote 2: Also called multinomial classification (Bouveyron et al., 2019), when number of classes totals to \(\geq 3\)
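The sketch below illustrates the frequency-scan idea with scipy, simplified to a single zero-centred continuum Lorentzian; the arrays `freq`, `power`, and `perr` are placeholders for a measured PDS, so this should be read as an outline of the \(\Delta\chi^{2}\)/AIC peak search rather than the exact MAXI J1535-571 pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import find_peaks

def lorentzian(f, f0, fwhm, norm):
    return norm * (fwhm / (2 * np.pi)) / ((f - f0) ** 2 + (fwhm / 2) ** 2)

def chi2(model, power, perr):
    return np.sum(((power - model) / perr) ** 2)

def scan_for_qpo(freq, power, perr, trial_f0):
    """Fit a continuum alone, then continuum + trial QPO at each candidate
    centroid; return the Delta-AIC curve and the accepted peak frequencies."""
    cont = lambda f, w, n: lorentzian(f, 0.0, w, n)              # zero-centred continuum
    p_cont, _ = curve_fit(cont, freq, power, p0=[5.0, 1.0], sigma=perr, maxfev=10000)
    aic0 = chi2(cont(freq, *p_cont), power, perr) + 2 * 2        # 2 free continuum parameters

    delta_aic = np.zeros_like(np.asarray(trial_f0, dtype=float))
    for i, f0 in enumerate(trial_f0):
        full = lambda f, w, n, qw, qn: cont(f, w, n) + lorentzian(f, f0, qw, qn)
        try:
            p_full, _ = curve_fit(full, freq, power, p0=[*p_cont, f0 / 10, 0.1],
                                  sigma=perr, maxfev=10000)
            delta_aic[i] = aic0 - (chi2(full(freq, *p_full), power, perr) + 2 * 4)
        except RuntimeError:
            delta_aic[i] = 0.0                                   # fit failed at this centroid
    peaks, _ = find_peaks(delta_aic, height=10.0)                # Delta-AIC > 10 candidates
    return delta_aic, np.asarray(trial_f0)[peaks]

# Trial grid: 268 logarithmically spaced centroids between 1 and 20 Hz
trial_f0 = np.logspace(np.log10(1.0), np.log10(20.0), 268)
```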
## 4 Machine Learning Methods
### Model Selection
In machine learning, models can be broadly divided by two sets of classification: (i) whether they operate in a supervised or unsupervised manner; and (ii) whether they are built for classification or regression (Bruce and Bruce, 2017). Since we are providing our models with explicit targets for loss minimization, our approach falls under the umbrella of supervised learning (Singh et al., 2016), and as we are attempting to connect spectral information about XRBs with real-valued output vectors that describe QPOs in their power-density spectra, we also fall under (multi-output) regression (Xu et al., 2019). In selecting our machine learning models for regression, we seek those that natively support multi-output regression, incorporate capabilities for mitigating overfitting, have precedents of working successfully with medium to small sized data sets, and natively communicate feature importances. Additionally, we seek to
Figure 1: Light curves of GRS 1915+105 (left) and MAXI J1535-571 (right) for the observations used in this work. Net count rates are calculated as the sum of the background subtracted counts divided by observation time for every observation of each source. Note the persistent nature of GRS 1915+105 versus the transient flare of MAXI J1535-571 (reflaring epochs of MAXI J1535-571 are not included given the lack of QPOs detected there in previous works).
Figure 2: Example energy and power density spectra and models for MAXI J1535 observation 1050360105-21 on the top and the same for GRS 1915+105 observation 4016-01-01-07 on the bottom. For each row, from left to right, the first plot shows the energy spectrum and folded tbabs*(nthcomp+diskbb) model, the second shows the energy spectrum model alone, the third shows the power density spectrum in the relevant frequency range, and the fourth shows the best fit Lorentzian PDS model alone. Best fit QPO features have been superimposed over zero centered Lorentzians used to model the power-density continuum. Only the fundamental (i.e. first harmonic) is fit for the GRS 1915+105 QPO (as discussed in Section 3, this was an intentional choice to see how the models fare with seemingly simpler outputs).
evaluate a collection of models against each other in light of the No-Free-Lunch-Theorem (Wolpert, 2002; Lones, 2021).
Based on these criteria, we settle on a set of tree-based models and their descendants, specifically decision trees (Breiman, 1984), random forests (Breiman, 2001), and Extremely randomized trees (Geurts et al., 2006). Here we provide a brief summary of these models for context. Decision trees are the original tree-based regression model which operate by inferring discriminative splits in data and making predictions via a series of "if-then-else" decisions (Breiman, 1984). Random forests are more powerful derivatives of decision trees, and are based on an ensemble of decision trees trained via bootstrap aggregation (Breiman, 1996, 2001). By incorporating predictions from such an ensemble, random forests reduce prediction variance while increasing overall accuracy when compared to a single decision tree (Lakshminarayanan, 2016). Finally, Extremely randomized trees (also known as extra trees) are similar to random forests in this respect but operate with more randomization during the training process, as instead of employing the most discriminative thresholds within feature spaces for splits, extra trees select the best performing randomly drawn thresholds for splitting rules (Geurts et al., 2006; Pedregosa et al., 2011). Details on training and optimization are given in Section 4.3, where we also discuss our steps to avoid overfitting (Bruce and Bruce, 2017).
Together, these represent some of the most powerful yet lightweight machine learning models available, and meet our criteria for multi-output regression (Xu et al., 2019), robustness to overfitting (Boine et al., 2008; Ampomah et al., 2020), success with small/medium sized datasets (Floares et al., 2017), and feature importances (Yasodhara et al., 2021). An additional benefit of these models is that they are natively supported by the TreeExplainer method in the SHAP Python package (Lundberg and Lee, 2017), which frees us from common pitfalls related to impurity and permutation based feature importances, which we discuss in more detail in Section 6. Overall, we explore all the above models in addition to ordinary linear regression (to provide a base performance comparison) for the regression cases, but focus on random forest and logistic regression (Berkson, 1944) for classification cases.
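As a brief illustration of that last point, TreeExplainer attaches directly to a fitted tree ensemble; the example below uses a toy random forest on synthetic data purely to show the call pattern (the feature matrix and targets are fabricated for the demonstration).

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                  # stand-in for six engineered spectral features
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)         # (n_samples, n_features) attribution matrix
mean_abs_importance = np.abs(shap_values).mean(axis=0)
print(mean_abs_importance)                     # features 0 and 3 dominate, as constructed
```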
### Feature Engineering
As Casari and Zheng (2018) detail, feature engineering is the process of transforming raw data to maximize predictive performance. After experimenting with different formats, we settled on the following in order to use derived features from spectral fits or raw spectral data as predictors and timing features as outcomes. We will hereafter refer to and experiment with two types of input data for our models: the first are rebinned net energy spectra, which we discuss below and will simply call "energy spectra." The second type is the combination of XSPEC model-fit parameters and spectrum derived features like net count rate and hardness which we will designate the "engineered features" input type. When using engineered features for inputs, we format our input data as a matrix composed of vectors containing the net count rate, hardness ratio, asymptotic power-law photon index, nthcomp normalization, inner-disk temperature, and diskbb normalization for every observation. Hereafter, we refer to and present these values by the letters \(\{A,B,C,D,E,F,G\}\) as shorthand. This input structure is visualized in Equation 2 as follows,
\[\text{IN}_{m\times 7}=\begin{bmatrix}A_{1}&B_{1}&C_{1}&D_{1}&E_{1}&F_{1}&G_{1}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ A_{m}&B_{m}&C_{m}&D_{m}&E_{m}&F_{m}&G_{m}\end{bmatrix} \tag{2}\]
where \(m\) is the number of observations. This format can be extended to any \(n-\)dimensional number of features, which we take advantage of when using raw energy spectra as input data. For the case of MAXI J1535-571, we compare the predictive performance of the models and provide different insights by using raw spectral data in the form of count rate values from 19 channels, 0.5 keV wide apiece spanning the energy range \([0.5-10.0]\) directly as the input vectors within the input matrix, similar to Pattnaik et al. (2020). This coarse spectral input strikes a balance between sparsity and precision, allowing us to determine importances for specific 0.5 keV ranges while not overwhelming the models with too many input features given the overall sample size (Raudys and Jain, 1991; van de Schoot and Miocevic, 2020). With regards to regression, our QPO output matrix is similarly formated as a vector matrix, with rows that match by index to vectors in the input matrix, but with an important addition regarding ordering (detailed below). A significant challenge relates to the prediction of not only the presence versus absence of QPOs in a given PDS, as well as (for present cases) the specific number of QPOs and the physical parameters of each QPO present. Over the course of an outburst, the number of QPOs present can change, as these are transient phenomena (Remillard et al., 2006; Ingram and Motta, 2019). We account for this challenge of variable output cardinality by first identifying all QPO occurrences associated with an observation. Then, we order these occurrences and their features in a vector of length \(L=N_{f}\times\max(N_{s})\), where \(N_{f}\) is the number of features describing every QPO (e.g. \(N_{f}=3\) for frequency, width, and amplitude), and \(N_{s}\) is the maximum number of simultaneous QPOs observed in any particular PDS in a data set. We then structure each output vector as a repeating subset of features for every QPO contained, and order these internal QPO parameterizations by frequency. If one or more of these occurrences are not detected in a PDS, their feature spaces in the vector are populated with zeros. This allows us to circumvent the aforementioned difficulty with variable output cardinality, because the models will learn during training to associate indices populated with zeros as QPO non-detections (Choller, 2017). As with input features, Equation 3 provides a visualization of the general QPO matrix output returned by our model, where each row corresponds to one observation matched with a row in the input matrix (both out of \(m\) total observations).
\[\text{OUT}_{m\times n}=\begin{bmatrix}f_{1,1}&\sigma_{1,1}&K_{1,1}&\dots&f_{1,n}&\sigma_{1,n}&K_{1,n}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ f_{m,1}&\sigma_{m,1}&K_{m,1}&\dots&f_{m,n}&\sigma_{m,n}&K_{m,n}\end{bmatrix} \tag{3}\]
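A minimal sketch of this zero-padded ordering is shown below; the QPO records are represented as plain (frequency, width, amplitude) tuples and the helper name is ours for illustration, not part of the QPOML API.

```python
import numpy as np

def build_output_vector(qpos, max_simultaneous=2, n_features=3):
    """Order detected QPOs by frequency and pad missing slots with zeros,
    so every observation maps to a fixed-length output vector."""
    vec = np.zeros(max_simultaneous * n_features)
    for i, (freq, width, amp) in enumerate(sorted(qpos, key=lambda q: q[0])):
        vec[i * n_features:(i + 1) * n_features] = (freq, width, amp)
    return vec

# One observation with a fundamental plus second harmonic, one with no detection
print(build_output_vector([(4.4, 0.5, 0.02), (2.2, 0.3, 0.05)]))
# -> [2.2  0.3  0.05  4.4  0.5  0.02]
print(build_output_vector([]))
# -> [0. 0. 0. 0. 0. 0.]
```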
In the case of MAXI J1535-571, the maximum number of QPOs simultaneously observed in a PDS is two, and each QPO is described in terms of its frequency, width, and amplitude, so the output matrix takes the shape \(\text{OUT}=m\times 6\). Since we only regress for the fundamental in the GRS 1915+105 PDS, its output matrix takes the form \(\text{OUT}=m\times 3\). Prior to reformatting the data in this manner, we applied a columnar min-max standardization to the XSPEC and hardness input features, as well as the QPO Lorentzian output features, which linearly transformed each distribution into a \([\min(x^{\prime}),\max(x^{\prime})]=[0.1,1]\) range (as opposed to the traditional \([0-1]\) range, given our decision to denote QPO non-detections with zero values) while preserving their shapes, according to Equation 4 (Kandanaarachchi et al., 2019).
\[x^{\prime}=\min(x^{\prime})+\frac{x-\min(x)}{\max(x)-\min(x)}\times\left[\max(x^{\prime})-\min(x^{\prime})\right] \tag{4}\]
This step is necessary to prevent features with relatively larger absolute amplitudes receiving undue weight, and it also frees the models from dependency on measurement units (Akanbi et al., 2015; Han et al., 2012). We did not apply this standardization step to channel count and net count rate input features, however, as the imposition of _a priori_ theoretical limits to these features is not as readily justifiable (Pattnaik et al., 2020). 3
Footnote 3: Standardization prior to splitting data into train and validation sets does not impair our model’s predictive validity when input features are derived from XSPEC because its pre-adjusted inputs will always be constrained within the theoretical bounds applied during standardization for each feature (e.g. \(\Gamma\) will always initially range between \(x\) and \(y\) for a source, where \(x\) can be a hard lower limit like \(\Gamma=1.1\) and \(y\) can be the corresponding hard upper limit during fitting, such as \(\Gamma=5\)).
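The rescaling of Equation 4 can be written compactly as below; this generic sketch assumes the theoretical bounds of each column are passed in explicitly, in line with the footnote above.

```python
import numpy as np

def minmax_scale_column(x, x_min, x_max, lo=0.1, hi=1.0):
    """Linearly map a feature column from its theoretical range [x_min, x_max]
    onto [lo, hi], reserving exactly 0 to flag QPO non-detections."""
    x = np.asarray(x, dtype=float)
    return lo + (x - x_min) / (x_max - x_min) * (hi - lo)

# Example: photon indices constrained to 1.1-5 during fitting
gamma = np.array([1.1, 2.3, 5.0])
print(minmax_scale_column(gamma, 1.1, 5.0))   # -> [0.1, ~0.377, 1.0]
```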
### Training, Validation, and Hyperparameter Tuning
To better understand our models in different data combinations and minimize statistical noise, while guaranteeing every observation gets included in a training instance as well as, at a separate time, a validation instance, we employ a repeated \(k\)-fold cross-validation strategy (Olson and Delen, 2008; Vanwinckelen and Blockeel, 2012) for model evaluation (as opposed to solely using a default proportion-based train-test split). According to this procedure, our data is first split into a 90% training-and-validation set and a 10% held-out test set. Before evaluating the models on this test set, the training and validation set is randomly split into \(k=10\) folds. Given the relative class imbalance in the MAXI J1535-571 data in favor of observations without QPOs, for MAXI J1535-571, the folds for both regression and classification cases are also stratified during splitting, which means each fold maintains the same proportion of observations with QPOs (Ma and He, 2013). Then, every model is evaluated on each unique fold after being trained on the remaining folds, with the individual \(k\)-fold performance taken as the mean of these evaluations across the ten folds. We repeat this process five times (randomly shuffling the data between each iteration), and the final score for each model is calculated as the mean performance across the ten \(k\)-fold instances, either as the \(f-\)score for classification cases (a harmonic mean of the precision and recall), or the median absolute error for regression (Pedregosa et al., 2011; Kuhn and Johnson, 2019). Random initialization is kept the same between models to make sure each model is trained/tested on the same data within each fold, and to ensure fair comparison between these models, each was subject to automatic and individualized hyperparameter tuning via grid search prior to this evaluation (Dangeri, 2017). The specific hyperparameter values from which combinations were derived and evaluated for each model are presented in Table 3.
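A condensed scikit-learn version of this evaluation loop is sketched below; the feature matrix, targets, and the binary labels used for stratification are synthetic placeholders, and the small hyperparameter grid is only indicative of the procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import median_absolute_error
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     train_test_split)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))                      # placeholder engineered features
y = np.abs(rng.normal(size=300))                   # placeholder QPO frequencies
has_qpo = (y > 0.5).astype(int)                    # labels used only for stratification

X_train, X_test, y_train, y_test, strat_train, _ = train_test_split(
    X, y, has_qpo, test_size=0.1, stratify=has_qpo, random_state=0)

grid = GridSearchCV(RandomForestRegressor(random_state=0),
                    param_grid={"n_estimators": [50, 100], "min_samples_split": [2, 4]},
                    cv=5, n_jobs=-1)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
fold_scores = []
for train_idx, val_idx in cv.split(X_train, strat_train):
    grid.fit(X_train[train_idx], y_train[train_idx])           # tune, then refit best model
    preds = grid.predict(X_train[val_idx])
    fold_scores.append(median_absolute_error(y_train[val_idx], preds))
print(np.mean(fold_scores))                                     # aggregated validation score
```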
### Feature Selection
In feature selection, it is generally important to deal with potential multicollinearity by calculating Variance Inflation Factors (VIF) and removing features with VIF values \(\gtrsim 5\) (Kline, 1998; Sheather, 2008). However, we have chosen not to remove potentially collinear features prior to regression for the following reasons: first, the tree-based models like random forest that we focus on are by design robust to the effects of multicollinearity (Strobl et al., 2008; Chowdhury et al., 2021). Second, since multicollinearity only affects the estimated coefficients of linear models, but not their predictive ability, applying a linear model to potentially collinear data is perfectly reasonable in our case, as we are using the linear model solely as a baseline against which we will compare the predictive capabilities of the more complicated random forests model; i.e., in applying the linear model, we are not interested in its individual coefficients (Lieberman and Morris, 2014; Mundfrom et al., 2018). We will, however, revisit multicollinearity when we interpret feature importances in Section 5.
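For reference, the VIF screening described here (though intentionally not applied) can be computed with statsmodels, as in the following sketch on a fabricated, deliberately collinear feature matrix.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

rng = np.random.default_rng(2)
base = rng.normal(size=300)
X = pd.DataFrame({
    "net_rate": base + rng.normal(scale=0.1, size=300),   # deliberately collinear pair
    "hardness": base + rng.normal(scale=0.1, size=300),
    "gamma": rng.normal(size=300),
})

Xc = add_constant(X)                                      # VIF requires an intercept column
vif = {col: variance_inflation_factor(Xc.values, i)
       for i, col in enumerate(Xc.columns) if col != "const"}
print(vif)   # the collinear pair shows VIF well above 5
```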
## 5 Results
### Regression
As demonstrated in Figure 3, on average our tree-based models outperform linear regression in every regression case, regardless of source or input feature type. Interestingly, as shown in Figure 3 and Figure 7, linear regression also seriously struggles to correctly assign 0 values to observations lacking QPOs for both processed and rebinned energy spectra input data, a problem not faced by the other models (except random forest with rebinned energy spectra to a lesser degree). Furthermore, linear regression always has higher dispersion in the relationship between actual and predicted QPO frequency. Yet, despite their unified superiority versus linear regression, the machine learning models do differ significantly within folds amongst themselves, as shown in Figures 3, 5, 6, and 7. Specifically, although decision tree provides a notable improvement in dispersion between true and predicted values, as well as a slope between these closer to unity, it is by far surpassed by random forest and extra trees. Two additional interesting divergences in model performance occur between the sources, as well as between their input types. Regarding the former, all models trained and evaluated on GRS 1915+105 data have more overall dispersion and slopes tending further away from unity in their mapping between true and predicted frequency when compared to the same models for MAXI J1535-571 QPOs with processed input features. This can be clearly seen when comparing Figure 5 with Figure 6. The superior performance of the algorithms on MAXI J1535-571 is surprising for several reasons: first, with GRS 1915+105 the models never face the problem of false negatives or false positives because there are no QPO-absent data in this set. In contrast, MAXI J1535-571 observations are of varying composition, imbalanced in favor of QPO absence. Second, GRS 1915+105 has around two times more total observations, and around six times more observations with QPOs than MAXI J1535-571; in most cases training models on more data leads to corresponding increases in accuracy (Kalinin and Foster, 2020; Brefeld et al., 2020). However, this assumption may not hold in instances like this, where models are being tested on different objects, as there may exist fundamentally stronger/more pronounced associations between spectral and QPO features in one of the systems. The most likely reason for the inferior performance on GRS 1915+105 QPOs is that the underlying relationships between the input and output QPO features are likely
\begin{table}
\begin{tabular}{l l l l} \hline \hline & Decision Tree & Random Forest & Extra Trees \\ \hline min\_samples\_leaf & {1,3} & {1,3} & {1,3} \\ min\_samples\_split & {2,4,6,8} & {2,4,6,8} & {2,4,6,8} \\ n\_estimators & & {50,100,150,200,250,500} & {50,100,150,200,250,500} \\ warm\_start & & {True, False} & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Feature spaces for model hyperparameter tuning
more convoluted for GRS 1915+105, which is understandable given GRS 1915+105 has long been known to have complex variability states, and is in fact a bit of an oddball among black-hole systems. Additionally, potential confusion could arise because the models fitted on fundamental QPOs only in GRS 1915+105 intentionally lack the freedom to predict aspects about harmonics, which could lead these models to confuse signals from harmonics with fundamentals (this is an unexpected insight from our initial decision to only predict for the fundamental in GRS 1915+105 in an effort to explore how the models behave with simpler output space). Finally, to evaluate the performance of the multioutput aspect of the regression, we carry out pairwise nonparametric two-sided goodness-of-fit Kolmogorov-Smirnov (KS) tests on permutations of QPO parameter residual arrays (Massey, 1951; KS-, 2008), and fail in all instances to reject the null hypothesis that the distributions of residuals between actual and predicted QPO parameters are drawn from the same distribution (\(p>0.76\) for all GRS 1915+105 and \(p>0.99\) for all MAXI J1535-571 residual pair permutations, regardless of input type). This shows that the models do not favor any particular QPO parameter in their regression and instead regress for each with statistically insignificant differences in accuracy (i.e. accuracy is not different for QPO features, both for the fundamental, as well as the harmonic when present). As for the second interesting divergence in model performance (by input type), surprisingly there is a pronounced difference in model performance when these regression models are trained on processed features as opposed to rebinned energy spectra: in all model cases, dispersion and slope both drastically worsen when models rely on the rebinned energy spectra directly. This is shown for MAXI J1535-571 regression by comparing Figure 6 and Figure 7, which demonstrates that although the models could hypothetically learn some lower level representation of the concepts of hardness, overall net count rate, etc. from the data and not require the engineered features, with the amount of data provided, the engineered features give the models significant additional insight to base decisions on, exceeding what is provided by the energy spectra alone. This would be an interesting idea to investigate with deep learning methods, which would far exceed these classical models' ability to learn abstractions in the data through automated feature extraction (Nadeau and Bengio, 2004).
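The residual comparison described above can be reproduced with scipy's two-sample KS test; in the sketch below the residual arrays are synthetic stand-ins for the per-parameter residuals.

```python
import numpy as np
from itertools import combinations
from scipy.stats import ks_2samp

rng = np.random.default_rng(3)
residuals = {
    "frequency": rng.normal(scale=0.05, size=60),   # placeholder residual arrays
    "width": rng.normal(scale=0.05, size=60),
    "amplitude": rng.normal(scale=0.05, size=60),
}

# Pairwise two-sided KS tests between the per-parameter residual distributions
for (name_a, res_a), (name_b, res_b) in combinations(residuals.items(), 2):
    stat, p = ks_2samp(res_a, res_b)
    print(f"{name_a} vs {name_b}: KS statistic {stat:.3f}, p = {p:.3f}")
```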
### Classification
At least for MAXI J1535-571, binary classification of QPO absence/presence appears to be a fairly trivial task, as shown by the confusion matrices of the first repetition tenth folds in Figure 8. Additionally, as Figure 8 also shows, our logistic regression classifier (the classification corollary to linear regression) performs just as well as random forest in terms of accuracy and other classification metrics when trained on processed input data, with negligible difference for rebinned energy spectra as well. This is corroborated by the corresponding ROC curves also shown in Figure 8. The ROC curves show how a model has optimized between specificity (on the abscissa) and recall (also known as sensitivity; on the ordinate), with the ideal model displaying an ROC curve enclosing an area under curve (AUC) of 1 (Bruce and Bruce, 2017). The curves in Figure 8 represent the average ROC and AUC values with \(\pm 1\sigma\) deviations across all folds and repetitions evaluated. Both logistic regression and random forest decrease in average AUC when trained on rebinned energy spectra, but the decrease is most dramatic for logistic regression. We also present multiclass classification results for multinomial logistic regression and random forest based on processed and rebinned energy spectra input data in Figure 9. In the case of processed input data, random forest clearly outperforms logistic regression, but both models actually experience noted decreases in accuracy when tasked with predicting multiple outputs corresponding to the actual number of QPOs in a MAXI J1535-571 observation based on rebinned energy spectra input. In fact, in the case of energy spectra inputs, random forest actually performs worse than logistic regression. Overall, the decreased performance of both models here is likely due to the class imbalance in the data set (as mentioned in Section 3), which gives the models very few single QPO observations to use as training data per round.
## 6 Discussion
Now that we have demonstrated QPO properties can be predicted--and in the following section show how features useful to these predictions can be analyzed--on the sources MAXI J1535-571 and GRS 1915+105 individually, we propose that the next step would be to apply these methods in a future work on source-heterogeneous input data, a capability we intentionally incorporate into our QPOML library. To achieve this, it would be beneficial to construct a large standardized database of QPO and spectral data with a scope _a la_ Corral-Santana, J. M. et al. (2016), for which the wealth of _RXTE_ observations will prove invaluable. Additionally, while increasing
Figure 3: Gaussian kernel density estimate violin plot representations of aggregated median absolute error for each tested model across \(k=10\) validation folds repeated \(r=5\) times on GRS 1915+105 (feature input) data (top) and MAXI J1535-571 (feature input) data (bottom). The abbreviations DT, RF, and ET stand for the decision tree, random forest, and extra trees models, respectively. As further discussed in Section 5, linear regression is outperformed by the classical machine learning models across folds for each repetition round. Furthermore, the two ensemble tree-based models clearly outperform the single decision tree model, which is to be expected.
Figure 4: Example PDS with over plotted QPO predictions for the GRS 1915+105 observations 80701-01-54-02, 50703-01-28-01, 50703-01-24-01 ordered by column left to right from least (best-fitting) median to greatest (worst) Pythagorean sum of normalized errors on the three predicted QPO Lorentzian parameters (with corresponding models alone in bottom row). Note that the seemingly diminished height of the predicted QPOs is actually a consequence of how they were determined in the processing procedure, and in the case of the best observation 80701-01-54-02, the amplitude only differs by less than 0.3% from the “true” amplitude value it was predicting, as the amplitudes derived during processing were already reduced.
Figure 5: A results regression plot for all QPOs predicted from the test set for the source GRS 1915+105 as returned (from left to right) by linear regression, decision tree, random forest, and extra trees. The best models\(-\)random forest and extra trees\(-\)both minimize dispersion between true and predicted values (as quantified by \(r^{2}\)), while simultaneously producing the most 1:1 relationships between them (as quantified by best fit slope).
source sample size like this, it would also be fruitful to include neutron star LFQPOs and kHz QPOs in a followup study to generalize between sources, because unlike BH XRBs, NS XRBs are predominantly persistent and have significantly more observations with QPOs in archival _RXTE_ data in general (Mendez et al., 1999; Migliari et al., 2003; Belloni et al., 2005; Raichur and Paul, 2008). That being said, the likely trade-off of using _RXTE_ data for these sources is that these QPOs will be predicted based on engineered XSPEC features instead of raw spectra given gain drift, as was the case with our analysis of GRS 1915+105 versus MAXI J1535-571. Another potential avenue for extending this work would involve exploring new input features to associate with QPOs, such as black hole spin, mass, inclination, jet properties, and QPO phase lags, and tracking the importance of variable features throughout outburst and accretion states to see if they evolve in tandem. Including scattering fraction as an input parameter promises interesting results as well, because QPO frequency and scattering fraction exhibit a correlation for sources like MAXI J1535-571 but an anti-correlation for other objects including GX 339-4, H1743-322 and XTE J1650-550 (Garg et al., 2022). Finally, how these non-parametric machine learning models interact with the polynomial/exponential versus sigmoidal relationship between frequency and power-law index for some black holes versus neutron stars (Titarchuk and Shaposhnikov, 2005), as well as how well models trained on distinct outbursts of certain objects perform for outbursts withheld from their training, would both also be of interest if these models are applied on samples that differ not only by source, but also by source type (BH or NS). Now, we turn to discussing feature importances in Section 6.1 and statistically compare the models we used throughout this work in Section 6.2.
### Feature Importances and Interpretation
Feature importances refer to the relative attributed weights a model gives to different input features (Saarela and Jauhainen, 2021). In other words, they are measures of how helpful different features are for the model in making correct predictions, regardless of whether these predicted values are categorical or real-valued (Fisher et al., 2018). Before we discuss these, however, we will briefly describe our efforts to ensure the interpretability of our machine learning models. Interpretability is defined parsimoniously by Miller (2017) as the degree to which a human can understand the cause of a decision. Since most of our models are intrinsically complex (except for linear and logistic regression and decision trees), we seek _post hoc_ interpretability through feature importances (Vieira and Digiampietri, 2022). These values should not be interpreted as substitutes for other measures, e.g. parametric importances, because they seek to explain how a machine learning model learns and interacts with its data. However, we believe that properly calculated feature importances may offer alternative helpful insight about the origins of QPOs, and we therefore take steps to avoid common pitfalls associated with these measures. For example, although it is common to discuss default impurity-based feature importances, this approach is flawed because it is both biased towards high-cardinality numerical input features and computed on training set statistics, which means it may not accurately generalize to held-out data (Pedregosa et al., 2011).
Figure 6: Same as Figure 5, except for MAXI J1535-571 observations (processed feature input). The smaller number of points in these plots stems from both the smaller sample size of MAXI J1535-571 observations and the clustering of values correctly predicted as zeros at the point \((0,0)\), where individual points cannot be distinguished in this plot.
Figure 8: Confusion matrices and ROC Curves with labeled AUC values for MAXI J1535-571 binary classification cases. The left pairs correspond to logistic regression, whereas the right correspond to random forest. The confusion matrices are taken from the first tenth fold, whereas the ROC curves are averaged across all folds with \(\pm 1\sigma\) deviations denoted by the grey regions. The superior performance of the models working from processed inputs in the top row compared to their rebinned energy spectra input analogues in the bottom row is intriguing and discussed in more detail in Section 5.
Figure 7: Same as Figure 6, except for MAXI J1535-571 observations (rebinned energy spectra as inputs). Note the increased dispersion and much weaker 1:1 relationships between true and predicted values for every model in these plots compared to their equivalents in Figure 6.
Additionally, although permutation importances are commonly put forward as a superior alternative, these suffer from multicollinearity, as in the process of permuting single features, an impactful feature could be erroneously ascribed as having little-to-no effect on model performance if it has high correlation with another feature (Strobl et al., 2007; Nicodemus et al., 2010; Hooker et al., 2019). Therefore, we chose to determine feature importances with the contemporary TreeSHAP algorithm as implemented in the Python package shap by Lundberg & Lee (2017). This method extends game-theoretic coalitional Shapley values to calculate SHapley Additive exPlanations (SHAP) in the presence of multicollinearity by incorporating conditional expected predictions (Shapley, 1952; Lundberg & Lee, 2017; Molnar, 2022). As hinted earlier and detailed in Lundberg & Lee (2017) and Molnar (2022), an additional benefit of using tree based models is that through tree traversal and dynamic programming the computational cost for computing SHAP values is brought down from exponential time \(\mathcal{O}(2^{n})\) to \(\mathcal{O}(n^{2})\) polynomial time. We calculate the feature importances shown in Section 5 for each model \(f\) by using the model from the tenth fold of the first repetition, treating its data as if it were taken from the test set, and averaging its \(\phi_{i}(f,x)\) values from Equation 5, which represent the weighted average of differences in model performance when a feature \(x\) out of \(M\) simpler input features is present versus absent over all subsets \(z^{\prime}\subseteq x^{\prime}\).
\[\phi_{i}(f,x)=\sum_{z^{\prime}\subseteq x^{\prime}}\frac{|z^{\prime}|!(M-|z^{ \prime}|-1)!}{M!}\left[f_{x}(z^{\prime})-f_{x}(z^{\prime}\setminus i)\right] \tag{5}\]
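For concreteness, a minimal sketch of this TreeSHAP importance calculation is shown below. It is not the exact QPOML code: `model` is assumed to be an already-fitted tree ensemble (e.g. an extra trees regressor) and `X_test` a pandas DataFrame of held-out engineered features, both placeholders supplied elsewhere.

```python
# Sketch of computing mean absolute SHAP importances with TreeSHAP.
import numpy as np
import shap

explainer = shap.TreeExplainer(model)          # polynomial-time SHAP for tree ensembles
shap_values = explainer.shap_values(X_test)    # shape: (n_samples, n_features)

# Global importance per feature: mean absolute SHAP value over the test set,
# analogous to the per-feature importances discussed in Section 5.
importance = np.abs(shap_values).mean(axis=0)
for feature, value in sorted(zip(X_test.columns, importance),
                             key=lambda pair: pair[1], reverse=True):
    print(f"{feature}: {value:.4f}")
```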
One of the most important things shown by Figures 10 and 11 is that there are significant interesting differences between the feature importances attributed to the processed features for GRS 1915+105 and MAXI J1535-571, which may be related to the nuances of the process driving QPOs in these systems. For example, in GRS 1915+105, net count rate and hardness ratio are clearly the most important features, after which importance falls precipitously and remains uniformly modest for the rest, with this proportional decrease ranging from a factor of three for nthcomp asymptotic power law to six for nthcomp and diskbb normalization. Because we have used SHAP values for importance, we can rule out that the unimportance of these features stems from multicollinearity or training set artifacts, which means they could _potentially_ be related to underlying physical conditions. However, there is no ambiguity about the importance of net count rate and hardness, because an XRB outburst's q-shaped state evolution in the hardness-intensity diagram (HID) is known to also be indicative of changes in timing (e.g., QPO) properties (Motta et al., 2015; Motta, 2016). This is also in agreement with the findings of Figure 2 of Garcia et al. (2022), in which the QPO frequency of GRS 1915+105 is shown to vary with a somewhat inverse relationship with hardness ratio across mostly horizontal and vertical gradients in inner disk temperature and power law index, respectively.
In contrast to GRS 1915+105, the feature importances for both the best regression and classification models on processed MAXI J1535-571 input features favor a single feature above all others: diskbb normalization (although in the case of classification, net count rate and nthcomp normalization are still significant for MAXI J1535-571). This quantity (ignoring relativistic and plasma corrections) approximately
Figure 9: Confusion matrices for multiclass MAXI J1535-571 output, where the left column corresponds to logistic regression, the right column to random forest, the top row to processed input features, and the bottom row to rebinned energy spectra input features. Although only the accuracy of logistic regression decreases from binary to multinomial classification based on processed XSPEC input features, both models are significantly more inaccurate for the multinomial case based on energy spectra inputs compared to either binary case in Figure 8.
Figure 11: Similar to Figure 10, except for the best classification models for MAXI J1535-571 binary output based on engineered inputs (left, random forest), and energy spectral inputs (right, random forest). As seen for regression, hard energy channels similarly dominate feature importances for energy spectra input, yet, while diskbb normalization is still the most important processed feature for classification, more importance is attached to net count rate and nthcomp normalization here than for regression on MAXI J1535-571.
Figure 10: Tree-SHAP calculated average of absolute value SHAP feature importances for the most accurate predictive regression models for GRS 1915+105 engineered inputs (left, extra trees), MAXI J1535-571 engineered inputs (middle, extra trees), and MAXI J1535-571 energy spectra inputs (right, extra trees). The features denoted \(A-F\) correspond to net count rate, hardness ratio, asymptotic power-law photon index, nthcomp normalization, inner-disk temperature, and diskbb normalization features, respectively. The error bars on each importance correspond to 99% confidence intervals on mean importances, the dashed line the median importance of all features, and the dotted line the mean of the same. Features corresponding to hard channel count rates are significantly more important than the median and mean feature importance, which is likely related to the higher energy origin of QPOs. An interesting difference between the MAXI J1535-571 panels and the GRS 1915+105 panel is that extra trees primarily weights diskbb normalization for MAXI J1535-571 regression but splits primary importance for GRS 1915+105 between the net count rate and hardness ratio engineered inputs.
corresponds to the projected area of the inner disk on the sky: \(N_{\text{disk}}=(\frac{R_{\text{in}}}{D_{10}})^{2}\cos(\theta)\), where \(R_{in}\) is the apparent inner disk radius in km, \(D_{10}\) is the distance to the source in 10 kpc units, and \(\theta\) the angle of the disk (Arnaud et al., 1999). This prominent importance is intriguing because it implies a dependence of QPO presence and frequency on diskbb normalization and therefore inner disk radius. This is corroborated by Garg et al. (2022), who find that QPO frequency correlates significantly with the inner disk radius for MAXI J1535-571 in data provided by _AstroSat_ according to the power law relationship \(\nu_{\text{QPO}}\propto\dot{M}R_{in}^{\rho}\), where \(\dot{M}\) is mass-accretion rate (Rao et al., 2016). However, Garg et al. (2022) do not find a clear relationship between diskbb normalization and QPO frequencies in the \(\sim 1.6-2.8\) Hz range. Overall, the similarity in feature importances for engineered features for regression and classification in MAXI J1535-571 shows that the same features that are important in determining the parameterizations of QPOs are those important in determining their presence or absence. Regarding the feature importances derived from the energy spectra, the highest energy channels are the most important for both regression and classification, with the five most important channel count rates for each coming from the equivalent \([9.5-10]\), \([9.0-9.5]\), \([8.5-9.0]\), \([8.0-8.5]\) and \([7.5-8.0]\) keV channels for regression and \([9.0-9.5]\), \([9.5-10.0]\), \([8.5-9.0]\), \([8.0-8.5]\) and \([3.0-3.5]\) keV channels for classification. Notably, for both classification and regression only hard channels \(\geq 3\) keV have importances significantly greater than the mean and median importances for all features in their respective sets at the 99% confidence level. The fact that the high-energy spectral data is most informative of the QPOs is interesting and we speculate that this may be related to the fact that QPOs manifest more prominently at higher energies above the disk's peak temperature. A broader perspective which generalizes these relationships to other BH systems is of high interest, but outside the scope of this work. Consequently, we are currently working on a comprehensive follow-up work, in which we will evaluate these models on data identically reprocessed for numerous black holes and neutron stars simultaneously. One additional difference between this preliminary work and that prospective one will be full inclusion of all LF QPO features for all sources (such as GRS 1915+105), because although focusing on the dominant frequency for QPOs in GRS 1915+105 served our purposes here, this would be a limitation in the future because such focus would not make it clear whether these trained forest methods would predict many false positives and false negatives for sources similar to GRS 1915+105 that do include QPO-absent data, or whether they would perform well nonetheless.
### Statistical Model Comparison
As mentioned in Section 4, we included an ordinary least squares model as a benchmark against which to compare the machine learning models. As Figures 3, 5, 6, and 7 demonstrate, each of our models outperforms linear regression. In order to assess the significance of the improvements, we employ the Nadeau and Bengio (2004) formulation of the frequentist Diebold-Mariano corrected paired t-test (Diebold and Mariano, 1995),
\[t=\frac{\frac{1}{k\cdot r}\sum_{i=1}^{k}\sum_{j=1}^{r}x_{ij}}{\sqrt{\left(\frac{1}{k\cdot r}+\frac{n_{\text{test}}}{n_{\text{train}}}\right)\hat{\sigma}^{2}}} \tag{6}\]
where \(k=10\) is the number of k-fold validation folds, \(r=10\) is the number of times we repeated the \(k\)-fold procedure, \(x_{ij}\) is the performance difference between two models on a given fold, and \(\hat{\sigma}^{2}\) represents the variance of these differences (Pedregosa et al., 2011). It is necessary to correct the \(t\)-values in this manner because the performances of the models are correlated across the folds upon which they are tested: some folds make it harder for one or all of the models to generalize, whereas others make it easier, so the fold-wise scores are not independent. The results of these pairwise tests for all permutations of two models on both sources are shown in Table 1.
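The sketch below is one way Equation 6 could be evaluated in practice; it is an illustrative implementation under the assumption that the per-fold score differences have already been collected into a single array, with hypothetical fold counts and train/test sizes.

```python
# Sketch of the Nadeau & Bengio corrected paired t-test of Equation 6.
# `diffs` holds the per-fold score differences x_ij between two models
# over k folds repeated r times; sizes below are placeholders.
import numpy as np
from scipy import stats

def corrected_paired_ttest(diffs, n_train, n_test):
    """diffs: 1-D array of length k*r with per-fold performance differences."""
    n = diffs.size
    mean_diff = diffs.mean()
    var_diff = diffs.var(ddof=1)                           # sigma-hat squared
    corrected_var = (1.0 / n + n_test / n_train) * var_diff
    t_stat = mean_diff / np.sqrt(corrected_var)
    p_value = 2.0 * stats.t.sf(np.abs(t_stat), df=n - 1)   # two-sided p-value
    return t_stat, p_value

# Hypothetical example: 10-fold CV repeated 10 times, 90/10 train/test split.
rng = np.random.default_rng(0)
diffs = rng.normal(0.02, 0.05, size=100)
print(corrected_paired_ttest(diffs, n_train=90, n_test=10))
```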
We additionally implement the Bayesian Benavoli et al. (2016) approach, which allows us to calculate the _probability_ that a given model is better than another, using the Student distribution formulated in Equation (7):
\[\text{St}(\mu;n-1,\overline{x},(\frac{1}{n}+\frac{n_{\text{test}}}{n_{\text {train}}})\hat{\sigma}^{2}) \tag{7}\]
where \(n\) is the total number of samples, \(\overline{x}\) is the mean score difference, and \(\hat{\sigma}^{2}\) is the Nadeau and Bengio (2004) corrected variance in differences (Pedregosa et al., 2011). Both sets of these pairwise tests are also shown in Table 1.
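A corresponding sketch of the Bayesian comparison in Equation 7 follows; again the score differences and split sizes are placeholders, and the probability that one model beats another is read off as the mass of the posterior Student's t distribution above zero.

```python
# Sketch of the Bayesian correlated t-test (Benavoli et al. 2016) of Equation 7.
import numpy as np
from scipy import stats

def prob_a_better_than_b(diffs, n_train, n_test):
    """diffs: per-fold score differences (model A minus model B)."""
    n = diffs.size
    mean_diff = diffs.mean()
    corrected_var = (1.0 / n + n_test / n_train) * diffs.var(ddof=1)
    # Posterior over the mean difference: St(mu; n-1, mean_diff, corrected_var);
    # P(A better than B) is the posterior probability that mu > 0.
    return stats.t.sf(0.0, df=n - 1, loc=mean_diff, scale=np.sqrt(corrected_var))

rng = np.random.default_rng(1)
diffs = rng.normal(0.02, 0.05, size=100)          # hypothetical differences
print(f"P(A better than B) = {prob_a_better_than_b(diffs, 90, 10):.3f}")
```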
Based on these tests, it is clear that extra trees significantly outperforms all other models, and interestingly, that each model that follows it in decreasing order of performance is significantly better than the remaining models following it, confirming the findings in Figure 3. In fact, in all cases of regression, the order of model performances is extra trees, random forest, decision tree, and finally, linear regression. This result is expected: decision trees are more accurate than linear regression (because the former can leverage non-linear relationships between input features and QPOs), and random forest outperforms individual decision trees (because random forests are ensemble aggregations of decision trees). The similar yet superior performance of extra trees in comparison to random forest is notable but not striking (Mathew, 2022), yet this improvement should be weighed against the additional size of an extra trees model compared to a trained random forest counterpart, e.g. in terms of leaf count (Geurts et al., 2006). Nevertheless, based on these findings it is clear that these classical machine learning models have been able to fairly accurately optimize for individual sources. However, although extra trees may perform best in these individual source scenarios, it remains yet to be seen whether these classical models will be generalizable for accurate cross-source analyses (as proposed earlier) or if other models like neural networks will be required (Neyshabur et al., 2017). Although it may seem reasonable to combine data from these two sources and evaluate the predictive performance of these models in such a source-heterogeneous space, this would not be appropriate because the resultant feature importances would not communicate whether or not the input engineered or raw spectral features are being leveraged for intuition into the physical state of the objects, or if their importances just reflect the models picking up on artifacts from the data generation procedure. In other words, this could be considered a form of data leakage, considering differing instrumental sensitivities, QPO identification methods for each source, etc. (Hannun et al., 2021; Yang et al., 2022). Hence, this provides additional motivation for follow-up, in which energy and timing spectra from a single instrument are reprocessed in an identical manner for multiple objects to prevent instrumental artifacts from contaminating the findings potentially recoverable from such a source-heterogeneous data-set.
## 7 Conclusion
In this paper we have advanced novel approaches utilizing machine learning algorithms to link energy spectral properties (as both re-binned raw energy spectra and alternatively via engineered features
derived from spectral fits) with the presence and properties of QPOs prominent in power-density spectra of two low-mass X-ray binary black hole systems. Specifically, we tested a selection of tree-based classical machine learning models using engineered features derived from energy spectra to predict QPO properties for fundamental QPOs in the black hole GRS 1915+105, and such derived features as well as raw rebinned energy spectra to characterize fundamental and harmonic QPOs in the black hole MAXI J1535-571. Additionally, we trained classification algorithms on the same data to predict the presence/absence of QPOs, as well as the multiclass QPO state of MAXI J1535-571 observations. We compared the performance of the machine learning models against each other, and found extra trees to perform best in all regression situations for both sources. Additionally, we compared every model against simplistic linear (regression) and logistic (classification) models as well, finding the machine learning models outperformed their linear counterpart in all regression cases, with linear regression notably struggling to correctly identify observations lacking QPOs. The main findings from this study are:
1. All tested regression models yielded significantly better results on MAXI J1535-571 versus GRS 1915+105 data, despite the latter having 6x more data with QPOs and no issue with QPO-absent observations. We attributed this to the multitude of unusual variability classes unique to GRS 1915+105 (Huppenkothen et al., 2017b).
2. Kolmogorov-Smirnov tests on permutations of QPO parameter residuals showed that the best fitting regression model, Extra Trees, does not favor any particular QPO parameter and instead predicts for all with equal accuracy, including those for harmonics.
3. Using rebinned raw spectral data as opposed to XSPEC derived features resulted in significantly worse performance for regression, binary classification, and multiclass classification on MAXI J1535-571 observations.
4. To enhance computational efficiency and ensure importance credibility, we calculated TreeSHAP feature importances, which are robust to multicollinearity, and found that for processed input features, extra trees determined the most significant features for GRS 1915+105 to be net count rate and hardness ratio, whereas the same model predicting for MAXI J1535-571 found diskbb normalization most important, which suggests a dependence on physical inner disk radius in this case.
5. We found that almost all of the rebinned channels that are most important in determining the parameterizations of QPOs in regression are also those most important in determining their presence versus absence when classifying MAXI J1535-571 energy spectral data. Furthermore, for energy spectra, we found hard channels are the most important for both regression and classification, which aligns with the understanding that QPOs manifest more prominently at energies above the disk's peak temperature.
Figure 12: A pairplot displaying the pairwise relationships between engineered input and Lorentzian QPO output parameters for all GRS 1915+105 data. The letters \(A-F\) correspond to the net count rate, hardness ratio, asymptotic power-law photon index, nthcomp normalization, inner-disk temperature, and diskbb normalization features, respectively.
6. We have proposed future applications of these methods that range from extending the input feature space they are tested on (e.g. scattering fraction and inclination) to moving from single-source to source- and source-type-heterogeneous samples to achieve our original goal of inter-object generalization, since in this paper we have introduced and laid the foundation for these methods on individual objects.
Finally, we based our work on our QPOML Python library, in which input and output matrix construction and preprocessing, hyperparameter tuning, model evaluation, and plot generation are all conveniently streamlined so that they are (i) executed as "under-the-hood" as possible while remaining user accessible; and (ii) easily extendable to any number of QPOs and any number of scalar observation features for any number of observations from any number of sources. This library is available on GitHub.
## 8 Acknowledgements
M.M. acknowledges support from the research program Athena with project number 184.034.002, which is (partly) financed by the Dutch Research Council (NWO). We also thank Virginia A. Cuneo for a helpful conversation early in this work, and Michael Corcoran and Craig Gordon for assistance with some early technical issues. Finally, we thank Travis Austen for help recovering a significant amount of our work from a damaged virtual machine disk, and Brandon Barrios for Windows Subsystem for Linux advice. This work was made possible by the _NICER_ and _RXTE_ missions, as well as data from the High Energy Astrophysics Science Archive Research Center (HEASARC) and NASA's Astrophysics Data System Bibliographic Services. This work has been advised by AstroAI.
## 9 Data Availability
The data used for MAXI J1535-571 are available at the _NICER_ archive ([https://heasarc.gsfc.nasa.gov/docs/nicer/nicer_archive.html](https://heasarc.gsfc.nasa.gov/docs/nicer/nicer_archive.html)), and those for GRS 1915+105 belong to their corresponding authors and are available at the following references Zhang et al. (2020) and Zhang et al. (2022). The software used for energy spectral data analysis can be accessed from the HEASARC website ([https://heasarc.gsfc.nasa.gov/Iheasoft/download.html](https://heasarc.gsfc.nasa.gov/Iheasoft/download.html)). The QPOML code repository can be accessed via GitHub
_Facilities:_ NICER, RXTE. _Software:_ AstroPy (Astropy Collaboration et al., 2013, 2018), Keras (Chollet et al., 2015), Matplotlib (Hunter, 2007), NumPy (Harris et al., 2020), Pandas (Wes McKinney, 2010), SciencePlots (Garret, 2021), SciPy (Virtanen et al., 2020), scikit-learn (Pedregosa et al., 2011), and seaborn (Waskom, 2021).
|
2310.19084 | Roles of Scaling and Instruction Tuning in Language Perception: Model
vs. Human Attention | Recent large language models (LLMs) have revealed strong abilities to
understand natural language. Since most of them share the same basic structure,
i.e. the transformer block, possible contributors to their success in the
training process are scaling and instruction tuning. However, how these factors
affect the models' language perception is unclear. This work compares the
self-attention of several existing LLMs (LLaMA, Alpaca and Vicuna) in different
sizes (7B, 13B, 30B, 65B), together with eye saccade, an aspect of human
reading attention, to assess the effect of scaling and instruction tuning on
language perception. Results show that scaling enhances the human resemblance
and improves the effective attention by reducing the trivial pattern reliance,
while instruction tuning does not. However, instruction tuning significantly
enhances the models' sensitivity to instructions. We also find that current
LLMs are consistently closer to non-native than native speakers in attention,
suggesting a sub-optimal language perception of all models. Our code and data
used in the analysis is available on GitHub. | Changjiang Gao, Shujian Huang, Jixing Li, Jiajun Chen | 2023-10-29T17:16:40Z | http://arxiv.org/abs/2310.19084v1 | # Roles of Scaling and Instruction Tuning in Language Perception:
###### Abstract
Recent large language models (LLMs) have revealed strong abilities to understand natural language. Since most of them share the same basic structure, i.e. the transformer block, possible contributors to their success in the training process are scaling and instruction tuning. However, how these factors affect the models' language perception is unclear. This work compares the self-attention of several existing LLMs (LLaMA, Alpaca and Vicuna) in different sizes (7B, 13B, 30B, 65B), together with eye saccade, an aspect of human reading attention, to assess the effect of scaling and instruction tuning on language perception. Results show that scaling enhances the human resemblance and improves the effective attention by reducing the trivial pattern reliance, while instruction tuning does not. However, instruction tuning significantly enhances the models' sensitivity to instructions. We also find that current LLMs are consistently closer to non-native than native speakers in attention, suggesting a sub-optimal language perception of all models. Our code and data used in the analysis are available on GitHub.
## 1 Introduction
Large language models (LLMs), e.g., GPT-4, Chat-GPT and Vicuna, have shown nearly human-level understanding of text inputs, indicating better human-like language perception compared with their predecessors such as BERT Devlin et al. (2019) and GPT-2 Radford et al. (2019). However, the mechanism behind such improvement is largely unknown. One way to interpret such mechanism is comparing model computation processes to human data. For example, prior work compared language models (mainly GPT-2) with human neuroimaging data during language comprehension, suggesting that models with higher human resemblance also perform better in NLP tasks. Hasson et al. (2020); Schrimpf et al. (2021). However, given recent breakthroughs in LLMs, it remains to be tested whether the newest LLMs still align with human perception data. Based on the training pipeline of current LLMs, there are two possible sources for the potential change: the scaled model/data size and the aligning process after pretraining, such as instruction tuning Ouyang et al. (2022); Wei et al. (2022); Sanh et al. (2022). This paper aims to provide a further understanding of LLMs' mechanism by investigating the roles of these two factors in language understanding.
In this paper, we analyze self-attention as a channel to the language perception ability of LLMs, because it is the key mechanism of transformer language models, and it naturally resembles the human attention mechanism in language perception (Merkx and Frank, 2021). The attention patterns under investigation include those from trending open-sourced LLMs with different sizes, i.e., 7B, 13B, 30B, 65B, and different training stages, i.e., pretrained (LLaMA) and instruction-tuned (Alpaca and Vicuna) models. Given these models are all trained mainly with English data, we also compare the attention patterns of LLMs with human saccade (a form of eye-movement) data, including both native speakers (L1) and non-native learners (L2) of English (Li et al., 2019), to see if the models correlate more with L1 subjects.
Our analysis is three-fold. First, we compare the general attention distributions of LLMs to measure the impact of these two varying factors on the model (§3.1). Second, we perform linear regressions (LRs) between the model self-attention and human saccade matrices to see how human-like they are, and how the resemblance is affected by the two factors (§3.2). Third, we analyze the relation between the given attention and some trivial patterns, revealing the difference between different levels of language perception abilities (§3.3). The main findings are:
* Scaling significantly affects the general attention distribution on plain text, while instruction tuning has limited effect. However, instruction tuning enhances the model's sensitivity to instructions (§5.1).
* Higher human resemblance is significantly correlated to better language modeling. Scaling improves the resemblance following the scaling law (Henighan et al., 2020), while instruction tuning reduces it. All models have higher resemblance to L2 rather than to L1, suggesting further room for improvement in language perception (§5.2).
* L2 saccade has higher dependency on trivial attention patterns than L1, indicating a more rigid way of perception. Scaling significantly lowers LLMs' reliance on trivial attention patterns, while instruction tuning does not (§5.3).
## 2 Related Work
Self-attention analysis is a common way to interpret transformer models. Several studies have shown that it is correlated with linguistic information, such as syntactic or semantic structures (Marecek and Rosa, 2018; Raganato and Tiedemann, 2018; Voita et al., 2019; Clark et al., 2019). There are also some attempts and debates to explain the models' predictions with attention (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Serrano and Smith, 2019; Vashishth et al., 2019). Different from them, we use self-attention patterns as a measurement of the language perception abilities of LLMs. Also, some new ways to interpret model self-attention have been proposed, such as Attention Flow (Abnar and Zuidema, 2020), vector norm (Kobayashi et al., 2020, 2021), ALTI (Ferrando et al., 2022), Value Zeroing (Mohebbi et al., 2023), and logit analysis (Ferrando et al., 2023). These methods can raise the interpretability of model attention w.r.t. linguistic structural information, but are not tested with human eye-tracking data. Also, some of them suffer from huge computational cost. Instead, we choose to use the raw attention scores, as they are produced in the models' original computational processes, and they already correlate with the human data, demonstrating their interpretability.
Instruction-tuned LLMs achieve better performance in performing tasks (Ouyang et al., 2022; Wei et al., 2022; Sanh et al., 2022). It seems that the LLMs are better at understanding human instructions. However, there is little discussion on how the instruction tuning process affects their language perception.
Eye-movement in the human reading process has drawn attention in the computational linguistics field. It is proposed that scanpaths, i.e. tracks of eye-movement, reveal characteristics of the parsing process and relate to working memory during reading (von der Malsburg and Vasishth, 2013), and that the irregularity of scanpaths is affected by sentence structure and age (von der Malsburg et al., 2015). Such work provides theoretical grounds for our analysis of saccade data.
Joint study of language models and human eye-movement is also receiving growing interest. Prior studies have shown the correlation between language processing of humans and artificial neural networks (Kell et al., 2018), especially pretrained language models such as GPT-2 (Caucheteux et al., 2021, 2022; Schrimpf et al., 2021; Lamarre et al., 2022). Especially, Oh and Schuler (2023, 2023) explored the fit of model surprisal to human reading times, noting the effect of models' sequential memorization and dataset size. Hollenstein and Beinborn (2021) studied the correlation between relative word importance derived from models and humans, and Morger et al. (2022) extended that research to a multilingual setting, as well as analysis of model attention. Some studies also use human attention as supervision to assist specific tasks (Sood et al., 2020; Bansal et al., 2023), or use sequential models to predict eye-tracking data (Chersoni et al., 2022). Different from them, we use human attention as a reference for the language perception abilities.
## 3 Methods
To show the effect of scaling and instruction tuning on different models, we compare the self-attention scores of different LLMs given the same input, by viewing model attention as probability distributions and calculating the general attention divergence based on Jensen-Shannon divergence.
To analyze model self-attention, we take the human saccade as a reference, and design a human resemblance metric based on linear regression scores.
In addition, we select several trivial, context-free attention patterns, and design a trivial pattern reliance metric to help demonstrate the difference
between models and human subject groups.
### General Attention Divergence
We assess the mean Jensen-Shannon (J-S) divergence between attention from different models on all sentences. For two attention matrices \(A,\ B\in\mathbb{R}^{n_{\mathrm{word}}\times n_{\mathrm{word}}}\), the J-S divergence is:
\[D_{\mathrm{JS}}(A,B)=\frac{1}{2}\sum_{i=1}^{n_{\mathrm{word}}}\left[D_{\mathrm{KL}}(A_{i}\|B_{i})+D_{\mathrm{KL}}(B_{i}\|A_{i})\right]\]
where \(D_{\mathrm{KL}}\) is the Kullback-Leibler (K-L) divergence, \(A_{i}\) and \(B_{i}\) are the \(i\)-th rows in the two matrices. Note the attention matrices are re-normalized to keep the sum of every row to 1.0. We calculate the J-S divergence between heads-averaged attention scores per sentence, and demonstrate the mean values. For models with the same number of layers, the divergence is calculated layerwise; For models with different numbers of layers, we divide their layers into four quarters and calculate the divergence quarter-wise.
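As a concrete sketch of this metric, the snippet below computes the divergence for a single pair of head-averaged, row-renormalized attention matrices; the matrices shown are random placeholders, and the small epsilon added for numerical stability is our assumption rather than part of the definition above.

```python
# Sketch of the general attention divergence between two attention matrices.
import numpy as np

def attention_divergence(A, B, eps=1e-12):
    A = A / A.sum(axis=1, keepdims=True)   # re-normalize every row to sum to 1
    B = B / B.sum(axis=1, keepdims=True)
    kl_ab = np.sum(A * np.log((A + eps) / (B + eps)), axis=1)
    kl_ba = np.sum(B * np.log((B + eps) / (A + eps)), axis=1)
    return 0.5 * np.sum(kl_ab + kl_ba)     # symmetrized KL summed over rows

n_word = 8
A = np.tril(np.random.rand(n_word, n_word)) + 1e-6   # causal-style attention
B = np.tril(np.random.rand(n_word, n_word)) + 1e-6
print(attention_divergence(A, B))
```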
To identify high and low divergence, a reference value is required. We propose to use the attention divergence between two models that share the same structure, size, and training data, but have small differences in training strategies as the reference. A divergence value higher than the reference suggests a relatively large change in attention, while one lower than that indicates a small (but not necessarily negligible) change.
**Divergence by Scaling and Instruction Tuning** Models with different scales or training stages are compared with the above divergence to show the effect of different factors.
Sensitivity to Instructions To evaluate the effect of instruction tuning, we measure a given model's sensitivity to instructions by the same general attention divergence metric, but on different inputs, i.e., plain input and input with instructions. If the divergence between them is significantly higher than the reference, we say the model has high sensitivity to instructions.
To construct instructed text, we attach two prefixes to plain text sentences: "Please translate this sentence into German:", and "Please paraphrase this sentence:". They are within the reported capacity of most LLMs, and require no extra information to follow. Then, we collect model attention on the two corpora within the original sentence spans (re-normalized) to represent non-instruction and instruction scenarios. Also, as a control group, we attached a noise prefix made of 5 randomly sampled English words "Cigarette first steel convenience champion.", and calculate the divergence between prefixed and plain text to account for the effect of a meaningless prefix.
### Human Resemblance
#### 3.2.1 Human Resemblance Calculation
Given the model self-attention and the human saccade data in matrix form for each sentence, we first extract the lower-triangle parts of the matrices (marking right-to-left attendance), because prior studies have found that such eye-movements in reading are related to working memory efficiency [20] and parsing strategies [21], and the auto-regressive LLMs only have right-to-left self-attention. Then, we flatten the extracted parts of each sentence attention matrix, and concatenate them to get attention vectors \(\{v_{\mathrm{human}}^{i}\}\) for each subject \(i\), and \(\{v_{\mathrm{model}}^{j,k}\}\) for head \(k\) in model layer \(j\). We then stack \(\{v_{\mathrm{model}}^{j,k}\}\) along the attention heads and get a matrix \(\{V_{\mathrm{model}}^{j}\}\) for the \(j\)-th layer. The LR for model layer \(j\) and subject \(i\) is defined as:
\[\arg\min_{w,b}\left\|V_{\mathrm{model}}^{j\top}w+b-v_{\mathrm{human}}^{i} \right\|_{2}^{2}\]
This means we take the linear combination of attention scores from all heads in a layer to predict the human saccade. Note that the model attention here is not re-normalized, in order to preserve the relative weight w.r.t. the <s> token. The regression scores \(R_{i,j}^{2}\) (ranging from 0 to 1, larger value means better regression) are averaged over subjects to represent the human resemblance of layer \(j\), and the highest layer score is considered as the regression score of the whole model, \(R_{\mathrm{model}}^{2}\).
Because the human saccade pattern varies across individuals, there exists a natural upper bound for the human resemblance of models, i.e., the human-human resemblance, or the inter-subject correlation. To calculate this correlation within the two language groups, L1 and L2, we take the group mean to regress over each individual's data independently, and use the average regression score \(R_{\mathrm{inter}}^{2}\) as the inter-subject correlation for that group. The final human resemblance score is \(R_{\mathrm{model}}^{2}/R_{\mathrm{inter}}^{2}\), ranging from 0 to 100%.
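The following is a schematic sketch of the layer-level regression just described, not the exact code used here: `head_attn` is assumed to be a per-sentence list of arrays of shape (n_heads, n_word, n_word) for one model layer, `saccade` the matching list of one subject's saccade matrices, and the strict lower triangle is taken as an assumption about how "right-to-left" entries are extracted.

```python
# Sketch of computing one layer's R^2 against one subject's saccade data.
import numpy as np
from sklearn.linear_model import LinearRegression

def lower_triangle(mat):
    rows, cols = np.tril_indices(mat.shape[-1], k=-1)  # strictly below diagonal
    return mat[..., rows, cols]

def layer_resemblance(head_attn, saccade):
    # Predictors: one column per attention head; target: flattened saccade.
    X = np.concatenate([lower_triangle(a) for a in head_attn], axis=-1).T
    y = np.concatenate([lower_triangle(s) for s in saccade])
    reg = LinearRegression().fit(X, y)
    return reg.score(X, y)                              # R^2 for this layer

# The model-level score would then be the max over layers, averaged over
# subjects, and divided by the group's inter-subject R^2.
```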
#### 3.2.2 Human Resemblance vs. Next-Token Prediction
A previous study found a positive correlation between the next-token prediction (NTP) performance and the "Brain Score" of neural networks (Schrimpf et al., 2021), which supports the intuition that the more human-like models are, the better generations they output. To test this finding in our setting, we calculated the per-token prediction loss (the negative log likelihood loss normalized by sequence length) of the models employed in this study on the Reading Brain dataset, and computed the Pearson's correlation between their prediction loss and max layerwise human resemblance scores.
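A minimal sketch of how such a per-token loss could be obtained with HuggingFace Transformers is given below; the checkpoint name is a placeholder and not necessarily the exact weights used in this study.

```python
# Sketch of computing per-token NTP loss for one sentence with a causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")  # placeholder
model = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
model.eval()

def per_token_loss(sentence):
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy,
        # i.e. the negative log-likelihood per predicted token.
        out = model(**inputs, labels=inputs["input_ids"])
    return out.loss.item()

print(per_token_loss("Could humans live on Mars some day?"))
```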
### Trivial Pattern Reliance
#### 3.3.1 Trivial Pattern Reliance of Models
We select three representative trivial patterns of transformer attention: attending to the first word in the sentence, attending to the previous word (Vig and Belinkov, 2019), and self-attending (Clark et al., 2019), and analyze the effect of scaling and instruction tuning on models' reliance on them.
First, we represent the trivial patterns as binary adjacency matrices, where attending relations are marked with 1 and others with 0, and use their flattened lower-triangle parts as the independent variables. The model attention is similarly processed to be the dependent variable. Then, an LR is fit between the two, and the regression scores are collected to represent the trivial pattern reliance of each model layer.
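A small sketch of this construction is shown below; whether the diagonal (self-attention) entries are included in the flattened lower triangle is our assumption, and the target matrix here is a random placeholder standing in for an attention or saccade matrix.

```python
# Sketch of the trivial-pattern reliance regression of Section 3.3.1.
import numpy as np
from sklearn.linear_model import LinearRegression

def trivial_patterns(n_word):
    first = np.zeros((n_word, n_word)); first[:, 0] = 1.0  # attend to first word
    prev = np.eye(n_word, k=-1)                            # attend to previous word
    self_attn = np.eye(n_word)                             # attend to self
    return first, prev, self_attn

def trivial_reliance(target_matrix):
    n_word = target_matrix.shape[0]
    rows, cols = np.tril_indices(n_word)                   # lower triangle incl. diagonal
    X = np.stack([p[rows, cols] for p in trivial_patterns(n_word)], axis=1)
    y = target_matrix[rows, cols]
    return LinearRegression().fit(X, y).score(X, y)        # regression R^2

saccade = np.tril(np.random.rand(10, 10))                  # placeholder matrix
print(trivial_reliance(saccade))
```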
#### 3.3.2 Trivial Pattern Reliance of Humans
This research compares the models' resemblance to L1 and L2 humans because intuitively, L1 is better than L2 in language understanding. To test this intuition, we compare their reliance on the three trivial patterns to see whether their attention mode differs. Similar to model attention, the flattened human saccade vectors are used as the dependent variable to fit the LRs with the trivial patterns.
## 4 Experiment Settings
### Data
We use the Reading Brain dataset (Li et al., 2019) for both the text input and the human data. The dataset includes 5 English STEM articles. Each article has 29.6\(\pm\)0.68 sentences of 10.33\(\pm\)0.15 words. It also includes human eye-tracking and fMRI data recorded synchronously during self-paced reading on the articles. The human subjects are 52 native English speakers (L1) and 56 non-native learners of English (L2). We choose this dataset for the following reasons: (1) The text is presented to the subjects sentence by sentence instead of page by page, making in-sentence reading patterns clearer. (2) It has synchronous eye-tracking and fMRI data, which allows both behavioral- and neural-level analysis. In this article we mainly focus on the behavioral part, but we also have some preliminary results on the fMRI analysis (See Appendix D).
We take the saccade number, i.e. the number of eye-movements from one word to another, instead of the saccade duration, to represent the human reading attention, because this reduces the effect of other factors such as word length. The saccade data of a human subject for a given sentence is an \(n_{\mathrm{word}}\times n_{\mathrm{word}}\) matrix. Figure 1 gives examples of individual and group mean saccades.
### LLMs
We employ LLaMA (Touvron et al., 2023) and its instruction-tuned versions, Alpaca (Taori et al., 2023) and Vicuna (Chiang et al., 2023), in this research. We use them because: (1) The model sizes and different instruction-tuning methods cover the majority of current open-sourced LLMs. (2) These models are being widely used and customized globally. Thus, analysis on them is representative and meaningful.
LLaMA is a series of pretrained causal language models with parameter sizes 7B, 13B, 30B and 65B, trained on over 1T publicly available text tokens, and reaches state-of-the-art on most LLM benchmarks (Touvron et al., 2023). The training corpus is mainly English. We use all 4 sizes of LLaMA, which covers the sizes of most current
Figure 1: Examples of individual and group mean saccade of the sentence ”Could humans live on Mars some day?”.
open-sourced LLMs. We also use the 774M GPT-2 Large (Radford et al., 2019) to represent smaller pretrained models in the scaling analysis.
Alpaca is fine-tuned from the 7B LLaMA model on 52K English instruction-following demonstrations generated by GPT-3 (Brown et al., 2020) using self-instruct (Wang et al., 2023). To analyze the effect of scaling, we also trained a 13B version of Alpaca using the official data and training strategy of Alpaca. Our 13B Alpaca model scores 43.9 and 46.0 on the MMLU dataset in the zero-shot and one-shot settings respectively, proving the soundness of our fine-tuning. (The corresponding scores are 40.9 and 39.2 for the official Alpaca 7B model.1)
Footnote 1: Based on this GitHub repository for LLM evaluation.
Vicuna models are fine-tuned from the 7B and 13B LLaMA models on 70K user-shared conversations with ChatGPT. The data contains instruction and in-context learning samples in multiple languages, thus the Vicuna models can also be viewed as instruction-tuned.
### LLMs Attention Collection
To obtain model attention, text is provided to the models sentence by sentence to perform next-token prediction, and we collect the attention tensors in each layer. This resembles the human self-paced task-free reading process, where only one sentence is visible at a time to the subjects, and the natural human behavior is to predict the next word (Kell et al., 2018; Schrimpf et al., 2021).
Because models use the SentencePiece tokenization (Kudo and Richardson, 2018), but the human saccade data is recorded in words, the attention matrices need to be reshaped. Following Clark et al. (2019) and Manning et al. (2020), we take the sum over the "to" tokens, and the average over the "from" tokens in a split word. Also, the <s> token is dropped as it is not presented to human subjects.
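A schematic sketch of this token-to-word aggregation follows; the `word_ids` mapping (word index per token, with `None` for the <s> position) is assumed to be available from the tokenizer, and the function operates on one head's token-level attention matrix.

```python
# Sketch of reshaping token-level attention to word-level attention:
# sum over "to" tokens of the same word, average over "from" tokens,
# and drop the <s> position (word_ids entry of None).
import numpy as np

def tokens_to_words(token_attn, word_ids):
    n_word = max(i for i in word_ids if i is not None) + 1
    n_token = len(word_ids)
    to_words = np.zeros((n_token, n_word))
    for t, w in enumerate(word_ids):
        if w is not None:
            to_words[:, w] += token_attn[:, t]      # sum over "to" tokens
    word_attn = np.zeros((n_word, n_word))
    counts = np.zeros(n_word)
    for t, w in enumerate(word_ids):
        if w is not None:
            word_attn[w] += to_words[t]             # accumulate "from" tokens
            counts[w] += 1
    return word_attn / counts[:, None]              # average over "from" tokens
```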
Reference of Divergence We use the attention divergence of Vicuna v0 and Vicuna v1.1 13B as the reference values. The two versions share the same training data and strategy, but have minor differences in separators and loss computation2. We use the divergence calculated on the 13B Vicunas across all divergence analyses, because it has a lower mean and variance than the 7B one, making our comparison fairer and stricter.
Footnote 2: According to notes in the Vicuna GitHub repository.
## 5 Results
### General Attention Divergence
#### 5.1.1 Scaling Significantly Changes the General Attention
The general attention divergence between models of different sizes is shown in Figure 2. The results show that LLaMA, Alpaca and Vicuna all reveal divergence significantly higher than the reference values when scaled from 7B to 13B. This means scaling causes a large change in the overall attention distribution.
#### 5.1.2 Instruction Tuning Has Limited Effect on General Attention
The general attention divergence between pretrained and instruction tuned models is shown in Figure 3. It can be observed that only Vicuna 13B brings divergence above the reference values, while all the others don't. The same result can also be observed on instruction-prefixed sentences. This suggests that instruction tuning can only bring limited change to the models' general attention distribution. This can also be verified by other results in the following parts.
#### 5.1.3 Instruction Tuning Enhances Sensitivity to Instructions
The divergence between the models' attention obtained on the original and instruction-attached sentences are shown in Figure 4. There are two major observations. Firstly, LLaMA, Alpaca and Vicuna all show divergence significantly above the reference values between the two types of sentences, especially in the shallower layers, which means they all have sensitivity to instructions. Secondly, while the divergence of LLaMA drops in the deeper layers, the divergence of Alpaca and Vicuna does
Figure 2: Mean J-S divergence between 7B and 13B model attention in each layer quarter, measured on non-instruction sentences.
not go down, but rather up, suggesting higher sensitivity to instructions in the deeper layers. This effect is not observed in the noise-prefixed scenario (See Appendix A).
From these two observations, we can conclude that the pretrained LLaMA has already obtained some sensitivity to instructions, i.e. the ability to change its attention mode when it meets instructions, and that this sensitivity is further strengthened by instruction tuning, which makes it higher in the deeper layers. Because the last few layers of causal language models like LLaMA are related to generation, this effect of instruction tuning may yield generations better suited to a given instruction.
### Human Resemblance
#### 5.2.1 Higher Human Resemblance, Lower Prediction Loss
The per-token NTP loss and the maximum layerwise human resemblance scores are compared in Table 1 and Figure 5. One can tell from them that, within the considered range of parameter sizes, the human resemblance is negatively correlated with the NTP loss: the resemblance increases nearly linearly as the loss decreases. The Pearson's correlation is \(-0.875\) for L1 (\(p<0.002\)) and \(-0.917\) for L2 (\(p<0.0005\)), supporting a significant and strong negative linear correlation.
This result shows that the human resemblance we defined is positively related to the language modeling performance of the LLMs considered in this research, giving a practical meaning to our human resemblance analysis. Also, based on this result, we know the pursuit of higher human resemblance is consistent with the goal of training better LLMs. Similarly, factors negatively impacting the human resemblance could also harm the models' language modeling performance; for example, instruction tuning brings lower human resemblance as well as higher NTP loss.
#### 5.2.2 Scaling Enhances Human Resemblance
The layerwise human resemblance of pretrained models at different scales is shown in Figure 6, where the LLaMAs are very different from GPT-2 Large in two ways. First, the layerwise human resemblance of the LLaMAs is much higher than that of GPT-2, but the gaps among the LLaMAs are small. Second, the human resemblance of GPT-2 drops quickly in the deeper layers, while it remains high across all layers in the LLaMAs. One can infer from these
\begin{table}
\begin{tabular}{c c c c c} \hline Size & Name & Loss & L1(\%) & L2(\%) \\ \hline
774M & GPT-2 & 0.3264 & 34.99 & 40.03 \\ \hline \multirow{3}{*}{7B} & LLaMA & 0.2408 & 53.04 & 62.44 \\ & Alpaca & 0.2646 & 52.51 & 61.71 \\ & Vicuna & 0.2593 & 51.90 & 61.19 \\ \hline \multirow{3}{*}{13B} & LLaMA & 0.2406 & 55.66 & 64.20 \\ & Alpaca & 0.2634 & 55.05 & 64.46 \\ & Vicuna & 0.2847 & 54.26 & 61.31 \\ \hline
30B & LLaMA & 0.2372 & 63.16 & 69.40 \\ \hline
65B & LLaMA & 0.2375 & 64.05 & 70.07 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison between NTP loss and the human resemblance scores
Figure 4: Attention divergence on the original and instruction-attached text of 7B and 13B models.
Figure 3: General attention divergence between pretrained and instruction-tuned models in 7B and 13B, measured on non-instruction sentences.
results that LLaMA has undergone a phase change in terms of human-like attention from the level of GPT-2. However, the scaling from 7B to 30B does not cause another phase change.
Table 2 and Figure 7 show the max layer-wise human resemblance increases linearly while the model parameter size increases exponentially, where the Pearson's correlation is 0.989 for L1 (\(p<0.002\)) and 0.964 for L2 (\(p<0.01\)). This agrees with the scaling law of LLMs (Henighan et al., 2020).
This result shows that, within the scope of this study, scaling significantly enhances the human resemblance. If the scaling law continues to take effect, the human resemblance is expected to reach 98.80% for L2 and 88.82% for L1 at the scale of 100B. However, because the law is expected to fail after reaching a threshold (Hestness et al., 2017), the relation between human resemblance and model scale is expected to change before reaching that scale.
#### 5.2.3 Instruction Tuning Harms Human Resemblance
Table 3 shows the max layerwise human resemblance of the pretrained and instruction tuned models. It shows that the instruction tuned Alpaca and Vicuna models have lower max layerwise human resemblance to both L1 and L2 than LLaMA in both sizes, which suggests that instruction tuning causes decline in the human resemblance of attention. However, the decline is minor compared with the increase brought by scaling, which agrees with the limited effect of instruction tuning.
We also conduct relative t-tests to detect significant differences in layerwise human resemblance (\(p=0.05/n_{\text{test}}\)). The numbers of significant layers are shown in Table 4, and the complete lists of layers are in Tables 6 and 7 in Appendix C. The tables show that instruction tuning enhances or reduces human resemblance on about equal numbers of layers in the 7B models, but reduces the human resemblance of many more layers in the 13B models. Also, the layers with significantly lower resemblance are distributed across all layers. This result means that, while the total effect of instruction tuning is small, the drop in human resemblance caused by it is significant and widespread.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multicolumn{2}{c}{Model} & \multicolumn{2}{c}{Resemblance (\%)} \\ Name & Size & L1 & L2 \\ \hline GPT-2 & 774M & 34.99 & 40.03 \\ \hline \multirow{4}{*}{LLaMA} & 7B & 53.04 & 62.44 \\ & 13B & 55.66 & 64.20 \\ & 30B & 63.16 & 69.40 \\ & 65B & **64.05** & **70.07** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Max layerwise human resemblance of models ranging from 774M to 65B.
Figure 5: The correlation between the per-token negative log likelihood loss and the max layerwise human resemblance of LLMs.
Figure 6: Layerwise L1 and L2 human resemblance for different sized pretrained models.
Figure 7: The change of max layerwise human resemblance caused by the scaling of model sizes, where the x-axis is log scaled.
To sum up, instruction tuning does small but significant harm to the human resemblance of LLMs, which in turn may bring potential damage to their language perception. This challenges the common assumption that instruction tuning can activate the model's ability to perceive language with human-aligned behaviors. However, this negative effect could be compensated for by scaling, as scaling increases the human resemblance to a larger degree. This provides a novel reason for further scaling.
#### 5.2.4 All Models Show Higher L2 Resemblance
Comparing the L1 and L2 human resemblance of all models (LLaMA, Alpaca and Vicuna, 7B and 13B) in Table 2 and Table 3, an advantage of L2 over L1 can be consistently observed. Independent t-tests also show that all models considered in this research have significantly higher resemblance to L2 than to L1 (\(p=0.05/n_{\mathrm{test}}\) to avoid false positives). This means the models are closer to non-native English learners than to native English speakers in attention, though they are all trained mainly on English data. Furthermore, this trend is not reversed by the scaling or instruction tuning within the scope of this research, which suggests that it is an innate feature of LLMs. We will look further into this phenomenon in the next section.
### Trivial Pattern Reliance
#### 5.3.1 L2 Relies More on Trivial Patterns
The LR scores between the trivial patterns and the L1 and L2 human saccade are shown in Table 5, where L2's score is higher than L1's in minimum, maximum and mean values. An independent t-test also supports a significant advantage of L2 over L1 on the regression scores (\(p<5\times 10^{-8}\)). This means one significant difference between L1 and L2 English speakers is that L2 people show a more trivial and fixed attention mode than L1 people while reading, which suggests a weaker language understanding ability, as those patterns contain no linguistic or factual information.
This finding can help us interpret the difference between the models' L1 and L2 human resemblance. Given that all models considered in this research are trained mainly on English but show higher attention resemblance to L2 subjects, this result suggests that their language perception is not ideal.
#### 5.3.2 Scaling Reduces Trivial Pattern Reliance
The relative difference between 13B and 7B models in trivial pattern reliance is shown in Figure 8, where the decline in the deeper layer quarters is substantial across all models. There is also a consistent increase in the second quarter, but the amplitude is small, so the total trivial pattern reliance is reduced. This phenomenon is also observed between LLaMA 13B and 30B (Figure 11 in Appendix B). This means scaling can effectively reduce the trivial pattern reliance, especially in the deeper layers, indicating a more contextualized and adaptive language perception.
#### 5.3.3 Instruction Tuning Increases Trivial Pattern Reliance
Figure 9 shows the relative difference between pre-trained and instruction tuned models in trivial pattern reliance, where the tuned ones show reliance gain in the deeper layers. There is also a small drop in the second quarter, but the total trivial pattern reliance is increasing. Also, total increase here is
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{2}{c}{Model} & \multicolumn{2}{c}{Higher} & \multicolumn{2}{c}{Lower} \\ Size & Name & L1 & L2 & L1 & L2 \\ \hline \multirow{2}{*}{7B} & Alpaca & 3 & 8 & 4 & 7 \\ & Vicuna & 9 & 10 & 8 & 9 \\ \hline \multirow{2}{*}{13B} & Alpaca & 4 & 7 & 16 & 25 \\ & Vicuna & 5 & 8 & 14 & 16 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Numbers of Alpaca and Vicuna layers with significantly higher or lower human resemblance compared with LLaMA in the same sizes.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multicolumn{2}{c}{Model} & \multicolumn{2}{c}{Resemblance(\%)} \\ Size & Name & L1 & L2 \\ \hline \multirow{2}{*}{7B} & LLaMA & 53.04 & 62.44 \\ & Alpaca & 52.51 & 61.71 \\ & Vicuna & 51.90 & 61.19 \\ \hline \multirow{2}{*}{13B} & LLaMA & 55.66 & 64.20 \\ & Alpaca & 55.05 & 63.46 \\ & Vicuna & 54.26 & 61.31 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Max layerwise human resemblance scores of pretrained and instruction tuned models. This shows instruction tuning causes small decline in the max layer-wise human resemblance.
\begin{table}
\begin{tabular}{c c c c c} \hline Group & Min & Max & Mean & SE \\ \hline L1 & 0.0048 & 0.1525 & 0.0473 & 0.0037 \\ L2 & 0.0134 & 0.2165 & 0.0894 & 0.0060 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Regression scores of human saccade on the trivial patterns, across L1 and L2 subjects. SE stands for standard errors.
also smaller in amplitude than the reduction caused by scaling, again supporting the limit effect of instruction tuning.
Also, one can tell from Figure 8 and Figure 9 that scaling and instruction tuning change the trivial pattern reliance in opposite directions. However, because the effect of instruction tuning is also smaller than that of scaling, as long as the model size continues to grow, the gain in trivial pattern reliance brought by instruction tuning could be compensated or even outweighed, offering another reason for further scaling.
## 6 Conclusion
This research evaluates the effects of scaling and instruction tuning on LLMs' attention. We find that scaling effectively changes the general attention distribution, enhances the human resemblance, and reduces the trivial pattern reliance, while instruction tuning does the opposite, but also increases the models' sensitivity to instructions. Furthermore, we find that current open-sourced LLMs are closer to non-native English speakers in language perception, which involves more trivial attention patterns. To the best of our knowledge, we are the first to analyze the effect of instruction tuning on the human resemblance of model attention. We hope this can inspire future work on analyzing LLMs.
## Limitations
One key limitation of this study is that we cannot apply our current method to closed-sourced LLMs such as ChatGPT, for their layerwise attention scores are unavailable. With the rapid development of open LLMs, it is hopeful that we can examine these findings on the largest and most advanced LLMs in the near future.
Besides, though we demonstrate the difference in LLMs' resemblance to L1 and L2 people, and partially explain the effect by trivial pattern reliance, this is not enough to explain the underlying cause or consequences of the phenomenon. We do have some observations, such as that the inter-subject correlation in the L1 group (\(0.1560\pm 0.0073\)) is lower than in the L2 group (\(0.2463\pm 0.0090\)), and that L1 subjects tend to look back to previous words less frequently than L2 subjects. However, the underlying mechanism is still unclear. We hope to dive deeper into this non-native resemblance in our following study.
Another limitation of this work comes from the way we obtain model attention. We used the attention scores when predicting the next single token, which makes sense. However, the attention patterns could be different in the following decoding steps. Also, it is possible that combining attention scores across multiple layers can improve the analysis. We will investigate this in our future work.
In addition, the Reading Brain dataset we used in this research is monolingual, which does not reflect the language perception process in the cross-lingual setting, which is also important for the application of LLMs. We also hope to examine our findings in the cross-lingual scenario in the future.
## Ethics Statement
The authors declare no competing interests. The human behavioral data we use is publicly available and does not contain personal information of the subjects.
## Acknowledgements
We would like to thank the anonymous reviewers for their insightful comments. Shujian Huang is the corresponding author. We thank Yunzhe Lv and Wenhao Zhu for finetuning the Alpaca 13B model. This work is supported by National Science Foundation of China (No. 62376116, 62176120), the Liaoning Provincial Research Foundation for Basic Research (No. 2022-KF-26-02).
Figure 8: The difference between trivial pattern reliance of 7B and 13B model attention.
Figure 9: The difference in trivial pattern reliance of attentions in pretrained and instruction-tuned models. |
2303.15786 | HOICLIP: Efficient Knowledge Transfer for HOI Detection with
Vision-Language Models | Human-Object Interaction (HOI) detection aims to localize human-object pairs
and recognize their interactions. Recently, Contrastive Language-Image
Pre-training (CLIP) has shown great potential in providing interaction prior
for HOI detectors via knowledge distillation. However, such approaches often
rely on large-scale training data and suffer from inferior performance under
few/zero-shot scenarios. In this paper, we propose a novel HOI detection
framework that efficiently extracts prior knowledge from CLIP and achieves
better generalization. In detail, we first introduce a novel interaction
decoder to extract informative regions in the visual feature map of CLIP via a
cross-attention mechanism, which is then fused with the detection backbone by a
knowledge integration block for more accurate human-object pair detection. In
addition, prior knowledge in CLIP text encoder is leveraged to generate a
classifier by embedding HOI descriptions. To distinguish fine-grained
interactions, we build a verb classifier from training data via visual semantic
arithmetic and a lightweight verb representation adapter. Furthermore, we
propose a training-free enhancement to exploit global HOI predictions from
CLIP. Extensive experiments demonstrate that our method outperforms the state
of the art by a large margin on various settings, e.g. +4.04 mAP on HICO-Det.
The source code is available in https://github.com/Artanic30/HOICLIP. | Shan Ning, Longtian Qiu, Yongfei Liu, Xuming He | 2023-03-28T07:54:54Z | http://arxiv.org/abs/2303.15786v3 | # HOICLP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models
###### Abstract
Human-Object Interaction (HOI) detection aims to localize human-object pairs and recognize their interactions. Recently, Contrastive Language-Image Pre-training (CLIP) has shown great potential in providing interaction prior for HOI detectors via knowledge distillation. However, such approaches often rely on large-scale training data and suffer from inferior performance under few/zero-shot scenarios. In this paper, we propose a novel HOI detection framework that efficiently extracts prior knowledge from CLIP and achieves better generalization. In detail, we first introduce a novel interaction decoder to extract informative regions in the visual feature map of CLIP via a cross-attention mechanism, which is then fused with the detection backbone by a knowledge integration block for more accurate human-object pair detection. In addition, prior knowledge in CLIP text encoder is leveraged to generate a classifier by embedding HOI descriptions. To distinguish fine-grained interactions, we build a verb classifier from training data via visual semantic arithmetic and a lightweight verb representation adapter. Furthermore, we propose a training-free enhancement to exploit global HOI predictions from CLIP. Extensive experiments demonstrate that our method outperforms the state of the art by a large margin on various settings, e.g. +4.04 mAP on HICO-Det. The source code is available in [https://github.com/Artanic30/HOICLIP](https://github.com/Artanic30/HOICLIP).
## 1 Introduction
Human-Object Interaction (HOI) detection, which aims to localize human-object pairs and identify their interactions, is a core task towards a comprehensive understanding of visual scenes. It has attracted increasing interest in recent years for its key role in a wide range of applications, such as assistive robots, visual surveillance and video analysis [3, 4, 9, 11]. Thanks to the development of end-to-end object detectors [5], recent research [23, 28, 29, 37, 49] has made remarkable progress in localizing human-object instances in interaction. Nonetheless, the problem of identifying interaction classes between human-object pairs remains particularly challenging. Conventional strategies [7, 23, 37, 49] simply learn a multi-label classifier and typically require large-scale annotated data for training. As such, they often suffer from long-tailed class distributions and a lack of generalization ability to unseen interactions.
Recently, Contrastive Vision-Language Pre-training [33] has been explored to address such open-vocabulary and zero-shot learning problems as its learned visual and linguistic representations demonstrate strong transfer ability in various downstream tasks. In particular, recent work on open-vocabulary detection utilizes knowledge distillation to transfer CLIP's object representation to object detectors [10, 12, 14, 31, 45, 52]. Such a strategy has been adopted in the work of HOI detection, including GEN-VLKT [28] and EoID [43], which leverage CLIP's knowledge to tackle the long-tail and zero-shot learning in the HOI tasks.
Despite their promising results, it remains an open question
Figure 1: **Data efficiency comparison and verb distribution analysis.** In panel (a), we increase training data from \(5\%\) to \(100\%\) and show the result of HOICLP and GEN-VLKT. In panel (b), the dots indicate the mean mAP and length of vertical line indicate the variance of mAP for verbs grouped by sample number.
how we can effectively transfer CLIP knowledge to the HOI recognition task, as it involves compositional concepts composed of visual objects and interactions. First, as pointed out in [8, 35], the commonly-adopted teacher-student distillation objective is not aligned with improving the generalization of student models. In addition, as shown in Figure 1, we empirically observe that the knowledge distillation in learning HOIs (e.g., GEN-VLKT) typically requires a substantial amount of training data, which indicates its _low data efficiency_. Furthermore, knowledge distillation often suffers from _performance degradation in zero-shot generalization_, as it lacks a training signal for unseen classes, which is critical to inherit knowledge from the teacher model.
To address those challenges, we propose a novel strategy, dubbed HOICLIP, for transferring CLIP knowledge to the HOI detection task in this work. Our design ethos is to directly retrieve learned knowledge from CLIP instead of relying on distillation and to mine the prior knowledge from multiple aspects by exploiting the compositional nature of the HOI recognition. Moreover, to cope with the long-tail and zero-shot learning in verb recognition under low data regime, we develop a verb class representation based on visual semantic arithmetic, which does not require large amount of training data as in knowledge distillation based methods. Our methods enable us to improve the data efficiency in HOI representation learning and achieve better generalization as well as robustness.
Specifically, our HOICLIP framework learns to retrieve the prior knowledge from the CLIP model from three aspects: 1) _Spatial feature_. As the feature location is key to the detection task, we fully exploit the visual representation in CLIP and extract features only from informative image regions. To this end, we utilize CLIP's feature map with spatial dimensions and develop a transformer-based interaction decoder that learns a localized interaction feature with cross-modal attention. 2) _Verb feature_. To address the long-tailed verb-class problem as shown in Figure 1, we develop a verb classifier focusing on learning a better representation for the verbs. Our verb classifier consists of a verb feature adapter [13, 20, 36, 51] and a set of class weights computed via visual semantic arithmetic [38]. We enhance the HOI prediction by fusing the outputs of the verb classifier and the common interaction classifier. 3) _Linguistic feature_. To cope with the very rare and unseen class for HOI prediction, we adopt a prompt-based linguistic representation for HOIs and build a zero-shot classifier for the HOI classification [42]. This classifier branch requires no training and we integrate its output with the HOI classifier during model inference.
We evaluate our HOICLIP on two representative HOI detection datasets, HICO-DET [6] and V-COCO [15]. To validate HOICLIP, we perform extensive experiments under fully-supervised setting, zero-shot setting and data-efficient setting. The experiment results demonstrate the superiority of our methods: HOICLIP achieves competitive performance across all three settings, outperforming previous state-of-the-art methods on the zero-shot setting by 4.04 mAP and improving the data efficiency significantly.
The main contributions of our paper can be summarized as follows:
* To our best knowledge, HOICLIP is the **first work** to utilize query-based knowledge retrieval for efficient knowledge transfer from the pre-trained CLIP model to HOI detection tasks.
* We develop a fine-grained transfer strategy, leveraging regional visual features of HOIs via cross-attention and a verb representation via visual semantic arithmetic for more expressive HOI representation.
* We further improve the performance of HOICLIP by exploiting zero-shot CLIP knowledge without additional training.
## 2 Related work
**HOI Detection.** The HOI detection task mainly involves three sub-problems, including object detection, human-object pairing and interaction recognition. Previous HOI detection methods can be categorized into two-stage and one-stage paradigm. The two-stage [22, 25, 26, 39, 40, 54] paradigm methods use an independent detector to obtain locations and classes of objects, followed by specifically-designed modules for human-object association and interaction recognition. A typical strategy is to use graph-based methods to extract relation information to support interaction understanding [39, 46]. The one-stage paradigm instead detects the human-object pairs with interaction directly without a need for stage-wise processing. Recently, several HOI methods inspired [23, 28, 29, 37, 43, 44, 47, 49, 55, 55] by Transformer-based Detectors [5] have achieved promising performance. In particular, GEN-VLKT [28] further designs a two-branch pipeline to provide a parallel forward process, and uses separated query for human and object instead of the unified query used in CDN. RLIP [55] propose a pre-training strategy for HOI detection based on image captions. Our method builds on the top of the transformer-based HOI detection strategy and focuses on improving interaction recognition.
**Exploiting Vision-language Models.** Recent breakthroughs in Vision-Language Models (VLM) [21, 33] demonstrate a promising transfer ability to downstream tasks. The visual representations learned from natural language supervision pave the way for zero-shot and open vocabulary tasks [10, 12, 14, 24, 31, 38, 45, 52]. Pioneer works [14] transfer the VLMs to open vocabulary object detection through knowledge distillation. Inspired by this idea, recent research [28, 43] adopts the same strategy for
HOI detection. Previous efforts to transfer VLM to detection tasks can be summarized into two aspects: (1) Prior knowledge integration through texts, which initializes classifiers with labels' text embedding from CLIP; (2) Feature (or logits) level knowledge distillation, which guides the learned features (or logit predictions) to align with image feature embedded by CLIP (or logits predicted by zero-shot CLIP). In this work, we propose a novel strategy for transferring VLM knowledge to HOI detection tasks. Different from the above methods, we directly retrieve related information from CLIP, leading to superior performance and higher data efficiency.
**Zero-shot HOI Detection.** The target of zero-shot HOI detection is to detect and recognize HOI categories absent from training data. Due to the compositionality of HOIs, annotations of all possible HOI combinations are impractical. Therefore, the zero-shot HOI detection setting is important for application in real-world scenarios. Previous work [1, 16, 17, 18, 19, 30, 32] tackle such a challenge in a compositional manner, which disentangle reasoning on actions and objects during training. This makes it possible to recognize unseen \(\langle\)human, object, verb\(\rangle\) combinations during inference. Due to breakthroughs in VLMs [33], recent research [28, 43] focuses on transferring knowledge from VLMs to recognize unseen HOI concepts and achieve a promising performance gain on the zero-shot setting. Our work aims to explore a more efficient multi-facet strategy for knowledge transfer from VLMs in the zero-shot HOI.
## 3 Method
In this section, we introduce our HOICLIP framework for efficient CLIP knowledge transfer to HOI detection and preserving the generalization ability. We depict the overall architecture of our model in Section 3.1, followed by three key aspects of our transfer method. In Section 3.2, we introduce the query-based knowledge retrieval strategy for efficient visual knowledge transfer. In Section 3.3, we present our verb representation adapter and verb classifier extraction for verb knowledge transfer. In Section 3.4, we develop a training-free enhancement for visual-linguistic knowledge transfer. Finally in Section 3.5, we describe our training and inference pipeline.
### Overall Architecture
The overall architecture of our HOICLIP is illustrated in Figure 2. We first adopt the transformer-based end-to-end object detector [5] to localize the humans and objects. Specifically, given an input image \(I\), we use a transformer encoder to obtain a spatial image feature map \(V_{d}\), followed by instance decoder and interaction decoder to accomplish instance detection and interaction recognition, respectively. Inspired by GEN-VLKT [28], the instance decoder takes two groups of queries as the input for human and object respectively, namely human query \(Q_{h}\), and object query \(Q_{o}\). The output object queries \(O_{o}\in R^{N_{q}\times C_{e}}\) and human queries \(O_{h}\in R^{N_{q}\times C_{e}}\) in the last decoder layer are used to predict human bounding box \(B_{h}\in R^{N_{q}\times 4}\), object bounding box \(B_{o}\in R^{N_{q}\times 4}\) and object class \(C_{o}\in R^{N_{q}\times K_{o}}\), where \(K_{o}\) is the number of object classes.
Given the human and object features, we then introduce a novel interaction decoder to perform interaction recognition, in which we utilize the information from the previous extracted feature map \(V_{d}\) and from a spatial feature map \(V_{s}\) generated by CLIP, and perform a knowledge integration via a cross-attention module. Subsequently, a verb adapter extracts the action information to augment the interaction representation and recognition. A linear classifier takes the output of the interaction decoder to predict the HOI category, which is further enhanced by a training-free classifier
Figure 2: **Architecture of HOICLIP.** Given an image, HOICLIP encodes it with a detection encoder and CLIP encoder. The instance decoder localizes human and object pairs using features from the detection encoder. The interaction decoder leverages features from both the encoder and extract interaction representation. The verb adapter extracts verb representation based on the interaction representation.
using CLIP's linguistic features.
### Query Based Knowledge Retrieval
In this part, we describe the design of query-based interaction knowledge retrieval starting from revisiting the pipeline of Zero-Shot CLIP image classification.
**Zero-shot CLIP.** CLIP extracts dual-modality features with a visual encoder and a text encoder. The visual encoder consists of a backbone \(\mathrm{VisEnc}(\cdot)\) and a projection layer \(\mathrm{Proj}(\cdot)\). The visual backbone extracts a visual spatial feature \(V_{s}\in R^{H_{s}\times W_{s}\times C_{s}}\), which is fed into the projection layer to obtain a global visual feature \(V_{g}\in R^{D}\). The text encoder \(\mathrm{TextEnc}(\cdot)\) extracts a global text representation \(T_{g}\in R^{D\times K}\) for each category, where \(K\) is the number of classes. The classification \(S\in R^{K}\) is computed as follows:
\[T_{g}=\mathrm{TextEnc}(T_{K}), \tag{1}\] \[V_{g}=\mathrm{Proj}(V_{s}),\;\;V_{s}=\mathrm{VisEnc}(I), \tag{2}\] \[S=T_{g}^{\mathrm{T}}V_{g}, \tag{3}\]
where \(T_{g}\) and \(V_{g}\) are L2 normalized features, and \(T_{K}\) is the sentences describing the \(K\) categories. The matrix multiplication computes the cosine similarity.
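A minimal sketch of this zero-shot classification pipeline is shown below, assuming the openly available `clip` package; the image path and class prompts are placeholders.

```python
# Zero-shot CLIP classification following Eqs. (1)-(3).
import torch
import clip
from PIL import Image

model, preprocess = clip.load("ViT-B/32")
image = preprocess(Image.open("example.jpg")).unsqueeze(0)      # input image I
text = clip.tokenize(["a photo of a cat", "a photo of a dog"])  # prompts T_K

with torch.no_grad():
    v_g = model.encode_image(image)             # Proj(VisEnc(I)) -> V_g
    t_g = model.encode_text(text)               # TextEnc(T_K)    -> T_g
    v_g = v_g / v_g.norm(dim=-1, keepdim=True)  # L2 normalisation
    t_g = t_g / t_g.norm(dim=-1, keepdim=True)
    s = v_g @ t_g.t()                           # cosine similarities S
```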
**Interaction Decoder with Knowledge Integration.** To predict the HOI category for a pair of human and object queries, we generate a set of interaction queries \(Q_{inter}\in R^{N_{q}\times C_{s}}\) by feeding the human and object features \(O_{h}\) and \(O_{o}\) to a projection layer. To fully exploit CLIP knowledge, we propose to retrieve interaction features from CLIP that better align with the prior knowledge in the classifier weights. In detail, we preserve the CLIP spatial feature \(V_{s}\) and project the detection visual feature \(V_{d}\) to the same dimension as \(V_{s}\):
\[Q_{inter}=\mathrm{Pool}(O_{o},O_{h})W_{i}+b_{i} \tag{4}\] \[V_{d}^{\prime}=V_{d}W_{p}+b_{p} \tag{5}\]
where \(W_{i}\), \(b_{i}\), \(W_{p}\), \(b_{p}\) are projection parameters, \(V_{d}^{\prime}\in R^{H_{s}\times W_{s}\times C_{s}}\) and \(\mathrm{Pool}\) takes average.
To guide interaction queries \(Q_{inter}\in R^{N_{q}\times C_{s}}\) to explore informative regions in both \(V_{s}\) and \(V_{d}\), we design a cross attention module for knowledge integration and its architecture is showed in Figure 3. The \(Q_{inter}\) is first updated by self-attention, and then fed into a cross-attention module with \(V_{s}\) and \(V_{d}^{\prime}\) respectively and obtain two output features. Finally, we sum up the outputs and feed it into a feed-forward network. Formally,
\[Q_{inter} =\mathrm{SelfAttn}(Q_{inter}), \tag{6}\] \[C_{inter} =\mathrm{CrossAttn}(Q_{inter},V_{s}), \tag{7}\] \[D_{inter} =\mathrm{CrossAttn}(Q_{inter},V_{d}^{\prime}), \tag{8}\] \[Q_{inter} =\mathrm{FFN}(C_{inter}+D_{inter}) \tag{9}\]
where the \(V_{s}\), \(V_{d}^{\prime}\) are the key and value respectively, and \(Q_{inter}\) is the query in the shared cross attention. To extract final interaction representation \(O_{inter}\in R^{N_{q}\times D}\), we adopt the same projection operation as CLIP to convert the output of cross attention into the CLIP feature space as follows,
\[O_{inter}=\mathrm{Proj}(Q_{inter}). \tag{10}\]
The representation will be used for interaction classification based on a zero-shot classifier introduced in Section 3.4.
In this way, we leverage the object and human information from the instance decoder to retrieve interaction representation from the spatial feature map of CLIP and visual features from the detector. This query-based knowledge retrieval design allows us to achieve efficient representation learning and strong generalization capabilities.
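For concreteness, a minimal PyTorch sketch of the knowledge-integration step (Eqs. 6-9) is given below; head counts, hidden sizes and module composition are assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class KnowledgeIntegration(nn.Module):
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # a single cross-attention module shared between V_s and V_d'
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))

    def forward(self, q_inter, v_s, v_d):
        # q_inter: (B, N_q, C); v_s, v_d: (B, H*W, C) flattened spatial features
        q_inter, _ = self.self_attn(q_inter, q_inter, q_inter)  # Eq. (6)
        c_inter, _ = self.cross_attn(q_inter, v_s, v_s)         # Eq. (7): CLIP feature map
        d_inter, _ = self.cross_attn(q_inter, v_d, v_d)         # Eq. (8): detector feature map
        return self.ffn(c_inter + d_inter)                      # Eq. (9)
```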
### Verb Class Representation
In this subsection, we introduce a novel pipeline to extract global verb class representation and a verb classifier built from CLIP features to cope with label imbalance.
**Visual Semantic Arithmetic.** In order to better capture fine-grained verb relations from naturally imbalanced HOI annotations, we build a verb classifier through visual semantic arithmetic, which represents the global verb distribution of the training dataset. Here we hypothesize that the verb class representation can be derived from the difference between the global visual feature of an HOI and the global visual feature of its object. The concept is illustrated in Figure 4.
Specifically, we use the smallest region covering objects and human bounding boxes to represent an HOI triplet. Then we define \(\mathtt{OBJ}_{j}\) as a set containing all instances of object class \(j\). Additionally, we use the tuple \((i,j)\) to indicate
Figure 3: **Structure of Knowledge Integration Cross Attention.** Interaction queries first go through a self-attention layer. Then, it’s fed into two shared cross-attention layers with \(V_{s}\) and \(V_{d}\). The outputs are summed up and fed into a feed-forward network.
an HOI category, where \(i\) and \(j\) stand for the class of verb and object respectively. Similarly, we define \(\mathbb{HOI}_{(i,j)}\) as a set containing all instances of HOI category \((i,j)\). For both HOI and object regions, we use the CLIP image encoder to obtain their visual features, then adopt a projector to map the features into a global feature space. Formally, given a region \(R\), we compute its feature as follows:
\[f(R)=\mathrm{Proj}(\mathrm{VisEnc}(R)) \tag{11}\]
The representation of verb class \(k\) is computed by taking the difference of averaged HOI and object region features:
\[E_{h}^{k,j}=\mathrm{L2Norm}(\sum_{R_{m}\in\mathbb{HOI}_{k,j}}f(R_{m})) \tag{12}\]
\[E_{o}^{j}=\mathrm{L2Norm}(\sum_{R_{n}\in\mathbb{OBJ}_{j}}f(R_{n})) \tag{13}\]
\[E_{v}^{k}=\mathrm{L2Norm}(\sum_{n\in(k,\cdot)}(E_{h}^{k,n}-E_{o}^{n})) \tag{14}\]
where \(\mathrm{L2Norm}\) stands for L2 normalization operation and \(E_{h}^{k,j}\), \(E_{o}^{j}\) are the computed HOI and object representations. The extracted verb class representations are prior knowledge of verb concepts from CLIP and used as verb classifier below, which are denoted as \(E_{v}\in R^{K_{v}\times D}\) where \(K_{v}\) is the number of the verb category.
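The following sketch illustrates Eqs. (11)-(14) for a single verb class \(k\); `encode_region` stands in for CLIP's visual encoder followed by its projection layer, and the data structures are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def verb_representation(hoi_regions_by_obj, obj_regions, encode_region):
    """hoi_regions_by_obj: dict object j -> crops of HOI category (k, j);
       obj_regions: dict object j -> all crops of object class j."""
    def avg_feat(regions):                                  # Eqs. (12)-(13)
        feats = torch.stack([encode_region(r) for r in regions])
        return F.normalize(feats.sum(0), dim=-1)
    diffs = [avg_feat(hoi_regs) - avg_feat(obj_regions[j])  # HOI minus object
             for j, hoi_regs in hoi_regions_by_obj.items()]
    return F.normalize(torch.stack(diffs).sum(0), dim=-1)   # Eq. (14): E_v^k
```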
**Verb Adapter.** To use the verb class representation for classification, we design a lightweight adapter [20, 36] module to extract a verb feature \(O_{verb}\in R^{N_{q}\times D}\) based on the interaction feature \(O_{inter}\). Specifically, we use an MLP to map the interaction feature into the verb feature \(O_{verb}\in R^{N_{q}\times D}\), and compute the verb class scores as follows,
\[O_{verb}=\mathrm{MLP}(Q_{inter}), \tag{15}\]
\[S_{v}=O_{verb}E_{v}^{\mathrm{T}} \tag{16}\]
where the verb logits \(S_{v}\) is computed as the cosine similarity between verb feature \(O_{verb}\) and verb class representation \(E_{v}\). In this way, we leverage the prior knowledge in the visual encoder of CLIP to extract a verb classifier from training data and design a verb adapter for better verb representation. This design generates fine-grained verb information, which benefits HOI prediction.
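A rough sketch of the adapter and the cosine-similarity verb classifier (Eqs. 15-16) is shown below; layer sizes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VerbAdapter(nn.Module):
    def __init__(self, in_dim, out_dim, verb_reps):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))
        # E_v: (K_v, D) L2-normalised verb class representations, kept fixed
        self.register_buffer("E_v", verb_reps)

    def forward(self, q_inter):                           # (B, N_q, C_in)
        o_verb = F.normalize(self.mlp(q_inter), dim=-1)   # Eq. (15)
        return o_verb @ self.E_v.t()                      # Eq. (16): S_v, (B, N_q, K_v)
```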
### Zero-shot HOI Enhancement
Finally, we introduce an HOI classifier generated from the prior knowledge in the CLIP text encoder, which provides a training-free enhancement for HOI classification.
Specifically, we build a zero-shot HOI classifier by exploiting the visual-linguistic alignment learned by CLIP, in which the label descriptions embedded by CLIP text encoder \(\mathrm{TextEnc}\) is used as the classifier weights. Similar to [28, 43], we convert each HOI category to a sentence with a hand-crafted template, "A photo of a person [Verb-ing] a [Object]". The templates are fed into the CLIP text encoder \(\mathrm{TextEnc}\) to obtain an HOI classifier \(E_{inter}\in R^{K_{h}\times D}\) where \(K_{h}\) is the number of HOI categories.
To leverage the zero-shot CLIP knowledge, we compute a set of additional HOI logits from the global visual feature of the image \(V_{g}\) and the HOI classifier \(E_{inter}\). To filter out low confidence prediction, we only keep top \(K\in[0,K_{h}]\) scores. Formally,
\[S_{zs}=\mathrm{TopK}(V_{g}E_{inter}^{\mathrm{T}}) \tag{17}\]
where \(\mathrm{TopK}\) is the operation that selects the HOI logits with the top \(K\) scores and \(S_{zs}^{i}\) indicates the score for the \(i^{th}\) HOI category. The updated \(S_{zs}\) is a training-free HOI prediction with high confidence, which leverages the zero-shot CLIP knowledge to benefit tail-class prediction.
Given the zero-shot HOI classifier \(E_{inter}\), we also use it to generate an interaction prediction score based on the interaction representation \(O_{inter}\) computed in Section 3.2,
\[S_{inter}=O_{inter}E_{inter}^{\mathrm{T}}, \tag{18}\]
which will be integrated with two other classification scores as described below.
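A small sketch of the training-free enhancement (Eq. 17) is given below; tensor shapes and the value of \(K\) are placeholders.

```python
import torch

def zero_shot_hoi_logits(v_g, e_inter, k=10):
    """v_g: (D,) global CLIP image feature; e_inter: (K_h, D) text-embedded HOI
       classifier, both assumed L2-normalised."""
    s = v_g @ e_inter.t()             # (K_h,) cosine similarities
    topk = torch.topk(s, k)
    s_zs = torch.zeros_like(s)
    s_zs[topk.indices] = topk.values  # keep only the top-K scores
    return s_zs
```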
### Inference and Training
In this subsection, we present the details of the training and inference pipeline of our framework.
**Training.** During training, we obtain the training HOI logits \(S_{t}\) by combining the HOI prediction \(S_{inter}\) and the verb prediction \(S_{v}\),
\[S_{t}=S_{inter}+\alpha\cdot S_{v} \tag{19}\]
where \(\alpha\in R\) is a weighting parameter. For the bipartite matching process, we follow previous HOI detectors [23, 28, 37, 49] based on the DETR framework and use the Hungarian algorithm to assign ground truth to predictions. The matching cost consists of human and object bounding box regression loss, object classification loss, intersection-over-union loss, and HOI classification loss. Auxiliary losses are used on intermediate outputs of decoder layers.
Figure 4: **Illustration of Visual Semantic Arithmetic** The object and HOI representations are extracted by encoding cropped regions of the object and HOI. Then, verb representation is obtained by HOI representation minus object representation.
**Inference.** The zero-shot HOI prediction \(S_{zs}\) is used at inference time. The final HOI logits \(S_{i}\) are obtained by,
\[S_{i}=S_{inter}+\alpha\cdot S_{v}+S_{zs} \tag{20}\]
Following previous methods [28], we use the object scores \(C_{o}\) from the instance decoder to compute the HOI triplet score, which can be written as
\[score^{n}=S_{i}^{n}+C_{o}^{m}\cdot C_{o}^{m} \tag{21}\]
where \(n\) is the HOI category index and \(m\) is the object category index corresponding to the \(n^{th}\) HOI category. Finally, triplet NMS is applied to the top-K HOI triplets according to the confidence score.
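The score fusion of Eq. (20) amounts to a simple weighted sum; a minimal sketch (with a placeholder value for \(\alpha\)) is:

```python
def fuse_hoi_logits(s_inter, s_v, s_zs, alpha=0.5):
    # final HOI logits S_i = S_inter + alpha * S_v + S_zs (Eq. 20);
    # triplet NMS over the resulting scores is applied as a separate step.
    return s_inter + alpha * s_v + s_zs
```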
## 4 Experiments
In this section, we introduce a series of experimental analyses and comprehensive ablation studies to demonstrate the effectiveness of our method.
### Experimental setting
**Datasets.** We conduct our experiments on two public benchmarks HICO-DET [6] and V-COCO [15]. HICO-Det contains 47,776 images (38,118 for training and 9,658 for testing). The annotations of HICO-Det conclude 600 categories of HOI triplets, which are formed from 80 object categories and 117 action categories. Among the 600 HOI categories, there are 138 categories with less than 10 training instances, defined as Rare, and the other 462 categories are defined as Non-Rare.
**Evaluation Metric.** We follow the settings of previous work [6, 28, 29, 37, 49] and use the mean Average Precision (mAP) as the evaluation metric. We define an HOI triplet prediction as a true-positive example if the following criteria are met: 1) the IoUs of the human bounding box and the object bounding box are both larger than 0.5 w.r.t. the GT bounding boxes; 2) the predicted interaction category is accurate.
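For illustration, the true-positive check behind this metric can be sketched as follows (box format and field names are assumptions, not the official evaluation code):

```python
def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred, gt):
    return (iou(pred["h_box"], gt["h_box"]) > 0.5 and
            iou(pred["o_box"], gt["o_box"]) > 0.5 and
            pred["hoi_class"] == gt["hoi_class"])
```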
**Zero-shot Construction.** Following previous work [1, 17, 18, 19, 28], we construct our zero-shot setting experiments in five manners: Rare First Unseen Combination (RF-UC), Non-rare First Unseen Combination (NF-UC), Unseen Verb (UV), Unseen Object (UO) and Unseen Combination (UC). Specifically, UC indicates that all action categories and object categories are included during training, but some HOI triplets (i.e. combinations) are missing. Under the RF-UC setting, the tail HOI categories are selected as unseen classes, while NF-UC uses head HOI categories as unseen classes. The UV setting and UO setting indicate that some action categories and object categories are not included in the training set, respectively. For RF-UC and NF-UC, we select 120 HOI categories as unseen classes. For UO, HOI categories involving 12 randomly selected objects among the 80 object categories are defined as unseen classes. For UV, HOI categories involving 20 randomly selected verb categories are not given during training. For UC, we follow the unseen combination setting in [2, 30, 34].
**Implementation Details.** For a fair comparison with previous methods [28, 29, 37, 49], we use ResNet-50 as our backbone feature extractor and ViT-32/B CLIP variant. The number of layers of the transformer encoder and transformer decoder is set to 3. The number of queries is set to 64. We train HOICLIP with a batch size of 16 with optimizer AdamW and a weight decay of \(10^{-4}\). The total number of training epochs is set to 90. For the first 60 epochs, we set the initial learning rate to be \(10^{-4}\) and use a learning rate drop for the last 30 epochs. The parameters are initialized with MS-COCO-trained DE
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Percentage & 100\% & 50\% & 25\% & 15\% & 5\% \\ \hline GEN-VLKT [28] & 33.75 & 26.55 & 22.14 & 20.40 & 15.84 \\ HOICLIP & **34.69** & **30.88** & **28.44** & **27.07** & **22.64** \\ Gain(\%) & 2.96 & 16.30 & 28.46 & 32.69 & 42.92 \\ \hline \multicolumn{5}{c}{Performance on All Categories} \\ \hline GEN-VLKT [28] & 29.25 & 18.94 & 14.04 & 13.84 & 13.31 \\ HOICLIP & **31.30** & **26.05** & **25.47** & **24.59** & **21.94** \\ Gain(\%) & 7.00 & 37.53 & 81.41 & 77.67 & 64.84 \\ \hline \multicolumn{5}{c}{Performance on Rare Categories} \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Fractional Data Experiments**.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Method & Type & Unseen & Seen & Full \\ \hline Shen et al. [34] & UC & 10.06 & 24.28 & 21.43 \\ Bansal et al. [2] & UC & 9.18 & 24.67 & 21.57 \\ ConsNet [30] & UC & 13.16 & 24.23 & 22.01 \\ HOICLIP & UC & **25.53** & **34.85** & **32.99** \\ \hline VCL [17] & RF-UC & 10.06 & 24.28 & 21.43 \\ ATL [18] & RF-UC & 9.18 & 24.67 & 21.57 \\ FCL [19] & RF-UC & 13.16 & 24.23 & 22.01 \\ GEN-VLKT [28] & RF-UC & 21.36 & 32.91 & 30.56 \\ HOICLIP\({}^{\dagger}\) & RF-UC & 23.48 & 34.47 & 32.26 \\ HOICLIP & RF-UC & **25.53** & **34.85** & **32.99** \\ \hline VCL [17] & NF-UC & 16.22 & 18.52 & 18.06 \\ ATL [18] & NF-UC & 18.25 & 18.78 & 18.67 \\ FCL [19] & NF-UC & 18.66 & 19.55 & 19.37 \\ GEN-VLKT [28] & NF-UC & 25.05 & 23.38 & 23.71 \\ HOICLIP\({}^{\dagger}\) & NF-UC & 25.71 & 27.18 & 26.88 \\ HOICLIP & NF-UC & **26.39** & **28.10** & **27.75** \\ \hline ATL\({}^{*}\)[18] & UO & 5.05 & 14.69 & 13.08 \\ FCL\({}^{*}\)[19] & UO & 0.00 & 13.71 & 11.43 \\ GEN-VLKT [28] & UO & 10.51 & 28.92 & 25.63 \\ HOICLIP\({}^{\dagger}\) & UO & 9.36 & 30.32 & 26.82 \\ HOICLIP & UO & **16.20** & **30.99** & **28.53** \\ \hline GEN-VLKT [28] & UV & 20.96 & 30.23 & 28.74 \\ HOICLIP\({}^{\dagger}\) & UV & 23.37 & 31.65 & 30.49 \\ HOICLIP & UV & **24.30** & **32.19** & **31.09** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Zero-shot performance comparison with state-of-the-art methods on HICO-DET.** We use RF-UC and NF-UC to represent rare first and non-rare first unseen combination settings respectively. UO is short for unseen object setting, and UV indicates unseen verb setting. \(*\) means only the detected boxes are used without object identity information from the detector. \(\dagger\) indicates HOICLIP without training-free enhancement.
classifiers are fixed during training. We conduct our experiments with a batch size of 8 on 2 NVIDIA A40 GPUs.
### Experiments on Partial Data
To verify the data efficiency of HOICLIP, we reduce the training data to 50%, 25%, 15% and 5% respectively, and compare our performance with state-of-the-art method [28] trained with those partial data settings. We show the performance on all HOI categories and rare HOI categories respectively in Table 1 where Visual Semantic Arithmetic is performed on the whole datasets. We provide the results of partial data Visual Semantic Arithmetic in Supplementary Materials. In each partial training data scenario, HOICLIP significantly outperforms its opponent, especially on the rare HOI categories. HOICLIP maintains a satisfactory and stable performance as the percentage of available data decreases while other methods suffer a dramatic degradation due to insufficient training data. We observe HOICLIP achieves a significant performance gain of more than 81.41% on rare categories under \(25\%\) fractional data scenario. The performance demonstrates the efficiency and robustness of our methods for knowledge transfer.
### Analysis under Regular Setting
To fully understand the characteristics of HOICLIP, we conduct experiments on various zero-shot settings, RF-UC, NF-UC, UV, and UO. We compare HOICLIP with several state-of-the-art methods. The results are shown in Table 2, which demonstrates the superior performance of HOICLIP. Compared with GEN-VLKT [28], we achieve an impressive +4.04 mAP gain under NF-UC settings for all categories and a +5.69 mAP improvement on rare categories under the UO setting.
In addition, to further demonstrate the effectiveness of HOICLIP, we compare the performance of HOICLIP with the state of the art on the default HOI detection setting. We show the performances on dataset HICO-DET in Table 3. HOICLIP achieves a competitive performance and especially a +1.87 mAP improvement in rare categories compared with GEN-VLKT. For dataset V-COCO, the results are presented in Table 4. We outperform previous methods in Scenario 1 with a new state-of-the-art performance of 63.5 AP. The improvement is less significant than that on HICO-DET given the relatively small scale of V-COCO.
**Verb Classifier Extraction.** We explore different methods to extract the verb representation in the following experiments. A naive alternative is to describe human actions using sentences, similar to the way used in CLIP. For example, we can describe the action _ride_ by a sentence: "a photo of a person riding". Having generated a representation for each HOI category using CLIP, we can also obtain the representation of a verb by taking the average of all HOI representations involving the verb. For all three approaches, we use the representations of all verbs to initialize the weights of the verb adapter. We show our experiment results in Table 6. We see that our visual semantic arithmetic strategy obtains a performance gain of 1.16 mAP compared with the sentence-describing method and 1.45 mAP compared with the HOI average method in the full classes setting.
**Hyperparameter Selection** In this part, we discuss the choice of hyperparameters for each module. Different from the previous work, we use a separate validation set, the details of which are provided in the supplementary material. We train our model with different \(S_{v}\) weights and find that the weight of \(0.5\) achieves the best performance. We ablate the choice of \(S_{v}\) weight \(\alpha\) and the results are shown in Table 7. We also choose the value of \(k\) used in training-free enhancement in the same way. We find that \(k=10\) achieves the best performance. All results are shown in Table 7.
**Model Efficiency Analysis.** We evaluate HOICLIP and the prior art GEN-VLKT on a single NVIDIA A100 GPU. The inference time of HOICLIP (**55.52** ms/img) is comparable to GEN-VLKT (**52.80** ms/img), and the small gap comes from the additional forward pass through the CLIP encoder. Given the performance improvement, such a trade-off seems worthwhile. The additional parameters in HOICLIP only lead to a slight increase in inference cost. Moreover, for model training, the number of trainable parameters in GEN-VLKT is 129.8M under its regular setting, as its CLIP module is fine-tuned, while our HOICLIP has 66.1M trainable parameters due to the fixed CLIP visual encoder.
### Visualization
We visualize the prediction results and attention maps to demonstrate the characteristics of our method in Figure 7. The attention maps are from the cross-attention module for knowledge integration in the interaction decoder. We observe that the attention map from the CLIP visual feature focuses on broader interaction-related regions, while the attention map from the detection backbone only emphasizes the object regions.
## 5 Conclusion
We have presented HOICLIP, a novel framework for transferring visual linguistic model knowledge to HOI detection. HOICLIP retrieves CLIP knowledge in a query-based manner and leverages the spatial visual feature for efficient and robust interaction representation learning. We also extract a verb classifier from the prior knowledge in CLIP through visual semantic arithmetic and introduce a verb adapter for deeper interaction understanding. To further improve the model's generalization ability, we adopt a training-free enhancement for HOI classification via the text feature of CLIP. Extensive experiments demonstrate superior performance in both fully-supervised and zero-shot scenarios with high data efficiency.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Full & Rare & Non-rare \\ \hline “A photo of person doing” & 33.38 & 29.67 & 34.49 \\ Average of HOI representation & 33.09 & 28.29 & 34.52 \\ \hline Visual semantic arithmetic & **34.54** & **30.50** & **35.75** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Verb Classifier Extraction.** The choice of approach to generating verb representation.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multicolumn{5}{c}{Choice of \(\alpha\)} & \multicolumn{5}{c}{Choice of Top \(K\)} \\ \hline Weight & Full & Rare & Non-rare & K & Full & Rare & Non-rare \\ \hline
0.00 & 31.16 & 20.96 & 34.12 & 0 & 31.65 & 22.92 & 34.18 \\
0.25 & 31.13 & 21.50 & 33.93 & 5 & 32.25 & 25.14 & 34.32 \\
**0.50** & **31.65** & **22.92** & **34.18** & **10** & **32.26** & **24.50** & **34.51** \\
0.75 & 30.88 & 22.82 & 33.22 & 15 & 32.00 & 23.79 & 34.38 \\
1.00 & 30.33 & 24.80 & 32.72 & 20 & 31.96 & 23.72 & 34.35 \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Hyperparameter Selection.** Experiments are conducted in the validation set to search for the best hyperparameters.
Figure 5: **Visualization of predictions.** The columns indicate the input image \((a)\), prediction results\((b)\), attention maps from detection backbone\((c)\) and CLIP spatial feature\((d)\) in interaction decoder. |
2306.10020 | The Life and Death of Software Ecosystems | Software ecosystems have gained a lot of attention in recent times. Industry
and developers gather around technologies and collaborate to their advancement;
when the boundaries of such an effort go beyond certain amount of projects, we
are witnessing the appearance of Free/Libre and Open Source Software (FLOSS)
ecosystems.
In this chapter, we explore two aspects that contribute to a healthy
ecosystem, related to the attraction (and detraction) and the death of
ecosystems. To function and survive, ecosystems need to attract people, get
them on-boarded and retain them. In Section One we explore possibilities with
provocative research questions for attracting and detracting contributors (and
users): the lifeblood of FLOSS ecosystems. Then in the Section Two, we focus on
the death of systems, exploring some presumed to be dead systems and their
state in the afterlife. | Raula Gaikovina Kula, Gregorio Robles | 2023-05-28T23:43:19Z | http://arxiv.org/abs/2306.10020v1 | # The Life and Death of Software Ecosystems
###### Abstract
Software ecosystems have gained a lot of attention in recent times. Industry and developers gather around technologies and collaborate to their advancement; when the boundaries of such an effort go beyond certain amount of projects, we are witnessing the appearance of Free/Libre and Open Source Software (FLOSS) ecosystems.
In this chapter, we explore two aspects that contribute to a _healthy_ ecosystem, related to the attraction (and detraction) and the death of ecosystems. To function and survive, ecosystems need to attract people, get them on-boarded and retain them. In Section One we explore possibilities with provocative research questions for attracting and detracting contributors (and users): the lifeblood of FLOSS ecosystems. Then in the Section Two, we focus on the death of systems, exploring some presumed to be dead systems and their state in the afterlife.
## 1 Attractors (and Detractors) to FLOSS Projects
A contributing component to the sustainability (i.e., life) of a FLOSS project is its ability to attract new development. Although keeping current contributors is equally important, projects risk failure if they are unable to attract a healthy amount of new developers to provide rejuvenation and aid in project evolution, especially in response to ever-changing external forces (i.e., impactful events, new technologies, vulnerabilities and rivals) that affect FLOSS projects. In this section, we discuss (1) the different forces of attraction (and detraction) that influence contributors to participate in specific projects, (2) the effect of these forces at the ecosystem level, and finally present (3) three provocative research questions to further our understanding of attracting contributors to a project.
### Forces of Attraction (and Detraction)
We classify known forces of attraction as either motivation-related, environmental, or a combination of the two. Internal project-driven campaigns usually revolve around marketing strategies to attract developers. A study by Storey et al. [8] showed that communities of FLOSS projects are shaped through social and communication channels (sometimes referred to as social coding). Recently Aniche et al. [3] confirmed that news channels also play an important role in shaping and sharing knowledge among developers. Hence, owners of projects could boost their social presence through participation on recent topics from news aggregators such as reddit1, Hacker News2 and slashdot3. For instance, a project may employ new or well-known or recognizable trademarks that are trending in the news. Social media outlets and other communication channels can be leveraged to improve project attractiveness (i.e., innovative posts on Q&A forums such as StackOverflow4 and social media endorsements and collaborations through twitter or facebook). Recently, analytical indicators of project health or fitness are aimed at increasing the appeal of a project. In detail, the emergence of online collaboration platforms GitHub, GitLab and BitBucket, with specific features such as pull requests, forks, and stars depicts the fitness of a project.
Footnote 1: [https://www.reddit.com](https://www.reddit.com)
Footnote 2: [https://news.ycombinator.com](https://news.ycombinator.com)
Footnote 3: [https://slashdot.org](https://slashdot.org)
Footnote 4: [https://stackoverflow.com](https://stackoverflow.com)
Other motivations are driven by external forces. Hata et al. [6] used game theory to identify three strategies that are likely to incite contributions. First, the authors suggest improving the code writing mechanisms (i.e., wikis, official webpage, contributing and coding guidelines, and using multi-language formats). Secondly, in terms of monetary incentives, sites such as the Bountysource website5 allow developers to be hired as bounty hunters to fix specialized bugs in a project. Finally, the impact of innovations such as social coding, introduced by online collaboration on GitHub, has attracted the attention of developers. A less explicit form of motivation is driven by a third party with its own interests. For instance, a company may allocate employees or provide monetary incentives to support (i.e., keep alive) a project of interest. This is especially the case where a third party is interested in stimulating further feature development of an existing product that they are invested in.
Footnote 5: [https://www.bountysource.com/](https://www.bountysource.com/)
Failing projects provide insights into some environmental forces that detract developers from making contributions. A study by Coelho and Valente [4] found the following reasons for failing projects: usurped by competitor, obsolete project, lack of time and interest, outdated technologies, low maintainability, conflicts among developers, legal problems, and acquisition. To mitigate these detractors, the authors propose three strategies to rejuvenate contributions in failing FLOSS projects. Firstly, projects are encouraged to improve their stability by _moving towards an organization account instead of a personal account_. Secondly, failing projects are encouraged to _transfer the project to new maintainers_. This is especially needed if the current maintainers' activity has been deteriorating over time. Finally, the project is encouraged to _accept new core developers_. This organizational factor aims to rejuvenate and ignite fresh ideas, giving new life to the project.
### Forces at the Ecosystem Level
To date, existing works have performed their analysis with respect to individual projects. At a higher level of abstraction, there exist cases where the forces of attraction (and detraction) in several projects in an ecosystem are triggered by a common event. For instance, several studies [2, 5, 7] investigated the eventful case of the JavaScript "left-pad" incident (see [1]), where the removal of a trivial library package caused major breakages in thousands of projects including
notable JavaScript frameworks like babel and react.
Other examples of impactful events at the ecosystem level include responses to wide-spreading high-risk security vulnerabilities (i.e., ShellShock, Heartbleed and Poodle), rivaling technologies (i.e., battles between competing frameworks for specific programming domains such as PHP6 and JavaScript7) and inadequacies in the current situation. As an example, current inadequacies could be realized when a change in management occurs (e.g., a change of the middle-man in InnerSource8). Changes in management (especially the movement of a single key contributor) may set off a chain of attraction and detraction forces that leaves behind a rippling effect across the ecosystem. We theorize that these forces impact ecosystem sustainability, especially if affected projects act as hubs within that ecosystem.
Footnote 6: A blog post on the best PHP frameworks of 2018 at [https://coderasye.com/best-php-frameworks-for-web-developers/](https://coderasye.com/best-php-frameworks-for-web-developers/)
Footnote 7: A blog post that shows the trend changes between rival JavaScript frameworks: [https://stackoverflow.blog/2018/01/11/brutal-lifecycle-javascript-frameworks/](https://stackoverflow.blog/2018/01/11/brutal-lifecycle-javascript-frameworks/)
Footnote 8: InnerSource takes the lessons learned from developing FLOSS and applies them to the way companies develop software internally. Taken from [https://paypal.github.io/InnerSourceCommons/](https://paypal.github.io/InnerSourceCommons/)
### Provocative Research Questions
To conclude this section, we formulate a set of provocative research questions to further our understanding of attraction and detraction forces:
* **What are the strengths and successes of known attractor strategies to FLOSS projects?** We have identified many attraction forces. Understanding the strength and success of these different attractors will assist us to treat projects that may be suffering with attracting new contributors to their projects.
* **How often are these attractor strategies practiced in the real-world and in respect to different ecosystems?** It is unknown to what extent and the frequency by which these strategies are practiced by practitioners in recent times. Furthermore, we are unclear of the environmental and ecosystem conditions required to sustain these attraction forces.
* **What are the implications and impact of these forces of attraction at the ecosystem level?** We theorize that attraction forces may impact the overall ecosystem. However, it is unclear the extent by which these forces of attraction may affect the sustainability of the overall ecosystem itself.
## 2 On the Death of Ecosystems
Software ecosystems have gained a lot of attention in recent times. Industry and developers gather around technologies and collaborate to their advancement; when the boundaries of such efforts go beyond certain amount of projects, we are witnessing the appearance of a software ecosystem. Software ecosystems are complex in nature, as many stakeholders are involved. There are for sure key people (e.g., Guido van Rossum in Python) and projects (such as MySQL in the MySQL ecosystem), but activity follows a decentralized pattern, more in the fashion of stigmergic process as known for instance from colonies of ants.
In this section we want to focus on the death of software ecosystems. While it is known that many FLOSS projects are discontinued, to the knowledge of the authors there is no research on this topic for software ecosystems. We define the death of a project as having no activity in it for a long period, as done in other research works. So, a dead software ecosystem would have no activity. It should be noted that other definitions of death could be proposed. One may think of having no users, a loss of interest in the software industry, a decrease in developers and
developer interest, etc.
On the other hand, we are not looking at individual projects, which are defined (i.e., they have a goal), concrete software solutions that have an organizational and logistic structure (a known website, repository, mailing list, etc.). Software ecosystems are built of many projects, which co-ordinate themselves (or not) but that have a relationship that is in general technological (although other types of ecosystems, such as the (entire) Apache ecosystem, orchestrate around collaboration).
### 1 Research Questions
The current research literature has so far focused mainly on successful FLOSS systems, to see how they are articulated and organized, in order to derive lessons learned from them. Our method will be exploratory and based on case studies. Specifically, we want to address the following RQs:
* **RQ\({}_{1}\). What do we know of dead ecosystems?** We want to approach our study based on real cases of ecosystems that were so in the past, but that are now inactive. So, as a first step, we performed an unstructured search for dead ecosystems, by asking participants in the workshop and then by looking in the web (mainly in the webpages of its projects and in Wikipedia) for more information. The output of this research question is a list of dead ecosystems on which the subsequent RQs will be addressed.
* **RQ\({}_{2}\). Why are these ecosystems dead?** Once we have identified dead ecosystems in RQ1, we would like to dig into the reasons why these have become inactive. In this regard, we would like to see if the cause of the inactivity can be technology (e.g., becoming an outdated technology), economic (e.g., failure of funding), legal (i.e., patent or license issues), among others. As input of information we will use Google searches on the Internet.
* **RQ\({}_{3}\). What can we learn from dead ecosystems?** Once we have identified dead ecosystems (RQ1) and have further information into what causes are behind its death (RQ2), our goal is to see if we can extract major insight into the topic. The final goal is, of course, to help software ecosystems to stay "healthy".
### 2 Findings
Based on the research questions in the prior section, in this section we discuss and present the findings for each research question.
#### RQ\({}_{1}\) What do we know of dead ecosystems?
During the seminar in Shonan, participants were asked informally regarding open ecosystems that have been discontinued. After much discussion, as shown in Table 1, the following dead systems arose from the discussions.
#### Concurrent Versioning System (CVS)
CVS is a version control system, an important component of Source Configuration Management (SCM)9. Using it, you can record the history of source files and documents. The last version of CVS was published in 2008 (see
#### FirefoxOS
Firefox OS was a mobile operating system, based on HTML5 and the Linux kernel, available for several platforms. It was developed by Mozilla Corporation under the support of other companies and a large community of volunteers from around the world. The operating system was designed to allow HTML5 applications to communicate directly with device hardware using JavaScript and Open Web APIs10.
Footnote 10: although an official website is not found, the blog of one of the key engineers is an example of its existence [https://medium.com/@bfrancis/the-story-of-firfox-on-ch5bf796&fb](https://medium.com/@bfrancis/the-story-of-firfox-on-ch5bf796&fb)
In December 2015, Mozilla announced it would stop development of new Firefox OS smartphones, and in September 2016 announced the end of development.
#### Apache Geronimo
Apache Geronimo11 is a FLOSS application server developed by the Apache Software Foundation and distributed under the Apache license. On May 14, 2013, IBM announced that it would withdraw and discontinue support of Apache Geronimo (see [http://www-01.ibm.com/common/ssi/rep_ca/1/897/ENUS913-081/ENUS913-081.PDF](http://www-01.ibm.com/common/ssi/rep_ca/1/897/ENUS913-081/ENUS913-081.PDF)). This was also communicated through their website and mailing lists.
Footnote 11: website available at [http://gorenimo.apache.org/](http://gorenimo.apache.org/)
#### Maemo
Maemo12 is a development platform for handheld devices based on Debian GNU/Linux. Maemo is mostly based on open-source code and has been developed by Maemo Devices within Nokia in collaboration with many FLOSS projects such as the Linux kernel, Debian, and GNOME.
Footnote 12: website available at [http://mamo.org/intro/](http://mamo.org/intro/)
At the Mobile World Congress 2010, Intel and Nokia announced that they would unite their Linux-based platforms into a single product, called MeeGo. The Linux Foundation canceled MeeGo in September 2011 in favor of Tizen. An emerging Finnish company, Jolla, took Mer, a community-driven successor of MeeGo, and created a new operating system, Sailfish OS, launching a new smartphone at the end of 2013.
#### RQ\({}_{2}\) Why are these ecosystems dead?
We have investigated what happened to the projects presented in RQ\({}_{1}\), to see if there is any continuation. In this regard, we investigate whether or not the original project is still alive, and if there have been any forks (i.e., whether others have taken the source code base and evolved the software independently). As shown in Table 2, new projects emerged in the aftermath of the dying ecosystems.

| System Name | Brief Description | Discontinued Date |
| --- | --- | --- |
| Concurrent Versioning System (CVS) | version control | May, 2008 |
| FireFoxOS | mobile operating system | Dec, 2015 |
| Apache Geronimo | application server | May, 2013 |
| Maemo | mobile development platform | Feb, 2010 |

TABLE I: Summary of the studied dead ecosystems
### CVS
Although the CVS project was discontinued, we find that, due to the development of the Microsoft Windows, Linux, Solaris, HP-UX, i5/OS and Mac OS X ports, CVS has split off into a separate project named CVSNT13, which is under current, active development (i.e., the latest update as of writing was April 2018).
Footnote 13: website available at [https://www.march-harc.com/cvspro/](https://www.march-harc.com/cvspro/)
### FirefoxOS
After the discontinuation of Firefox OS, several variants of the OS have emerged. Panasonic will continue to develop the operating system for use in its Smart TVs, which run My Home Screen, powered by Firefox OS. Acadine Technologies has derived its H5OS from Firefox OS as well; Li Gong, the founder of the company, had overseen the development of Firefox OS while serving as president of the Mozilla Corporation. The Alcatel OneTouch GO FLIP uses a fork called KaiOS14. In addition, in July 2017 it was reported that the Indian telecom operator Jio would be launching a new feature phone with an OS derived from Firefox OS, with apps written purely in HTML5 and CSS.
Footnote 14: website at [https://www.kaiostech.com/](https://www.kaiostech.com/)
### Apache Geronimo
The development of Apache Geronimo ceased around 2013, after its 3.0.1 release, when IBM and Oracle stopped supporting the project in favor of their own technologies. Geronimo is not a single technology but the sum of many components, like Apache Tomcat15, Apache EJB16, Apache Derby17, among others. Many of these components are used as implementation components in other frameworks, as can be seen from [http://arjan-tijms.omnifaces.org/2014/05/imple](http://arjan-tijms.omnifaces.org/2014/05/imple).
Footnote 15: website at [http://tomea.apache.org/tomcat-ejb.html](http://tomea.apache.org/tomcat-ejb.html)
### Maemo
In February 2010, the Maemo project from Nokia merged with Moblin to create the MeeGo mobile software platform under the umbrella of the Linux Foundation. However, the Maemo community continued to be active around Maemo. For that reason, Nokia transferred the ownership of Maemo first to the Hildon Foundation, and then to a German association called Maemo Community e.V. The last general assembly of this association was held in October 2017.
| System Name | Example Emergent Projects |
| --- | --- |
| Concurrent Versioning System (CVS) | CVSNT |
| FireFoxOS | Panasonic variant, H5OS, KaiOS, Jio |
| Apache Geronimo | Tomcat, EJB, Derby |
| Maemo | MeeGo, Tizen, Mer |

Table 2: Emergent Projects after the death of the ecosystem
MeeGo18 was cancelled in September 2011, although a community-driven successor called Mer19 was formed thereafter. A Finnish start-up, Jolla, chose Mer in 2013 as the basis of the Sailfish OS operating system for their Jolla Phone smartphones. Another Mer derivative, called Nemo Mobile, is also currently under active development.
Footnote 18: A variant of MeeGo is Tizen [https://www.tizen.org/](https://www.tizen.org/)
Footnote 19: website as [http://www.nezproject.org/](http://www.nezproject.org/)
#### RQ\({}_{3}\) What can we learn from dead ecosystems?
There is little to learn from dead ecosystems, because software ecosystems, at least those that are FLOSS, don't die! In our quest for dead ecosystems, what we have found is that ecosystems that have been abandoned have evolved (if not completely, at least partially) under another name. This means that organizations and names may disappear, but the technology can be found years later in other projects and developments. There are two main factors that may concur to explain this situation:
1. **Forks originating from the dead ecosystem.** The first one is the right to fork that exists (and is used) in FLOSS development. Although forking (i.e., splitting the community by taking the technology under a new name) is historically not welcome in the FLOSS community, it is understood in certain contexts. One of these situations is when the project is abandoned.
2. **Technological advancements.** The second one is related to the development of technologies, which requires time and much human labor, and is maintenance intensive. A software system is not only its development and its community; it is also the testing and maturity it has achieved. Successful FLOSS ecosystems have invested a large amount of effort in becoming mature. Even if their key players lose interest in the technology and the community seems to shrink, there is always the source code, which is the result of that effort. In addition, the investment in time and learning required by other technologies results in inertia among those who are familiar with the existing technology. In ecosystems with a large community, the probability that even a minor part of this community remains interested in continuing development is very high.
### 3 Conclusions
FLOSS ecosystems are still too young to draw definitive conclusions from our investigation, but as far as we have analyzed, we have not found any (well-known) FLOSS ecosystem that can be considered dead (i.e., completely abandoned). For one reason or another, the original software has evolved into other systems and communities and is still in use, even if the importance of the project is not what it used to be.
A lesson learned from our analysis is that if organizations want sustainability for a technology or application, they should strive for the ecosystem approach. This is a lesson that could be of interest for consortia, public bodies, and companies wanting to set a standard. One of the network effects of developing a long-lasting software ecosystem is the high probability that at least a small portion of the community keeps it alive. We have seen that this is the case from outdated technologies (like CVS) to hardware-linked software (such as Maemo).
As there is growing interest from corporations in FLOSS, as can be found in OpenStack, OW2, and WebKit, among others, we are sure that the future will provide further examples of ecosystems and allow us to analyse how they evolve, even when their main promoters abandon them. |
2304.10881 | Hear Me Out: A Study on the Use of the Voice Modality for Crowdsourced
Relevance Assessments | The creation of relevance assessments by human assessors (often nowadays
crowdworkers) is a vital step when building IR test collections. Prior works
have investigated assessor quality & behaviour, though into the impact of a
document's presentation modality on assessor efficiency and effectiveness.
Given the rise of voice-based interfaces, we investigate whether it is feasible
for assessors to judge the relevance of text documents via a voice-based
interface. We ran a user study (n = 49) on a crowdsourcing platform where
participants judged the relevance of short and long documents sampled from the
TREC Deep Learning corpus-presented to them either in the text or voice
modality. We found that: (i) participants are equally accurate in their
judgements across both the text and voice modality; (ii) with increased
document length it takes participants significantly longer (for documents of
length > 120 words it takes almost twice as much time) to make relevance
judgements in the voice condition; and (iii) the ability of assessors to ignore
stimuli that are not relevant (i.e., inhibition) impacts the assessment quality
in the voice modality-assessors with higher inhibition are significantly more
accurate than those with lower inhibition. Our results indicate that we can
reliably leverage the voice modality as a means to effectively collect
relevance labels from crowdworkers. | Nirmal Roy, Agathe Balayn, David Maxwell, Claudia Hauff | 2023-04-21T10:50:44Z | http://arxiv.org/abs/2304.10881v1 | # Hear Me Out: A Study on the Use of the Voice Modality for Crowdsourced Relevance Assessments
###### Abstract.
The creation of relevance assessments by human assessors (often nowadays crowdworkers) is a vital step when building IR test collections. Prior works have investigated assessor quality & behaviour, and tooling to support assessors in their task. We have few insights though into the impact of a document's _presentation modality_ on assessor efficiency and effectiveness. Given the rise of voice-based interfaces, we investigate whether it is feasible for assessors to judge the relevance of text documents via a voice-based interface. We ran a user study (\(n=49\)) on a crowdsourcing platform where participants judged the relevance of short and long documents--sampled from the _TREC Deep Learning_ corpus--presented to them either in the text or voice modality. We found that: _(i)_ participants are _equally_ accurate in their judgements across both the text and voice modality; _(ii)_ with increased document length it takes participants significantly longer (for documents of length \(>120\) words it takes almost twice as much time) to make relevance judgements in the voice condition; and _(iii)_ the ability of assessors to ignore stimuli that are not relevant (i.e., _inhibition_) impacts the assessment quality in the voice modality--assessors with higher inhibition are significantly more accurate than those with lower inhibition. Our results indicate that we can reliably leverage the voice modality as a means to effectively collect relevance labels from crowdworkers.
Relevance Assessment; Cognitive Ability; Crowdsourcing
Footnote †: This research has been supported by _NWO UDID project SearchX_ (639.022.722) and _NWO_ project _Appsala_ (05.013.027).
[http://dx.doi.org/10.1145/359618.3591694](http://dx.doi.org/10.1145/359618.3591694)
With the voice modality, it is harder to go back and forth over a piece of information as compared to reading it on-screen (Konstan et al., 2016; Sohn et al., 2017; Sohn et al., 2018). Voice interfaces also demand a greater cognitive load than text interfaces for processing information (Konstan et al., 2016; Sohn et al., 2018; Sohn et al., 2018). These effects are exacerbated as the amount of information to be conveyed increases in size (Sohn et al., 2018; Sohn et al., 2018). Understanding how these factors affect the relevance judgement process can help us design tasks for assessors with a wide range of abilities and for different document presentation modalities. While there exist various measures of cognitive ability, we selected two--_working memory_ (someone's ability to hold information in short-term memory) (Konstan et al., 2016) and _inhibition_ (someone's ability to ignore or inhibit attention to stimuli that are not relevant) (Konstan et al., 2016)--which have been shown to play an important role in speech understanding (Konstan et al., 2016; Sohn et al., 2018; Sohn et al., 2018). We posit that they will also be crucial in the relevance judgement process, especially when documents are presented in the voice modality. Taken together, we investigate the following research questions.
1. _How does the modality of document presentation (text vs. voice) affect an assessor's relevance judgement in terms of accuracy, time taken, and perceived workload?_
2. _How does the length of documents affect assessors' ability to judge relevance?_ Specifically, we look into the main effect of document length and the effect of its interplay with presentation modality.
3. _How do the cognitive abilities of an assessor_ (with respect to their working memory and inhibition) _affect their ability to judge relevance?_ Specifically, we look into the main effect of the cognitive abilities and the effect of their interplay with the presentation modality.
To answer these questions, we conducted a quantitative user study (\(n=49\)) on the crowdsourcing platform Prolific. Participants judged the relevance of 40 short and long documents sampled from the passage retrieval task data of the 2019 & 2020 _TREC Deep Learning (DL) track_(Konstan et al., 2016; Sohn et al., 2018). Our findings are summarised as follows.
* Participants judging documents presented in the voice modality were _equally_ accurate as those judging them in the text modality.
* As documents got longer, participants judging documents in voice modality took significantly longer than those in text modality. For documents of length greater than 120 words, the former took twice as much time with less reliable judgements.
* We also found that inhibition--or a participant's individual ability to ignore or inhibit attention to stimuli that are not relevant--impacts relevance judgements in voice modality. Indeed, those with higher inhibition were significantly more accurate than their lower inhibition counterparts.
Overall, our results indicate that we _can_ leverage the voice modality to effectively collect relevance labels from crowdworkers.
## 2. Related Work
### Relevance Judgement Collection
The general approach for gathering relevance assessments for large document corpora (large enough that a full judgement of all corpus documents is not possible) was established by TREC in the early 1990s (Konstan et al., 2016). Given a set of information needs, a pooled set of documents based on the top-\(k\) results of (ideally) a wide range of retrieval runs are assessed by topic experts. This method is typically costly and does not scale up (Brandt et al., 2016) once the number of information needs or \(k\) increases. In the last decade, creating test collections using crowdsourcing via platforms like Prolific or _Amazon Mechanical Turk (AMT)_ have been shown to be a less costly yet reliable alternative (Brandt et al., 2016; Sohn et al., 2018; Sohn et al., 2018). While the potential of crowdsourcing for more efficient relevance assessment has been acknowledged, concerns have been raised regarding its quality--as workers might be too inexperienced, lack the necessary topical expertise, or be paid an insufficient salary. In turn, these issues may lead them to completing the tasks to a low standard (Sohn et al., 2018; Sohn et al., 2018; Sohn et al., 2018). Aggregation methods (e.g., majority voting) can be used as effective countermeasures to improve the reliability of judgements (Sohn et al., 2018; Sohn et al., 2018).
There are a number of factors that have been shown to affect the relevance judgement process. Scholer et al. (Scholer et al., 2018) observed that participants exposed to non-relevant documents at the start of a judgement session assigned higher overall relevance scores to documents than when compared to those exposed to relevant documents. Damessie et al. (Damesie et al., 2018) found that for easier topics, assessors processed documents more quickly, and spent less time overall. Document length was also shown to be an important factor for judgement reliability. Hagerty (Hagerty, 2018) found that the precision and recall of abstracts judged increased as the abstract lengths increased (30, 60, and 300 words). In a similar vein, Singhal et al. (Singhal et al., 2018) observed that the likelihood of a document being judged relevant by an assessor increased with the document length. Chandar et al. (Chandar et al., 2018) found that shorter documents that are easier to understand provoked higher disagreement, and that there was a weak relationship between document length and disagreement between the assessor. In terms of time spent for relevance judgement, Konstan et al. (Konstan et al., 2016) and Shinoda (Shinoda, 2018) asserted that there is no significant correlation between time and document length. On the other hand, Smucker et al. (Smucker et al., 2018) found participants took more time to read, as document length increased (from \(\sim\)10s for 100 words, to \(\sim\)25s for 1000 words).
### Voice Modality
Voice-based crowdsourcing has been shown to be more accessible for people with visual impairments (Konstan et al., 2016; Sohn et al., 2018), or those from low resource backgrounds (Shinoda et al., 2018). It can also provide greater flexibility to crowdworkers by allowing them to work in brief sessions, enabling multitasking, reducing effort required to initiate tasks, and being reliable (Sohn et al., 2018; Sohn et al., 2018). However, information processing via voice is inherently different compared to when it is presented as text. The use of voice has been often shown to lead to a higher cognitive load (Sohn et al., 2018; Sohn et al., 2018). Individuals also exhibit different preferences. For example, Trippas et al. (Trippas et al., 2018) observed that participants preferred longer summaries for text presentation. For voice however, shortened summaries were preferred when the queries were single-faceted. Although their study did not measure the accuracy of judgements against a ground truth, what participants considered the most relevant was similar across both conditions (text vs. voice presentation). Furthermore, the voice modality can leverage its own unique characteristics for information presentation. For instance, Chuklin et al. (Chuklin et al., 2018) varied the prosody features (pauses, speech rate, pitch) of sentences containing answers to factoid questions. They found that emphasising the answer phrase with a lower speaking rate and higher pitch increased the perceived level of information conveyed.
Concerning the collection of relevance assessments, Tombros and Crestani (Tombros and Crestani, 2017) found in their lab study that participants were more accurate and faster in judging relevance when the list of documents (with respect to a query) were presented as text on screen as compared to when they were read out to the participants--either in person, or via telephone. It should however be noted that this work was conducted more than two decades ago--barely ten years after the invention of the Web, when the now common voice assistants and voice-enabled devices were long to be developed.
The work closest to ours is the study by Vtyurina et al. (Vtyurina et al., 2017), who presented crowdworkers with five results of different ranks from _Google_--either in text or voice modality. They asked their participants to select the two most useful results and the least useful one. They observed that the relevance judgements of participants in the text condition were significantly more consistent with the true ranking of the results than those who were presented with five audio snippets. The ability to identify the most relevant result was however _not_ different between the experimental cohorts. This study did not consider the effect of document length or cognitive abilities of participants on their relevance judgement performance, which is what we explore.
### Cognitive Abilities
Prior works have explored how the cognitive abilities of assessors impact relevance judgements. Davidson (Davidson, 2018) observed that openness to information--measured by a number of cognitive style variables such as open-mindedness, rigidity, and locus of control--accounted for approximately 30% of the variance in relevance assessments. Scholer et al. (Scholer et al., 2018) found that assessors with a higher need for cognition (i.e., a predisposition to enjoy cognitively demanding activities) had higher agreement with _expert_ assessors, and took longer to judge compared to their lower need for cognition counterparts. Our work focuses on _working memory_ and _inhibition_.
_Working Memory (WM)_ refers to _an individual's capacity for keeping information in short-term memory even when it is no longer perceptually present_(Scholer et al., 2018). This ability plays a role in higher-level tasks, such as reading comprehension (Scho et al., 2018) and problem solving (Vtyurina et al., 2017). MacFarlane et al. (MacFarlane et al., 2018) observed that participants with dyslexia--a learning disorder characterised by low working memory--judged fewer text documents as non-relevant when compared to participants without the learning disorder. They posited that it might be cognitively more demanding to identify text documents as non-relevant for the cohort with dyslexia. With regards to processing speech, High **WM** has also been shown to be helpful in adapting to distortion of speech signals caused by background noise (Scholer et al., 2018). Rudner et al. (Rudner et al., 2018) and Stenback (Stenback, 2018) observed high **WM** individuals perceived less effort while recognising speech from noise.
_Inhibition (IN)_ refers to the capacity to regulate attention, behaviour, thoughts, and/or emotions by overriding internal impulses or external _'lures'_--and maintaining focus on what is appropriate or needed (Scholer et al., 2018). To our knowledge, prior studies have not investigated the effect of **IN** on the relevance assessment process. High **IN** has been shown to help in speech recognition, especially in adverse conditions like the presence of background noise (Stenback, 2018; Scholer et al., 2018).
A significant number of prior works have explored various aspects related to the process of relevance assessment. This work however considers the novel effect of document length and the cognitive abilities of assessors to explore the utility of the voice modality with regards to judging relevance.
## 3. Methodology
To address our three research questions outlined in §1, we conducted a crowdsourced user study. The study participants were asked to judge the relevance of _Query/Passage (Q/P)_ pairings, where passages were presented either in the form of text (i.e., a piece of text) or voice (i.e., an audio clip). In our study, passage **presentation modality** is a _between-subjects_ variable. We also controlled the **length of passages**; this is a _within-subjects_ variable to ensure that participants judged passages of varying lengths. The _independent variables_ **working memory** and **inhibition** allow us to estimate the impact of the cognitive abilities of the participants on the accuracy of their judgements, time taken and perceived workload.
### Study Overview
Figure 1 presents an overview of the user study design.1 The diagram highlights the main tasks that study participants undertook. Lasting approximately 32 minutes for text and 40 minutes for voice, the study consisted of four main parts: (_i_) the _pre-task survey_ (§3.6); (_ii_) the _cognitive ability tests_ (§3.3); (_iii_) the _judgements_ (§3.4); and (_iv_) the _post-task survey_ (§3.5).
Footnote 1: Note that circles refer to superimposed labels on the illustration in Figure 1.
After agreeing to the terms of the study, participants completed a pre-task survey (§3.7). This survey included demographics questions, including questions about their familiarity with voice assistants--as reported in §3.6. Participants would then move onto two _psychometric tests_; as outlined in §3.3, these tests measured their cognitive abilities with respect to working memory (§3.7) and inhibition (§3.7). Participants undertook a short practice task to help them familiarise themselves with the interface for each test.
After the psychometric tests, participants moved to the main part of the study: judging Q/P pairings (§3.7). The experimental system first assigned the participants to either text or voice randomly (§3.4). Based on the assigned condition, participants then judged a total of 42 Q/P pairings presented to them in a random order to mitigate the effect of topic ordering (Scholer et al., 2018; Scholer et al., 2018) (§3.2)--40 were selected from the _2019 and 2020 TREC Deep Learning (DL) track_, and the remaining two acted as a _sanity check_ (**SC**)1. The 40 passages belonged to different _answer length buckets_ (§3.2). Finally, the participants would be taken to the post-task survey.
Figure 1. A high-level overview of the user study protocol, including approximate times for participants to complete each component. Refer to §3.1 for mappings to the letters highlighting key aspects of the study procedure.
Footnote 1: The sanity check questions were: (_i_) _Who was the lead vocalist of Queen?_, with the answer passage being perfectly relevant; and (_ii_) _What is the difference between powerlifting and weightlifting?_, with the answer passage being non-relevant.
### Query/Passage Pairings
As mentioned, we obtained the Q/P pairings from the 2019 and 2020 TREC DL track--specifically the passage retrieval task (Hid
letter sequence in correct order) in short-term memory when it is no longer perceptually present.
_Inhibition._ To measure inhibition, we used the _Stroop test_ which was first introduced in 1935 (S
omitted the _'physical demand'_ question from the survey as it was not relevant to our task.7 Participants responded to the five NASA TLX questions using a seven-point scale with labelled endpoints (from _"poor"_ to _"good"_ for performance and from _"low"_ to _"high"_ for the remaining four).
Footnote 7: This was also done in prior studies, such as the study reported by Vyurimé et al. (Vyurimé et al., 2018)
_Measuring Participant Performance._ We also computed the _accuracy_ of our participants in the relevance judgement tasks. Accuracy was calculated in terms of how many Q/P pairs participants judged _correctly_--that is, their relevance judgement matching the ground truth from the QRELs. We also aggregated relevance judgements of participants on each Q/P pairing based on majority voting, as done by Kutlu et al. (Kutlu et al., 2019), to observe if collective judgements are more accurate. We used Krippendorff's alpha (\(\alpha\)) to measure inter-annotator agreement (as used by Damesise et al. (Damesise et al., 2018)). Lastly, we calculated Cohen's kappa (\(\kappa\)) (Cohn et al., 2010; Damesise et al., 2011; Damesise et al., 2011), which measures the agreement of judgements with the ground truth while accounting for chance. A minimal sketch of how such metrics can be computed is shown below.
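As an illustration only (not the authors' released code), the sketch below shows how these quantities could be computed from a hypothetical judgement matrix; the data layout, the example labels, and the use of scikit-learn's `cohen_kappa_score` are our own assumptions.

```python
import numpy as np
from collections import Counter
from sklearn.metrics import cohen_kappa_score

# Hypothetical layout: judgements[i, j] is participant i's label for Q/P pair j;
# truth[j] is the QREL ground truth (e.g. 0 = non-relevant, 1 = somewhat, 2 = relevant).
judgements = np.array([[2, 0, 1, 2],
                       [2, 1, 1, 0],
                       [2, 0, 2, 2]])
truth = np.array([2, 0, 1, 2])

# Per-participant accuracy: fraction of pairs whose label matches the QRELs.
per_participant_acc = (judgements == truth).mean(axis=1)

# Majority-vote label per Q/P pair (ties broken by Counter's internal ordering).
majority = np.array([Counter(col).most_common(1)[0][0] for col in judgements.T])
majority_acc = (majority == truth).mean()

# Chance-corrected agreement of the aggregated labels with the ground truth.
kappa = cohen_kappa_score(majority, truth)

print(per_participant_acc, majority_acc, kappa)
```

Krippendorff's \(\alpha\) for inter-annotator agreement would typically be computed with a dedicated implementation (e.g., the `krippendorff` package on PyPI) and is omitted from this sketch.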
### Participant Demographics
We conducted an _a-priori_ power analysis using _G-power_ (Kutlu et al., 2019) to determine the minimum sample size required to test our **RQs**. The results indicated that the required sample size--to achieve 95% power for detecting an effect of 0.25, with two groups (modality) and five measurements (passage length)--is 46. As such, we recruited 50 participants from the Prolific platform. We disqualified one participant as they failed to correctly judge our sanity check query/passage pairs (§3.2). Our \(n=49\) (25 for text, 24 for voice) participants were native English speakers, with a 98% approval rate on the platform, a minimum of 250 prior successful task submissions, and self-declared as having no issues in seeing colour. Participants were required to use a desktop/laptop device in order to control for variables that might affect results of the Stroop and OSPAN tests on other (smaller) devices. Of our participants, 22 identified as female, 24 as male, with 3 declining to disclose this information. The mean age of our participants was 38 (_min. 22, max. 69_). With respect to the highest completed education level, 28 possessed a Bachelors (or equivalent), nine had a Masters (or equivalent), ten had a high school degree, and two had a PhD (or equivalent). We also asked participants how often they used a smart speaker to search for information and listen to the provided answer--to which 13 reported daily usage, 20 reported usage on a weekly basis, and 16 said never. Participants were paid at a rate of GBP 11/hour, a value that is greater than the 2022-2023 _[outside London]_ UK Real Living Wage.
## 4. Results and Discussion
This section presents the results of our experiments pertaining to our three **RQs**. First, we provide details on the statistical tests we conducted, and how we utilised the cognitive ability tests to divide participants into _low_- and _high-ability_ groups.
_Statistical Tests._ For our analyses8, we conducted a series of independent sample _t_-tests with Bonferroni correction (\(\alpha=0.05\)) to observe if the modality of presentation has a significant effect on our dependent variables--accuracy of relevance judgements, the time taken to judge, and the perceived workload (**RQ1**). We also conducted a series of mixed factorial ANOVA tests (where modality of presentation is a _between-subjects_ variable, and passage length is a _within-subjects_ variable) to observe if presentation modality, passage length, or the interaction between them have a significant effect on accuracy of relevance judgement and time taken (**RQ2**). Lastly, we conducted a series of three-way ANOVA tests to observe if the two user dispositions--working memory and inhibition--or their interaction with modality of presentation have a significant effect on the three dependent variables (**RQ3**). For **RQ2** and **RQ3**, we followed up the ANOVA with pairwise Tukey tests with Bonferroni correction (\(\alpha=0.05\)) to observe where significant differences lay. In the case where no significant difference was observed between the two conditions, we used equivalence testing between conditions through the _two one-sided t-tests (TOST)_ procedure; a minimal sketch of this computation is given below. The upper and lower bounds for the TOST were set at 7.5% (-\(\Delta\)L = \(\Delta\)U = 7.5) for accuracy, as Xu et al. (Xu et al., 2018) observed that LtR models were robust to errors of up to 10% in the dataset (we used 7.5% for _conservativeness_). For each scale of NASA-TLX, we set -\(\Delta\)L = \(\Delta\)U = 2.04, following Lee et al. (Xu et al., 2018), who used a bound of \(\pm\)18 on a 100-point NASA TLX. For our seven-point scale, it translates to \(\pm\)2.08 according to the formula of Hertzum (Hertzum, 2018).
Footnote 8: All data and code pertaining to our analyses are released.
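The following is a minimal, self-contained sketch of the TOST equivalence test for two independent samples, using pooled-variance one-sided t-tests; the placeholder data and the hand-rolled implementation are our own (the released analysis code may instead rely on an off-the-shelf routine, e.g. from statsmodels).

```python
import numpy as np
from scipy import stats

def tost_ind(x1, x2, low, upp):
    """Two one-sided t-tests (TOST) for equivalence of two independent means.

    Equivalence (at level alpha) is declared when max(p_lower, p_upper) < alpha,
    i.e. the mean difference is shown to lie inside the interval (low, upp).
    """
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    diff = x1.mean() - x2.mean()
    # Pooled standard error, as in a standard independent-samples t-test.
    sp2 = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    df = n1 + n2 - 2
    p_lower = stats.t.sf((diff - low) / se, df)   # H1: diff > low
    p_upper = stats.t.cdf((diff - upp) / se, df)  # H1: diff < upp
    return diff, max(p_lower, p_upper)

# Placeholder accuracy scores (%) for the two cohorts; equivalence bounds of +/- 7.5 points.
rng = np.random.default_rng(0)
text_acc = rng.normal(68.4, 9.15, 25)
voice_acc = rng.normal(65.9, 8.56, 24)
print(tost_ind(text_acc, voice_acc, low=-7.5, upp=7.5))
```

Passing the per-cohort accuracy arrays with bounds of \(\pm\)7.5 percentage points mirrors the structure of the accuracy comparison described above; the NASA-TLX scales would use the narrower bounds instead.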
Figure 2. Composite screenshot of both the text and voice interfaces used by participants for judging query-passage pairs. Circled numbers correspond to those referenced in the narrative in §3.4.
_Cognitive Ability Scores and High vs. Low Ability Groups._ To examine the effect of a participant's cognitive abilities on relevance judgement accuracy (**RQ3**), we performed a median split of the scores obtained by the participants in the OSPAN (_min._ 0, _max._ 50, _mean_ \(=25.4(\pm 12)\), _median_ \(=22\)) and Stroop (_min._ \(=-300\), _max._ \(=650\), _mean_ \(=171.25(\pm 184)\), _median_ \(=170\)) tests, respectively. The mean scores of our participants for working memory and inhibition were within one standard deviation of the reference mean scores reported in (Borda et al., 2017), validating our methodology. Participants were thus divided into a high- and a low-ability group for each of working memory (based on OSPAN test scores) and inhibition (based on Stroop test scores). Note that for inhibition, a low test score indicates high ability. Prior studies have also analysed the effects of different cognitive abilities by dividing participants into low/high ability groups using a median split (Borda et al., 2017; Borda et al., 2017; Borda et al., 2018; Borda et al., 2019). A minimal sketch of this grouping step is shown below.
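The grouping step can be sketched as follows; the column names, example scores, and pandas-based implementation are our own assumptions rather than the study's actual code.

```python
import pandas as pd

# Hypothetical per-participant scores: a higher OSPAN score means better working
# memory, while a LOWER Stroop interference score means BETTER inhibition.
df = pd.DataFrame({
    "ospan":  [12, 22, 31, 45, 18, 40],
    "stroop": [320, 170, -50, 210, 90, 400],
})

# Median split into low/high ability groups (scores exactly at the median fall into 'low').
df["wm_group"] = (df["ospan"] > df["ospan"].median()).map({True: "high", False: "low"})
df["in_group"] = (df["stroop"] < df["stroop"].median()).map({True: "high", False: "low"})

print(df)
```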
### RQ1: Modality of Passage Presentation
Table 3 presents the main results for **RQ1**. There was no significant difference in judgement accuracy (row **I**, Table 3) between participants in text and those in voice (\(t(47)=0.97,p=0.33\)). TOST revealed that accuracy of judgements across both conditions were _equivalent_ (\(p=0.02\)). The inter-annotator agreement (\(\alpha\)) was slightly higher in text. When using majority voting to aggregate relevance judgements (on average we had eight judgements per Q/P pair in each condition), we found that the accuracy increased from 68% and 66% to 79% and 76% respectively for text and voice (**II**, Table 3). This observation is in line with prior work (Shen et al., 2017), which shows that aggregating judgements from several assessors is more reliable than a single untrained assessor. Cohen's \(\kappa\) also increased with majority voting for both experimental conditions, indicating an increase in judgement reliability. Participants also showed similar trends of relevance judgement accuracy per relevance label category for both experimental conditions. As shown in Figure 3, participants in both conditions were most accurate in judging _'relevant'_ passages (in line with findings by Alonso and Mizzaro (2018)), followed by 'non-relevant' passages. 'Somewhat relevant' passages were most difficult to judge as participants in both conditions judged them correctly about half the time. With respect to the time taken to judge (**III**, Table 3), judgements in text were made significantly faster (\(t(47)=-4.93,p<0.001\)) than in voice.
In terms of workload measured using NASA-TLX, there was no significant difference in averages between the two cohorts in terms of perceived mental demand, effort, and temporal demand (**IV-VI**, Table 3). The TOST procedure revealed equivalent scores (\(p<0.05\)) provided by participants for these three items of the NASA-TLX scale. For the other dimensions of NASA-TLX questionnaire, participants in text reported they felt significantly more frustrated (**VII**, Table 3) while performing the task than those in voice (\(t(47)=4.69,p<0.001\)). Participants in voice also reported significantly higher perceived performance (**VIII**, Table 3) when compared to the former (\(t(47)=-3.60,p<0.001\)).
Overall, we found that participants listening to voice passages were equally accurate to their text counterparts. Vtyurina et al. (2019) also observed that the probability of participants to identify the most relevant document was the same for both text and voice conditions. However, the authors implemented a different task design to ours. Their participants were presented with a list of results, and were significantly better at identifying the correct order of relevance when the summaries were presented in text modality. Insofar as to acknowledging the difference in task design, our observations with regards to the accuracy of participants with respect to relevance judgements across modalities are found to be partially in line with those of Vtyurina et al. (2019). We also observed that voice participants perceived a lower or equal workload when compared to those of text, in contrast to the other study's findings (Vtyurina et al., 2019). This can be attributed to their study setup. Contrary to ours, their presentation modality was a _within-subjects_ variable. Our results indicate the proficiency of participants with both modalities for the given design of the task.
### RQ2: Passage Length
Table 4 presents results related to **RQ2**. Like modality of presentation, passage length or its interaction with presentation modality did not have a significant effect on the relevance judgement accuracy (comparing rows **Ia** and **Ib**, Table 4). The TOST procedure revealed that for **XS** (\(p=0.01\)) and **L** (\(p=0.001\)) passages, judgement accuracy was _equivalent_ across both conditions. Aggregating
judgements via majority voting increased relevance judgement accuracy across all passage lengths for both text and voice conditions (comparing rows **Ia-IIa** and **Ib-IIb**, Table 4). However, for **XL** passages (**IIa-IIb**, Table 4), the difference in accuracy after majority voting was more than 10% (with text being more accurate). We also observed a higher difference in Cohen's \(\kappa\) and Krippendorff's \(\alpha\) for **XL** passages between the text and voice conditions. These results indicated a higher inter-annotator agreement and reliability of judgements for text compared to participants in voice with regards to **XL** passages.

|  | Metrics | text | voice |
| --- | --- | --- | --- |
| **I** | Accuracy \(\star\) | 68.40 (\(\pm\)9.15)% | 65.94 (\(\pm\)8.56)% |
|  | \(\alpha\), \(\kappa\) | 0.41, 0.61 | 0.37, 0.54 |
| **II** | Majority Voting Acc. | 79.1% | 75.8% |
|  | \(\kappa\) | 0.76 | 0.71 |
| **III** | Time/Rel. Judg. (sec.) \(\dagger\) | 17.56 (\(\pm\)9.08) | 29.54 (\(\pm\)7.85) |
| **IV** | Mental Demand \(\star\) | 4.68 (\(\pm\)1.60) | 4.83 (\(\pm\)1.37) |
| **V** | Effort \(\star\) | 4.88 (\(\pm\)1.88) | 4.00 (\(\pm\)1.50) |
| **VI** | Temporal Demand \(\star\) | 4.04 (\(\pm\)1.86) | 3.08 (\(\pm\)1.82) |
| **VII** | Frustration \(\dagger\) | 3.96 (\(\pm\)2.07) | 1.83 (\(\pm\)0.82) |
| **VIII** | Performance \(\dagger\) | 4.16 (\(\pm\)1.93) | 5.67 (\(\pm\)0.70) |

Table 3. RQ1: Effect of modality of passage presentation on accuracy of relevance judgement, time taken per judgement in seconds, and perceived workload (IV-VIII) per participant. We also report Krippendorff’s \(\alpha\) and Cohen’s \(\kappa\) for accuracy. \(\dagger\) indicates a significant difference between the two conditions according to an independent sample t-test. \(\star\) indicates the corresponding metric is equivalent for both conditions based on the TOST procedure.

Figure 3. Accuracy of relevance judgements per label category for both text and voice. Diagonals represent the percentage of time the true labels were _correctly_ predicted by participants. Here, R = RELEVANT, SR = SOMEWHAT-RELEVANT, NR = NON-RELEVANT and IDK = _I do not know._
With respect to the time taken for judging, we have already seen (Section 4.1) that presentation modality significantly affected the time to judge. Mixed factorial ANOVA showed that passage length had a significant main effect (\(F=21.6,p=3.3e^{-15}\)) on the time taken to assess. A post-hoc test revealed a significant difference in the time taken to judge of the following pairs of passage lengths (with the latter passage length category taking more time): **XS-M (\(p=0.02\))**, **XS-L (\(p<0.001\))**, **XS-XL (\(p<0.001\))**, **S-XL(\(p<0.001\))** and **M-XL(\(p=0.001\))**. There was also a significant interaction effect between passage length and presentation modality on the amount of time taken. Pairwise Tukey test revealed that except for **XS** passages, judging relevance in voice took significantly longer for participants as compared to doing the same in text (**bold numbers**, row **III**, Table 5). In voice (**IIb**, Table 5), it took participants significantly longer to judge relevance, as passages (audio clips) increased in length. Superscripts (in Table 4) indicate which pairs of passage length were significantly different in voice in terms of time taken per judgement.
In summary, we did not observe a significant difference in relevance judgement accuracy across different passage lengths in both conditions. We observed judging relevance of **XS** passages was _equivalent_ in terms of accuracy and time taken across both text and voice. However, for **XL** passages, relevance judgements in text were more reliable (indicated by majority voting accuracy, \(\alpha\) and \(\kappa\) when compared to that in voice). There was no clear trend between passage length and assessor agreement observed in contrast to findings from [12], possibly due to differences in the type of documents assessed. Although it took longer on average to judge a lengthier passage in text, there was no significant difference in terms of the time taken to judge relevance of different passage lengths (a similar trend as observed in [37, 65]). For longer passages, participants in voice took significantly longer to judge relevance than in text. For **XL** passages, we found that participants were taking twice as long in voice when compared to text.
_Why does it take longer for participants to judge longer passages in the voice condition?_ In order to control for confounding variables, we did not let participants speed up the audio clips, nor did we provide them with a seeker bar to skip ahead. We found evidence that participants moved on to the next Q/P pairing as soon as they were satisfied with their assessment. Indeed, they did not wait for the audio clip to finish playing before moving on to the next Q/P pair for longer passages (Figure 4 (a)). We also let participants mark the relevance of a passage in voice only after 50% of the audio clip had been played (Section 3.1). However, as seen from Figure 4 (b), participants took longer to judge relevance (rather than right at the 50% mark). For **XL** passages, it was at the 66% of the audio clip on average. This suggests that it indeed took more time for participants
in voice compared to text to assimilate the information and come to a judgement decision for longer passages.

| Metrics | Mode | XS | S | M | L | XL |
| --- | --- | --- | --- | --- | --- | --- |
| **I** | text | 66.7 (\(\pm\)19.0)\(\star\) | 74.5 (\(\pm\)14.7) | 66.5 (\(\pm\)17.5) | 61.0 (\(\pm\)18.0)\(\star\) | 74.0 (\(\pm\)19.0) |
|  | \(\alpha\), \(\kappa\) | 0.37, 0.57 | 0.51, 0.67 | 0.43, 0.55 | 0.29, 0.50 | 0.44, 0.68 |
|  | voice | 67.7 (\(\pm\)21.9)\(\star\) | 64.06 (\(\pm\)15.0) | 72.4 (\(\pm\)19.4) | 61.5 (\(\pm\)13.9)\(\star\) | 64.0 (\(\pm\)16.7) |
|  | \(\alpha\), \(\kappa\) | 0.39, 0.56 | 0.44, 0.49 | 0.49, 0.63 | 0.27, 0.51 | 0.35, 0.48 |
| **II** | text | 75 | 83 | 79 | 75 | 92 |
|  | \(\kappa\) | 0.73 | 0.81 | 0.74 | 0.73 | 0.91 |
|  | voice | 79 | 79 | 79 | 67 | 79 |
|  | \(\kappa\) | 0.78 | 0.78 | 0.73 | 0.62 | 0.76 |
| **III** | text | 14.11 (\(\pm\)6.3) | 15.25 (\(\pm\)7.7) | 15.15 (\(\pm\)6.8) | 21.39 (\(\pm\)12.41) | 21.86 (\(\pm\)12.7) |
|  | voice | 17.3 (\(\pm\)5.0)\({}^{m,\kappa\kappa}\) | 25.47 (\(\pm\)11.8)\({}^{m}\) | 28.45 (\(\pm\)14.8)\({}^{m,\kappa\kappa}\) | **31.04** (\(\pm\)6.15)\({}^{m,\kappa\kappa}\) | **45.39** (\(\pm\)9.6)\({}^{m,\kappa\kappa}\) |

Table 4: RQ2: Effects of passage length and presentation modality on accuracy of relevance judgements (with Krippendorff’s \(\alpha\), Cohen’s \(\kappa\)) and time taken. A **bold number** indicates that the metric for the corresponding presentation modality is significantly more than that for the other modality for the particular passage length. \({}^{xs,s,m,l,xl}\) indicates significant difference (within the same experimental condition) compared to **XS**, **S**, **M**, **L**, **XL** passage lengths. \(\star\) indicates equivalence between the two conditions.

Figure 4: The trend of voice participants judging relevance w.r.t. time taken for passages of various length: _(a)_ % of time participants listened to the entire audio clip; and _(b)_ at what point was relevance judged (as a % of audio clip length).
### RQ3: Assessor Cognitive Abilities
Table 5 contains the results for our third research question. Here, ✓ indicates a significant effect (\(p<0.05\)) on the particular dependent variable, and ✗ indicates no significant effect.
None of the independent variables--modality of passage presentation (**PM**), working memory (**WM**), and inhibition (**IN**)--had a significant main effect on judgement accuracy. The interaction between the **IN** of participants and presentation modality (**IN x PM**) had a significant effect on the accuracy (F = 4.89, \(p=0.03\)). Pairwise Tukey test revealed that in voice participants with higher **IN** performed significantly better than those with lower **IN** (\(70.5\pm 7.2\%\) vs. \(59.5\pm 4.8\) %). The post-hoc test (\(p=0.01\)) also revealed participants with low **IN** performed significantly better in text than those in voice (\(70.0\pm 9.5\) % vs. \(59.5\pm 4.8\) %). We found significant main effects of PM on the time taken to judge relevance (F = \(22.17,p<0.001\)), reaffirming findings from Section 4.1 and Section 4.2.
With respect to the perceived workload, working memory had significant main effects on perceived temporal demand (F = \(7.88,p=0.01\)). A post-hoc test (\(p<0.001\)) revealed that participants with high **WM** reported significantly less temporal demand as compared to those with low **WM** (\(2.5\pm 1.3\) vs. \(4.6\pm 1.7\) respectively). **IN** also had significant main effects on perceived temporal demand (F = \(7.4,p=0.01\)). A post-hoc test (\(p<0.001\)) revealed that participants with high **IN** reported significantly less temporal demand as compared to those with low **IN** (\(2.74\pm 1.4\) vs. \(4.59\pm 1.9\), respectively). Presentation modality had significant main effects on perceived frustration (F = \(8.36,p=0.008\)) and performance (F = \(5.83,p=0.02\))--confirming observations from Section 4.1--with participants in voice reporting a lower workload. Lastly, the interaction between **WM** and presentation modality (**WM x PM**) had a significant effect on perceived effort for the task (F = \(5.1,p=0.03\)). Post-hoc tests revealed that participants with high **WM** felt that judging using text required significantly more effort when compared to those in voice (\(p=0.001\)).
In summary, we found that **IN** is a more important trait than **WM**, specifically for relevance judgement accuracy in the voice modality. Low **IN** participants in the voice condition were less accurate--since we _did not control for the audio device of the participants_, and consequently not for the background noise they were subjected to, low **IN** participants in voice were less effective in focusing on the passages while judging relevance (Yuan et al., 2017; Wang et al., 2017). We leave exploring the effect of background noise as future work. In our study, the interplay between cognitive abilities and modality of presentation on perceived workload had different effects. High **IN** and **WM** participants felt less temporal demand. High **WM** in text felt more perceived effort compared to those in voice. Our results imply that we should design tasks for collecting relevance assessments to match the preference and abilities of crowdworkers (Yuan et al., 2017; Wang et al., 2017).
## 5. Conclusions
We explored the feasibility of using voice as a modality to collect relevance judgements of query-passage pairs. We investigated the effect of passage length and the cognitive abilities of participants on judgement accuracy, the time taken, and perceived workload.
**RQ1** On average, the relevance judgement accuracy was equivalent across both text and voice. Participants also perceived equal or less workload in voice when compared to text.
**RQ2** For **XS** passages, the performance and time taken for relevance judgements were _equivalent_ between both voice and text. As passages increased in length, it took participants significantly longer to make relevance judgements in the voice condition; for **XL** passages in voice, participants took twice as much time and the judgements were less reliable compared to text.
**RQ3** Inhibition impacted the relevance judgement accuracy in the voice condition--participants with higher inhibition were significantly more accurate than those with lower inhibition.
Our results from **RQ1** suggest that we can leverage the voice modality for this task. **RQ2** points to the possibility of designing hybrid tasks, where we can use the voice modality for judging shorter passages and text for longer passages. The results of **RQ3** showed that selecting the right participants for the relevance judgement task is important. We should be mindful to personalise the task to match the preference and abilities of crowdworkers (Yuan et al., 2017; Wang et al., 2017).
There are several open questions for future work. We did not provide participants with the option to speed-up voice passages--_does letting them speed-up or skip passage parts reduce time for longer passages without reducing accuracy?_ We also did not test the limit of length--_how long can documents be for equal accuracy in the text and voice modality?_ Future work should also explore mobile devices for playing voice passages--_can we collect relevance judgements by offering more flexibility to crowdworkers?_ Lastly, since asking to provide rationales for judgements has been shown to
improve relevance judgement accuracy of crowdworkers in the text modality [39], exploring the effects of rationales in voice-based relevance judgements should be a worthwhile endeavour.

|  | **PM (Presentation)** | **WM (Working Memory)** | **IN (Inhibition)** | **WMxPM** | **INxPM** |
| --- | --- | --- | --- | --- | --- |
| **I** Accuracy | ✗ | ✗ | ✗ | ✗ | ✓ (F = 4.89, p = 0.03) |
| **II** Time Taken (sec.) | ✓ (F = 22.17, p < 0.001) | ✗ | ✗ | ✗ | ✗ |
| **III** Mental Demand | ✗ | ✗ | ✗ | ✗ | ✗ |
| **IV** Effort | ✗ | ✗ | ✗ | ✓ (F = 5.1, p = 0.03) | ✗ |
| **V** Temporal Demand | ✗ | ✓ (F = 7.88, p = 0.01) | ✓ (F = 7.39, p = 0.01) | ✗ | ✗ |
| **VI** Frustration | ✓ (F = 8.36, p = 0.008) | ✗ | ✗ | ✗ | ✗ |
| **VII** Performance | ✓ (F = 5.83, p = 0.02) | ✗ | ✗ | ✗ | ✗ |

Table 5. RQ3: Summary of main effects of _Presentation Modality (PM)_, _Working Memory (WM)_, _Inhibition (IN)_, and effects of the interaction of WM and IN with PM on accuracy of relevance judgement, time taken, and perceived workload. A ✓ indicates a significant effect in a 3-way ANOVA test (\(p<0.05\)) on the particular dependent variable, and a ✗ indicates no significant effect.
|
2305.16101 | Quantum Random Number Generator Based on LED | Quantum random number generators (QRNGs) produce random numbers based on the
intrinsic probabilistic nature of quantum mechanics, making them true random
number generators (TRNGs). In this paper, we design and fabricate an embedded
QRNG that produces random numbers based on fluctuations of spontaneous emission
and absorption in a Light-Emitting Diode (LED). To achieve a robust and
reliable QRNG, we compare some usual post-processing methods and select the
finite impulse response (FIR) method for a real-time device. This device could
pass NIST tests, the generation rate is 1 Mbit/s and the randomness of the
output data is invariant in time. | Mohammadreza Moeini, Mohsen Akbari, Mohammad Mirsadeghi, Hamid Reza Naeij, Nima Haghkish, Ali Hayeri, Mehrdad Malekian | 2023-05-25T14:31:32Z | http://arxiv.org/abs/2305.16101v3 | # Quantum Random Number Generator Based on LED
###### Abstract
Quantum Random Number Generators (QRNGs) produce random numbers based on the intrinsic probabilistic nature of quantum mechanics, making them True Random Number Generators (TRNGs). In this paper, we design and fabricate an embedded QRNG that produces random numbers based on fluctuations of spontaneous emission in a Light-Emitting Diode (LED). Additionally, a new perspective on the randomness of the recombination process in a LED is introduced that is consistent with experimental results. To achieve a robust and reliable QRNG, we compare some usual post-processing methods and select the best one for a real-time device. This device could pass NIST tests, the output rate is 1 Mbit/s, and the randomness of the output data is invariant in time.
**Keywords**: Real-Time Quantum Random Number Generator, Spontaneous Emission, Beer-Lambert Law
## I Introduction
This century can be considered the beginning of the rapid development and dissemination of quantum information technology in almost all scientific and utility fields. Meanwhile, random numbers play an important role in many aspects of information technology [1; 2; 3]. Applications of random numbers, including symmetric key cryptography [4], Monte Carlo simulation [5], transaction protection [6], and key distribution systems [7], will become even more important in the era of quantum technology.
Traditionally, pseudo-random number generators (pseudo-RNGs) were based on deterministic algorithms and could not generate truly random numbers with information-theoretically provable randomness. On the other hand, quantum random number generators (QRNGs) can generate truly random numbers from the essentially probabilistic nature of quantum processes, and can also provide higher bit rates than other physical random number generators [8].
To date, various practical protocols for QRNGs have been proposed, such as QRNGs based on photon counting [9; 5; 10], Raman scattering [11], vacuum fluctuations [12; 13], amplified spontaneous emission [14], radioactive decay [15], and laser phase noise [16; 10]. References [17; 18; 19] are available to the reader for further information.
In this work, we experimentally demonstrate a simple, inexpensive, and real-time QRNG that is based on the spontaneous emission of an LED, is easily accessible, and operates at high speed.
In the following, we will discuss the theoretical background of our QRNG device in Section II. Then, we will demonstrate the physical setup and introduce our post-processing procedures in Section III. Finally, we will present and discuss the experimental outcomes in Section IV.
## II Theory
### Theoretical Background
Our study aims to amplify the random fluctuations in LED light output for use in a QRNG. Before exploring the experimental setup, we show that the light intensity variations in an LED are intrinsically random and probabilistic by analyzing the light-emitting mechanism in an LED in two steps. In the first step, we illustrate the wavelength broadening in an LED using Fermi's golden rule; we then justify the temporal light intensity variation using the Beer-Lambert law.
In the case of an LED, we can apply Fermi's golden rule to derive the transition probability between the conduction and valence bands, expressed as [20]:
\[w_{\rm cv}=\Big{|}\frac{1}{i\hbar}\int_{0}^{t}\langle\psi_{v}|H_{\rm int}| \psi_{c}\rangle e^{i\omega_{\rm cv}t^{\prime}}dt^{\prime}\Big{|}^{2} \tag{1}\]
Here, \(\psi_{c}\) and \(\psi_{v}\) represent arbitrary states in the conduction and valence bands, respectively, and \(\hbar\omega_{\rm cv}\) is their energy difference. To determine the transition probability in an LED, the interaction Hamiltonian is set by the voltage applied to the LED, \(H_{\rm int}=-eE=V\), where \(E\) denotes the applied electric field and \(V\) is the LED voltage [21].
Fermi's golden rule can explain the phenomenon of frequency broadening in an LED, but it cannot by itself describe the changes in intensity over time. To bridge this gap, we rely on the Beer-Lambert law, which connects Fermi's golden rule to the temporal intensity fluctuations. According to the Beer-Lambert law expressed in Eq. (2), the intensity of light \(I_{0}\) passing through an object decreases exponentially with the distance traveled:
\[I=I_{0}e^{-\alpha L} \tag{2}\]
where \(\alpha\) represents the absorption coefficient of the object and \(L\) denotes the distance traveled by the light. As shown in Fig. 1(a), the distance traveled may vary for each photon generated by an LED. Thus, Eq. (2) shows that there will be fluctuations in the light intensity if there is a variation in \(\alpha\) or \(L\). To describe the light intensity fluctuations, we analyze the parameters \(\alpha\) and \(L\). To derive \(\alpha\) in an LED, we focus on the photon generation
Figure 1: (a) The process of light emission from an LED involves the application of a suitable voltage, which causes electrons and holes to recombine randomly in the active region. Note that the optical path length is different for each photon. (b) Schematic of the built QRNG. The enclosure size is 16 \(\times\) 12 \(\times\) 6 cm\({}^{3}\). (c) Diagram of the QRNG. The LED’s energy diagram illustrates the density of states for electrons and holes, with a varying absorption coefficient for each photon due to their different frequencies. After emission, the photon is absorbed by the photodiode, and the output electrical signal is amplified, then digitized by an ADC and processed through a microprocessor.
in the active region. The interaction Hamiltonian for a photon generated in the active region is [20]:
\[H_{\mathrm{int}}^{\mathrm{photon}}=-E_{0}\mathrm{cos}(\mathbf{q}\mathbf{r}- \omega t)\mathbf{e}_{q}\mathbf{d} \tag{3}\]
where \(E_{0}\), \(\mathbf{q}\), \(\omega\), and \(\mathbf{e}_{q}\) are the electric field amplitude, wavenumber, angular frequency, and polarization unit vector of the generated photon, respectively, and \(\mathbf{d}=-\,\mathrm{e}\,\mathbf{r}\) denotes the electric dipole moment induced by the light field. Restricting the treatment to the dipole approximation, we obtain:
\[\langle\psi_{c}|H_{\mathrm{int}}|\psi_{v}\rangle=-\frac{E_{0}}{2}(e^{-i\omega t }+e^{i\omega t})\mathbf{d}_{\mathrm{cv}} \tag{4}\]
where \(\mathbf{d}_{\mathrm{cv}}=\langle\psi_{c}|\mathbf{e}_{q}\mathbf{d}|\psi_{v}\rangle\) is the dipole matrix element for the transition between the states \(\psi_{c}\) and \(\psi_{v}\). On the other hand, the absorption coefficient can be defined as [21]:
\[\alpha=\frac{w_{\mathrm{cv}}\hbar\omega}{tS} \tag{5}\]
where \(S\) represents the Poynting vector, and \(w_{\mathrm{cv}}/t\) is the transition probability per unit time, i.e., the transition rate. Using Eqs. (1) and (3)-(5), one can derive \(\alpha\) as [20]:
\[\alpha=\frac{\omega}{\pi n_{b}c}|\mathbf{d}_{\mathrm{cv}}|^{2}\int_{0}^{ \infty}4\pi k^{2}\delta(\hbar\omega_{\mathrm{cv}}-\hbar\omega)\;dk \tag{6}\]
Eq. (6) gives the absorption coefficient in terms of the density of states and the dipole element, where \(n_{b}\) and \(c\) are the background refractive index and the speed of light, respectively.
In the next step, to investigate the effect of \(\alpha\) on the light intensity, we treat the optical path length as constant. Although some generated photons are closer to the LED aperture and some are farther, we can define an effective optical length \(L_{\mathrm{eff}}\) for the ensemble of photons; substituting it into Eq. (2) gives:
\[I=I_{0}e^{-\alpha L_{\mathrm{eff}}} \tag{7}\]
This suggests that light intensity fluctuations may arise from \(\alpha\) variations, and if \(\alpha\) has a normal distribution, \(I\) should have a log-normal distribution. On the other hand, the relation between \(\alpha\) and wavelength \(\lambda\) for different materials is [21]:
\[\alpha=\frac{4\pi\kappa}{\lambda} \tag{8}\]
where \(\kappa\) is the extinction coefficient. For any LED for which \(\alpha L_{\mathrm{eff}}\ll 1\), we can approximate [21]:
\[I\approx I_{0}(1-\alpha L_{\mathrm{eff}}) \tag{9}\]
This indicates that the light intensity follows the distribution of the absorption coefficient. Considering Eqs. (5)-(6), \(\alpha\) depends on the transition rate and on the energy of the generated photons. Both of these quantities are random and therefore produce fluctuations in the light intensity. For example, the wavelength broadening shown in Fig. 1(c) directly causes variations in the light intensity. Thus the intensity fluctuations in an LED are inherently probabilistic and cannot be predicted, making the LED a reliable entropy source for a QRNG.
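As a rough numerical illustration of this argument (not part of the original analysis), one can draw absorption coefficients from a normal distribution and propagate them through the exact Beer-Lambert relation of Eq. (7) and the linearised Eq. (9). All parameter values below are arbitrary assumptions chosen only to show the shape of the resulting intensity fluctuations.

```python
import numpy as np

rng = np.random.default_rng(0)

I0 = 1.0          # nominal emitted intensity in arbitrary units (assumed)
L_eff = 1e-4      # effective optical length in cm (assumed)
alpha = rng.normal(loc=50.0, scale=5.0, size=100_000)  # absorption coefficients in cm^-1 (assumed normal spread)

# Exact Beer-Lambert attenuation, Eq. (7)
I_exact = I0 * np.exp(-alpha * L_eff)

# Linearised form, Eq. (9), valid here because alpha * L_eff << 1
I_linear = I0 * (1.0 - alpha * L_eff)

print("mean alpha*L_eff         :", np.mean(alpha) * L_eff)
print("max |exact - linear|     :", np.max(np.abs(I_exact - I_linear)))
print("relative intensity spread:", np.std(I_linear) / np.mean(I_linear))
```

In this regime the exact and linearised intensities agree closely, and the relative spread of the intensity directly mirrors the assumed spread of \(\alpha\).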
### Spontaneous Emission
This section discusses the conditions under which the LED noise dominates the other noise sources. We aim to find the operating regime in which the Signal-to-Noise Ratio (SNR) and the min-entropy of the QRNG device are at their highest values. To determine the SNR for the LED noise, we use the following formula:
\[SNR=\frac{A_{\mathrm{PD}}}{A_{\mathrm{CS}}} \tag{10}\]
\(A_{\mathrm{PD}}\) is the amplitude of the signal received from the LED through the photodetector (PD) and represents the quantum signal, while \(A_{\mathrm{CS}}\) is the amplitude of the overall classical noise, of which shot noise is assumed to be the primary component. Note that the classical noise is measured when the LED is not operating. \(A_{\mathrm{PD}}\) is directly related to the light intensity that reaches the PD:
\[A_{\mathrm{PD}}\approx I_{0}\alpha L_{\mathrm{eff}} \tag{11}\]
By substituting Eq. (11) into Eq. (10), we get:
\[SNR=C_{0}\frac{I_{0}\alpha L_{\mathrm{eff}}}{A_{\mathrm{CS}}} \tag{12}\]
where \(C_{0}\) is a constant. The numerator of Eq. (12) is affected only by \(\alpha\), since the other parameters are constant. Eq. (1) shows that \(\alpha\) is determined only by the voltage, and since the LED voltage is constant, the numerator does not change across different bias currents. However, \(A_{\mathrm{CS}}\) increases as the current increases [21]. Thus, increasing the current decreases both the SNR and the min-entropy of the LED-based QRNG, whereas decreasing the current should increase them.
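The trend implied by Eq. (12) can be sketched numerically as follows. The constant quantum term and the growth law assumed for the classical-noise amplitude are illustrative assumptions only; they are not fitted to the measured device.

```python
import numpy as np

# Trend of Eq. (12): the quantum term C0*I0*alpha*L_eff is fixed by the LED voltage,
# while the classical-noise amplitude A_CS grows with the bias current.
quantum_term = 1.0                        # C0 * I0 * alpha * L_eff, constant (assumed)
current_mA = np.linspace(1.0, 20.0, 5)    # bias currents to compare (assumed range)
A_CS = 0.05 * np.sqrt(current_mA)         # shot-noise-like growth with current (assumed law)

snr = quantum_term / A_CS
for i_mA, s in zip(current_mA, snr):
    print(f"I = {i_mA:5.2f} mA  ->  SNR = {s:6.2f}")
# The SNR falls monotonically with current, matching the qualitative prediction.
```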
Our experimental results, which will be described in section IV, are in agreement with the above theoretical framework.
## III Experimental Methods
### Physical Setup
The optical and electronic components integrated into an enclosure and the conceptual design are shown in Figs. 1(b) and 1(c), respectively. The detailed block diagram of the QRNG module comprises three main parts: first, the quantum entropy source (the LED); second, the amplification of the signal received from the LED by an amplifier (AMP); and finally, the digitization of the amplified signal by an Analog-to-Digital Converter (ADC) with 12-bit resolution and a 4 MSa/s sample rate, followed by a post-processing procedure running on a microprocessor. As demonstrated in Fig. 2(a), the temporal waveforms of the signal and noise were measured, indicating that the quantum signal is dominant. The output signal oscillates irregularly and shows good randomness in its intensity fluctuations. The signal distribution is given by the green histogram in Fig. 2(b), and its distribution and symmetry are demonstrated by comparison with the red fitting curve [22].
### Post-Processing Procedures
The purpose of standard RNGs is to generate a uniformly random string, so the raw numbers are post-processed to obtain a good-quality output with a uniform distribution. We therefore aimed to convert the distribution of the raw samples into a uniform one, and we explored three different post-processing methods. The first approach is a simple XOR (S-XOR) technique, in which each number from the ADC is XORed with the \(k_{\text{th}}\) number following it. The second approach is a modified XOR (M-XOR), in which the output of the ADC is first XORed with a bit-rotated version of itself and then XORed with the \(k_{\text{th}}\) number following it. The last approach, which is effective in increasing the entropy of the random numbers, is the finite impulse response (FIR) method [23]. The FIR method takes a weighted sum of past input samples, as described by Eq. (13), to whiten the input signal. This helps to suppress unwanted noise and interference, leading to a higher-quality output. Studies have shown that the FIR method can also increase the min-entropy of the generated random numbers, because it effectively extracts the randomness from the raw input data and produces a more unpredictable output [24]. This technique transforms a raw integer sample \(x(n)\) into an unbiased one \(y(n)\) by means of the relation:
\[y(n)=\sum_{i=0}^{M}b_{i}x(n-i) \tag{13}\]
where \(b_{i}=\frac{M!}{i!(M-i)!}\) and \(M\) sets the number of past samples included in the sum. After every technique, we take the \(m\) Least Significant Bits (m-LSB) of each output sample.
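A minimal sketch of the three post-processing steps described above is given below, assuming Python/NumPy on offline data rather than the embedded firmware; the lag \(k\), the rotation amount, the FIR order \(M\), and the number of retained bits \(m\) are illustrative assumptions, not the values used in the device.

```python
import numpy as np
from math import comb

def s_xor(samples, k=1):
    """Simple XOR: each ADC sample XORed with the k-th sample following it."""
    return samples[:-k] ^ samples[k:]

def m_xor(samples, k=1, shift=6, bits=12):
    """Modified XOR: XOR with a bit-rotated copy of itself, then with the k-th next sample."""
    rotated = ((samples << shift) | (samples >> (bits - shift))) & ((1 << bits) - 1)
    mixed = samples ^ rotated
    return mixed[:-k] ^ mixed[k:]

def fir(samples, M=8):
    """FIR whitening of Eq. (13): weighted sum of past samples with binomial weights b_i."""
    b = np.array([comb(M, i) for i in range(M + 1)], dtype=np.int64)
    return np.convolve(samples.astype(np.int64), b, mode="valid")

def m_lsb(values, m=6):
    """Keep only the m least-significant bits of each processed sample."""
    return values & ((1 << m) - 1)

# Stand-in for 12-bit ADC data (assumed synthetic input, not the measured record)
raw = np.random.default_rng(1).integers(0, 2**12, size=10_000, dtype=np.int64)
uniform_bits = m_lsb(fir(raw, M=8), m=6)
```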
In the next section, we report the SNR measured at different currents and present the results obtained with the different post-processing techniques.
## IV Results and Discussion
One of the objectives of this experiment was to optimize the min-entropy of the QRNG setup.
Figure 2: (a) Temporal waveforms of the quantum signal and classical noise of the LED and amplifier circuit, where the quantum noise is the prevailing factor. The green curve indicates the quantum signal, while the blue curve represents the amplified classical noise (shot noise and environmental noise), and the orange curve illustrates the amplified battery noise. (b) The histogram distribution of the signal voltage is represented by the green bars, with the dashed line indicating a well-fitted Gaussian distribution.
The min-entropy is a measure of the randomness of a sequence of numbers, and to obtain high-quality random numbers we should maximize it [4; 5]. To achieve this, we varied the current supplied to the LED and measured the SNR of the quantum and classical signals. Our findings demonstrate that at low currents the quantum signal dominates, while at higher currents the classical noise dominates, as shown in Fig. 3(a) and predicted by Eq. (12). We therefore determined the optimal current range where the quantum signal is strongest and the classical noise is minimal. By maximizing the SNR in this range, we were able to achieve the highest min-entropy in our setup.
Figure 4: The probability distribution of three post-processing approaches with different m-lsb values (m=6, m=7, m=8): (a) Simple XOR, (b) Modified XOR, and (c) FIR. The FIR method produces a perfectly uniform distribution, while the other two approaches exhibit some non-uniformity.
Figure 3: (a) The figure presents the ratio of quantum signal to classical noise, revealing that as the current decreases, the SNR increases as per theoretical predictions. (b) Min-entropy after FIR operation versus LED current bias. It shows the highest min-entropy occurs when the spontaneous emission is dominant.
The min-entropy versus the LED current is shown in Fig. 3(b), which indicates that the SNR and the min-entropy are at their highest values at lower currents.
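The min-entropy plotted in Fig. 3(b) is commonly estimated from the empirical distribution of the digitized samples as \(H_{\min}=-\log_{2}\max_{x}\Pr[X=x]\); the sketch below assumes this standard estimator and uses synthetic stand-in data rather than the measured ADC record.

```python
import numpy as np

def min_entropy_per_sample(samples, n_bits=12):
    """Estimate H_min = -log2(max_x Pr[X = x]) from the empirical histogram."""
    counts = np.bincount(samples, minlength=2**n_bits)
    p_max = counts.max() / counts.sum()
    return -np.log2(p_max)

# Example with a stand-in Gaussian-shaped 12-bit ADC record (assumed, not measured data)
rng = np.random.default_rng(2)
adc = np.clip(rng.normal(2048, 200, size=200_000).astype(int), 0, 4095)
print(f"estimated min-entropy: {min_entropy_per_sample(adc):.2f} bits per 12-bit sample")
```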
To establish a robust and reliable device, we compared and evaluated the three post-processing procedures for real-time embedded systems introduced in the previous section. Their suitability was assessed based on processing time and on the ability to produce uniformly distributed random numbers that pass the National Institute of Standards and Technology (NIST) randomness tests [25; 26]. The findings of this study are presented in Figs. 4 and 5, which illustrate the m-LSB outputs obtained with the three post-processing methods as well as the autocorrelation of the resulting data. As shown in Fig. 5, the autocorrelation of the post-processed data lies below the standard deviation, which means the random data are uncorrelated [27]. The FIR method produced a uniform probability distribution at 10-lsb and passed the NIST randomness tests, while the S-XOR and M-XOR methods produced uniformity only at 5-lsb and 6-lsb, respectively. Table 1 presents the results of the NIST randomness tests for each approach. Our findings indicate that the S-XOR approach did not pass some of the NIST tests, as shown in Table 1. Therefore, caution is necessary when relying solely on the uniform distribution achieved by the S-XOR approach.
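The autocorrelation criterion of Fig. 5 can be checked with a short routine such as the following; the lag range, the use of a \(1/\sqrt{N}\) one-sigma level for uncorrelated data, and the synthetic bit stream are assumptions made for illustration.

```python
import numpy as np

def autocorrelation(bits, max_lag=50):
    """Normalised autocorrelation coefficients of a bit sequence for lags 1..max_lag."""
    x = bits.astype(float) - bits.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / var for k in range(1, max_lag + 1)])

rng = np.random.default_rng(3)
bits = rng.integers(0, 2, size=100_000)   # stand-in for the post-processed output bits
coeffs = autocorrelation(bits)
sigma = 1.0 / np.sqrt(bits.size)          # expected one-sigma level for uncorrelated bits
print(f"max |autocorrelation| = {np.abs(coeffs).max():.4f}, 1-sigma level = {sigma:.4f}")
```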
These results indicate that the FIR method is the most suitable approach for post-processing in real-time embedded systems. It increases the min-entropy of the random data, thereby improving the reliability and robustness of the device. Following this comparative analysis of the three methods, we selected the FIR method for our device. To further evaluate its performance, we conducted a series of tests: temperature tests to assess the impact of temperature on device performance, bias-current tests to evaluate the effect of varying bias currents, and long-duration tests to determine the stability of the device's performance over time. These tests provide valuable insight into the effectiveness and reliability of the FIR method for post-processing in real-time embedded systems. The findings presented in Fig. 6 demonstrate the variation of the min-entropy of the random
| **Statistical Tests** | **FIR** | **M-XOR** | **S-XOR** |
| --- | --- | --- | --- |
| Frequency | Success | Success | Failed |
| Block Frequency | Success | Success | Failed |
| Runs | Success | Success | Failed |
| Longest Run | Success | Success | Failed |
| Rank | Success | Success | Success |
| FFT | Success | Success | Success |
| Non-Overlapping Template | Success | Success | Success |
| Overlapping Template | Success | Success | Success |
| Universal | Success | Success | Success |
| Linear Complexity | Success | Success | Success |
| Serial | Success | Success | Success |
| Approximate Entropy | Success | Success | Success |
| Cumulative Sums | Success | Success | Failed |
| Random Excursions | Success | Success | Failed |
| Random Excursions Variant | Success | Success | Failed |

Table 1: Performance comparison of NIST results for the S-XOR, M-XOR, and FIR methods.
Figure 5: The autocorrelation coefficient for four different datasets: raw data, S-XOR, M-XOR, and FIR. The standard deviation is denoted by the dashed line.
Figure 6: The figure illustrates the relationship between min-entropy and time, indicating the device’s dependable and consistent production of randomness over time.
data over time. The results show that the min-entropy remains almost constant, indicating that the fabricated device is robust and reliable over time. Additionally, a temperature test was performed to assess the device's behaviour under varying temperatures; it shows that the device maintains its performance and reliability even under temperature fluctuations.
Finally, our QRNG device now reaches a data bit rate of 1 Mb/s and has the potential to attain higher rates by integrating additional LEDs in a parallel configuration.
## V Conclusion
We have fabricated a durable, low-power, cost-effective QRNG based on LED spontaneous emission. Our findings demonstrate that at lower currents the quantum noise dominates, which increases the SNR and the min-entropy. To the best of our knowledge, our work establishes the connection between Fermi's golden rule and the temporal fluctuations of an LED's light intensity for the first time. Moreover, we evaluated various post-processing methods and found that the FIR approach is the most reliable and yields the highest min-entropy. Our device maintains a stable min-entropy over time and under varying temperatures, operates at a real-time rate of 1 Mb/s, and has passed all the NIST tests. These results provide a promising route towards efficient and practical QRNGs with potentially high bit rates.